UNIT-3
USABILITY METRICS
3.1 Meaning of Usability Metrics:
Usability metrics are quantitative measures used to evaluate the user-friendliness and
effectiveness of a product, system, or service. These metrics assess how efficiently users can
complete tasks, how satisfied they feel with the experience, and how prone they are to errors
while interacting with the system. Key usability metrics include task success rate, which
measures the percentage of tasks successfully completed by users, and error rate, which
tracks the frequency of mistakes made during use. Time on task evaluates how long it takes
users to complete specific actions, while user satisfaction is typically gauged through surveys
or questionnaires. Learnability examines how easily new users can accomplish basic tasks,
and efficiency reflects the speed and ease with which experienced users perform actions.
Additionally, retention rate measures how well users remember how to use the system over
time. Together, these metrics provide valuable insights into user behaviour and experience,
guiding improvements to make systems more intuitive, accessible, and enjoyable.
Some key usability metrics include:
1. Effectiveness: How accurately users can complete a task. This is often measured by
the success rate or task completion rate.
2. Efficiency: How quickly users can complete a task. This metric is often measured by
the time it takes to perform a task or the number of steps needed to complete it.
3. Satisfaction: How users feel about their experience with the system. It can be
measured using surveys, feedback, or tools like the System Usability Scale (SUS).
4. Error Rate: The number of errors users make while performing a task. A higher error
rate typically indicates a usability issue.
5. Learnability: How easy it is for users to learn to use the system. This is often
measured by how quickly users can become proficient with the system.
6. Retention: How well users remember how to use the system after a period of time.
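To make the first few metrics concrete, the short Python sketch below computes task success rate, error rate, and average time on task from a handful of hypothetical session records; the record format and numbers are assumptions made purely for illustration.

```python
# Illustrative sketch: computing basic usability metrics from hypothetical
# session records. Field layout and sample values are assumed for illustration.
from statistics import mean

sessions = [
    # each record: (user, task_completed, errors_made, seconds_on_task)
    ("u1", True,  0, 42.0),
    ("u2", True,  2, 67.5),
    ("u3", False, 3, 120.0),
    ("u4", True,  1, 55.2),
]

task_success_rate = sum(s[1] for s in sessions) / len(sessions) * 100
error_rate = sum(s[2] for s in sessions) / len(sessions)   # errors per task attempt
time_on_task = mean(s[3] for s in sessions)                # seconds

print(f"Task success rate: {task_success_rate:.1f}%")
print(f"Error rate: {error_rate:.2f} errors per attempt")
print(f"Average time on task: {time_on_task:.1f} s")
```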
Performance metrics related to issue tracking have several key features that make them valuable tools for evaluating and improving the efficiency and effectiveness of systems, processes, or organizations. These features include:
Issue Frequency: Tracks how often specific issues occur over a given period,
providing a sense of recurring problems.
Issue Severity: Measures the impact or criticality of each issue, helping prioritize
those with the most significant effect on performance or outcomes.
Resolution Time: Captures the time it takes to resolve an issue, reflecting the
efficiency of the problem-solving process.
Root Cause Analysis Metrics: Evaluate the underlying causes of issues, focusing on
patterns or systemic problems that need addressing.
Reopen Rate: Measures the percentage of issues reopened after being marked as
resolved, indicating the effectiveness of initial solutions.
Impact Metrics: Assess how issues affect overall performance, customer
satisfaction, or other critical outcomes.
Detection Rate: Monitors how quickly issues are identified, highlighting the
efficiency of monitoring and detection processes.
Escalation Rate: Tracks the proportion of issues escalated to higher levels of
management or technical support, reflecting the complexity of problems.
Preventive Action Metrics: Evaluate the effectiveness of measures taken to prevent
similar issues in the future.
Trend Analysis: Identifies patterns or trends in issue occurrence, offering insights
into potential long-term risks or opportunities for improvement.
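As a rough illustration of how a few of these issue-oriented metrics could be derived from issue-tracker data, the sketch below computes issue frequency, mean resolution time, reopen rate, and escalation rate; the record layout and values are hypothetical.

```python
# Illustrative sketch: deriving issue-oriented performance metrics from
# hypothetical issue-tracker records. Fields and values are assumed.
from statistics import mean

issues = [
    # (issue_id, severity, hours_to_resolve, was_reopened, was_escalated)
    (101, "high",   4.0, False, True),
    (102, "low",    1.5, False, False),
    (103, "medium", 8.0, True,  False),
    (104, "high",  12.0, True,  True),
]

issue_frequency = len(issues)                                # issues in the period
resolution_time = mean(i[2] for i in issues)                 # mean hours to resolve
reopen_rate = sum(i[3] for i in issues) / len(issues) * 100  # % reopened after closure
escalation_rate = sum(i[4] for i in issues) / len(issues) * 100

print(f"Issues this period: {issue_frequency}")
print(f"Mean resolution time: {resolution_time:.1f} h")
print(f"Reopen rate: {reopen_rate:.0f}%   Escalation rate: {escalation_rate:.0f}%")
```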
6. Prioritize Improvements
Identify and prioritize critical issues that require immediate attention.
Provide actionable recommendations to improve usability.
7. Ensure Accessibility
Verify that the system is usable for a diverse audience, including individuals with
disabilities.
Test compliance with accessibility standards (e.g., WCAG).
8. Optimize User Engagement
Evaluate how well the system holds users' attention and encourages continued use.
Identify factors contributing to user retention or abandonment.
9. Support Business Goals
Align the usability of the product with broader business objectives, such as increasing
customer satisfaction, reducing support costs, or driving conversions.
10. Inform Future Development
Gather insights to guide future design and development efforts.
Establish a benchmark for comparing usability improvements over time.
6. Collect Data
Interviews: Speak directly with potential or existing users to gather insights into their
goals.
Surveys and Questionnaires: Use structured questions to understand user
preferences and needs.
Observation: Observe users as they interact with a similar product or prototype.
7. Usability Testing
Test the system with real users, keeping their goals in mind.
Gather feedback on whether the system helps them achieve their objectives efficiently
and effectively.
Evaluate task success rates, time taken to complete tasks, and user satisfaction.
By focusing on user goals during your usability study, you ensure that the product is designed
with the end user in mind, ultimately leading to a more intuitive and effective experience.
1. Effectiveness
Measures the accuracy and completeness with which users achieve their goals.
o Task Success Rate: Percentage of tasks completed correctly.
o Error Rate: Number and type of errors users make during tasks.
2. Efficiency
Measures the resources expended in relation to the accuracy and completeness of
goals achieved.
o Time on Task: Average time taken to complete a specific task.
o Time to First Action: Time taken for users to make their first interaction after
encountering a task.
o Navigation Steps: Number of clicks or steps required to complete a task.
3. Satisfaction
Measures user perceptions and satisfaction with the system.
o System Usability Scale (SUS): A standardized questionnaire providing a
usability score; a short scoring sketch appears after this list.
o Net Promoter Score (NPS): Measures how likely users are to recommend the
product.
o User Satisfaction Rating: Collected via post-task or post-test surveys, often
on a Likert scale.
4. Learnability
Assesses how easy it is for new users to learn the system.
o Time to Learn: Time taken to become proficient in completing key tasks.
o Number of Help Requests: Frequency of consulting help materials or
support.
5. Error Frequency and Severity
Evaluates how often users encounter issues and the impact of those errors.
o Number of Errors: Total errors made by users during tasks.
o Recovery Time: Time taken to recover from an error.
6. Engagement
Evaluates how users interact with the system over time.
o Retention Rate: Percentage of users returning to the system after initial use.
o Task Abandonment Rate: Percentage of tasks started but not completed.
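The SUS and NPS figures mentioned under Satisfaction follow well-known scoring rules: SUS rescales ten 1-5 responses to a 0-100 score (odd-numbered items count as the score minus 1, even-numbered items as 5 minus the score, summed and multiplied by 2.5), and NPS is the percentage of promoters (ratings 9-10) minus the percentage of detractors (ratings 0-6). A minimal sketch with hypothetical responses:

```python
# Illustrative sketch of the standard SUS and NPS scoring rules.
# The response values below are hypothetical.

def sus_score(responses):
    """responses: ten answers on a 1-5 scale, in questionnaire order."""
    assert len(responses) == 10
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)  # odd items positive, even items negative
    return total * 2.5                               # scaled to a 0-100 score

def nps(ratings):
    """ratings: 0-10 likelihood-to-recommend scores."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return (promoters - detractors) / len(ratings) * 100

print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))   # 85.0
print(nps([10, 9, 8, 6, 9, 3, 10]))                # about 29 (57% promoters - 29% detractors)
```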
1. Quantitative Evaluation
Focuses on collecting numerical data to measure performance.
o A/B Testing: Comparing two versions of a system to determine which
performs better.
o Analytics Tools: Use tools like Google Analytics to track user interactions
and behavior.
o Benchmarking: Comparing metrics to industry standards or competitor
products.
2. Qualitative Evaluation
Gathers subjective feedback and insights about user experience.
o Think-Aloud Protocol: Users verbalize thoughts while interacting with the
system.
o Contextual Inquiry: Observing users in their natural environment.
o Interviews and Focus Groups: In-depth discussions about user experiences
and challenges.
3. Usability Testing
o Moderated Testing: A facilitator guides users through tasks and collects
feedback.
o Unmoderated Testing: Users complete tasks on their own, often using online
platforms like UsabilityHub or UserTesting.
o Remote Testing: Testing conducted via online tools to observe and record
user interactions.
4. Surveys and Questionnaires
o Tools like SUS, NPS, or custom surveys help gather user satisfaction and
qualitative feedback.
o Ideal for post-test evaluations to understand overall impressions.
5. Heuristic Evaluation
o Expert evaluators assess the system against established usability principles
(heuristics).
o Identifies usability issues without direct user involvement; a severity-aggregation sketch appears after this list.
6. Eye Tracking and Heatmaps
o Measures where users look on the screen and how they navigate the interface.
o Useful for evaluating visual design and information hierarchy.
7. Cognitive Walkthrough
o Evaluators simulate the user's thought process to identify potential usability
issues.
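For heuristic evaluation (item 5 above), findings are typically logged with a severity rating and grouped by the heuristic they violate. The sketch below shows one hypothetical way to aggregate such ratings; the findings and severity values are invented for illustration, and the heuristic names follow Nielsen's well-known set.

```python
# Illustrative sketch: aggregating severity ratings from a heuristic evaluation.
# Findings and severity values (0-4) are hypothetical.
from collections import defaultdict
from statistics import mean

findings = [
    # (heuristic, severity 0-4, description)
    ("Visibility of system status", 3, "No progress indicator during upload"),
    ("Error prevention",            4, "Destructive delete has no confirmation"),
    ("Visibility of system status", 2, "Saved state not shown after edits"),
    ("Consistency and standards",   1, "Two different icons for 'settings'"),
]

by_heuristic = defaultdict(list)
for heuristic, severity, _ in findings:
    by_heuristic[heuristic].append(severity)

# Rank heuristics by mean severity so the most critical problems surface first.
for heuristic, severities in sorted(by_heuristic.items(),
                                    key=lambda kv: mean(kv[1]), reverse=True):
    print(f"{heuristic}: {len(severities)} finding(s), mean severity {mean(severities):.1f}")
```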
Characteristics of Participants
a. Quantitative Data
b. Think-Aloud Protocol
c. Usability Testing
e. Analytics Tools
f. Interviews
Data Analysis:
Data Analysis in usability studies involves interpreting the collected data to identify patterns,
insights, and areas for improvement. It combines quantitative metrics with qualitative
feedback to provide a holistic view of the user experience.
Quantitative Data: Compile metrics like task completion rates, time on task, and
error rates into spreadsheets or databases.
Qualitative Data: Transcribe notes, audio recordings, and observations. Organize
responses into themes or categories.
Tools: Use software like Excel, SPSS, NVivo, or R for data management and
analysis.
Cross-reference findings:
o If a task shows a low success rate (quantitative), examine user feedback or
recordings (qualitative) to understand why.
o Pair time-on-task data with observations of user hesitations or errors.
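A minimal sketch of this cross-referencing step, assuming a simple in-memory record of task results and observer notes; task names, the review threshold, and the notes are hypothetical.

```python
# Illustrative sketch: flag tasks with a low success rate, then pull up the
# qualitative notes recorded for those tasks. All data here is hypothetical.
from statistics import mean

results = {  # task -> list of (completed, seconds on task)
    "transfer_funds": [(True, 60), (False, 140), (False, 155), (True, 75)],
    "check_balance":  [(True, 12), (True, 15), (True, 10), (True, 18)],
}
notes = {
    "transfer_funds": ["Hesitated at the confirmation screen",
                       "Did not notice the 'Continue' button"],
}

for task, runs in results.items():
    success = sum(c for c, _ in runs) / len(runs) * 100
    avg_time = mean(t for _, t in runs)
    print(f"{task}: success {success:.0f}%, mean time {avg_time:.0f} s")
    if success < 70:                       # arbitrary threshold for closer review
        for note in notes.get(task, []):
            print(f"  observation: {note}")
```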
3. Task Performance Testing
Purpose: Assess how well users can complete specific tasks using the system.
Metrics:
o Task Success Rate: Percentage of tasks completed successfully.
o Time on Task: Average time taken to complete a task.
o Error Rate: Number of errors made per task.
o Help Requests: Frequency of participants seeking assistance.
o Task Satisfaction Rating: User-reported ease of completing tasks.
4. Accessibility Testing
Purpose: Evaluate whether the system can be used by a diverse audience, including individuals with disabilities, and whether it complies with accessibility standards such as WCAG.
5. Learnability Testing
Purpose: Evaluate how easily new users can learn and use the system.
Metrics:
o Time to First Task Completion: Time taken to complete the first assigned
task.
o Error Rates During First Use: Mistakes made by first-time users.
o Retention Rate: Percentage of users who remember learned processes in
follow-up tests.
o Satisfaction After First Use: User-reported ease and comfort with initial use.
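A small sketch of two of these first-use metrics, computed from hypothetical first-session logs and follow-up attendance; user IDs and timings are assumed for illustration.

```python
# Illustrative sketch: time to first task completion and retention rate,
# computed from hypothetical first-use and follow-up data.
first_use = {          # user -> seconds to first successful task completion
    "u1": 95, "u2": 140, "u3": 80, "u4": 200,
}
completed_follow_up = {"u1", "u3", "u4"}   # users who completed the follow-up test unaided

time_to_first_completion = sum(first_use.values()) / len(first_use)
retention_rate = len(completed_follow_up) / len(first_use) * 100

print(f"Mean time to first task completion: {time_to_first_completion:.0f} s")
print(f"Retention rate at follow-up: {retention_rate:.0f}%")
```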
Goals of Comparing Designs
To identify the most effective design for usability and user experience.
To determine how different designs impact user behavior, satisfaction, and
performance.
To inform decisions about the final design based on evidence rather than assumptions.
a. A/B Testing
Description: Compare two versions of a single variable (e.g., Design A vs. Design
B).
Use Case: Determine which version performs better in terms of task success, user
preference, or other metrics.
Example: Testing two layouts for a product page to see which leads to higher
conversion rates.
b. Multivariate Testing
Description: Test several variables at once (e.g., layout, colour scheme, and button label together) to see which combination of changes performs best.
Use Case: Identify which combination of design elements has the greatest effect on the chosen metrics.
c. Side-by-Side Comparisons
Description: Present users with multiple designs at once for direct comparison.
Use Case: Collect qualitative feedback on which design users find more appealing or
intuitive.
Example: Comparing two dashboard designs for clarity and ease of use.
d. Sequential Testing
Description: Present users with designs in sequence rather than side by side.
Use Case: Evaluate designs independently to reduce bias from direct comparisons.
Example: Testing one version of an app interface, followed by another, and asking
users to rate each.
e. Iterative Comparisons
Description: Test successive revisions of a design over several rounds, refining the design between rounds based on what each round reveals.
Quantitative Metrics:
o Task completion rates.
o Time on task.
o Error rates or frequency.
o Conversion rates (e.g., sign-ups, purchases).
o System Usability Scale (SUS) scores.
Qualitative Metrics:
o User preferences and opinions.
o Verbal feedback from think-aloud sessions.
o Observations of user behavior and frustration points.
Specify what you want to compare and the success criteria (e.g., ease of navigation,
visual appeal, task efficiency).
d. Randomize Presentation
Randomize the order in which designs are shown to prevent order bias (see the sketch below).
Use metrics like completion time alongside user feedback for a holistic evaluation.
Use statistical analysis for quantitative data and thematic analysis for qualitative
insights.
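A minimal sketch of randomized presentation order, where each participant receives an independently shuffled sequence of designs; the design names and participant IDs are hypothetical.

```python
# Illustrative sketch: each participant sees the designs in an independently
# shuffled order to reduce order bias. Names below are hypothetical.
import random

designs = ["Design A", "Design B", "Design C"]
participants = ["p1", "p2", "p3", "p4"]

for participant in participants:
    order = random.sample(designs, k=len(designs))   # a fresh random order per person
    print(participant, "->", " then ".join(order))
```

For small participant pools, a counterbalanced design (for example, a Latin square of orders) can give more even coverage of presentation orders than pure randomization.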
a. Usability Metrics
b. User Experience
d. Customer Support
e. Emotional Impact
a. Participant Recruitment
e. A/B Testing
Present participants with your product and a competitor’s version to directly compare
usability and features.
Usability metrics are critical in assessing how effectively users can complete tasks or transactions within a system, such as a digital banking platform or any user interface. These metrics help gauge the ease, efficiency, and satisfaction with which users interact with a system, and typically include task success rate, time on task, error rate, and user satisfaction. Such metrics are essential for optimizing a system and ensuring that users have a smooth, efficient, and satisfying experience when completing tasks or transactions.
Evaluating the impact of subtle changes in a system or interface (such as a digital banking
platform) involves understanding how small modifications influence user experience,
behavior, and overall performance. These changes can range from minor design tweaks, such
as altering button placement, font size, or color schemes, to slight adjustments in the process
flow or features.
1. Define Goals and Metrics
What are the goals of the changes? This could be improving task efficiency,
reducing errors, enhancing visual appeal, or increasing user satisfaction.
What specific metrics will be used to measure impact? These can include usability
metrics like task success rate, time on task, error rate, or satisfaction.
2. A/B Testing
What is A/B testing? A/B testing involves comparing two versions of a system or
interface: the original (A) and a modified version (B).
How does it work? Randomly assign users to either the original or the new version,
and track their behavior and performance across the selected usability metrics.
Analysis: Measure the differences between the two groups and determine whether the
changes had a statistically significant impact on performance or user experience.
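One common way to test whether the difference between versions A and B is statistically significant for a success-rate metric is a two-proportion z-test. The sketch below implements it with only the standard library; the sample counts are hypothetical.

```python
# Illustrative sketch: two-proportion z-test for an A/B comparison of task
# success rates. The counts below are hypothetical.
from math import sqrt, erf

def two_proportion_z_test(success_a, n_a, success_b, n_b):
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))   # two-sided
    return z, p_value

# Version A: 140 of 200 users completed the task; Version B: 165 of 200.
z, p = two_proportion_z_test(140, 200, 165, 200)
print(f"z = {z:.2f}, p = {p:.4f}")   # p < 0.05 suggests a statistically significant difference
```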
3. Usability Testing
User Feedback: Conduct usability testing where users are asked to complete tasks
using both the original and modified versions of the system. Observe their behaviors
and ask for subjective feedback (e.g., which version they prefer and why).
Behavioral Metrics: Track the time it takes to complete tasks, error rates, and the
number of successful transactions on each version.
4. Cognitive Walkthroughs
Evaluators step through the changed tasks as a typical user would, to spot points where the subtle change might confuse users or interrupt their flow.
5. Satisfaction Surveys
System Usability Scale (SUS): This is a widely-used survey to evaluate the perceived
usability of the system after a subtle change. Users rate their experience, and the
results can show if there has been an improvement or decline in overall satisfaction.
Net Promoter Score (NPS): Measures users' likelihood to recommend the service to
others, indicating overall satisfaction and how changes affect user loyalty.
6. Behavioral Analytics
Tracking User Interactions: Use analytics tools to track how users interact with the
interface, focusing on how the subtle changes affect user behavior. For example, did
users spend more time on certain steps? Did they complete tasks more efficiently?
Are there any shifts in user flow?
Heatmaps: These show where users click most often or how they move their mouse
over the interface, helping identify areas that are either confusing or more engaging
after a subtle change.
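A very simple form of this kind of interaction tracking is counting how many users reach each step of a flow and how long each step takes, which reveals where users slow down or drop off. The sketch below illustrates the idea on hypothetical event-log data; the step names, users, and timings are assumed.

```python
# Illustrative sketch: summarizing hypothetical event-log data to see how far
# users get through a flow and how long each step takes.
from collections import defaultdict
from statistics import mean

events = [  # (user, step, seconds since the task started)
    ("u1", "open_form", 0), ("u1", "fill_amount", 8),  ("u1", "confirm", 21),
    ("u2", "open_form", 0), ("u2", "fill_amount", 15),
    ("u3", "open_form", 0), ("u3", "fill_amount", 9),  ("u3", "confirm", 18),
]

times_by_step = defaultdict(list)
for _, step, seconds in events:
    times_by_step[step].append(seconds)

total_users = len({user for user, _, _ in events})
for step in ["open_form", "fill_amount", "confirm"]:
    reached = len(times_by_step[step])
    print(f"{step}: reached by {reached}/{total_users} users, "
          f"mean time {mean(times_by_step[step]):.0f} s")
# A drop between steps (here, one user never reaching 'confirm') points to the
# part of the flow worth examining with heatmaps or session recordings.
```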
7. Benchmarking Pre- and Post-Change Data
Retention and Repeat Usage: In some cases, the impact of subtle changes may not
be immediate. Assess long-term retention rates or repeated task completion to see if
the changes lead to more sustained user engagement.
User Behavior Trends: Observe whether there are gradual changes in how users
interact with the platform over time, particularly if the subtle changes influence their
trust or familiarity with the system.
User Interviews: Conduct interviews with users to gain qualitative insights into their
experience with the subtle changes. This helps uncover deeper insights into how the
changes affected their overall perception of the system, beyond just task completion
metrics.
Contextual Inquiry: Observe users in real-world settings as they interact with the
system. This can provide insights into whether the subtle changes are working as
intended or if unexpected problems arise.
Conclusion
The impact of subtle changes can often be significant, though not always immediately
apparent. A combination of quantitative (A/B testing, usability metrics) and qualitative (user
feedback, interviews) approaches will provide a comprehensive view of how the changes
have influenced the user experience. By carefully measuring both the objective outcomes and
subjective perceptions, you can assess whether the changes improve the overall usability and
effectiveness of the system.