
UNIT-III

USABILITY METRICS
3.1 Meaning of Usability Metrics:
Usability metrics are quantitative measures used to evaluate the user-friendliness and
effectiveness of a product, system, or service. These metrics assess how efficiently users can
complete tasks, how satisfied they feel with the experience, and how prone they are to errors
while interacting with the system. Key usability metrics include task success rate, which
measures the percentage of tasks successfully completed by users, and error rate, which
tracks the frequency of mistakes made during use. Time on task evaluates how long it takes
users to complete specific actions, while user satisfaction is typically gauged through surveys
or questionnaires. Learnability examines how easily new users can accomplish basic tasks,
and efficiency reflects the speed and ease with which experienced users perform actions.
Additionally, retention rate measures how well users remember how to use the system over
time. Together, these metrics provide valuable insights into user behaviour and experience,
guiding improvements to make systems more intuitive, accessible, and enjoyable.
Some key usability metrics include:
1. Effectiveness: How accurately users can complete a task. This is often measured by
the success rate or task completion rate.
2. Efficiency: How quickly users can complete a task. This metric is often measured by
the time it takes to perform a task or the number of steps needed to complete it.
3. Satisfaction: How users feel about their experience with the system. It can be
measured using surveys, feedback, or tools like the System Usability Scale (SUS).
4. Error Rate: The number of errors users make while performing a task. A higher error
rate typically indicates a usability issue.
5. Learnability: How easy it is for users to learn to use the system. This is often
measured by how quickly users can become proficient with the system.
6. Retention: How well users remember how to use the system after a period of time.
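
As a rough illustration of how these measures are derived in practice, the short Python sketch below computes task success rate, mean time on task, and error rate from a handful of test-session records. The record fields (success, time_sec, errors) and the sample values are assumptions made for this example, not a prescribed data format.

```python
# Minimal sketch: computing core usability metrics from task-session records.
# Field names and sample values are illustrative assumptions.

sessions = [
    {"participant": "P1", "success": True,  "time_sec": 42.0, "errors": 0},
    {"participant": "P2", "success": True,  "time_sec": 61.5, "errors": 2},
    {"participant": "P3", "success": False, "time_sec": 95.0, "errors": 4},
]

n = len(sessions)
task_success_rate = sum(s["success"] for s in sessions) / n * 100   # percent of tasks completed
error_rate = sum(s["errors"] for s in sessions) / n                 # mean errors per attempt
mean_time_on_task = sum(s["time_sec"] for s in sessions) / n        # seconds per attempt

print(f"Task success rate: {task_success_rate:.1f}%")
print(f"Mean errors per task: {error_rate:.2f}")
print(f"Mean time on task: {mean_time_on_task:.1f} s")
```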

3.2 Performance metrics:


Performance metrics are quantitative measures used to assess the effectiveness and efficiency
of a system, process, or organization in achieving its goals. These metrics provide insights
into how well resources are utilized, how objectives are met, and where improvements are
needed. Common performance metrics include productivity, which measures the output
generated relative to the input used, and efficiency, which assesses the ability to achieve
results with minimal waste of time or resources. Accuracy evaluates the correctness of
outputs, while speed or response time measures how quickly a task or process is completed.
Other important metrics include throughput, which tracks the volume of work processed
within a given timeframe, and reliability, which gauges the consistency of performance over
time. In organizational contexts, performance metrics may also include financial indicators
like revenue, profit margins, and return on investment (ROI). These metrics are essential for
monitoring progress, identifying areas for improvement, and ensuring that objectives are
achieved effectively and sustainably.

Performance metrics have several key features that make them valuable tools for evaluating
and improving the efficiency and effectiveness of systems, processes, or organizations. These
features include:

 Measurability: Performance metrics are quantifiable, allowing for objective
assessment and comparison. They provide numerical data that can be tracked over
time.
 Relevance: Metrics are aligned with specific goals and objectives, ensuring they
measure what truly matters for the success of a system or organization.
 Specificity: Good metrics focus on a particular aspect of performance, providing clear
and actionable insights rather than vague or generalized information.
 Timeliness: Metrics can be measured at appropriate intervals (real-time, daily,
weekly, etc.) to provide ongoing insights and facilitate timely decision-making.
 Accuracy: Reliable data collection and analysis ensure that performance metrics are
precise and reflect actual performance levels.
 Comparability: Metrics allow for comparisons across time periods, teams, or
industry standards, enabling benchmarking and identifying areas for improvement.
 Actionability: Effective performance metrics provide insights that lead to informed
decisions, helping to address inefficiencies, capitalize on strengths, or correct course
where needed.
 Scalability: Metrics can be applied at various levels (individual, team, organization)
and scales (small projects to large systems).
 Consistency: Metrics are consistently defined and measured to ensure reliability and
build trust in the data.
 Transparency: Metrics are easily understandable and clearly communicated to
stakeholders, fostering accountability and collaboration.
3.3 Issue-Based Metrics:
Issue-based metrics focus on identifying, tracking, and analyzing problems or challenges
encountered during a process, project, or system operation. These metrics aim to provide
insights into the nature, frequency, and impact of issues, helping organizations prioritize and
address them effectively. They are particularly useful in quality assurance, risk management,
and continuous improvement efforts. Common features of issue-based metrics include:

 Issue Frequency: Tracks how often specific issues occur over a given period,
providing a sense of recurring problems.
 Issue Severity: Measures the impact or criticality of each issue, helping prioritize
those with the most significant effect on performance or outcomes.
 Resolution Time: Captures the time it takes to resolve an issue, reflecting the
efficiency of the problem-solving process.
 Root Cause Analysis Metrics: Evaluates the underlying causes of issues, focusing on
patterns or systemic problems that need addressing.
 Reopen Rate: Measures the percentage of issues reopened after being marked as
resolved, indicating the effectiveness of initial solutions.
 Impact Metrics: Assesses how issues affect overall performance, customer
satisfaction, or other critical outcomes.
 Detection Rate: Monitors how quickly issues are identified, highlighting the
efficiency of monitoring and detection processes.
 Escalation Rate: Tracks the proportion of issues escalated to higher levels of
management or technical support, reflecting the complexity of problems.
 Preventive Action Metrics: Evaluates the effectiveness of measures taken to prevent
similar issues in the future.
 Trend Analysis: Identifies patterns or trends in issue occurrence, offering insights
into potential long-term risks or opportunities for improvement.
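
To make these measures concrete, here is a minimal Python sketch that derives issue frequency, mean resolution time, and reopen rate from a hypothetical issue-tracker export. The field names (opened, resolved, severity, reopened) and the sample records are assumptions for illustration only.

```python
from datetime import datetime

# Illustrative sketch: issue-based metrics from a hypothetical issue-tracker export.

issues = [
    {"id": 1, "opened": "2024-03-01 09:00", "resolved": "2024-03-01 15:30",
     "severity": "major", "reopened": False},
    {"id": 2, "opened": "2024-03-02 10:00", "resolved": "2024-03-04 10:00",
     "severity": "critical", "reopened": True},
    {"id": 3, "opened": "2024-03-03 08:00", "resolved": "2024-03-03 09:00",
     "severity": "minor", "reopened": False},
]

fmt = "%Y-%m-%d %H:%M"
hours = [
    (datetime.strptime(i["resolved"], fmt) - datetime.strptime(i["opened"], fmt)).total_seconds() / 3600
    for i in issues
]

issue_frequency = len(issues)                                   # issues logged in the reporting period
mean_resolution_hours = sum(hours) / len(hours)                 # average time to resolve
reopen_rate = sum(i["reopened"] for i in issues) / len(issues)  # share of "fixed" issues that came back

print(f"Issues logged: {issue_frequency}")
print(f"Mean resolution time: {mean_resolution_hours:.1f} h")
print(f"Reopen rate: {reopen_rate:.0%}")
```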

3.4 Self-Reported Metrics:


Self-reported metrics are data points collected directly from individuals, typically through
surveys, interviews, or feedback forms, where users or stakeholders report their experiences,
opinions, or perceptions. These metrics are subjective but provide valuable insights into user
satisfaction, engagement, and overall experience, often complementing objective
performance data. Key aspects of self-reported metrics include:
 User Satisfaction: Measures how satisfied individuals feel about a product, service,
or process, often using tools like Likert scales or Net Promoter Scores (NPS).
 Perceived Ease of Use: Assesses how easy users believe a system or process is to
use, which can highlight areas for usability improvement.
 Reported Time on Task: Captures how long users think they spent on a task, which
can differ from actual time and provide insights into user perceptions.
 User Confidence: Reflects how confident individuals feel about completing tasks or
using a system effectively.
 Feedback on Features: Gathers opinions about specific features or functions, helping
to identify what's most useful or needs improvement.
 Emotional Responses: Tracks users' feelings, such as frustration, delight, or
satisfaction, during their interaction with a product or service.
 Intention Metrics: Measures the likelihood of future behaviour, such as intentions to
reuse, recommend, or purchase a product.
 Perceived Value: Evaluates how users perceive the worth or benefits of a product or
service relative to its cost or effort.
 Open-Ended Feedback: Collects qualitative insights that provide context or explain
user ratings and opinions.
 Demographic Information: Gathers self-reported details like age, location, or
preferences, which can help segment and analyse user groups.
While self-reported metrics offer rich, qualitative insights into user experiences and
perceptions, they can be influenced by biases like memory errors or emotional states.
For this reason, they are often used in conjunction with objective metrics to provide a
more comprehensive understanding of performance and user satisfaction.
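
Two of the self-reported measures mentioned above have standard scoring rules that are easy to show in code. The sketch below applies the published System Usability Scale scoring (odd items contribute the response minus 1, even items contribute 5 minus the response, and the sum is multiplied by 2.5) and the usual Net Promoter Score calculation (percentage of promoters minus percentage of detractors); the example responses are invented.

```python
# Sketch of two common self-reported scores. The scoring rules are the standard
# published ones; the example responses are made up.

def sus_score(responses):
    """responses: ten answers on a 1-5 scale, in questionnaire order."""
    assert len(responses) == 10
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)   # odd items: r-1, even items: 5-r
    return total * 2.5                                 # scale to 0-100

def nps(ratings):
    """ratings: 0-10 answers to 'How likely are you to recommend...?'"""
    promoters = sum(r >= 9 for r in ratings)
    detractors = sum(r <= 6 for r in ratings)
    return (promoters - detractors) / len(ratings) * 100

print(sus_score([4, 2, 5, 1, 4, 2, 5, 1, 4, 2]))   # one participant's SUS score (85.0)
print(nps([10, 9, 8, 6, 10, 7, 9]))                # NPS across seven respondents (~42.9)
```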

3.5 Planning and Performing Usability Study:


A usability study is a structured process for evaluating how effectively and efficiently users
interact with a product, system, or service. Proper planning and execution are crucial to
ensure meaningful and actionable results. Below are the key steps:
I. Planning Phase
1. Define Objectives
o Identify the purpose of the study, such as improving user experience,
identifying pain points, or validating design changes.
o Specify the metrics to be measured (e.g., task success rate, error rate,
satisfaction).
2. Understand the Target Audience
o Define user personas based on demographics, behavior, and needs.
o Choose representative participants who align with the intended user base.
3. Scope the Study
o Determine the tasks or scenarios to be tested (e.g., logging in, completing a
purchase).
o Decide the scope: focus on a specific feature, the overall system, or
comparative analysis.
4. Select the Study Type
o Moderated Testing: In-person or remote, guided by a facilitator.
o Unmoderated Testing: Participants perform tasks independently.
o Exploratory Testing: For early-stage feedback.
o Comparative Testing: To evaluate multiple designs or systems.
5. Develop Test Materials
o Prepare task scenarios with clear instructions.
o Design pre- and post-test questionnaires to gather background and feedback.
o Create a script for consistency in moderation.
6. Set Up the Environment
o Choose the location (lab, remote, or field).
o Ensure the necessary tools, such as screen-recording software, cameras, or
usability platforms, are available.
7. Recruit Participants
o Screen participants to ensure they match the target user profile.
o Offer incentives, if appropriate, to encourage participation.

II. Performing Phase


1. Pilot Testing
o Conduct a trial run to refine tasks, instructions, and tools.
2. Facilitate the Study
o Guide participants through the test scenarios without leading or influencing
their behavior.
o Observe and record interactions, noting any difficulties, errors, or delays.
3. Capture Data
o Use tools to record metrics like task completion time, error rates, and screen
interactions.
o Collect qualitative data through think-aloud protocols, where participants
verbalize their thoughts.
4. Administer Questionnaires
o Pre-test surveys gather demographic and background information.
o Post-test surveys measure satisfaction, ease of use, and overall impressions.
5. Maintain Neutrality
o Avoid prompting or assisting participants unless absolutely necessary to
proceed.
o Ensure a non-judgmental environment where participants feel comfortable.
3.6 Planning and Performing a Usability Study - Study Goals:
1. Identify Usability Issues
 Detect barriers that prevent users from achieving their goals effectively.
 Uncover interface design problems, such as confusing navigation or unclear
instructions.
2. Measure User Performance
 Assess how efficiently and accurately users complete tasks.
 Quantify key usability metrics like task success rate, time on task, and error rates.
3. Evaluate User Satisfaction
 Understand how users feel about the product through surveys, interviews, or ratings.
 Measure perceptions of ease of use, aesthetics, and overall experience.
4. Validate Design Decisions
 Confirm whether the current design aligns with user needs and expectations.
 Compare design alternatives to choose the most effective solution.
5. Enhance Learnability and Efficiency
 Assess how quickly new users can learn to use the product.
 Ensure experienced users can perform tasks efficiently and consistently.

6. Prioritize Improvements
 Identify and prioritize critical issues that require immediate attention.
 Provide actionable recommendations to improve usability.
7. Ensure Accessibility
 Verify that the system is usable for a diverse audience, including individuals with
disabilities.
 Test compliance with accessibility standards (e.g., WCAG).
8. Optimize User Engagement
 Evaluate how well the system holds users' attention and encourages continued use.
 Identify factors contributing to user retention or abandonment.
9. Support Business Goals
 Align the usability of the product with broader business objectives, such as increasing
customer satisfaction, reducing support costs, or driving conversions.
10. Inform Future Development
 Gather insights to guide future design and development efforts.
 Establish a benchmark for comparing usability improvements over time.

3.7 Planning and Performing a Usability Study - User Goals:


When planning and performing a usability study, the first key step is to define and understand
the user goals. These goals are the specific objectives that users aim to achieve when
interacting with a product or system. By identifying user goals, you can ensure that the
product meets the users’ needs, offering a better overall experience. Here's how you can
approach it:

1. Identify Target Users


 Demographics: Understand who your users are—age, gender, education, profession,
and experience level.
 User Segments: Divide users into segments based on behaviors, needs, or tasks they
are trying to accomplish.

2. Define User Goals


 Functional Goals: What tasks do users want to accomplish? For instance, users may
want to complete a purchase, sign up for an account, or retrieve specific information.
 Experiential Goals: How do users want the interaction to feel? This could include
ease of use, speed, or pleasure.
 Contextual Goals: Consider the environment and situations in which users will
interact with the system (e.g., using a mobile app while commuting).

3. Create User Personas


 Based on the target audience, create personas to represent different user types.
 These personas help clarify user goals and preferences by creating a fictional, yet
representative, user for each group.

4. Map Out the User Journey


 Tasks: Identify the tasks users need to complete to achieve their goals.
 Pain Points: Highlight potential obstacles or challenges users might face during the
process.
 Touchpoints: Identify points of interaction with the system, such as buttons, forms,
or notifications.

5. Set Priorities for User Goals


 Not all goals are of equal importance. Prioritize them based on frequency, importance,
and how they impact the user's success in using the system.
 Focus on the most critical tasks that allow users to achieve their main objectives.

6. Collect Data

 Interviews: Speak directly with potential or existing users to gather insights into their
goals.
 Surveys and Questionnaires: Use structured questions to understand user
preferences and needs.
 Observation: Observe users as they interact with a similar product or prototype.

7. Usability Testing

 Test the system with real users, keeping their goals in mind.
 Gather feedback on whether the system helps them achieve their objectives efficiently
and effectively.
 Evaluate task success rates, time taken to complete tasks, and user satisfaction.

By focusing on user goals during your usability study, you ensure that the product is designed
with the end user in mind, ultimately leading to a more intuitive and effective experience.

3.8 Metrics and Evaluation Method:


Metrics and Evaluation Methods are critical in assessing the success and usability of a
system. These help in quantifying the user experience, identifying areas for improvement,
and ensuring the product meets user goals effectively. Here's an overview:

Key Metrics for Usability

1. Effectiveness
Measures the accuracy and completeness with which users achieve their goals.
o Task Success Rate: Percentage of tasks completed correctly.
o Error Rate: Number and type of errors users make during tasks.
2. Efficiency
Measures the resources expended in relation to the accuracy and completeness of
goals achieved.
o Time on Task: Average time taken to complete a specific task.
o Time to First Action: Time taken for users to make their first interaction after
encountering a task.
o Navigation Steps: Number of clicks or steps required to complete a task.
3. Satisfaction
Measures user perceptions and satisfaction with the system.
o System Usability Scale (SUS): A standardized questionnaire providing a
usability score.
o Net Promoter Score (NPS): Measures how likely users are to recommend the
product.
o User Satisfaction Rating: Collected via post-task or post-test surveys, often
on a Likert scale.
4. Learnability
Assesses how easy it is for new users to learn the system.
o Time to Learn: Time taken to become proficient in completing key tasks.
o Number of Help Requests: Frequency of consulting help materials or
support.
5. Error Frequency and Severity
Evaluates how often users encounter issues and the impact of those errors.
o Number of Errors: Total errors made by users during tasks.
o Recovery Time: Time taken to recover from an error.
6. Engagement
Evaluates how users interact with the system over time.
o Retention Rate: Percentage of users returning to the system after initial use.
o Task Abandonment Rate: Percentage of tasks started but not completed.
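
Because usability samples are often small, a raw task success percentage can be misleading on its own. One common practice (not prescribed in this section) is to report an adjusted-Wald confidence interval alongside it; the sketch below shows the calculation with made-up counts.

```python
import math

# Adjusted-Wald 95% confidence interval for a task success rate.
# A common usability-statistics choice for small samples; the counts are invented.

def success_rate_ci(successes, trials, z=1.96):
    adj_n = trials + z * z
    adj_p = (successes + z * z / 2) / adj_n
    margin = z * math.sqrt(adj_p * (1 - adj_p) / adj_n)
    return max(0.0, adj_p - margin), min(1.0, adj_p + margin)

low, high = success_rate_ci(successes=7, trials=10)
print(f"Observed 70% success; 95% CI roughly {low:.0%} to {high:.0%}")
```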

3.9 Evaluation Methods

1. Quantitative Evaluation
Focuses on collecting numerical data to measure performance.
o A/B Testing: Comparing two versions of a system to determine which
performs better.
o Analytics Tools: Use tools like Google Analytics to track user interactions
and behavior.
o Benchmarking: Comparing metrics to industry standards or competitor
products.
2. Qualitative Evaluation
Gathers subjective feedback and insights about user experience.
o Think-Aloud Protocol: Users verbalize thoughts while interacting with the
system.
o Contextual Inquiry: Observing users in their natural environment.
o Interviews and Focus Groups: In-depth discussions about user experiences
and challenges.
3. Usability Testing
o Moderated Testing: A facilitator guides users through tasks and collects
feedback.
o Unmoderated Testing: Users complete tasks on their own, often using online
platforms like UsabilityHub or UserTesting.
o Remote Testing: Testing conducted via online tools to observe and record
user interactions.
4. Surveys and Questionnaires
o Tools like SUS, NPS, or custom surveys help gather user satisfaction and
qualitative feedback.
o Ideal for post-test evaluations to understand overall impressions.
5. Heuristic Evaluation
o Expert evaluators assess the system against established usability principles
(heuristics).
o Identifies usability issues without direct user involvement.
6. Eye Tracking and Heatmaps
o Measures where users look on the screen and how they navigate the interface.
o Useful for evaluating visual design and information hierarchy.
7. Cognitive Walkthrough
o Evaluators simulate the user's thought process to identify potential usability
issues.
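
As a small worked example of the A/B testing idea above, the following sketch compares task success rates for two design variants with a two-proportion z-test, using only the Python standard library. The participant counts are illustrative, not data from a real study.

```python
import math

# Minimal A/B comparison: do Design A and Design B differ in task success rate?
# Two-proportion z-test with pooled variance; counts are made up for illustration.

def two_proportion_z(success_a, n_a, success_b, n_b):
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))  # two-sided
    return z, p_value

# Design A: 34 of 40 participants succeeded; Design B: 22 of 40.
z, p = two_proportion_z(34, 40, 22, 40)
print(f"z = {z:.2f}, p = {p:.4f}")  # a small p suggests the difference is unlikely to be chance
```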

3.10 Usability - Participants:

Characteristics of Participants

 Representative of Target Users: Choose participants whose demographics,
behaviors, and needs align with the intended users of the system.
 Experience Level:
o Novice Users: To test learnability and identify barriers for new users.
o Experienced Users: To assess efficiency and advanced functionality.
 Diversity:
o Include users with different levels of familiarity with similar products.
o Ensure inclusivity in terms of age, gender, cultural background, and
accessibility needs.

3.11 Data Collection:

Data Collection in usability studies involves gathering qualitative and quantitative
information from participants to evaluate and improve a product's usability. A well-structured
data collection process ensures you obtain reliable, actionable insights. Here's how to
approach it:

1. Types of Data Collected

a. Quantitative Data

 Provides measurable, numeric insights about usability.


 Examples:
o Task completion rates.
o Time taken to complete tasks.
o Number of errors or attempts.
o System Usability Scale (SUS) scores.
o Click or navigation paths.
b. Qualitative Data

 Captures subjective experiences, opinions, and feedback.


 Examples:
o Verbal feedback during think-aloud protocols.
o Observations of user behavior (hesitations, confusion).
o Written responses in post-test surveys.

2. Data Collection Methods


a. Direct Observation

 Watch participants as they interact with the product.


 Record behaviors, challenges, and strategies used.
 Use tools like notebooks or pre-defined observation sheets.

b. Think-Aloud Protocol

 Ask participants to verbalize their thoughts while performing tasks.


 Helps uncover user reasoning, expectations, and frustrations.

c. Usability Testing

 Participants complete a series of tasks, either:


o Moderated: With a facilitator present to guide and observe.
o Unmoderated: Conducted independently by participants using tools like
UsabilityHub or UserTesting.
 Collect task performance metrics and feedback.

d. Surveys and Questionnaires

 Administer before, during, or after the usability test.


 Examples:
o Pre-test surveys: Gather background information about users.
o Post-test surveys: Assess satisfaction (e.g., SUS, NPS).
o Open-ended questions: Collect detailed feedback.

e. Analytics Tools

 Use heatmaps, click tracking, or session recording tools.


 Tools like Google Analytics, Hotjar, or Crazy Egg can provide insights on user
behavior patterns.

f. Interviews

 Conduct one-on-one interviews to delve deeper into user experiences.


 Can be done before or after the test for additional context.

g. Screen and Video Recording

 Record user interactions with the interface to analyze later.


 Include facial expressions and voice (if relevant) for emotional cues.

h. Logs and System Data

 Collect system-generated data such as:


o Error logs.
o Time stamps.
o Interaction counts.
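
As a hedged sketch of how such system data might be summarized, the snippet below parses a few hypothetical log lines (the timestamp/session/event format is invented, not a standard) and counts interactions and errors per event type.

```python
from collections import Counter

# Sketch: deriving simple counts from system-generated interaction logs.
# The log line format (timestamp, session=<id>, event=<name>) is an assumption.

log_lines = [
    "2024-05-01T10:00:02 session=42 event=page_view",
    "2024-05-01T10:00:09 session=42 event=click",
    "2024-05-01T10:00:11 session=42 event=error",
    "2024-05-01T10:00:40 session=43 event=click",
]

events = Counter()
sessions = set()
for line in log_lines:
    fields = dict(part.split("=") for part in line.split()[1:])  # {'session': '42', 'event': 'click'}
    sessions.add(fields["session"])
    events[fields["event"]] += 1

print("sessions observed:", len(sessions))     # 2
print("interaction counts:", dict(events))     # {'page_view': 1, 'click': 2, 'error': 1}
print("error count:", events["error"])         # 1
```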

Tools for Data Collection

 Screen Recording: Lookback, Camtasia, Zoom.


 Surveys: Google Forms, Typeform, SurveyMonkey.
 Analytics: Hotjar, Crazy Egg, FullStory.
 Task Metrics: Morae, Optimal Workshop.
 Eye-Tracking Tools: Tobii, GazePoint for visual behavior.

Data Analysis:

Data Analysis in usability studies involves interpreting the collected data to identify patterns,
insights, and areas for improvement. It combines quantitative metrics with qualitative
feedback to provide a holistic view of the user experience.

1. Organizing Data for Analysis

 Quantitative Data: Compile metrics like task completion rates, time on task, and
error rates into spreadsheets or databases.
 Qualitative Data: Transcribe notes, audio recordings, and observations. Organize
responses into themes or categories.
 Tools: Use software like Excel, SPSS, NVivo, or R for data management and
analysis.

2. Quantitative Data Analysis

 Focuses on measurable, numeric data to identify trends and performance indicators.


 Key Steps:
1. Calculate Descriptive Statistics:
 Task success rate (percentage of completed tasks).
 Average time on task.
 Frequency and severity of errors.
2. Identify Outliers:
 Detect anomalies in task completion times or error rates.
3. Compare Against Benchmarks:
 Use industry standards or previous studies as a baseline.
4. Statistical Analysis (if needed):
 Perform tests like t-tests or ANOVA for comparing groups (e.g.,
novice vs. expert users).
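
A brief sketch of these quantitative steps, assuming SciPy is available: it prints descriptive statistics for two groups and runs Welch's independent t-test to compare time on task between novice and expert participants. The timing values are invented for illustration.

```python
from statistics import mean, stdev
from scipy import stats  # assumes SciPy is installed

# Descriptive statistics plus an independent t-test on time on task (seconds).
# The data below are invented example values.

novice = [88, 95, 110, 102, 97, 120, 105]
expert = [61, 70, 58, 75, 66, 72, 64]

for label, times in (("novice", novice), ("expert", expert)):
    print(f"{label}: mean {mean(times):.1f} s, sd {stdev(times):.1f} s")

t_stat, p_value = stats.ttest_ind(novice, expert, equal_var=False)  # Welch's t-test
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```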

3. Qualitative Data Analysis

 Captures user perceptions, emotions, and experiences.


 Key Techniques:
1. Thematic Analysis:
 Identify recurring themes or patterns in user feedback.
 Example: “Confusion about navigation” might emerge as a theme.
2. Affinity Mapping:
 Group related observations and comments to identify common issues.
3. Sentiment Analysis:
 Categorize feedback as positive, negative, or neutral to gauge
satisfaction.
4. Behavioral Insights:
 Analyze think-aloud data to understand user reasoning and challenges.
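
Thematic analysis is fundamentally interpretive and done by people, but a crude first pass can be automated. The sketch below tags open-ended comments with candidate themes by keyword matching; the theme names and keywords are assumptions, and the output would only ever be a starting point for manual coding.

```python
# Deliberately simple first-pass sketch: tag open-ended feedback with candidate themes
# by keyword matching. The themes and keywords are illustrative assumptions.

themes = {
    "navigation": ["menu", "navigate", "find", "lost"],
    "performance": ["slow", "lag", "loading"],
    "terminology": ["confusing", "unclear", "jargon"],
}

comments = [
    "I got lost trying to find the settings menu.",
    "Pages were slow and kept loading forever.",
    "The labels were confusing, too much jargon.",
]

counts = {theme: 0 for theme in themes}
for comment in comments:
    text = comment.lower()
    for theme, keywords in themes.items():
        if any(k in text for k in keywords):
            counts[theme] += 1

print(counts)  # e.g. {'navigation': 1, 'performance': 1, 'terminology': 1}
```
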
4. Combining Quantitative and Qualitative Data

 Cross-reference findings:
o If a task shows a low success rate (quantitative), examine user feedback or
recordings (qualitative) to understand why.
o Pair time-on-task data with observations of user hesitations or errors.

5. Identifying Usability Issues

 Categorize problems based on:


o Frequency: How often the issue occurs.
o Severity: Impact on user experience (e.g., critical, major, minor).
o Persistence: Whether the issue affects all users or specific groups.
 Use a Priority Matrix:
o High Frequency + High Severity = Top Priority for resolution.
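
A simple sketch of such a priority matrix in code: each issue is classified by whether its frequency and severity cross illustrative thresholds. The thresholds, weights, and example issues are assumptions, not standard values.

```python
# Sketch of a priority matrix: rank usability issues by frequency and severity.
# Thresholds, weights, and the issue list are illustrative assumptions.

issues = [
    {"issue": "Checkout button hidden below fold", "frequency": 0.8, "severity": "critical"},
    {"issue": "Date format unclear",               "frequency": 0.3, "severity": "minor"},
    {"issue": "Search returns no-results page",    "frequency": 0.6, "severity": "major"},
]

severity_weight = {"minor": 1, "major": 2, "critical": 3}

def priority(issue):
    high_freq = issue["frequency"] >= 0.5             # affects at least half of participants
    high_sev = severity_weight[issue["severity"]] >= 2
    if high_freq and high_sev:
        return "top priority"
    if high_freq or high_sev:
        return "medium priority"
    return "low priority"

for i in sorted(issues, key=lambda x: (severity_weight[x["severity"]], x["frequency"]), reverse=True):
    print(f"{priority(i):>15}: {i['issue']}")
```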

3.12 Typical types of usability studies and their corresponding metrics

1. Task-Based Usability Testing

 Purpose: Assess how well users can complete specific tasks using the system.
 Metrics:
o Task Success Rate: Percentage of tasks completed successfully.
o Time on Task: Average time taken to complete a task.
o Error Rate: Number of errors made per task.
o Help Requests: Frequency of participants seeking assistance.
o Task Satisfaction Rating: User-reported ease of completing tasks.

2. Exploratory Usability Testing

 Purpose: Understand user behaviors, expectations, and challenges in early-stage
designs or prototypes.
 Metrics:
o Navigation Paths: Routes users take to accomplish a task.
o Think-Aloud Insights: Observations of user thought processes.
o Success Rate for Undefined Tasks: How intuitively users approach
ambiguous scenarios.
o Feature Discovery Rate: How many features users identify on their own.
3. Comparative Usability Testing

 Purpose: Compare the usability of two or more designs, products, or systems.


 Metrics:
o Task Completion Time: Compare average time across versions.
o Error Rates: Frequency of mistakes for each version.
o Satisfaction Scores: System Usability Scale (SUS) or Net Promoter Score
(NPS) for each option.
o Preference Rankings: User-stated preferences between systems.

4. Accessibility Testing

 Purpose: Ensure the product is usable for people with disabilities.


 Metrics:
o Screen Reader Compatibility: Percentage of tasks completed using assistive
tools.
o Keyboard Navigation Efficiency: Time and effort using only keyboard
inputs.
o Error Rates for Assistive Technology Users: Errors encountered by users
with disabilities.
o WCAG Compliance: Percentage adherence to Web Content Accessibility
Guidelines.

5. Remote Usability Testing

 Purpose: Test usability in users’ natural environments without a moderator.


 Metrics:
o Completion Rates: Tasks completed successfully in unsupervised conditions.
o Time on Task: Average time per task.
o Click Tracking: Number of clicks or taps needed to complete a task.
o Abandonment Rate: Percentage of users who quit tasks before completion.

6. First-Time Use or Learnability Testing

 Purpose: Evaluate how easily new users can learn and use the system.
 Metrics:
o Time to First Task Completion: Time taken to complete the first assigned
task.
o Error Rates During First Use: Mistakes made by first-time users.
o Retention Rate: Percentage of users who remember learned processes in
follow-up tests.
o Satisfaction After First Use: User-reported ease and comfort with initial use.

3.13 Comparative alternative designs

Comparative Alternative Designs refer to evaluating and comparing different design
options or iterations of a product or interface to determine which one best meets user needs
and achieves usability goals. This approach is commonly used during the design phase to
refine and optimize solutions.

1. Purpose of Comparative Alternative Designs

 To identify the most effective design for usability and user experience.
 To determine how different designs impact user behavior, satisfaction, and
performance.
 To inform decisions about the final design based on evidence rather than assumptions.

2. Types of Comparative Studies

a. A/B Testing

 Description: Compare two versions of a single variable (e.g., Design A vs. Design
B).
 Use Case: Determine which version performs better in terms of task success, user
preference, or other metrics.
 Example: Testing two layouts for a product page to see which leads to higher
conversion rates.

b. Multivariate Testing

 Description: Test multiple variables or design elements simultaneously (e.g., layout,
color, navigation).
 Use Case: Explore interactions between design elements and their combined impact.
 Example: Comparing combinations of button color, text style, and page layout.

c. Side-by-Side Comparisons
 Description: Present users with multiple designs at once for direct comparison.
 Use Case: Collect qualitative feedback on which design users find more appealing or
intuitive.
 Example: Comparing two dashboard designs for clarity and ease of use.

d. Sequential Testing

 Description: Present users with designs in sequence rather than side by side.
 Use Case: Evaluate designs independently to reduce bias from direct comparisons.
 Example: Testing one version of an app interface, followed by another, and asking
users to rate each.

e. Iterative Comparisons

 Description: Test successive iterations of a single design over time.


 Use Case: Optimize a design incrementally based on user feedback.
 Example: Refining a search feature through three iterations of testing.

3. Key Metrics for Comparison

 Quantitative Metrics:
o Task completion rates.
o Time on task.
o Error rates or frequency.
o Conversion rates (e.g., sign-ups, purchases).
o System Usability Scale (SUS) scores.
 Qualitative Metrics:
o User preferences and opinions.
o Verbal feedback from think-aloud sessions.
o Observations of user behavior and frustration points.

4. Best Practices for Comparative Design Testing

a. Define Clear Objectives

 Specify what you want to compare and the success criteria (e.g., ease of navigation,
visual appeal, task efficiency).

b. Use Controlled Variables


 Focus on one or a few design differences to isolate their effects (e.g., button
placement or font size).

c. Select the Right Participants

 Ensure participants represent your target audience for reliable insights.

d. Randomize Presentation

 Randomize the order in which designs are shown to prevent order bias.

e. Collect Both Quantitative and Qualitative Data

 Use metrics like completion time alongside user feedback for a holistic evaluation.

f. Analyze Data Objectively

 Use statistical analysis for quantitative data and thematic analysis for qualitative
insights.

3.14 Comparing with competition

1. Purpose of Comparing with Competition

 Benchmarking: Establish a baseline for usability by measuring your product against
industry standards.
 Competitive Analysis: Understand how competitors address similar user needs.
 Improvement Identification: Discover areas where your product can outperform
competitors.
 Market Positioning: Highlight unique value propositions in your product's design or
functionality.

2. Key Aspects to Compare

a. Usability Metrics

 Task success rate.


 Task completion time.
 Error rates.
 System Usability Scale (SUS) scores.
 Learnability for first-time users.

b. User Experience

 Visual design appeal.


 Navigation ease and clarity.
 Accessibility features.
 Responsiveness and performance.

c. Features and Functionality

 Range of features offered.


 Efficiency of workflows (e.g., checkout process, account setup).
 Customizability and personalization.

d. Customer Support

 Availability of help resources (FAQs, chatbots, support teams).


 Ease of finding and using support.
 User satisfaction with support quality.

e. Emotional Impact

 Overall user satisfaction.


 Likelihood of recommendation (Net Promoter Score - NPS).
 Emotional connection with the brand or interface.

3. Study Design for Competitive Analysis

a. Participant Recruitment

 Recruit participants who represent your target audience.


 Ensure participants are familiar with competitors’ products for more informed
feedback.

b. Define Comparable Tasks

 Choose similar tasks across products to ensure a fair comparison.


 Example: Searching for a product, signing up for an account, or completing a
purchase.
c. Collect Metrics

 Use the same usability metrics for all products.


 Record task performance, user preferences, and satisfaction ratings.

d. Use Surveys and Interviews

 Collect qualitative feedback on user perceptions of each product.


 Ask participants to compare the ease of use, design appeal, and overall satisfaction.

e. A/B Testing

 Present participants with your product and a competitor’s version to directly compare
usability and features.

3.15 COMPLETING A TASK OR TRANSACTION - USABILITY METRICS

Usability metrics are critical in assessing how effectively users can complete tasks or
transactions within a system, such as a digital banking platform or any user interface. These
metrics help gauge the ease, efficiency, and satisfaction with which users interact with a
system. Key usability metrics typically include:

1. Task Success Rate: Measures the percentage of tasks or transactions completed
successfully by users. A higher rate indicates that the system is easy to use and users
can achieve their goals.
2. Time on Task: The amount of time it takes for users to complete a task or
transaction. Shorter times usually imply that the system is efficient, although in some
cases, longer times may reflect a need for thoroughness or complexity.
3. Error Rate: The frequency of errors made by users during the task or transaction. A
lower error rate is indicative of an intuitive and user-friendly design. It could include
things like incorrect inputs, navigating to the wrong page, or failing to complete a
transaction.
4. Cognitive Load: Measures the mental effort required by users to complete a task.
This can be gauged using techniques like NASA-TLX (Task Load Index) or
subjective user ratings. Systems that minimize cognitive load allow users to perform
tasks without requiring too much mental effort.
5. User Satisfaction: This is often measured through surveys or interviews after a task is
completed, asking users about their experience. Satisfaction can be quantified through
tools like the System Usability Scale (SUS) or Net Promoter Score (NPS).
6. Task Abandonment Rate: The percentage of users who begin but do not finish a task
or transaction. High abandonment rates can indicate that users are frustrated or find
the process too complex.
7. Learnability: How easily new users can learn to use the system or complete a task.
This metric can be evaluated by tracking the time it takes for users to perform tasks
correctly on their first attempt.
8. Retention Rate: In the context of transactions, this measures how many users return
to the system to perform tasks or transactions after an initial use. High retention
indicates users find value in the service.

These usability metrics are essential for optimizing a system and ensuring that users have a
smooth, efficient, and satisfying experience when completing tasks or transactions.

Evaluating the impact of subtle changes in a system or interface (such as a digital banking
platform) involves understanding how small modifications influence user experience,
behavior, and overall performance. These changes can range from minor design tweaks, such
as altering button placement, font size, or color schemes, to slight adjustments in the process
flow or features.

Here’s a structured approach to evaluate the impact of subtle changes:

1. Define Clear Objectives

 What are the goals of the changes? This could be improving task efficiency,
reducing errors, enhancing visual appeal, or increasing user satisfaction.
 What specific metrics will be used to measure impact? These can include usability
metrics like task success rate, time on task, error rate, or satisfaction.

2. A/B Testing (Split Testing)

 What is A/B testing? A/B testing involves comparing two versions of a system or
interface: the original (A) and a modified version (B).
 How does it work? Randomly assign users to either the original or the new version,
and track their behavior and performance across the selected usability metrics.
 Analysis: Measure the differences between the two groups and determine whether the
changes had a statistically significant impact on performance or user experience.

3. Usability Testing

 User Feedback: Conduct usability testing where users are asked to complete tasks
using both the original and modified versions of the system. Observe their behaviors
and ask for subjective feedback (e.g., which version they prefer and why).
 Behavioral Metrics: Track the time it takes to complete tasks, error rates, and the
number of successful transactions on each version.

4. Cognitive Walkthroughs

 What is a cognitive walkthrough? This is a method where usability experts simulate
the user's journey through the system to identify potential problems caused by
changes.
 How does it work? Evaluators step through the interface and assess whether the
change has made it easier or more difficult for users to accomplish their goals. They
focus on identifying cognitive load, confusion, or obstacles introduced by the subtle
changes.

5. Surveys and Post-Task Questionnaires

 System Usability Scale (SUS): This is a widely-used survey to evaluate the perceived
usability of the system after a subtle change. Users rate their experience, and the
results can show if there has been an improvement or decline in overall satisfaction.
 Net Promoter Score (NPS): Measures users' likelihood to recommend the service to
others, indicating overall satisfaction and how changes affect user loyalty.

6. Behavioral Analytics

 Tracking User Interactions: Use analytics tools to track how users interact with the
interface, focusing on how the subtle changes affect user behavior. For example, did
users spend more time on certain steps? Did they complete tasks more efficiently?
Are there any shifts in user flow?
 Heatmaps: These show where users click most often or how they move their mouse
over the interface, helping identify areas that are either confusing or more engaging
after a subtle change.
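
For intuition, this is roughly what a heatmap tool does behind the scenes: click coordinates are binned into a coarse grid and counted per cell. The viewport size, grid resolution, and click data below are all made up.

```python
# Rough sketch of heatmap binning: count clicks per cell of a coarse screen grid.
# Viewport size, grid resolution, and click coordinates are invented examples.

WIDTH, HEIGHT = 1200, 800        # assumed viewport in pixels
COLS, ROWS = 6, 4                # coarse grid

clicks = [(100, 80), (110, 90), (620, 400), (640, 390), (1150, 760)]

grid = [[0] * COLS for _ in range(ROWS)]
for x, y in clicks:
    col = min(x * COLS // WIDTH, COLS - 1)
    row = min(y * ROWS // HEIGHT, ROWS - 1)
    grid[row][col] += 1

for row in grid:
    print(" ".join(str(c) for c in row))   # denser cells get higher counts
```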
7. Benchmarking Pre- and Post-Change Data

 Before and After Comparison: Collect baseline data (pre-change) on key
performance indicators (KPIs) such as task success rate, user satisfaction, and time on
task. Then, after implementing the change, compare the new data to the baseline.
 Statistical Significance: Use statistical methods (e.g., t-tests, chi-square tests) to
ensure that the differences observed are not due to random variation, especially when
dealing with subtle changes.

8. Long-Term Impact Assessment

 Retention and Repeat Usage: In some cases, the impact of subtle changes may not
be immediate. Assess long-term retention rates or repeated task completion to see if
the changes lead to more sustained user engagement.
 User Behavior Trends: Observe whether there are gradual changes in how users
interact with the platform over time, particularly if the subtle changes influence their
trust or familiarity with the system.

9. Contextual Inquiry or Interviews

 User Interviews: Conduct interviews with users to gain qualitative insights into their
experience with the subtle changes. This helps uncover deeper insights into how the
changes affected their overall perception of the system, beyond just task completion
metrics.
 Contextual Inquiry: Observe users in real-world settings as they interact with the
system. This can provide insights into whether the subtle changes are working as
intended or if unexpected problems arise.

Conclusion

The impact of subtle changes can often be significant, though not always immediately
apparent. A combination of quantitative (A/B testing, usability metrics) and qualitative (user
feedback, interviews) approaches will provide a comprehensive view of how the changes
have influenced the user experience. By carefully measuring both the objective outcomes and
subjective perceptions, you can assess whether the changes improve the overall usability and
effectiveness of the system.
