
UNIT-4

Efficient Test Suite Management

What is a Software Test Suite?


A test suite is a methodical arrangement of test cases developed to
validate specific functionalities. Each test case in a suite is created to
verify a particular functionality or performance goal. Together, the test
cases in a test suite are ultimately used to verify the quality and
dependability of the software.

What is Software Test Suite Composed of?


A test suite is composed of the items listed below −

 Test Cases − They describe the particular input situations, steps to
execute, and expected results needed to test a specific feature of the
software.
 Test Scripts − They are automated sequences of commands needed
to execute a test case. They can be developed in multiple languages
and are utilized to automate the testing activities.
 Test Data − They constitute the set of inputs required at the time
of test execution. They play a very important role in verifying
multiple scenarios and situations.

Types of Software Test Suites


The different types of software test suites are listed below −

1. Functional Test Suites


They are used to verify whether a particular functionality in the software
is working as expected, for example, the payment functionality of an
application.

2. Regression Test Suites


They are re-executed each time there are new code changes. They verify
that those changes have not impacted the existing functionalities of the
software. For example, a regression test suite is executed at the end of
each sprint.

3. Smoke Test Suites


They are executed to verify the basic functionalities on a new build of the
software and to confirm whether the same build can be used for further
testing.

4. Integration Test Suites


They are executed to verify the communication between various
modules after they have been integrated. For example, changes made
in the front end of the software should be reflected in the back end as
well.

Minimizing the Test Suite and its Benefits


A test suite can sometimes grow to an extent that it is nearly impossible to
execute. In this case, it becomes necessary to minimize the test suite while
preserving maximum coverage of the software. The following are the
reasons why minimization is important:

 Release date of the product is near
 Limited staff to execute all the test cases
 Limited test equipment or unavailability of testing tools

When test suites are run repeatedly for every change in the program, it is of
enormous advantage to have as small a set of test cases as possible.
Minimizing a test suite has the following benefits:

 Sometimes, as the test suite grows, it can become prohibitively
expensive to execute on new versions of the program. These test suites
will often contain test cases that are no longer needed to satisfy the
coverage criteria, because they are now obsolete or redundant.
Through minimization, these redundant test cases will be eliminated.
 The sizes of test sets have a direct bearing on the cost of the project.
Test suite minimization techniques lower costs by reducing a test suite
to a minimal subset.
 Reducing the size of a test suite decreases both the overhead of
maintaining the test suite and the number of test cases that must be
rerun after changes are made to the software, thereby, reducing the
cost of regression testing.

Thus, it is of great practical advantage to reduce the size of the test suite.

Test Case Prioritization in Software Testing

As the name suggests, test case prioritization refers to prioritizing
test cases in the test suite based on different factors. Factors could
be code coverage, risk/critical modules, functionality, features, etc.

Why should test cases be prioritized?


As the size of software increases, the test suite also grows bigger
and requires more effort to maintain. To detect bugs in software as
early as possible, it is important to prioritize test cases so that the
most important ones are executed first.
Types of Test Case Prioritization :
 General Prioritization: In this type of prioritization, test cases
that will be useful for the subsequent modified versions of the
product are prioritized. It does not require any information
regarding modifications made to the product.
 Version-Specific Prioritization: Test cases can also be
prioritized such that they are useful on a specific version of the
product. This type of prioritization requires knowledge about
changes that have been introduced in the product.
Prioritization Techniques

1. Coverage-based Test Case Prioritization :
This type of prioritization is based on code coverage, i.e. test cases
are prioritized based on the code they cover.
 Total Statement Coverage Prioritization – In this technique, the
total number of statements covered by a test case is used as a
factor to prioritize test cases. For example, a test case covering
10 statements will be given higher priority than a test case
covering 5 statements.
 Additional Statement Coverage Prioritization – This technique
involves iteratively selecting a test case with maximum statement
coverage, then selecting a test case that covers statements that
were left uncovered by the previous test case. This process is
repeated till all statements have been covered.
 Total Branch Coverage Prioritization – Using total branch
coverage as a factor for ordering test cases, prioritization can be
achieved. Here, branch coverage refers to coverage of each
possible outcome of a condition.
 Additional Branch Coverage Prioritization – Similar to the
additional statement coverage technique, it first selects a test
case with maximum branch coverage and then iteratively selects
a test case that covers branch outcomes that were left uncovered
by the previous test cases.
 Total Fault-Exposing-Potential Prioritization – Fault-exposing-
potential (FEP) refers to the ability of a test case to expose
faults. Statement and branch coverage techniques do not take
into account the fact that some bugs are more easily detected
than others, and that some test cases have more potential to
detect bugs than others.
 FEP depends on:
1. Whether test cases cover faulty statements or not.
2. Probability that faulty statement will cause test case to fail.
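The total and additional statement coverage techniques above can be sketched as simple greedy orderings; the test names and covered-statement sets below are hypothetical:

```python
# Sketch of total vs. additional statement coverage prioritization.
# `coverage` maps each test case to the set of statement ids it covers.
def total_coverage_order(coverage):
    # Order tests by total number of statements covered, descending.
    return sorted(coverage, key=lambda t: len(coverage[t]), reverse=True)

def additional_coverage_order(coverage):
    # Greedily pick the test covering the most still-uncovered statements.
    remaining = dict(coverage)
    covered, order = set(), []
    while remaining:
        best = max(remaining, key=lambda t: len(remaining[t] - covered))
        order.append(best)
        covered |= remaining.pop(best)
    return order

coverage = {
    "t1": {1, 2, 3, 4, 5},
    "t2": {1, 2, 3},
    "t3": {6, 7},
}
print(total_coverage_order(coverage))       # → ['t1', 't2', 't3']
print(additional_coverage_order(coverage))  # → ['t1', 't3', 't2']
```

Note how the two orderings differ: total coverage ranks t2 above t3, while additional coverage prefers t3 because it covers statements t1 left uncovered. The same greedy scheme applies to branch coverage by substituting branch outcomes for statement ids.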
2. Risk-based Prioritization
This technique uses risk analysis to identify potential problem areas
which, if they fail, could lead to serious consequences. Therefore,
test cases are prioritized keeping potential problem areas in mind. In
risk analysis, the following steps are performed :

 List potential problems.
 Assign a probability of occurrence to each problem.
 Calculate the severity of impact for each problem.
After performing the above steps, a risk analysis table is formed to
present results. The table consists of columns like Problem ID,
Potential problem identified, Severity of Impact, Risk exposure, etc.
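The risk analysis table can be sketched as a small computation in which risk exposure is taken as probability of occurrence times severity of impact; the problems and numbers below are made up for illustration:

```python
# Sketch of a risk analysis table: exposure = probability * severity.
problems = [
    {"id": "P1", "problem": "Payment gateway timeout",
     "probability": 0.3, "severity": 9},
    {"id": "P2", "problem": "Minor UI misalignment",
     "probability": 0.8, "severity": 2},
]

for p in problems:
    p["risk_exposure"] = p["probability"] * p["severity"]

# Test cases for the highest-exposure problems are executed first.
ranked = sorted(problems, key=lambda p: p["risk_exposure"], reverse=True)
for p in ranked:
    print(p["id"], p["problem"], p["risk_exposure"])
```

Here the rarer but severe payment problem outranks the frequent cosmetic one, which is exactly the trade-off risk-based prioritization is meant to capture.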
3. Prioritization using Relevant Slice
In this type of prioritization, a slicing technique is used – when a
program is modified, all existing regression test cases are executed
to make sure the program yields the same results as before, except
where it has been modified. For this purpose, we try to find the
part of the program that has been affected by the modification, and
then prioritize test cases for this affected part. There are three
parts to the slicing technique :
 Execution slice – The statements executed under a test case
form the execution slice.
 Dynamic slice – The statements executed under a test case that
might impact the program output.
 Relevant Slice – The statements that are executed under a test
case and don’t have any impact on the program output but may
impact the output of the test case.
4. Requirements-based Prioritization
Some requirements are more important than others or are more
critical in nature; hence, test cases for such requirements should be
prioritized first. The following factors can be considered while
prioritizing test cases based on requirements :
 Customer-assigned priority – The customer assigns a weight to
each requirement according to their needs or understanding of
the product’s requirements.
 Developer-perceived implementation complexity – Priority is
assigned by the developer on the basis of the effort or time that
would be required to implement the requirement.
 Requirement volatility – This factor captures how frequently a
requirement changes.

 Fault proneness of requirements – Priority is assigned based
on how error-prone the requirement has been in previous versions
of the software.
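The factors above can be combined into a single priority score; the following is a minimal sketch in which each factor gets a weight and requirements with higher scores are tested first (the weights, requirement names, and factor values are all hypothetical):

```python
# Weighted-score sketch of requirements-based prioritization.
# Weights and factor ratings below are illustrative assumptions.
WEIGHTS = {"customer_priority": 0.4, "complexity": 0.2,
           "volatility": 0.2, "fault_proneness": 0.2}

requirements = {
    "login":   {"customer_priority": 9, "complexity": 4,
                "volatility": 2, "fault_proneness": 7},
    "reports": {"customer_priority": 5, "complexity": 8,
                "volatility": 6, "fault_proneness": 3},
}

def score(factors):
    # Weighted sum of the four prioritization factors.
    return sum(WEIGHTS[k] * v for k, v in factors.items())

# Test cases for higher-scoring requirements are executed first.
ranked = sorted(requirements, key=lambda r: score(requirements[r]), reverse=True)
print(ranked)  # → ['login', 'reports']
```

In practice the weights would come from project stakeholders rather than being fixed in code.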
Metric for Measuring Effectiveness of a Prioritized Test Suite
To measure how effective a prioritized test suite is, we can use a
metric called APFD (Average Percentage of Faults Detected). The
formula for APFD is given by :
APFD = 1 - ((TF1 + TF2 + ... + TFm) / (n × m)) + 1 / (2n)

where,
TFi = position of first Test case in Test suite T that
exposes Fault i
m = total number of Faults exposed under T
n = total number of Test cases in T
APFD values range from 0 to 1 (often expressed as a percentage from
0 to 100). The higher the APFD value, the faster the fault detection
rate. Simply put, APFD indicates how quickly a test suite can identify
faults or bugs in software. If a test suite can detect faults quickly, it is
considered more effective and reliable.
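The APFD formula above translates directly into code; a minimal sketch (the fault positions in the example are made up):

```python
# Compute APFD from the formula above.
# tf_positions holds TFi: the 1-based position of the first test case
# in the prioritized suite that exposes fault i.
def apfd(tf_positions, n):
    m = len(tf_positions)  # total number of faults exposed under T
    return 1 - sum(tf_positions) / (n * m) + 1 / (2 * n)

# Example: 5 test cases; 4 faults first exposed by tests 1, 1, 2, and 3.
print(apfd([1, 1, 2, 3], n=5))  # → 0.75
```

A suite that exposes all faults in its very first test case approaches the maximum APFD of 1 - 1/(2n).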
What is Software Quality?
Software quality shows how good and reliable a product is. As an
example, consider functionally correct software: it performs all the
functions specified in the SRS document, but has an almost unusable
user interface. Even though it is functionally correct, we do not
consider it a high-quality product.
Another example is a product that has everything the users need but
whose code is almost incomprehensible and unmaintainable. Therefore,
the traditional concept of quality as “fitness of purpose” is not
satisfactory for software products.
Factors of Software Quality

The modern view associates software quality with several quality
factors, such as the following:

1. Portability: Software is said to be portable if it can easily be
made to work in different operating system environments, on
different machines, and with other software products.
2. Usability: Software has good usability if different categories of
users (i.e. expert and novice users) can easily invoke its
functions.
3. Reusability: Software has good reusability if different modules
of the product can easily be reused to develop new products.
4. Correctness: Software is correct if the different requirements
specified in the SRS document have been correctly implemented.
5. Maintainability: Software is maintainable if errors can be easily
corrected as and when they show up, new functions can be easily
added to the product, and the functionalities of the product can
be easily modified.
6. Reliability: Software is more reliable if it has fewer failures. Since
software engineers do not deliberately plan for their software to
fail, reliability depends on the number and type of mistakes they
make. Designers can improve reliability by ensuring the software
is easy to implement and change, by testing it thoroughly, and
also by ensuring that if failures occur, the system can handle
them or can recover easily.
7. Efficiency: The more efficient software is, the less it uses of
CPU-time, memory, disk space, network bandwidth, and other
resources. This is important to customers in order to reduce their
costs of running the software, although with today’s powerful
computers, CPU time, memory and disk usage are less of a
concern than in years gone by.

7
Definition of Software Quality Metrics
Software quality metrics are standardized measurements used to assess the
quality of software products or processes. These metrics provide a quantitative
basis to evaluate various aspects of software development and maintenance, such
as functionality, reliability, performance, maintainability, and usability. By
analyzing these metrics, teams can identify areas for improvement and ensure the
software meets specified quality standards.

Role of Software Quality Metrics in Software Quality Management

Software quality metrics play a crucial role in ensuring high-quality software
development and delivery by supporting several key aspects of software quality
management:

1. Measurement and Evaluation

 Metrics provide quantifiable data to assess software quality attributes like
defect density, code complexity, or response time.
 They help evaluate whether the software aligns with user requirements and
organizational quality standards.
2. Process Improvement

 By tracking metrics, teams can identify inefficiencies or bottlenecks in the
development process.
 Metrics like defect injection rate or mean time to failure (MTTF) guide
improvements in testing, coding, and deployment phases.
3. Decision-Making

 Metrics offer actionable insights for informed decision-making regarding
resource allocation, project timelines, and risk management.
 Example: If the defect density is high, more resources can be allocated for
testing.

4. Performance Monitoring

 They enable continuous monitoring of team and process performance over
time.
 Metrics like sprint velocity or test coverage ensure the development team
stays on track with goals.
5. Defect Prevention and Control

 Metrics such as defect leakage or root cause analysis help identify recurring
issues and prevent similar defects in the future.
 Teams can implement targeted actions based on these insights to reduce
future defect rates.
6. Stakeholder Communication

 Metrics provide clear and concise data to communicate progress, risks, and
quality status to stakeholders.
 Example: Metrics like customer satisfaction (CSAT) scores can indicate end-
user happiness with the software.
7. Compliance and Benchmarking

 Metrics ensure the software complies with industry standards, regulations,
or predefined benchmarks.
 Example: ISO/IEC 9126 metrics evaluate software quality across dimensions
such as maintainability and portability.

Examples of Software Quality Metrics

1. Product Metrics: Measure software attributes like functionality,
complexity, and maintainability.
o Defect Density
o Code Coverage
o Cyclomatic Complexity

2. Process Metrics: Assess the efficiency of development and testing
processes.
o Sprint Velocity
o Defect Removal Efficiency (DRE)
3. Project Metrics: Evaluate overall project performance.
o Schedule Variance (SV)
o Cost Performance Index (CPI)
By integrating these metrics into the software quality management process,
organizations can ensure robust, reliable, and user-centric software delivery.
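Two of the metrics named above, defect density and Defect Removal Efficiency (DRE), can be sketched as simple computations (the defect counts and code size below are made up for illustration):

```python
# Minimal sketches of two common quality metrics.
def defect_density(defects, kloc):
    # Defects per thousand lines of code (KLOC).
    return defects / kloc

def defect_removal_efficiency(found_before_release, found_after_release):
    # DRE: share of all defects that were removed before release, as a percentage.
    total = found_before_release + found_after_release
    return 100 * found_before_release / total

print(defect_density(30, 15))             # → 2.0 defects per KLOC
print(defect_removal_efficiency(90, 10))  # → 90.0 percent
```

A high defect density would argue for allocating more testing resources, as noted under Decision-Making above.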

Debugging
Debugging in Software Engineering is the process of identifying
and resolving errors or bugs in a software system. It’s a critical
aspect of software development, ensuring quality, performance,
and user satisfaction. Despite being time-consuming,
effective debugging is essential for reliable and competitive
software products.
Process of Debugging
Debugging is a crucial skill in programming. Here’s a simple, step-
by-step explanation to help you understand and execute
the debugging process effectively:
Step 1: Reproduce the Bug
 To start, you need to recreate the conditions that caused the
bug. This means making the error happen again so you can see it
firsthand.
 Seeing the bug in action helps you understand the problem better
and gather important details for fixing it.
Step 2: Locate the Bug
 Next, find where the bug is in your code. This involves looking
closely at your code and checking any error messages or logs.
 Developers often use debugging tools to help with this step.

Step 3: Identify the Root Cause
 Now, figure out why the bug happened. Examine the logic and
flow of your code and see how different parts interact under the
conditions that caused the bug.
 This helps you understand what went wrong.
Step 4: Fix the Bug
 Once you know the cause, fix the code. This involves making
changes and then testing the program to ensure the bug is gone.
 Sometimes, you might need to try several times, as initial fixes
might not work or could create new issues.
 Using a version control system helps track changes and undo any
that don’t solve the problem.
Step 5: Test the Fix
After fixing the bug, run tests to ensure everything works correctly.
These tests include:
 Unit Tests: Check the specific part of the code that was changed.
 Integration Tests: Verify the entire module where the bug was
found.
 System Tests: Test the whole system to ensure overall
functionality.
 Regression Tests: Make sure the fix didn’t cause any new
problems elsewhere in the application.
Step 6: Document the Process
 Finally, record what you did. Write down what caused the bug,
how you fixed it, and any other important details.
 This documentation is helpful if similar issues occur in the future.

Debugging Approaches/Strategies
1. Brute Force: Study the system for a long duration to understand
it. This helps the debugger construct different representations of
the system being debugged, depending on the need. The system
is also actively studied to find recent changes made to the
software.
2. Backtracking: Backward analysis of the problem, which involves
tracing the program backward from the location of the failure
message to identify the region of faulty code. A detailed study of
the region is conducted to find the cause of the defects.
3. Forward analysis: Tracing the program forward using breakpoints
or print statements at different points in the program and studying
the results. The region where wrong outputs are obtained is the
region to focus on to find the defect.
4. Using past experience: Debug the software using experience
gained from debugging software with problems similar in nature.
The success of this approach depends on the expertise of the
debugger.
5. Cause elimination: This approach introduces the concept of binary
partitioning. Data related to the error occurrence are organized to
isolate potential causes.
6. Static analysis: Analyzing the code without executing it to
identify potential bugs or errors. This approach involves analyzing
code syntax, data flow, and control flow.
7. Dynamic analysis: Executing the code and analyzing its
behavior at runtime to identify errors or bugs. This approach
involves techniques like runtime debugging and profiling.
8. Collaborative debugging: Involves multiple developers working
together to debug a system. This approach is helpful in situations
where multiple modules or components are involved, and the root
cause of the error is not clear.
9. Logging and Tracing: Using logging and tracing tools to identify
the sequence of events leading up to the error. This approach
involves collecting and analyzing logs and traces generated by
the system during its execution.
10. Automated Debugging: The use of automated tools and
techniques to assist in the debugging process. These tools can
include static and dynamic analysis tools, as well as tools that use
machine learning and artificial intelligence to identify errors and
suggest fixes.
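The binary-partitioning idea behind cause elimination can be sketched as a bisection over a list of changes, similar to what `git bisect` automates; the `changes` list and `passes` predicate below are hypothetical stand-ins for real commits and a real test run:

```python
# Bisect a sequence of changes to find the first one that breaks a test.
# Assumes the test passes for every change before the bad one and fails
# for every change at or after it.
def first_bad_change(changes, passes):
    lo, hi = 0, len(changes) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if passes(changes[mid]):
            lo = mid + 1   # bug was introduced after mid
        else:
            hi = mid       # mid is bad, or the bug is even earlier
    return lo

# Pretend changes 0-5 are good and 6-9 are bad.
changes = list(range(10))
print(first_bad_change(changes, passes=lambda c: c < 6))  # → 6
```

Each iteration halves the search space, so a culprit among n changes is isolated in about log2(n) test runs.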

Correcting Bugs in Software Testing

Bug correction, or defect fixing, is the process of identifying, analyzing, and resolving issues in
software to ensure it functions as expected. This process is critical in maintaining software
quality and delivering a reliable product. Below is a detailed explanation of bug correction in
software testing:

1. Steps in Bug Correction

1.1 Identification

 Bugs are identified during various testing phases (e.g., unit, integration, system, or user
acceptance testing).
 Common sources include:
o Test execution results.
o Automated test reports.
o End-user feedback (post-deployment).

1.2 Logging

 Bugs are logged in a bug-tracking system like Jira, Bugzilla, or Redmine.


 A proper bug report includes:
o Description: A clear summary of the issue.
o Steps to Reproduce: Detailed actions leading to the bug.
o Expected vs. Actual Behavior: How the software should behave versus how it behaves.
o Severity and Priority: Impact of the bug on the application.
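As a sketch, the bug-report fields listed above could be captured in a small data structure; the field names are illustrative, not the schema of any particular tracker:

```python
# Hypothetical representation of a bug report with the fields above.
from dataclasses import dataclass

@dataclass
class BugReport:
    description: str            # clear summary of the issue
    steps_to_reproduce: list    # detailed actions leading to the bug
    expected_behavior: str      # how the software should behave
    actual_behavior: str        # how it actually behaves
    severity: str               # e.g. "critical", "major", "minor"
    priority: str               # e.g. "high", "medium", "low"
    status: str = "open"        # lifecycle state in the tracker

bug = BugReport(
    description="Checkout button unresponsive on mobile",
    steps_to_reproduce=["Open cart on mobile", "Tap Checkout"],
    expected_behavior="Checkout page opens",
    actual_behavior="Nothing happens",
    severity="major",
    priority="high",
)
print(bug.status)  # → open
```

In a real tracker such as Jira or Bugzilla, the status field would move through states like open, in progress, resolved, and closed as the cycle in sections 1.3 through 1.6 plays out.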

1.3 Analysis

 Developers analyze the bug to:


o Understand the root cause.
o Assess its scope and impact on the system.
 Techniques like debugging, log analysis, and error tracing are used.

1.4 Fixing

 Developers modify the code to address the bug.


 They ensure that:
o The fix resolves the issue without introducing new defects.
o The code adheres to coding standards and guidelines.

1.5 Verification

 Fixed bugs are re-tested by the testing team.


 Types of testing performed:
o Regression Testing: Ensures the fix hasn’t affected other parts of the system.
o Confirmation Testing: Confirms the specific bug is resolved.

1.6 Closure

 Once the fix is verified, the bug is marked as closed in the tracking system.
 If the bug persists, it is reopened and the cycle repeats.

2. Challenges in Bug Correction

2.1 Miscommunication

 Lack of clear communication between testers and developers can lead to misunderstandings
about the bug.

2.2 Complex Dependencies

 Fixing a bug might inadvertently break another part of the application due to tightly coupled
modules.

2.3 Environment Mismatch

 Bugs might appear in production but not in the test environment due to differences in
configurations.

2.4 Time Constraints

 Deadlines might pressure developers to implement quick fixes, increasing the risk of introducing
new defects.

2.5 Debugging Issues

 Some bugs, especially intermittent ones, are hard to reproduce and fix.

3. Best Practices for Bug Correction

3.1 Clear and Detailed Bug Reports

 Provide comprehensive details to minimize ambiguity for developers.

3.2 Prioritization

 Fix critical and high-priority bugs first to ensure system stability.

3.3 Root Cause Analysis (RCA)

 Identify and address the underlying cause, not just the symptoms, to prevent recurrence.

3.4 Regression Testing

 Test the entire application after fixes to ensure other functionalities remain intact.

3.5 Continuous Integration/Continuous Deployment (CI/CD)

 Automate the build and testing process to catch issues early.

3.6 Collaboration

 Foster effective communication between testers, developers, and stakeholders.

3.7 Documentation

 Document the fix and its impact for future reference and knowledge sharing.

4. Tools for Bug Tracking and Correction


Jira: Bug tracking, project management, reporting, and integration with CI tools.

Bugzilla: Open-source bug tracking with detailed workflows and history tracking.

Redmine: Issue tracking, project management, and customizable workflows.

MantisBT: Simple bug tracking with email notifications and access control.

Trello: Visual task management with bug tracking for small projects.

5. Importance of Correcting Bugs

 Improved Software Quality: Eliminates issues that degrade performance or functionality.


 Enhanced User Satisfaction: Fixes ensure a smoother user experience.
 Cost Reduction: Addressing bugs early avoids expensive fixes later in the development cycle.
 Compliance: Fixes ensure adherence to industry and regulatory standards.
 Reputation Management: Timely fixes protect the product’s reputation in the market.

