UNIT-4
software. For example, the regression test suite is executed at the end of each sprint.
When test suites are run repeatedly for every change in the program, it is of
enormous advantage to have as small a set of test cases as possible.
Minimizing a test suite has the following benefits:
Some test cases no longer contribute to the coverage criteria, because they are now obsolete or redundant.
Through minimization, these redundant test cases will be eliminated.
The sizes of test sets have a direct bearing on the cost of the project.
Test suite minimization techniques lower costs by reducing a test suite
to a minimal subset.
Reducing the size of a test suite decreases both the overhead of
maintaining the test suite and the number of test cases that must be
rerun after changes are made to the software, thereby reducing the
cost of regression testing.
1. Coverage-based Test Case Prioritization:
In this type of prioritization, test cases are ordered according to the
amount of code they cover.
Total Statement Coverage Prioritization – In this technique, the
total number of statements covered by a test case is used as a
factor to prioritize test cases. For example, a test case covering
10 statements will be given higher priority than a test case
covering 5 statements.
Additional Statement Coverage Prioritization – This technique
involves first selecting the test case with maximum statement
coverage, then iteratively selecting a test case that covers statements
left uncovered by the previously selected test cases. This process is
repeated till all statements have been covered (a greedy sketch of
this selection appears after this list).
Total Branch Coverage Prioritization – Using total branch
coverage as a factor for ordering test cases, prioritization can be
achieved. Here, branch coverage refers to coverage of each
possible outcome of a condition.
Additional Branch Coverage Prioritization – Similar to the
additional statement coverage technique, it first selects a test
case with maximum branch coverage and then iteratively selects
a test case that covers branch outcomes that were left uncovered
by the previously selected test cases.
Total Fault-Exposing-Potential Prioritization – Fault-exposing-
potential (FEP) refers to the ability of a test case to expose
faults. Statement and branch coverage techniques do not take
into account the fact that some bugs are more easily detected
than others, and that some test cases have more potential to
detect bugs than others.
FEP depends on:
1. Whether test cases cover faulty statements or not.
2. Probability that faulty statement will cause test case to fail.
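As a minimal sketch of the greedy selection used by the additional statement coverage technique above (written in Python; the test case names and statement numbers are made up for illustration):

# Greedy "additional statement coverage" prioritization (illustrative sketch).
# coverage maps each test case to the set of statement ids it executes.
coverage = {
    "t1": {1, 2, 3, 4, 5, 6, 7, 8, 9, 10},
    "t2": {1, 2, 3, 11, 12},
    "t3": {11, 12, 13, 14},
}

def prioritize_additional(coverage):
    remaining = dict(coverage)
    covered = set()
    order = []
    while remaining:
        # Pick the test case that covers the most statements not yet covered.
        best = max(remaining, key=lambda t: len(remaining[t] - covered))
        if not remaining[best] - covered:
            # No test case adds new coverage; order the rest by total coverage.
            order.extend(sorted(remaining, key=lambda t: -len(remaining[t])))
            break
        order.append(best)
        covered |= remaining.pop(best)
    return order

print(prioritize_additional(coverage))  # ['t1', 't3', 't2']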
2. Risk-based Prioritization
This technique uses risk analysis to identify potential problem areas
which, if they fail, could lead to severe consequences. Test cases are
therefore prioritized with these potential problem areas in mind. In risk
analysis, the following steps are performed:
Listing potential problems.
Assigning a probability of occurrence to each problem.
Calculating the severity of impact for each problem.
After performing the above steps, a risk analysis table is formed to
present results. The table consists of columns like Problem ID,
Potential problem identified, Severity of Impact, Risk exposure, etc.
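To make the table concrete, risk exposure is commonly computed as the probability of occurrence multiplied by the severity of impact; in the Python sketch below, the problems, probabilities, and severity scores are invented purely for illustration:

# Illustrative risk analysis table: risk exposure = probability x severity.
problems = [
    {"id": "P1", "problem": "Payment gateway timeout", "probability": 0.3, "severity": 9},
    {"id": "P2", "problem": "Incorrect report totals", "probability": 0.6, "severity": 5},
    {"id": "P3", "problem": "Minor UI misalignment", "probability": 0.8, "severity": 1},
]

for p in problems:
    p["risk_exposure"] = p["probability"] * p["severity"]

# Test cases covering the highest-exposure problem areas are prioritized first.
for p in sorted(problems, key=lambda x: x["risk_exposure"], reverse=True):
    print(p["id"], p["problem"], p["risk_exposure"])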
3. Prioritization using Relevant Slice
In this type of prioritization, a slicing technique is used. When a
program is modified, all existing regression test cases are executed
to make sure the program yields the same results as before, except
where it has been modified. For this purpose, we try to find the part
of the program that has been affected by the modification, and test
cases are then prioritized for this affected part. There are three
parts to the slicing technique (illustrated in a short sketch after this list):
Execution slice – The statements executed under a test case form
the execution slice.
Dynamic slice – The executed statements that might impact the
program output.
Relevant slice – Statements that are executed under a test case
and do not impact the program output, but may impact the output
of the test case.
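A small, hypothetical program can make the three slices concrete. The annotations below are for one assumed test case, compute(a=2, b=0); the function and variable names are invented for illustration:

# Slices for the test case compute(a=2, b=0); annotations are illustrative.
def compute(a, b):
    x = a * 2       # executed and affects the output -> execution, dynamic, relevant slice
    y = b + 1       # executed; does not affect this output, but would matter
                    # if the branch below were taken  -> execution, relevant slice
    if b > 0:       # executed (evaluates to False); a change to this predicate
                    # could change the output         -> execution, relevant slice
        x = x + y   # NOT executed for this test case -> in no slice
    return x        # executed and affects the output -> execution, dynamic, relevant slice

print(compute(2, 0))  # 4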
4. Requirements-based Prioritization
Some requirements are more important than others or are more
critical in nature, so test cases for such requirements should be
prioritized first. The following factors can be considered while
prioritizing test cases based on requirements (a weighted-score
sketch follows this list):
Customer-assigned priority – The customer assigns a weight to each
requirement according to their needs or understanding of the
product's requirements.
Developer-perceived implementation complexity – Priority is
assigned by the developer on the basis of the effort or time that
would be required to implement the requirement.
Requirement volatility – This factor reflects how frequently the
requirement changes.
Fault proneness of requirements – Priority is assigned based
on how error-prone the requirement has been in previous versions of
the software.
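One simple way to combine these factors is a weighted score per requirement, with test cases for the highest-scoring requirements run first. In the Python sketch below, the weights, the 1-10 factor scores, and the requirement names are all assumptions made for illustration:

# Weighted requirements-based prioritization (hypothetical weights and scores).
weights = {"customer_priority": 0.4, "complexity": 0.2, "volatility": 0.2, "fault_proneness": 0.2}

requirements = {
    "R1: login": {"customer_priority": 9, "complexity": 4, "volatility": 2, "fault_proneness": 7},
    "R2: reports": {"customer_priority": 6, "complexity": 8, "volatility": 6, "fault_proneness": 3},
    "R3: settings": {"customer_priority": 3, "complexity": 2, "volatility": 1, "fault_proneness": 2},
}

def score(factors):
    return sum(weights[f] * value for f, value in factors.items())

# Prints requirements in priority order: R1 (6.2), R2 (5.8), R3 (2.2).
for req, factors in sorted(requirements.items(), key=lambda kv: score(kv[1]), reverse=True):
    print(req, round(score(factors), 1))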
Metric for measuring Effectiveness of Prioritized
Test Suite
To measure how effective a prioritized test suite is, we can use a
metric called APFD (Average Percentage of Faults Detected). The
formula for APFD is given by:
APFD = 1 - (TF1 + TF2 + ... + TFm) / (n × m) + 1 / (2n)
where,
TFi = position of the first test case in test suite T that exposes fault i
m = total number of faults exposed under T
n = total number of test cases in T
APFD values range from 0 to 1 (equivalently, 0% to 100%). The higher
the APFD value, the faster the fault detection rate. Simply put, APFD
indicates how quickly a test suite can identify faults or bugs in the
software. If a test suite can detect faults quickly, it is considered
more effective and reliable.
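The following small Python computation applies the APFD formula above to a hypothetical prioritized suite of 5 test cases that exposes 4 faults; the fault positions are invented for the example:

# APFD = 1 - (TF1 + ... + TFm) / (n * m) + 1 / (2 * n)
def apfd(tf_positions, n):
    # tf_positions[i] is the 1-based position of the first test case exposing fault i.
    m = len(tf_positions)
    return 1 - sum(tf_positions) / (n * m) + 1 / (2 * n)

# Faults are first exposed by test cases at positions 1, 1, 2 and 4 in a suite of n = 5.
print(apfd([1, 1, 2, 4], n=5))  # 0.7 -> faults are, on average, detected early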
What is Software Quality?
Software quality shows how good and reliable a product is. As an
example, consider functionally correct software: it performs all
functions as laid out in the SRS document, but has an almost unusable
user interface. Even though it is functionally correct, we would not
consider it a high-quality product.
Another example is a product that has everything the users need, but
whose code is almost incomprehensible and unmaintainable. Therefore,
the traditional concept of quality as "fitness of purpose" is not
satisfactory for software products.
Factors of Software Quality
The modern view of high quality associates many quality factors with
software, such as the following:
Definition of Software Quality Metrics
Software quality metrics are standardized measurements used to assess the
quality of software products or processes. These metrics provide a quantitative
basis to evaluate various aspects of software development and maintenance, such
as functionality, reliability, performance, maintainability, and usability. By
analyzing these metrics, teams can identify areas for improvement and ensure the
software meets specified quality standards.
4. Performance Monitoring
Metrics such as defect leakage or root cause analysis help identify recurring
issues and prevent similar defects in the future.
Teams can implement targeted actions based on these insights to reduce
future defect rates.
6. Stakeholder Communication
Metrics provide clear and concise data to communicate progress, risks, and
quality status to stakeholders.
Example: Metrics like customer satisfaction (CSAT) scores can indicate end-
user happiness with the software.
7. Compliance and Benchmarking
2. Process Metrics: Assess the efficiency of development and testing
processes.
o Sprint Velocity
o Defect Removal Efficiency (DRE)
3. Project Metrics: Evaluate overall project performance.
o Schedule Variance (SV)
o Cost Performance Index (CPI)
By integrating these metrics into the software quality management process,
organizations can ensure robust, reliable, and user-centric software delivery.
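As a hedged sketch of how a few of the metrics named above are typically computed, the Python snippet below uses the common textbook formulas; all of the counts and monetary values are invented for illustration:

# Defect Removal Efficiency: share of all defects removed before release.
defects_before_release = 90
defects_after_release = 10
dre = defects_before_release / (defects_before_release + defects_after_release)
print("DRE:", dre)  # 0.9 -> 90%

# Defect leakage: share of defects that escaped to production.
print("Defect leakage:", defects_after_release / (defects_before_release + defects_after_release))  # 0.1

# Earned-value project metrics.
earned_value = 80_000    # value of work actually completed
planned_value = 100_000  # value of work planned to date
actual_cost = 90_000     # cost actually incurred
print("Schedule Variance (SV):", earned_value - planned_value)       # -20000 (behind schedule)
print("Cost Performance Index (CPI):", earned_value / actual_cost)   # ~0.89 (over budget)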
Debugging
Debugging in Software Engineering is the process of identifying
and resolving errors or bugs in a software system. It’s a critical
aspect of software development, ensuring quality, performance,
and user satisfaction. Despite being time-consuming,
effective debugging is essential for reliable and competitive
software products.
Process of Debugging
Debugging is a crucial skill in programming. Here’s a simple, step-
by-step explanation to help you understand and execute
the debugging process effectively:
Step 1: Reproduce the Bug
To start, you need to recreate the conditions that caused the
bug. This means making the error happen again so you can see it
firsthand.
Seeing the bug in action helps you understand the problem better
and gather important details for fixing it.
Step 2: Locate the Bug
Next, find where the bug is in your code. This involves looking
closely at your code and checking any error messages or logs.
Developers often use debugging tools to help with this step.
Step 3: Identify the Root Cause
Now, figure out why the bug happened. Examine the logic and
flow of your code and see how different parts interact under the
conditions that caused the bug.
This helps you understand what went wrong.
Step 4: Fix the Bug
Once you know the cause, fix the code. This involves making
changes and then testing the program to ensure the bug is gone.
Sometimes, you might need to try several times, as initial fixes
might not work or could create new issues.
Using a version control system helps track changes and undo any
that don’t solve the problem.
Step 5: Test the Fix
After fixing the bug, run tests to ensure everything works correctly.
These tests include:
Unit Tests: Check the specific part of the code that was changed.
Integration Tests: Verify the entire module where the bug was
found.
System Tests: Test the whole system to ensure overall
functionality.
Regression Tests: Make sure the fix didn’t cause any new
problems elsewhere in the application.
Step 6: Document the Process
Finally, record what you did. Write down what caused the bug,
how you fixed it, and any other important details.
This documentation is helpful if similar issues occur in the future.
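As a minimal illustration of Steps 1 and 5 using Python's built-in unittest module, the test below first reproduces a hypothetical off-by-one bug and is rerun after the fix to verify it; the function and the bug itself are invented:

import unittest

# Buggy version (hypothetical) divided by len(values) + 1, so average([1, 3, 5]) returned 2.25.
# Fixed version after Step 4:
def average(values):
    return sum(values) / len(values)

class TestAverage(unittest.TestCase):
    def test_reproduces_reported_bug(self):
        # Step 1: this input reproduced the bug; Step 5: it passes after the fix.
        self.assertEqual(average([1, 3, 5]), 3.0)

if __name__ == "__main__":
    unittest.main()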
Debugging Approaches/Strategies
1. Brute Force: Study the system for a long duration in order to
understand it. This helps the debugger construct different
representations of the system being debugged, depending on the
need. The system is also studied actively to find recent changes
made to the software.
2. Backtracking: Backward analysis of the problem which involves
tracing the program backward from the location of the failure
message to identify the region of faulty code. A detailed study of
the region is conducted to find the cause of defects.
3. Forward analysis of the program involves tracing the program
forwards using breakpoints or print statements at different points
in the program and studying the results. The region where the
wrong outputs are obtained is the region that needs to be focused
on to find the defect.
4. Using past experience: Debug the software by drawing on
experience with software that had problems similar in nature. The
success of this approach depends on the expertise of the debugger.
5. Cause elimination: This approach introduces the concept of binary
partitioning. Data related to the error occurrence are organized to
isolate potential causes.
6. Static analysis: Analyzing the code without executing it to
identify potential bugs or errors. This approach involves analyzing
code syntax, data flow, and control flow.
7. Dynamic analysis: Executing the code and analyzing its
behavior at runtime to identify errors or bugs. This approach
involves techniques like runtime debugging and profiling.
8. Collaborative debugging: Involves multiple developers working
together to debug a system. This approach is helpful in situations
where multiple modules or components are involved, and the root
cause of the error is not clear.
9. Logging and Tracing: Using logging and tracing tools to identify
the sequence of events leading up to the error. This approach
involves collecting and analyzing logs and traces generated by
the system during its execution (a short sketch follows this list).
10. Automated Debugging: The use of automated tools and
techniques to assist in the debugging process. These tools can
include static and dynamic analysis tools, as well as tools that use
machine learning and artificial intelligence to identify errors and
suggest fixes.
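A minimal sketch of the logging and tracing approach (item 9), using Python's standard logging module; the function and the traced values are invented for illustration:

import logging

# DEBUG level makes the trace visible while debugging.
logging.basicConfig(level=logging.DEBUG,
                    format="%(asctime)s %(levelname)s %(funcName)s: %(message)s")

def apply_discount(price, rate):
    logging.debug("inputs price=%s rate=%s", price, rate)    # trace the inputs
    discounted = price - price * rate
    logging.debug("computed discounted=%s", discounted)      # trace the intermediate result
    if discounted < 0:
        logging.warning("negative price %s, clamping to 0", discounted)
        discounted = 0
    return discounted

apply_discount(100, 1.2)  # the log sequence shows exactly where the unexpected value appears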
Bug correction, or defect fixing, is the process of identifying, analyzing, and resolving issues in
software to ensure it functions as expected. This process is critical in maintaining software
quality and delivering a reliable product. Below is a detailed explanation of bug correction in
software testing:
1.1 Identification
Bugs are identified during various testing phases (e.g., unit, integration, system, or user
acceptance testing).
Common sources include:
o Test execution results.
o Automated test reports.
o End-user feedback (post-deployment).
1.2 Logging
1.3 Analysis
1.4 Fixing
1.5 Verification
1.6 Closure
Once the fix is verified, the bug is marked as closed in the tracking system.
If the bug persists, it is reopened and the cycle repeats.
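The closure/reopen cycle described in 1.1-1.6 can be pictured as a small state machine. The status names below mirror common bug trackers, but the exact workflow is an assumption made for illustration:

# Simplified bug lifecycle (illustrative; real trackers add more states).
transitions = {
    "New": ["Assigned"],                 # 1.1-1.2 identified and logged
    "Assigned": ["In Progress"],         # 1.3 analysis begins
    "In Progress": ["Fixed"],            # 1.4 fix implemented
    "Fixed": ["Verified", "Reopened"],   # 1.5 verification passes or fails
    "Verified": ["Closed"],              # 1.6 closure
    "Reopened": ["In Progress"],         # the cycle repeats if the bug persists
    "Closed": [],
}

def can_move(current, target):
    return target in transitions.get(current, [])

print(can_move("Fixed", "Reopened"))  # True
print(can_move("Closed", "Fixed"))    # False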
2.1 Miscommunication
Lack of clear communication between testers and developers can lead to misunderstandings
about the bug.
Fixing a bug might inadvertently break another part of the application due to tightly coupled
modules.
Bugs might appear in production but not in the test environment due to differences in
configurations.
Deadlines might pressure developers to implement quick fixes, increasing the risk of introducing
new defects.
Some bugs, especially intermittent ones, are hard to reproduce and fix.
3.2 Prioritization
3.3 Root Cause Analysis (RCA)
Identify and address the underlying cause, not just the symptoms, to prevent recurrence.
Test the entire application after fixes to ensure other functionalities remain intact.
3.6 Collaboration
3.7 Documentation
Document the fix and its impact for future reference and knowledge sharing.
Common bug tracking tools:
Jira – Bug tracking, project management, reporting, and integration with CI tools.
Bugzilla – Open-source bug tracking with detailed workflows and history tracking.
MantisBT – Simple bug tracking with email notifications and access control.
Trello – Visual task management with bug tracking for small projects.