8 - Testing
Background
Main objectives of a project: High Quality
& High Productivity (Q&P)
Quality has many dimensions
reliability, maintainability, interoperability etc.
Reliability is perhaps the most important
Reliability relates to the chances of the software failing
More defects => more chances of failure
=> lower reliability
Hence Q goal: Have as few defects as
possible in the delivered software
Faults & Failure
Failure: A software failure occurs if the
behavior of the s/w is different from
expected/specified.
Fault: cause of software failure
Fault = bug = defect
Failure implies presence of defects
A defect has the potential to cause
failure.
Definition of a defect is environment and
project specific
Role of Testing
Reviews are human processes - they cannot
catch all defects
Hence there will be requirement defects,
design defects and coding defects in code
These defects have to be identified by testing
Therefore testing plays a critical role in
ensuring quality.
All defects remaining from before as well as
new ones introduced have to be identified by
testing.
Detecting defects in
Testing
During testing, a program is executed
with a set of test cases
Failure during testing => defects are
present
No failure => confidence grows, but we
cannot say "defects are absent"
Testing can show the presence of defects but
not their absence
Defects detected through failures
To detect defects, must cause failures
during testing
Test Oracle
To check if a failure has occurred
when executed with a test case,
we need to know the correct
behavior
i.e. we need a test oracle, which is
often a human
Human oracle makes each test
case expensive as someone has to
check the correctness of its output
Role of Test cases
Ideally would like the following for test cases
No failure implies “no defects” or “high quality”
If defects present, then some test case causes a failure
Psychology of testing is important
the aim should be to 'reveal' defects (not to show that the software works!)
test cases must be "destructive"
aim at breaking the system
Role of test cases is clearly very critical
Only if the test cases are "good" does the
confidence increase after testing
Test case design
During test planning, have to design a
set of test cases that will detect defects
present
Some criteria needed to guide test case
selection
Two approaches to design test cases
functional or black box
structural or white box
Both are complementary; we discuss a
few approaches/criteria for both
Black Box testing
Equivalence Class
partitioning
Divide the input space into equivalence
classes
If the software works for a test case from a
class, it is likely to work for all inputs in that class
Can reduce the set of test cases if such
equivalence classes can be identified
Getting ideal equivalence classes is
impossible
Approximate them by identifying classes for
which different behavior is specified
Equivalence class partitioning..
Every condition specified on an input defines an
equivalence class
Define invalid equivalence classes also
E.g. if the range 0 < value < Max is specified
this range is the valid class
input < 0 is an invalid class
Equivalence class…
Once eq classes are selected for each of
the inputs, test cases have to be
selected
Select each test case to cover as many
valid eq classes as possible
Or, have each test case cover at
most one valid class for each input
Plus a separate test case for each
invalid class (a sketch follows below)
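A minimal sketch of this kind of selection for the range example above (0 < value < Max): one test case covering the valid class, plus a separate test case per invalid class. The function check_value and the value Max = 100 are assumptions made only for illustration, not from the slides.

    # Hypothetical unit under test: accepts a value only if 0 < value < MAX.
    MAX = 100

    def check_value(value):
        return 0 < value < MAX

    # One test case from the valid class, plus one per invalid class.
    test_cases = [
        (50, True),    # valid class: 0 < value < MAX
        (-5, False),   # invalid class: value < 0
        (150, False),  # invalid class (assumed): value >= MAX
    ]

    for value, expected in test_cases:
        assert check_value(value) == expected, f"failed for value={value}"
    print("all equivalence-class test cases passed")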
Example
Consider a program that takes 2
inputs – a string s and an integer n
The program determines the n most
frequent characters
The tester believes that the programmer
may deal with different types of chars
separately
A set of valid and invalid
equivalence classes is given
Example..
[Table: Input | Valid Eq Class | Invalid Eq Class]
Example…
Test cases (i.e. s, n) with the first approach
s: a str of len < N with lower case, upper case,
numbers, and special chars, and n = 5
Plus test cases for each of the invalid eq classes
Total test cases: 1 + 3 = 4
With the second approach
A separate str for each type of char (i.e. a str of
numbers, one of lower case, …) + invalid cases
Total test cases will be 5 + 2 = 7 (a sketch follows below)
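A sketch of what the test cases of the second approach might look like in code. The function most_frequent and the exact make-up of the 5 valid and 2 invalid classes are assumptions; the slides give only the counts.

    from collections import Counter

    def most_frequent(s, n):
        # hypothetical program under test: the n most frequent chars of s
        if not s or n <= 0:
            raise ValueError("s must be non-empty and n must be positive")
        return [ch for ch, _ in Counter(s).most_common(n)]

    # Valid classes (assumed): one string per character type, plus a mixed one.
    valid_strings = ["abcabca", "ABCABCA", "1231231", "!?!?.!?", "aA1!aA1"]
    for s in valid_strings:
        assert len(most_frequent(s, 2)) == 2, f"unexpected result for {s!r}"

    # Invalid classes (assumed): empty string, non-positive n.
    for s, n in [("", 2), ("abc", 0)]:
        try:
            most_frequent(s, n)
            assert False, f"expected an error for ({s!r}, {n})"
        except ValueError:
            pass
    print("5 valid + 2 invalid test cases executed")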
Boundary value analysis
Types of structural testing
Control flow based criteria
Branch coverage
Criterion: Each edge should be
traversed at least once during testing
i.e. each decision must evaluate to both
true and false during testing
Branch coverage implies stmt coverage
If there are multiple conditions in a decision,
not every condition needs to evaluate to both
T and F (see the sketch below)
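A small sketch of this point, using a made-up function with one decision that contains two conditions.

    def classify(x, y):
        # one decision with two conditions
        if x > 0 and y > 0:
            return "both positive"
        return "not both positive"

    # Two test cases are enough for branch coverage: the decision as a
    # whole evaluates to True once and to False once.
    assert classify(1, 1) == "both positive"        # decision is True
    assert classify(-1, 1) == "not both positive"   # decision is False
    # Note: the condition y > 0 is never made to evaluate to False, yet
    # branch coverage is achieved; evaluating every condition both ways
    # (condition coverage) would need more test cases.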
Control flow based…
There are other criteria too - path
coverage, predicate coverage,
cyclomatic complexity based, ...
None is sufficient to detect all types of
defects (e.g. paths missing from a
program cannot be detected)
They provide some quantitative handle
on the breadth of testing
More used to evaluate the level of
testing rather than selecting test cases
Testing Process
Testing
Testing only reveals the presence of defects
Does not identify nature and location of
defects
Identifying & removing the defect => role of
debugging and rework
Preparing test cases, performing testing,
defects identification & removal all consume
effort
Overall, testing becomes very expensive:
30-50% of development cost
Incremental Testing
Goals of testing: detect as many defects as
possible, and keep the cost low
Both frequently conflict - increasing testing can
catch more defects, but cost also goes up
Incremental testing - add untested parts
incrementally to tested portion
For achieving goals, incremental testing
essential
helps catch more defects
helps in identification and removal
Testing of large systems is always incremental
Integration and Testing
Incremental testing requires incremental
'building', i.e. incrementally integrating parts
to form the system
Integration & testing are related
During coding, different modules are coded
separately
Integration - the order in which they should
be tested and combined
Integration is driven mostly by testing needs
Top-down and Bottom-up
System: a hierarchy of modules
Modules coded separately
Integration can start from bottom or top
Bottom-up requires test drivers
Top-down requires stubs
Both may be used, e.g. for user interfaces
top-down; for services bottom-up
Drivers and stubs are pieces of code written
only for testing (sketched below)
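A sketch of a stub and a driver, with made-up module and function names. In top-down integration the stub stands in for a lower-level unit that is not ready; in bottom-up integration the driver is throwaway code that calls the unit under test.

    def fetch_orders_stub(customer_id):
        # stub: returns canned data instead of calling the real data layer
        return [{"id": 1, "amount": 120.0}, {"id": 2, "amount": 80.0}]

    def order_total(customer_id, fetch_orders=fetch_orders_stub):
        # higher-level unit under test; it calls whatever fetch function
        # it is given (the real module later, the stub for now)
        return sum(order["amount"] for order in fetch_orders(customer_id))

    def driver():
        # driver: exists only to feed test inputs and check the output
        assert order_total(42) == 200.0
        print("order_total passed with the stub")

    if __name__ == "__main__":
        driver()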
Levels of Testing
The code contains requirement defects,
design defects, and coding defects
Nature of defects is different for
different injection stages
A single type of testing cannot detect
all the different types of defects
Different levels of testing are used to
uncover these defects
[Figure: levels of testing - user needs are checked by acceptance testing]
Integration Testing
Focuses on interaction of modules in a
subsystem
Unit tested modules combined to form
subsystems
Test cases "exercise" the interaction
of modules in different ways (a sketch follows below)
May be skipped if the system is not too
large
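A sketch of an integration test in this spirit: two small unit-tested modules (both invented for illustration) are combined, and the test exercises the way one module's output feeds the other.

    def parse_record(line):
        # module A: turns a "name,score" line into a (name, score) pair
        name, score = line.split(",")
        return name.strip(), int(score)

    def format_report(records):
        # module B: renders the pairs produced by module A
        return [f"{name}: {score}" for name, score in records]

    def test_modules_work_together():
        lines = ["alice, 90", "bob, 75"]
        report = format_report(parse_record(line) for line in lines)
        assert report == ["alice: 90", "bob: 75"]

    test_modules_work_together()
    print("integration test passed")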
System Testing
Entire software system is tested
Focus: does the software implement
the requirements?
Validation exercise for the system with
respect to the requirements
Generally the final testing stage
before the software is delivered
May be done by independent people
Defects removed by developers
Most time consuming test phase
Acceptance Testing
Focus: Does the software satisfy user needs?
Generally done by end users/customer in
customer environment, with real data
Only after successful AT is the software
deployed
Any defects found are removed by the
developers
Acceptance test plan is based on the
acceptance test criteria in the SRS
Other forms of testing
Performance testing
tools needed to “measure” performance
Stress testing
load the system to peak, load generation tools
needed
Regression testing
test that previous functionality works alright
important when changes are made
Previous test records are needed for
comparisons
Prioritization of test cases is needed when the complete
test suite cannot be executed for a change (a sketch follows below)
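A minimal regression-testing sketch: the outputs recorded when an earlier version was tested and accepted are replayed against the current version. The function and the recorded values are invented for illustration.

    def discount(price):
        # current version of a function under regression test
        return round(price * 0.9, 2)

    # previous test records: inputs and the outputs observed and accepted
    # for the earlier version
    previous_records = {10.0: 9.0, 25.5: 22.95, 100.0: 90.0}

    for price, expected in previous_records.items():
        actual = discount(price)
        assert actual == expected, (
            f"regression at price={price}: got {actual}, expected {expected}")
    print("no regressions detected")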
Test Plan
Testing usually starts with test plan and
ends with acceptance testing
Test plan is a general document that
defines the scope and approach for
testing for the whole project
Inputs are SRS, project plan, design
Test plan identifies what levels of
testing will be done, what units will be
tested, etc. in the project
Test Plan…
Test plan usually contains
Test unit specs: what units need to be
tested separately
Features to be tested: these may
include functionality, performance,
usability,…
Approach: criteria to be used, when to
stop, how to evaluate, etc
Test deliverables
Schedule and task allocation
Test case specifications
Test plan focuses on approach; does not
deal with details of testing a unit
Test case specification has to be done
separately for each unit
Based on the plan (approach,
features,..) test cases are determined
for a unit
Expected outcome also needs to be
specified for each test case
Test case specifications…
Together the set of test cases should
detect most of the defects
Would like the set of test cases to detect
any defect, if one exists
Would also like set of test cases to be
small - each test case consumes effort
Determining a reasonable set of test
cases is the most challenging task of
testing
Test case specifications…
The effectiveness and cost of testing depends on
the set of test cases
Q: How to determine if a set of test cases is
good? i.e. the set will detect most of the defects,
and a smaller set could not catch the same defects
No easy way to determine goodness; usually the
set of test cases is reviewed by experts
This requires test cases be specified before
testing – a key reason for having test case specs
Test case specs are essentially a table (a sketch follows below)
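One way such a table might look when kept in executable form; the column names and the unit under test are illustrative only, not from the slides.

    # Each row: id, condition being tested, test data, expected output.
    test_case_specs = [
        {"id": "TC1", "condition": "value inside the valid range", "input": 50,  "expected": True},
        {"id": "TC2", "condition": "value below the valid range",  "input": -5,  "expected": False},
        {"id": "TC3", "condition": "value above the valid range",  "input": 150, "expected": False},
    ]

    def run_specs(unit_under_test, specs):
        for spec in specs:
            actual = unit_under_test(spec["input"])
            status = "pass" if actual == spec["expected"] else "FAIL"
            print(f"{spec['id']}: {spec['condition']} -> {status}")

    # e.g. against a unit that accepts values strictly between 0 and 100
    run_specs(lambda value: 0 < value < 100, test_case_specs)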
Defect logging and
tracking
A large software may have thousands of
defects, found by many different people
Often the person who fixes (usually the
coder) is different from the one who finds the defect
Due to large scope, reporting and fixing
of defects cannot be done informally
Defects found are usually logged in a
defect tracking system and then tracked
to closure
Defect logging and tracking is one of
the best practices in industry
Defect logging…
A defect in a software project has a
life cycle of its own, like
Found by someone, sometime and
logged along with info about it
(submitted)
Job of fixing is assigned; person debugs
and then fixes (fixed)
The manager or the submitter verifies
that the defect is indeed fixed (closed)
More elaborate life cycles possible
Defect logging…
During the life cycle, info about the defect
is logged at different stages to help debugging
as well as analysis
Defects are generally categorized into a
few types, and the type of each defect is
recorded
ODC (orthogonal defect classification) is one such classification
Some std categories: Logic, standards, UI,
interface, performance, documentation,..
Defect logging…
Severity of a defect, in terms of its
impact on the s/w, is also recorded
Severity useful for prioritization of
fixing
One categorization (also used in the sketch below)
Critical: Show stopper
Major: Has a large impact
Minor: An isolated defect
Cosmetic: No impact on functionality
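A sketch of the kind of record a defect tracking system might keep, combining the life cycle states, a defect type, and the severity levels above; the structure and field names are assumptions, not a specific tool's schema.

    from dataclasses import dataclass

    SEVERITIES = ("critical", "major", "minor", "cosmetic")
    STATES = ("submitted", "fixed", "closed")

    @dataclass
    class Defect:
        id: int
        summary: str
        defect_type: str          # e.g. logic, standards, UI, interface, ...
        severity: str             # one of SEVERITIES
        state: str = "submitted"  # initial state when the defect is logged

        def mark_fixed(self):
            self.state = "fixed"

        def close(self):
            # submitter or manager verifies the fix, then closes the defect
            self.state = "closed"

    d = Defect(101, "report total off by one", "logic", "major")
    d.mark_fixed()
    d.close()
    print(d.id, d.severity, d.state)   # 101 major closed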
Defect logging and
tracking…
Ideally, all defects should be closed
Sometimes, organizations release
software with known defects (hopefully
of lower severity only)
Organizations have standards for when
a product may be released
Defect log may be used to track the
trend of how defects are arriving and
being fixed