
Software Testing

Background
 Main objectives of a project: High Quality
& High Productivity (Q&P)
 Quality has many dimensions

reliability, maintainability, interoperability etc.
 Reliability is perhaps the most important
 Reliability relates to the chances of the
software failing
 More defects => more chances of failure
=> lower reliability
 Hence Q goal: Have as few defects as
possible in the delivered software

Faults & Failure
 Failure: A software failure occurs if the
behavior of the s/w is different from
expected/specified.
 Fault: cause of software failure
 Fault = bug = defect
 Failure implies presence of defects
 A defect has the potential to cause
failure.
 The definition of a defect is environment-
and project-specific
Role of Testing
 Reviews are human processes - they cannot
catch all defects
 Hence there will be requirement defects,
design defects and coding defects in code
 These defects have to be identified by testing
 Therefore testing plays a critical role in
ensuring quality.
 All defects remaining from before as well as
new ones introduced have to be identified by
testing.
Detecting defects in Testing
 During testing, a program is executed
with a set of test cases
 Failure during testing => defects are
present
 No failure => confidence grows, but can
not say “defects are absent”
 Testing can reveal the presence of defects
but not their absence
 Defects detected through failures
 To detect defects, testing must cause failures
Test Oracle
 To check if a failure has occurred
when executed with a test case,
we need to know the correct
behavior
 I.e. need a test oracle, which is
often a human
 Human oracle makes each test
case expensive as someone has to
check the correctness of its output
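Where expected outputs can be written down in advance, the oracle check itself can be automated as a comparison of actual against expected results. A minimal sketch follows; sort_numbers and its test data are invented for illustration, not taken from the slides.

# Minimal sketch of an automated test oracle: each test case carries its
# expected output, and the oracle is just a comparison of actual vs expected.
# `sort_numbers` and the test data are hypothetical examples.

def sort_numbers(values):
    return sorted(values)

test_cases = [
    ([3, 1, 2], [1, 2, 3]),   # typical input
    ([],        []),          # empty input
    ([5],       [5]),         # single element
]

for inputs, expected in test_cases:
    actual = sort_numbers(inputs)
    verdict = "PASS" if actual == expected else "FAIL"   # the oracle check
    print(f"{verdict}: sort_numbers({inputs}) -> {actual}, expected {expected}")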
Role of Test cases
 Ideally would like the following for test cases
 No failure implies “no defects” or “high quality”
 If defects present, then some test case causes a failure
 Psychology of testing is important
 should be to ‘reveal’ defects (not to show that it works!)
 test cases must be “destructive”
 Aim at breaking the system
 Role of test cases is clearly very critical
 Only if test cases are “good” does confidence
increase after testing
Test case design
 During test planning, have to design a
set of test cases that will detect defects
present
 Some criteria needed to guide test case
selection
 Two approaches to design test cases
 functional or black box
 structural or white box
 Both are complementary; we discuss a
few approaches/criteria for both
Black Box testing

 Software to be tested is treated as a black box
 Specification for the black box is given
 The expected behavior of the system is
used to design test cases
 i.e. test cases are determined solely from
specification.
 Internal structure of code not used for
test case design
Black box Testing…
 Premise: Expected behavior is
specified.
 Hence just test for specified
expected behavior
 How it is implemented is not an issue.
 For modules, the specification produced in
design specifies the expected behavior
 For system testing, the SRS specifies the
expected behavior
Black Box Testing…

 Most thorough functional testing -
exhaustive testing
 Software is designed to work for an input space
 Test the software with all elements in the input
space
 Infeasible - too high a cost
 Need better method for selecting test cases
 Different approaches have been proposed

Equivalence Class partitioning
 Divide the input space into equivalent
classes
 If the software works for a test case from a
class then it is likely to work for all
 Can reduce the set of test cases if such
equivalent classes can be identified
 Getting ideal equivalent classes is
impossible
 Approximate it by identifying classes for
which different behavior is specified
Equivalence class partitioning…

 Rationale: specification requires same
behavior for elements in a class
 Software likely to be constructed such
that it either fails for all or for none.
 E.g. if a function was not designed for
negative numbers then it will fail for all
the negative numbers
 For robustness, should form equivalence
classes for invalid inputs also

Equivalence class partitioning…
 Every condition specified as input is an
equivalence class
 Define invalid equivalence classes also
 E.g. range 0 < value < Max specified
 one range is the valid class
 input < 0 is an invalid class
 input > Max is an invalid class
 Whenever the entire range may not be
treated uniformly - split into classes
Equivalence class partitioning…
 Should consider eq. classes in outputs
also and then give test cases for
different classes
 E.g.: Compute rate of interest given
loan amount, monthly installment, and
number of months
 Equivalence classes in output: positive rate,
rate = 0, negative rate
 Have test cases to get these outputs

Equivalence class…
 Once eq classes selected for each of
the inputs, test cases have to be
selected
 Select each test case covering as many
valid eq classes as possible
 Or, have a test case that covers at
most one valid class for each input
 Plus a separate test case for each
invalid class
Example
 Consider a program that takes 2
inputs – a string s and an integer n
 Program determines n most
frequent characters
 Tester believes that the programmer
may deal with different types of chars
separately
 A set of valid and invalid
equivalence classes is given
Example..
Input   Valid Eq Classes                    Invalid Eq Classes

s       1: contains numbers                 1: non-ASCII chars
        2: lower case letters               2: str len > N
        3: upper case letters
        4: special chars
        5: str len between 0 and N (max)

n       6: int in valid range               3: int out of range

Example…
 Test cases (i.e. s, n) with the first approach
 s : str of len < N with lower case, upper case,
numbers, and special chars, and n=5
 Plus test cases for each of the invalid eq classes
 Total test cases: 1 + 3 = 4
 With the second approach
 A separate str for each type of char (i.e. a str of
numbers, one of lower case, …) + invalid cases
 Total test cases will be 5 + 2 = 7 (both sets are
sketched below)

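The two sets of test cases above can be written down as plain test data. The sketch below assumes a hypothetical function most_frequent(s, n) and takes N, the maximum string length, to be 10.

# Test data mirroring the equivalence classes above, for a hypothetical
# function most_frequent(s, n). N (max string length) is assumed to be 10.
N = 10

# First approach: one test covering as many valid classes as possible,
# plus one test per invalid class (1 + 3 = 4 test cases).
approach_1 = [
    ("aB3$ab", 5),        # lower case, upper case, numbers, special chars, len < N
    ("caf\u00e9", 2),     # invalid: non-ASCII character in s
    ("x" * (N + 1), 2),   # invalid: string length > N
    ("abc", -1),          # invalid: n out of valid range
]

# Second approach: a separate string for each character type,
# plus the invalid cases for s (5 + 2 = 7 test cases).
approach_2 = [
    ("112233", 2),        # numbers only
    ("aabbcc", 2),        # lower case letters only
    ("AABBCC", 2),        # upper case letters only
    ("$$##!!", 2),        # special characters only
    ("aA1$x", 3),         # string length between 0 and N
    ("caf\u00e9", 2),     # invalid: non-ASCII character
    ("x" * (N + 1), 2),   # invalid: string length > N
]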
Boundary value analysis

 Programs often fail on special values


 These values often lie on boundary of
equivalence classes
 Test cases that have boundary values
have high yield
 These are also called extreme cases
 A BV test case is a set of input data that
lies on the edge of an eq class of
input/output
BVA...
 For each equivalence class
 choose values on the edges of the class
 choose values just outside the edges
 E.g. if 0 <= x <= 1.0
 0.0, 1.0 are edges inside
 -0.1, 1.1 are just outside
 E.g. a bounded list - have a null list, a
maximum value list
 Consider outputs also and have test cases
generate outputs on the boundary (see the sketch below)
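A small sketch of boundary value selection for the 0 <= x <= 1.0 range above; in_range is a toy stand-in for the program under test.

# Boundary value analysis for 0 <= x <= 1.0: values on the edges of the
# class plus values just outside it. `in_range` is a toy stand-in for the
# program under test.

def in_range(x):
    return 0.0 <= x <= 1.0

boundary_cases = [
    (0.0,  True),    # lower edge, inside
    (1.0,  True),    # upper edge, inside
    (-0.1, False),   # just outside the lower edge
    (1.1,  False),   # just outside the upper edge
]

for x, expected in boundary_cases:
    actual = in_range(x)
    print(f"x = {x:5.2f}: got {actual}, expected {expected}")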
White box testing

 Black box testing focuses only on functionality


 What the program does; not how it is implemented
 White box testing focuses on implementation
 Aim is to exercise different program structures with
the intent of uncovering errors
 Is also called structural testing
 Various criteria exist for test case design
 Test cases have to be selected to satisfy
coverage criteria

Types of structural testing

 Control flow based criteria


 looks at the coverage of the control flow
graph
 Data flow based testing
 looks at the coverage in the definition-use
graph

Control flow based criteria

 Considers the program as a control flow graph
 Nodes represent code blocks – i.e. set of
statements always executed together
 An edge (i,j) represents a possible transfer
of control from i to j
 Assume a start node and an end node
 A path is a sequence of nodes from start
to end
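A control flow graph can be kept as a simple adjacency list. The sketch below uses an invented five-node graph and enumerates its start-to-end paths.

# A control flow graph as an adjacency list. Nodes are code blocks;
# node 1 is the start node and node 5 the end node. The graph is made up.
cfg = {
    1: [2, 3],   # a decision: control may transfer to block 2 or block 3
    2: [4],
    3: [4],
    4: [5],
    5: [],       # end node
}

def all_paths(graph, start, end, path=()):
    """Enumerate all start-to-end paths (assumes the graph is acyclic)."""
    path = path + (start,)
    if start == end:
        return [path]
    return [p for nxt in graph[start] for p in all_paths(graph, nxt, end, path)]

print(all_paths(cfg, 1, 5))   # [(1, 2, 4, 5), (1, 3, 4, 5)]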
Statement Coverage Criterion
 Criterion: Each statement is executed at least
once during testing
 I.e. set of paths executed during testing should
include all nodes
 Limitation: does not require a decision to
evaluate to false if no else clause
 E.g. abs(x): if (x >= 0) x = -x; return(x)
 The set of test cases {x = 0} achieves 100%
statement coverage, but the error is not detected
(see the Python sketch below)
 Guaranteeing 100% coverage not always
possible due to possibility of unreachable nodes

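The abs example above, written out so the coverage argument can be checked; wrong_abs deliberately keeps the defect from the slide (the condition should be x < 0).

# The buggy abs from the slide: the condition should be x < 0.
def wrong_abs(x):
    if x >= 0:        # defect: negates non-negative values
        x = -x
    return x

# The single test case {x = 0} executes every statement (100% statement
# coverage) and still produces the correct answer, so the defect escapes:
assert wrong_abs(0) == 0

# Branch coverage requires the decision to evaluate to both True and False,
# which forces additional test cases that do expose the defect:
print(wrong_abs(3))    # True branch:  returns -3, expected 3
print(wrong_abs(-3))   # False branch: returns -3, expected 3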
Branch coverage
 Criterion: Each edge should be
traversed at least once during testing
 i.e. each decision must evaluate to both
true and false during testing
 Branch coverage implies stmt coverage
 If a decision has multiple conditions, branch
coverage does not require each condition to
evaluate to both T and F

Control flow based…
 There are other criteria too - path
coverage, predicate coverage,
cyclomatic complexity based, ...
 None is sufficient to detect all types of
defects (e.g. defects due to missing paths in a
program cannot be detected)
 They provide some quantitative handle
on the breadth of testing
 More used to evaluate the level of
testing rather than selecting test cases

Testing Process

Testing
 Testing only reveals the presence of defects
 Does not identify nature and location of
defects
 Identifying & removing the defect => role of
debugging and rework
 Preparing test cases, performing testing,
defects identification & removal all consume
effort
 Overall testing becomes very expensive:
30-50% of development cost
Incremental Testing
 Goals of testing: detect as many defects as
possible, and keep the cost low
 Both frequently conflict - increasing testing can
catch more defects, but cost also goes up
 Incremental testing - add untested parts
incrementally to tested portion
 For achieving both goals, incremental testing
is essential
 helps catch more defects
 helps in identification and removal
 Testing of large systems is always incremental
Integration and Testing
 Incremental testing requires incremental
‘building’, i.e. incrementally integrating parts
to form the system
 Integration & testing are related
 During coding, different modules are coded
separately
 Integration - the order in which they should
be tested and combined
 Integration is driven mostly by testing needs

Top-down and Bottom-up
 System : Hierarchy of modules
 Modules coded separately
 Integration can start from bottom or top
 Bottom-up requires test drivers
 Top-down requires stubs
 Both may be used, e.g. for user interfaces
top-down; for services bottom-up
 Drivers and stubs are code pieces written
only for testing (sketched below)
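A minimal sketch of what a stub and a driver look like; the module names (billing_total, tax_service) are invented for illustration.

# Hypothetical example: `billing_total` is a higher-level module under test.
# Top-down integration replaces the not-yet-ready lower-level tax module
# with a stub; a driver is throwaway code that exercises the module.

def tax_service_stub(amount):
    # Stub: stands in for the real tax module with a fixed, simple answer.
    return 0.10 * amount

def billing_total(amount, tax_service=tax_service_stub):
    # Module under test; in the real system it would call the real tax module.
    return amount + tax_service(amount)

def billing_driver():
    # Driver: written only for testing, calls the module with test inputs.
    for amount, expected in [(100.0, 110.0), (0.0, 0.0)]:
        result = billing_total(amount)
        print(f"billing_total({amount}) = {result}, expected {expected}")

billing_driver()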
Levels of Testing
 The code contains requirement defects,
design defects, and coding defects
 Nature of defects is different for
different injection stages
 One type of testing will be unable to
detect the different types of defects
 Different levels of testing are used to
uncover these defects

User needs                   ->  Acceptance testing
Requirement specification    ->  System testing
Design                       ->  Integration testing
Code                         ->  Unit testing
Unit Testing
 Different modules tested separately
 Focus: defects injected during coding
 Essentially a code verification
technique, covered in previous chapter
 UT is closely associated with coding
 Frequently the programmer does UT;
coding phase sometimes called “coding
and unit testing”

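A minimal unit test sketch using Python's built-in unittest module; word_count stands in for a function of the module under test.

# Minimal unit test sketch with Python's unittest; `word_count` is a
# hypothetical function belonging to the module under test.
import unittest

def word_count(text):
    return len(text.split())

class WordCountTest(unittest.TestCase):
    def test_typical_sentence(self):
        self.assertEqual(word_count("to be or not to be"), 6)

    def test_empty_string(self):
        self.assertEqual(word_count(""), 0)

if __name__ == "__main__":
    unittest.main()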
Integration Testing
 Focuses on interaction of modules in a
subsystem
 Unit tested modules combined to form
subsystems
 Test cases to “exercise” the interaction
of modules in different ways
 May be skipped if the system is not too
large

System Testing
 Entire software system is tested
 Focus: does the software implement
the requirements?
 Validation exercise for the system with
respect to the requirements
 Generally the final testing stage
before the software is delivered
 May be done by independent people
 Defects removed by developers
 Most time-consuming test phase
Acceptance Testing
 Focus: Does the software satisfy user needs?
 Generally done by end users/customer in
customer environment, with real data
 Only after successful AT is the software
deployed
 Any defects found are removed by
developers
 Acceptance test plan is based on the
acceptance test criteria in the SRS

Other forms of testing
 Performance testing
 tools needed to “measure” performance
 Stress testing
 load the system to peak; load generation tools
needed
 Regression testing
 test that previous functionality works alright
 important when changes are made
 previous test records are needed for
comparisons
 prioritization of test cases needed when the complete
test suite cannot be executed for a change (a toy sketch follows)

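A toy sketch of the regression idea above: previously recorded expected results are re-checked after a change. The function discount and the recorded results are invented.

# Toy regression check: stored test cases with previously recorded expected
# results are re-run after a change. `discount` is a hypothetical function
# that was just modified.
def discount(price, percent):
    return round(price * (1 - percent / 100.0), 2)

recorded_results = [          # from earlier test runs
    ((100.0, 10), 90.0),
    ((80.0, 25), 60.0),
    ((50.0, 0), 50.0),
]

regressions = [
    (args, expected, discount(*args))
    for args, expected in recorded_results
    if discount(*args) != expected
]
print("regressions found:", regressions)   # [] means previous functionality still works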
Test Plan
 Testing usually starts with test plan and
ends with acceptance testing
 Test plan is a general document that
defines the scope and approach for
testing for the whole project
 Inputs are SRS, project plan, design
 Test plan identifies what levels of
testing will be done, what units will be
tested, etc in the project

Test Plan…
 Test plan usually contains
 Test unit specs: what units need to be
tested separately
 Features to be tested: these may
include functionality, performance,
usability,…
 Approach: criteria to be used, when to
stop, how to evaluate, etc
 Test deliverables
 Schedule and task allocation
Test case specifications
 Test plan focuses on approach; does not
deal with details of testing a unit
 Test case specification has to be done
separately for each unit
 Based on the plan (approach,
features,..) test cases are determined
for a unit
 Expected outcome also needs to be
specified for each test case
Test case specifications…
 Together the set of test cases should
detect most of the defects
 Would like the set of test cases to detect
any defect, if it exists
 Would also like set of test cases to be
small - each test case consumes effort
 Determining a reasonable set of test
cases is the most challenging task of
testing
Test case specifications…
 The effectiveness and cost of testing depends on
the set of test cases
 Q: How to determine if a set of test cases is
good? I.e. the set will detect most of the defects,
and a smaller set cannot catch these defects
 No easy way to determine goodness; usually the
set of test cases is reviewed by experts
 This requires test cases be specified before
testing – a key reason for having test case specs
 Test case specs are essentially a table

Test case specifications…

Seq. No   Condition to be tested   Test data   Expected result   Successful

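One way to keep test case specifications in this tabular form is as structured data; the sketch below uses an invented login unit, conditions, and data.

# Test case specifications kept as structured data with the same columns as
# the table above. The unit, conditions, and data are invented examples.
test_case_specs = [
    {
        "seq_no": 1,
        "condition_to_be_tested": "valid user name and password",
        "test_data": {"user": "alice", "password": "s3cret"},
        "expected_result": "login succeeds",
        "successful": None,   # filled in after the test is executed
    },
    {
        "seq_no": 2,
        "condition_to_be_tested": "wrong password is rejected",
        "test_data": {"user": "alice", "password": "wrong"},
        "expected_result": "error message, no session created",
        "successful": None,
    },
]

for spec in test_case_specs:
    print(spec["seq_no"], "-", spec["condition_to_be_tested"])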
Defect logging and tracking
 A large software may have thousands of
defects, found by many different people
 Often the person who fixes (usually the
coder) is different from the one who finds the defect
 Due to large scope, reporting and fixing
of defects cannot be done informally
 Defects found are usually logged in a
defect tracking system and then tracked
to closure
 Defect logging and tracking is one of
the best practices in industry
Defect logging…
 A defect in a software project has a
life cycle of its own, like
 Found by someone, sometime and
logged along with info about it
(submitted)
 Job of fixing is assigned; person debugs
and then fixes (fixed)
 The manager or the submitter verifies
that the defect is indeed fixed (closed)
 More elaborate life cycles possible
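A sketch of a defect record moving through the submitted/fixed/closed life cycle described above; the field names and values are illustrative, not a particular tool's schema.

# A defect record and its life cycle states. Field names, states, and
# severity values are illustrative only.
from dataclasses import dataclass, field

STATES = ("submitted", "fixed", "closed")

@dataclass
class Defect:
    defect_id: int
    summary: str
    severity: str                 # e.g. critical / major / minor / cosmetic
    found_by: str
    assigned_to: str = ""
    state: str = "submitted"
    history: list = field(default_factory=list)

    def move_to(self, new_state):
        assert new_state in STATES
        self.history.append((self.state, new_state))
        self.state = new_state

# Typical life cycle: logged by a tester, fixed by a developer,
# then verified and closed.
d = Defect(101, "crash on empty input file", "major", found_by="tester1")
d.assigned_to = "dev1"
d.move_to("fixed")
d.move_to("closed")
print(d.state, d.history)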
Defect logging…
 During the life cycle, info about the defect
is logged at different stages to help debugging
as well as analysis
 Defects are generally categorized into a
few types, and the type of each defect is
recorded
 ODC is one classification
 Some std categories: Logic, standards, UI,
interface, performance, documentation, ...
Defect logging…
 Severity of a defect, in terms of its
impact on the software, is also recorded
 Severity useful for prioritization of
fixing
 One categorization
 Critical: Show stopper
 Major: Has a large impact
 Minor: An isolated defect
 Cosmetic: No impact on functionality
Defect logging and tracking…
 Ideally, all defects should be closed
 Sometimes, organizations release
software with known defects (hopefully
of lower severity only)
 Organizations have standards for when
a product may be released
 The defect log may be used to track
trends in defect arrival and fixing