
Software Testing

Module – 4

Dr. Saumyaranjan Dash


Associate Professor, CSE
1
Software Coding

2
Coding Phase
§ During coding phase:
§ Every module identified in the design document is
§ Coded &
§ Unit tested

§ Unit testing:
§ Testing of different modules/units of a system
in isolation

3
Unit Testing

§ Test each module in isolation first

§ Later integrate the modules and test the set of modules


§ This makes debugging easier

§ If an error is detected when several modules are being tested together
§ It would be difficult to determine which module has the error
4
Coding
§ The Inputs to the Coding phase:

§ The Design documents


§ Structure chart of the system
§ Module Specification (data structures & algorithms)

§ Modules in the design document are coded


§ As per module spec

5
Coding
§ Objective of coding phase:

§ Transform design into code

§ Unit test the code

6
Coding Standards
§ Good software development organizations require their
programmers to:
§ Adhere to some standard style of coding called coding
standards
§ Some formulate their own coding standards & follow them
rigorously

§ Advantages:
§ Gives a uniform appearance to the code written by different engineers
§ Enhances code understandability
§ Encourages good programming practices
7
Coding Standards &
guidelines

§ Sets standard ways of:

§ variable naming & initialization
§ commenting (header, blocks)
§ code layout
§ do's & don'ts
§ max source lines allowed per function etc.

8
Example Coding Standards
§ Rules for limiting the use of global variables

§ Naming conventions for


§ Global variables, local variables and constants

§ Content & format of headers for different modules

§ Name of the module
§ date created
§ author's name
§ modification history
§ Description
§ input/output parameters etc.
9
Example Coding Standards

§ Error / Exception handling mechanisms

§ Code should be easy to understand & maintainable

§ Avoid unpredictable side effects

§ Do not use an identifier for multiple purposes


§ Leads to confusion and annoyance for anybody trying to
understand the code
§ Also makes maintenance difficult
10
Coding Guidelines

§ Code should be well-documented

§ Rules of thumb:
§ On average there must be at least
§ One comment line for every three source lines

§ Do not use goto statements: they make a program unstructured & difficult to understand

11
Code inspection vs
code walk throughs

§ After completion of coding of the module :


§ Code inspection & code walk throughs are carried out

§ Objectives
§ To ensure coding standards are followed
§ To detect as many errors as possible before testing
§ Detected errors require less effort for correction at this
stage
§ Much higher effort needed if errors were to be detected
during integration or system testing 12
(1) Code Walk Through

Ø Is an informal code analysis technique

Ø Some members (3-7) of the project team select important test cases & simulate execution of the code manually

Ø This technique is based on personal experience, common sense etc.

Ø Discussion should focus on discovery of errors & not on how to fix them
Ø Avoid making the developer feel evaluated
13
(2) Code Inspection
§ Aims mainly at discovery of commonly made errors

§ The code is examined for the presence of certain kinds of


errors (Ex: GOTO statements, memory deallocation ...)

§ Good software development companies:


§ Collect statistics of errors committed by their engineers
§ Identify the types of errors most frequently committed

§ A list of common errors:


§ Can be used during code inspection to look out for possible
errors 14
Commonly made errors

§ Adherence to coding standards


§ Use of uninitialized variables
§ Non terminating loops
§ Array indices out of bounds
§ Incompatible assignments
§ Improper storage allocation and de-allocation
§ Actual and formal parameter mismatch in proc calls
§ Jumps into loops
15
Testing

16
What is testing ?

Testing a program:

Ø Providing the program with a set of test inputs


Ø Observing if the program behaves as expected

Ø If the program fails,


Ø Then the conditions are noted for later debugging &
correction

17
Testing

ü Aim: to identify all defects in a s/w product

ü But, even after thorough testing, one cannot guarantee that


the s/w is error-free

ü Very large input data domain makes it impractical to test for each input value
ü But testing exposes & reduces many defects in the system
ü Increases the users' confidence
ü Testing is very important: Needs most time/effort of all phases
ü As challenging as other phases & needs creativity
18
Commonly used terms

Ø Test case:
This is the Triplet [I,S,O], where I is the data input to the
system, S is the state of the system, and O is the
expected output of the system

Ø Test suite:
This is the set of all test cases with which a given
software product is to be tested
19
Verification vs Validation

ü Verification is the process of determining:

ü Whether the output of one phase of development conforms to its previous phase
Ex: Code -> Design Docs -> SRS Doc
ü Concerned with phase containment of errors

ü Validation is the process of determining

ü Whether a fully developed system conforms to its SRS document
ü Concerned with the product being error free
20
Design of Test Cases

ü Exhaustive testing of any system is impractical:


ü As input data domain is extremely large

ü Goal: Design an optimal test suite:


ü Of reasonable size &
ü Uncovers as many errors as possible
ü Systematic approaches are required to design an optimal test suite:
ü Each test case in the suite should detect different errors

ü Two main approaches of designing test cases are:


1. Black Box testing approach
2. White box testing approach

21
Black-box Testing

Ø Test cases are designed without any knowledge of the internal structure of the module or software.
Ø Ways of designing black-box test cases:
ü Equivalence class partitioning
ü Boundary value analysis
22
White-box Testing
ü Designing white-box test cases:
ü Requires knowledge about the internal structure of
software or module
ü White-box testing is also called structural testing

ü Ways of designing white-box test cases:


ü Statement Coverage
ü Branch coverage
ü Condition coverage
ü Path coverage
ü Control flow graph (CFG)
23
Black-box Test case preparation
approach

ü There are essentially two main approaches to design black-box test cases:

1. Equivalence class partitioning
ü Input values are partitioned into equivalence classes.
ü Partitioning is done such that: the program behaves in similar ways to each input value in an equivalence class

2. Boundary value analysis
Selects test cases at the boundaries of different equivalence classes
24
Example
ü A program reads an input value in the range of 1 to 5000:
ü Computes the square root of the input number

[Figure: SQRT program box, valid inputs 1 to 5000]

ü There are three equivalence classes:
ü The set of negative integers,
ü Set of integers in the range of 1 to 5000,
ü integers larger than 5000.
25
Example (cont.)

ü The test suite must include:

ü Representatives from each of the three equivalence


classes:

ü A possible test suite can be: {-5,500,6000}.

26
Boundary Value Analysis

ü Some typical programming errors occur at boundaries of equivalence classes, as programmers often fail to see that
ü Special processing is required at the boundaries
ü Ex: Programmers may improperly use < instead of <=
Boundary value analysis:
ü Selects test cases at the boundaries of different equivalence classes.

ü Ex: For a function that computes the square root of an integer in the range of 1 to 5000:
ü Test cases must include the values: {0,1,5000,5001}
27
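The two black-box suites above can be exercised against a small routine; a minimal sketch, assuming a hypothetical isqrt_checked() that returns the floor integer square root for valid inputs and -1 for out-of-range ones (the -1 error convention is my assumption, not from the slides):

```c
#include <assert.h>

/* Hypothetical routine for the SQRT example: valid inputs are 1..5000.
   Returns the floor integer square root, or -1 for out-of-range input
   (the -1 convention is assumed for illustration only). */
int isqrt_checked(int n) {
    if (n < 1 || n > 5000)
        return -1;                      /* invalid equivalence classes */
    int r = 0;
    while ((r + 1) * (r + 1) <= n)      /* note <=, not <: a typical boundary bug spot */
        r++;
    return r;
}
```

Equivalence-class representatives {-5, 500, 6000} and boundary values {0, 1, 5000, 5001} together exercise both suites.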
White-Box Testing - Methods

1. Statement coverage
2. Branch coverage
3. Condition coverage
4. Path Coverage
5. CFG – Cyclomatic complexity
28
(1) Statement Coverage

üDesign test cases so that

ü Every statement in a program is executed at least once

ü Unless a statement is executed, we have no way of


knowing if an error exists in that statement

29
Example: Design statement coverage-based
test suite for the GCD computation program

int computeGCD(int x, int y)
{
1   while (x != y) {
2       if (x > y)
3           x = x - y;
4       else y = y - x;
5   }
6   return x;
}
z To design the test cases for the statement coverage, the
conditional expression of the while statement needs to be made
true and the conditional expression of the if statement needs to be
made both true and false.
z By choosing the test set {(x = 3, y = 3), (x = 4, y = 3), (x = 3,
y = 4)}, all statements of the program would be executed at least
once.
30
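The slide's suite can be run directly against a cleaned-up C version of the routine; a minimal sketch:

```c
#include <assert.h>

/* C version of the GCD routine from the slide */
int computeGCD(int x, int y) {
    while (x != y) {        /* statement 1 */
        if (x > y)          /* statement 2 */
            x = x - y;      /* statement 3 */
        else
            y = y - x;      /* statement 4 */
    }
    return x;               /* statement 6 */
}
```

The set {(3,3), (4,3), (3,4)} makes the while condition both true and false and the if condition both true and false, so every statement executes at least once.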
(2) Branch Coverage

§ Test cases are designed such that:


§ All branch conditions are given true and false
values in turn
§ If (condition) Then
§ ------
§ ------
§ Else
§ ------
§ ------

31
Example: Design Branch coverage-based
test suite for the GCD computation program

int computeGCD(int x, int y)
{
1   while (x != y) {
2       if (x > y)
3           x = x - y;
4       else y = y - x;
5   }
6   return x;
}

z The test suite {(x = 3, y = 3), (x = 3, y = 2), (x =


4, y = 3), (x = 3, y = 4)} achieves branch coverage.

32
(3) Condition Coverage

üTest cases are designed such that:


üEach component of a composite conditional
expression is
üGiven both true & false values

EX:
• Consider the conditional expression
• ((c1.and.c2).or.c3):
• Each of c1, c2, and c3 are given true and false values

33
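For the composite condition above, a sketch of a condition-coverage check; the pair of test vectors is my choice, and each of c1, c2, c3 takes both truth values across the two vectors:

```c
#include <assert.h>

/* Composite conditional from the slide, ((c1 .and. c2) .or. c3), in C form */
int composite(int c1, int c2, int c3) {
    return (c1 && c2) || c3;
}
```

Vectors (T,T,F) and (F,F,T) give each of c1, c2, c3 both values; note that with C's short-circuit evaluation, c3 is not even evaluated for the first vector.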
(4) Path Coverage
ü Design test cases such that:
ü All linearly independent paths are executed at least
once
ü linearly independent paths are determined from
Control flow graph (CFG) of a program

ü A Control Flow Graph (CFG) describes:


• The sequence in which, the different instructions of a
program get executed
• The way control flows through the program

34
Construction of CFG

z In order to draw the control flow graph of a program,


we need to first number all the statements of a
program.

z The different numbered statements serve as nodes of


the control flow graph.

z There exists an edge from one node to another, if the


execution of the statement representing the first node
can result in the transfer of control to the other node.
35
Example of CFG Construction

36
Linearly independent set of
paths (or basis path set)

z A set of paths for a given program is called linearly


independent set of paths (or the set of basis paths or simply
the basis set), if
Ø each path in the set introduces at least one new edge
that is not included in any other path in the set

An alternative definition:
Ø If a set of paths is linearly independent of each other,
then no path in the set can be obtained through any
linear operations (i.e., additions or subtractions) on
the other paths in the set.

37
McCabe’s Cyclomatic Complexity
Metric

z It defines an upper bound on the number of independent


paths in a program.
z Three different ways to compute the cyclomatic complexity :
Method 1: Given a control flow graph G of a program, the cyclomatic complexity V(G) can be computed as:
V(G) = E – N + 2
where, N is the number of nodes of the control flow graph and E is the number of edges in the control flow graph.

Method 2: V(G) = Number of bounded areas + 1

Method 3: V(G) = D + 1
where D is the number of decision and loop statements of the program

38
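Method 1 and Method 3 can be checked against the GCD program's CFG (statements numbered 1–6 as on the earlier slide; the edge list below is my reading of that program's control flow):

```c
#include <assert.h>

/* CFG of computeGCD: nodes are the numbered statements 1..6 */
static const int edges[][2] = {
    {1, 2}, {1, 6},   /* while condition: true / false */
    {2, 3}, {2, 4},   /* if condition: true / false */
    {3, 5}, {4, 5},   /* join after the if */
    {5, 1}            /* back edge to the while */
};

/* Method 1: V(G) = E - N + 2 */
int cyclomatic(int E, int N) {
    return E - N + 2;
}
```

Here E = 7 and N = 6, so V(G) = 7 − 6 + 2 = 3; Method 3 agrees: 2 decision/loop statements (the while and the if) + 1 = 3.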
Example 1: McCabe's Cyclomatic
Complexity

Formulae :
1. V(G) = E – N + 2 (9-7+2=4)
2. V(G) = the number of bounded areas + 1 = 3 + 1 = 4
3. V(G) = number of decision and loop statements + 1 = 3 + 1 = 4
Example 1

while (i<n-1) do :1
while (j<n) do :2
if A[i]<A[j] then :3
swap(A[i], A[j]); :4
end do; :5
i=i+1; :6
end do; :7

[Figure: control flow graph with nodes 1–7]

Ø Basis path Set (No. of Independent paths)
Ø 1, 7
Ø 1, 2, 6, 1, 7
Ø 1, 2, 3, 4, 5, 2, 6, 1, 7
Ø 1, 2, 3, 5, 2, 6, 1, 7
Example 2: McCabe's
Cyclomatic Complexity

z V(G) = E – N + 2 = 7 – 6 + 2 = 3
z V(G) =No.of bounded area + 1 = 2 + 1 =3
z V(G) = No.of decision and loop statements + 1 = 2+ 1 = 3
41
Example-2
[Figure: control flow graph for Example-2]

Example-3
[Figure: control flow graph with nodes 1–9]
What is V(G)?
Steps to carry out path
coverage-based testing

1. Draw the control flow graph for the program.
2. Determine McCabe's cyclomatic complexity metric V(G). This gives the minimum number of test cases required to achieve path coverage.
3. Repeat
Test using a randomly designed set of test cases.
Perform dynamic analysis to check the path coverage achieved.
until at least 90 per cent path coverage is achieved.

44
Basis Path Testing

• Also called control structure testing

• Is a white box method for designing test cases

• It analyzes the control-flow graph of a program to find a set of linearly independent paths (basis paths) of execution

• Uses McCabe's cyclomatic complexity to determine the no. of linearly independent paths & generates test cases for each path

• Basis path testing guarantees complete branch coverage (all edges of the control-flow graph)
Reliability Testing

• Reliability Testing is an important software testing technique that is performed by the team to ensure that the software performs & functions consistently in each environmental condition as well as over a specified period.

• It ensures that the product is fault free and is reliable for its intended purpose.

• The objectives of reliability testing are:

Ø To find the pattern of repeating failures.
Ø To find the number of failures occurring in a specific period of time.
Ø To discover the main cause of failure.
Security Testing

• Security testing is a testing process intended to reveal flaws in


the security mechanisms of a system

• Typical security requirements includes:


• Confidentiality, integrity, authentication, availability &
authorization

• Actual security testing depends on the security requirements


implemented by the system.
Web Application Testing

• Web application testing is a s/w testing technique exclusively adopted to test applications hosted on the web, in which the application interfaces and other functionalities are tested.

• The steps of Web Application Testing are:

Step 1: Functionality Testing
Step 2: Usability Testing
Step 3: Interface Testing
Step 4: Compatibility Testing
Step 5: Performance Testing
Step 6: Security Testing
Testing in the Large versus
Testing in the Small

z A software product is normally tested in three levels or stages:


Ø Unit testing
Ø Integration testing
Ø System testing
z Unit testing is referred to as testing in the small, whereas
integration and system testing are referred to as testing in
the large.
z During unit testing, the individual functions (or units) of a program
are tested.
z After testing all the units individually, the units are slowly integrated
and tested after each step of integration (integration testing).
z Finally, the fully integrated system is tested (system testing).

49
UNIT TESTING

z Unit testing refers to the testing of individual


functions (or units) of a program in isolation.
z Unit testing is undertaken after a module has been
coded and reviewed.
z This activity is typically undertaken by the coder of the
module himself in the coding phase.
z Before carrying out unit testing, the unit test
cases have to be designed and the test
environment for the unit under test has to be
developed.

50
Driver and stub modules

z To test a single module, we need a complete environment to provide all relevant code that is necessary for the execution of the module.
z Besides the module under test, the following are needed to test the module:
Ø The procedures belonging to other modules that the module under test calls.
Ø Non-local data structures that the module accesses.
Ø A procedure to call the functions of the module under test with appropriate parameters.
51
Driver and stub modules

z Stubs and drivers are designed to provide the


complete environment for a module so that testing
can be carried out.
Stub:
z A stub procedure is a dummy procedure that has the same
I/O parameters as the function called by the unit under test
but has a highly simplified behaviour.
Driver:
z A driver module should contain the non-local data structures
accessed by the module under test.
z It should also have the code to call the different functions of
the unit under test with appropriate parameter values for
testing. 52
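A minimal sketch of the arrangement, with hypothetical names: process() is the unit under test; it calls lookup(), which belongs to another module that is not yet ready.

```c
#include <assert.h>

/* Stub: same interface as the real lookup(), but trivially simplified
   behaviour -- it just returns a canned value. */
int lookup(int key) {
    (void)key;
    return 42;
}

/* Module (unit) under test: depends on lookup() from another module */
int process(int key) {
    return lookup(key) + 1;
}

/* Driver: supplies the environment and calls the unit under test
   with chosen parameter values. */
int run_driver(void) {
    return process(7);
}
```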
Driver and stub modules
INTEGRATION TESTING

z Integration testing is carried out after all (or at least


some of ) the modules have been unit tested.
z The primary objective of integration testing is to
test the module interfaces, i.e., there are no
errors in parameter passing, when one module
invokes the functionality of another module.
z During integration testing, different modules of a
system are integrated in a planned manner using
an integration plan.
z The integration plan specifies the steps and the order in
which modules are combined to realize the full system.
54
INTEGRATION TESTING

z An important factor that guides the integration plan is


the module dependency graph (Structure Chart).
z By examining the structure chart, the integration plan
can be developed by any one (or a mixture) of the
following approaches :
Ø Big-bang approach to integration testing
Ø Top-down approach to integration testing
Ø Bottom-up approach to integration testing
Ø Mixed (also called sandwiched ) approach to
integration testing

55
Big-bang approach to
integration testing

z In this approach, all the modules making up a


system are integrated in a single step.
z All the unit tested modules of the system are simply
linked together and tested.

z This technique can be used only for very small systems.


z The main problem with this approach is that it is very
difficult to localize the error.
56
Bottom-up approach to
integration testing
z Large software products are often made up of several
subsystems and a subsystem might consist of many
modules.
z In bottom-up integration testing, first the modules for each subsystem are integrated.
z The subsystems are integrated separately and independently
to test whether the interfaces among various modules
making up the subsystem work satisfactorily or not.

57
Bottom-up approach to
integration testing

Advantages:
Ø Several disjoint subsystems can be tested
simultaneously.
Ø The low-level modules get tested thoroughly, since they
are exercised in each integration step.
Disadvantages:
Ø Complexity that occurs when the system is made up of a
large number of small subsystems that are at the same
level.

58
Top-down approach to
integration testing

z Top-down integration testing starts with the root


module in the structure chart and one or two
subordinate modules of the root module.
z After the top-level ‘skeleton’ has been tested, the
modules that are at the immediately lower layer of
the ‘skeleton’ are combined with it and tested.

59
Mixed approach to
integration testing

z The mixed (also called sandwiched ) integration


testing follows a combination of top-down and
bottom-up testing approaches.
z In top down approach, testing can start only after the
top-level modules have been coded and unit tested.
z Bottom-up testing can start only after the bottom level
modules are ready.
z In the mixed testing approach, testing can start
as and when modules become available after unit
testing.
z In this approach, both stubs and drivers are required to
be designed. 60
Phased versus Incremental
Integration Testing

z In Integration Testing, modules can be integrated either


in a phased or incremental manner.

z In incremental integration testing, only one new


module is added to the partially integrated
system each time.

z In phased integration, a group of related modules


are added to the partial system each time.

61
SYSTEM TESTING

z After all the units of a program have been integrated


together and tested, system testing is taken up.

z System tests are designed to validate a fully


developed system to assure that it meets its
requirements.

z The test cases are therefore designed based on the SRS


document.

62
System Testing
ü There are three main kinds of system testing:

ü Alpha Testing:
ü Performed by the test team within the developing organization
ü Beta Testing
ü Performed by a select group of friendly customers
ü Acceptance Testing
ü Performed by the customer himself to determine whether the
system should be accepted or rejected

63
Regression testing

z Whenever software is changed to fix a bug, add or remove a feature, or enhance functionality and performance, regression testing is carried out.

z Regression testing is the practice of running an old


test suite after each change to the system or after
each bug fix

Ø to ensure that no new bug has been introduced due to


the change or the bug fix.

64
Mutation Testing

z Mutation testing is a fault-based testing technique.


z Mutation test cases are designed to detect specific types
of faults in a program.
z In mutation testing, a program is first tested by using an
initial test suite designed by using various white box
testing strategies.
z After the initial testing is complete, mutation testing
can be taken up.
z The idea behind mutation testing is to make a few arbitrary changes to a program at a time.
z Each time the program is changed, the changed program is called a mutated program (a mutant), and the change is effected by a mutation operator.
65
Mutation Testing

z A mutated program is tested against the original test suite of


the program.
z If there exists at least one test case in the test suite for
which a mutated program yields an incorrect result, then the
mutant is said to be dead
Ø since the error introduced by the mutation operator
has successfully been detected by the test suite.
z If a mutant remains alive even after all the test cases have
been exhausted, the test suite is enhanced to kill the mutant.
z NOTE: Mutation testing is most suitable to be used in conjunction
with some testing tool that should automatically generate the
mutants and run the test suite automatically on each mutant.
66
Stress testing
§ Evaluates system performance
§ When stressed for short periods of time

§ Stress testing
§ also known as endurance testing

§ Stress tests are black box tests:


§ Designed to impose a range of abnormal and
even illegal input conditions
§ so as to stress the capabilities of the software
Stress Testing

§ If the requirement is to handle a specified number of users, or devices:

§ Stress testing evaluates system performance


when all users or devices are busy
simultaneously
Stress Testing

§ If an OS is supposed to support 15
multiprogrammed jobs,
§ the system is stressed by attempting to run 15 or
more jobs simultaneously.

§ A real-time system might be tested


§ to determine the effect of simultaneous arrival of
several high-priority interrupts.
Stress Testing

§ Stress testing usually involves an element


of time or size,
§ such as the number of records transferred per
unit time,
§ the maximum number of users active at any
time, input data size, etc.

§ Therefore stress testing may not be


applicable to many types of systems.
Volume Testing

§ Addresses handling large amounts of data in


the system:

§ whether data structures (e.g. queues, stacks,


arrays, etc.) are large enough to handle all
possible situations

§ Fields, records, and files are stressed to check if


their size can accommodate all possible data
volumes.
Configuration Testing

§ Analyze system behaviour:

§ in various hardware and software configurations


specified in the requirements

§ sometimes systems are built in various


configurations for different users

§ for instance, a minimal system may serve a single


user, other configurations for additional users.
Compatibility Testing

§ These tests are needed when the system interfaces


with other systems:
§ Check whether the interface functions as required.

§ If a system is to communicate with a large database


system to retrieve information:
§ a compatibility test examines speed and accuracy of
retrieval.
Recovery Testing

§ These tests check response to:

§ presence of faults or to the loss of data, power,


devices, or services

§ subject system to loss of resources


§ check if the system recovers properly.
Maintenance Testing

§ Diagnostic tools and procedures:

§ help find source of problems


§ It may be required to supply
§ memory maps
§ diagnostic programs
§ traces of transactions,
§ circuit diagrams, etc.
Maintenance Testing

zVerify that:
yall required artefacts for maintenance exist
ythey function properly
Documentation tests

zCheck that required documents exist and


are consistent:

yuser guides,

ymaintenance guides,

ytechnical documents
Documentation tests

zSometimes requirements specify:


yformat and audience of specific documents
ydocuments are evaluated for compliance
Usability tests

zAll aspects of user interfaces are tested:


yDisplay screens
ymessages
yreport formats
ynavigation and selection problems
Environmental test

zThese tests check the system’s ability to


perform at the installation site.
zRequirements might include tolerance for
yheat
yhumidity
ychemical presence
yportability
yelectrical or magnetic fields
ydisruption of power, etc.
Debugging

z After a failure has been detected, it is necessary to first identify the program statement(s) that are in error and are responsible for the failure; the error can then be fixed.
z Debugging Approaches:
1) Brute force method
2) Backtracking
3) Cause elimination method
4) Program slicing
z Each of these approaches has its own advantages and
disadvantages and therefore each will be useful in
appropriate circumstances.

81
Brute force method

z This is the most common method of debugging but is


the least efficient method.

z In this approach, print statements are inserted


throughout the program to print the intermediate
values with the hope that some of the printed
values will help to identify the statement in error.

z This approach becomes more systematic with the use of a symbolic debugger (also called a source code debugger)
82
Backtracking
z This is also a fairly common approach for debugging.

z In this approach, starting from the statement at


which an error symptom has been observed, the
source code is traced backwards until the error is
discovered.

z Unfortunately, as the number of source lines to be traced back increases,
Ø the number of potential backward paths increases
Ø and can become unmanageably large for complex programs, limiting the use of this approach
83
Cause elimination method

z In this approach, once a failure is observed, the


symptoms of the failure are noted.

z Based on the failure symptoms:

Ø a list of causes which could possibly have contributed to the symptom is developed

Ø tests are conducted to eliminate each cause

z A related technique of identification of the error from
the error symptom is the software fault tree analysis.
84
Program slicing

z This technique is similar to back tracking.


z In the backtracking approach, one has to examine a
large number of statements.
z However, Search space is reduced by defining slices.
z A slice of a program for a particular variable and
at a particular statement is the set of source lines
preceding this statement that can influence the
value of that variable.
z Program slicing makes use of the fact that an
error in the value of a variable can be caused by
the statements on which it is data dependent.
85
Debugging Guidelines

z Debugging usually requires a thorough understanding of


the program design.
z Debugging may sometimes require full redesign of the
system.
z A common mistake novice programmers often make:
Ø not fixing the error but the error symptoms.
z Be aware of the possibility:
Ø an error correction may introduce new errors
z After every round of error-fixing:
Ø regression testing must be carried out
Error Seeding

z The error seeding technique can be used to estimate


the number of residual errors in a software.
z Error seeding involves seeding the code with some known
errors.
z In other words, some artificial errors are introduced
(seeded) into the program.
z The number of these seeded errors and the number of
unseeded errors detected during testing can be used to
predict the following aspects of a program :
Ø The number of errors remaining in the product.
Ø The effectiveness of the testing strategy.

87
Error Seeding

Let: N be the total number of errors in the system


yn of these errors be found by testing
yS be the total number of seeded errors
ys of the seeded errors be found during testing
n/N = s/S
Therefore N = S * n/s

Defects still remaining in the program after testing can


be given by:
remaining defects: N - n = n * ((S - s)/ s)
Example

z100 errors were introduced.


z90 of these errors were found during
testing
z50 other errors were also found.
zRemaining errors =
50 * (100-90)/90 ≈ 6
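The slide's arithmetic, as a small sketch of the estimator N = S·n/s and the remaining-defects formula:

```c
#include <assert.h>

/* S: seeded errors, s: seeded errors found, n: unseeded errors found.
   Estimated total unseeded errors N = S * n / s;
   estimated defects remaining after testing = N - n = n * (S - s) / s. */
double remaining_defects(double S, double s, double n) {
    return n * (S - s) / s;
}
```

For S = 100, s = 90, n = 50 this gives 50 × 10 / 90 ≈ 5.56, which the slide rounds up to 6 remaining errors.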
THE END

90
