
Unit-III

DYNAMIC TESTING II:


White-Box Testing: need, logic coverage criteria, basis path testing, graph matrices, loop testing, data flow testing, mutation testing. Static Testing: inspections, structured walkthroughs, technical reviews.
Dynamic Testing: White-Box Testing Techniques
White-box testing is another effective testing technique in dynamic testing. It is also known
as glass-box testing, as everything that is required to implement the software is visible. The
entire design, structure, and code of the software have to be studied for this type of testing. It
is obvious that the developer is very close to this type of testing. Often, developers use white-
box testing techniques to test their own design and code. This testing is also known as
structural or development testing. In white-box testing, structure means the logic of the
program which has been implemented in the language code. The intention is to test this logic
so that required results or functionalities can be achieved. Thus, white-box testing ensures
that the internal parts of the software are adequately tested.
5.1 NEED OF WHITE-BOX TESTING
Is white-box testing really necessary? Can’t we write the code and simply test the software
using black-box testing techniques? The supporting reasons for white-box testing are given
below:
1. White-box testing techniques are used at the initial stage, for testing the modules of the software; black-box testing is the second stage. Though test cases for black-box testing can be designed earlier than white-box test cases, they cannot be executed until the code is produced and checked using white-box testing techniques. Thus, white-box testing is not an alternative but an essential earlier stage.
2. Since white-box testing is complementary to black-box testing, there are categories of
bugs which can be revealed by white-box testing, but not through black-box testing. There
may be portions in the code which are not checked when executing functional test cases, but
these will be executed and tested by white-box testing.
3. Errors which have come from the design phase will also be reflected in the code, therefore
we must execute white-box test cases for verification of code (unit verification).
4. We often believe that a logical path is not likely to be executed when, in fact, it may be
executed on a regular basis. White-box testing explores these paths too.
5. Some typographical errors go undetected by black-box testing techniques; white-box testing techniques help detect them.
5.2 LOGIC COVERAGE CRITERIA
Structural testing considers the program code, and test cases are designed based on the logic
of the program such that every element of the logic is covered. Therefore the intention in
white-box testing is to cover the whole logic. Discussed below are the basic forms of logic
coverage.
Statement Coverage The first kind of logic coverage can be identified in the form of statements. It is assumed that if all the statements of the module are executed once, every bug will be revealed. Consider the code segment shown in Fig. 5.1.

If we want to cover every statement in the above code, then the following test cases must be
designed:
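As an illustration (a hypothetical sketch standing in for the missing Fig. 5.1, with hypothetical test cases), consider a segment with a while loop containing an if decision:

#include <stdio.h>

int main(void)
{
    int x, y;
    scanf("%d %d", &x, &y);
    while (x > 0) {           /* decision 1 */
        if (y > 0)            /* decision 2 */
            printf("both positive\n");
        x = x - 1;
    }
    printf("done\n");
    return 0;
}

/* Sample test cases (hypothetical):
   Test case 1: x = 0, y = 0  (loop body never executes)
   Test case 2: x = 1, y = 0  (loop executes, if-part false)
   Test case 3: x = 1, y = 1  (loop executes, if-part true)
   Test case 4: x = 2, y = 0  (loop iterates more than once) */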

These test cases will cover every statement in the code segment. However, statement coverage is a weak criterion for logic coverage. We can see that test cases 3 and 4 are sufficient to execute all the statements in the code. But if we execute only test cases 3 and 4, the conditions and paths exercised by test case 1 will never be tested, and errors there will go undetected. Thus, statement coverage is a necessary but not a sufficient criterion for logic coverage.
Decision or Branch Coverage
Branch coverage states that each decision takes on all possible outcomes (True or False) at
least once. In other words, each branch direction must be traversed at least once. In the
previous sample code shown in Fig. 5.1, the while and if statements each have two outcomes: True and False. Test cases must therefore be designed such that both outcomes of the while and if statements are tested. The test cases are designed as:
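For instance, with the hypothetical segment sketched above, branch coverage could be achieved with inputs such as: x = 0 (while False on entry); x = 1, y = 1 (while True, if True); and x = 1, y = 0 (while True, if False).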
Decision/condition Coverage
Condition coverage in a decision does not mean that the decision has been covered. If the
decision
if (A && B)
is being tested, the condition coverage would allow one to write two test cases:
Test case 1: A is True, B is False.
Test case 2: A is False, B is True.
But these test cases would not cause the THEN clause of the IF to execute (i.e. execution of
decision). The obvious way out of this dilemma is a criterion called decision/condition
coverage. It requires sufficient test cases such that each condition in a decision takes on all
possible outcomes at least once, each decision takes on all possible outcomes at least once,
and each point of entry is invoked at least once [2].
Multiple condition coverage In the case of multiple conditions, even decision/condition coverage fails to exercise all outcomes of all conditions. The reason is that we have considered all possible outcomes of each condition in the decision, but we have not taken all combinations of the multiple conditions together. Certain conditions can mask others: in an AND expression, if one condition is False, the subsequent conditions are not evaluated; similarly, in an OR expression, if one condition is True, the subsequent conditions are not evaluated. Thus, condition coverage and decision/condition coverage do not necessarily uncover all the errors.
Therefore, multiple condition coverage requires that we should write sufficient test cases
such that all possible combinations of condition outcomes in each decision and all points of
entry are invoked at least once. Thus, as in decision/condition coverage, all possible
combinations of multiple conditions should be considered. The following test cases can be designed:
Test case 1: A = True, B = True
Test case 2: A = True, B = False
Test case 3: A = False, B = True
Test case 4: A = False, B = False
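The masking effect can be seen directly in C, where && and || short-circuit. A minimal illustrative sketch (hypothetical, not from the text):

#include <stdio.h>

int main(void)
{
    int A = 0, B = 1;
    /* With &&, when A is false, B is never evaluated (B is
       masked), so its outcome cannot influence the decision. */
    if (A && B)
        printf("THEN clause\n");
    else
        printf("ELSE clause\n");
    return 0;
}

The four test cases listed above (True/True, True/False, False/True, False/False) are exactly the combinations that multiple condition coverage demands for this decision.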
5.3 BASIS PATH TESTING
Basis path testing is the oldest structural testing technique. The technique is based on the control structure of the program. Based on the control structure, a flow graph is prepared, and all the possible paths can be covered and executed during testing. Path coverage is a more general criterion as compared to other coverage criteria and is useful for detecting more errors. But the problem with path criteria is that programs containing loops may have an infinite number of possible paths, and it is not practical to test all of them. Some criteria should be devised such that selected paths are executed for maximum coverage of logic. Basis path testing is the technique of selecting the paths that provide a basis set of execution paths through the program.
The guidelines for the effectiveness of path testing are discussed below:
1. Path testing is based on the control structure of the program, for which a flow graph is prepared.
2. Path testing requires complete knowledge of the program's structure.
3. Path testing is closer to the developer and is used by him to test his module.
4. The effectiveness of path testing reduces with an increase in the size of the software under test [9].
5. Choose enough paths in a program such that maximum logic coverage is achieved.
5.3.1 CONTROL FLOW GRAPH
The control flow graph is a graphical representation of the control structure of a program. Flow graphs can be prepared as directed graphs. A directed graph (V, E) consists of a set of vertices V and a set of edges E that are ordered pairs of elements of V. Based on the concepts of a directed graph, the following notations are used for a flow graph:
Node It represents one or more procedural statements. The nodes are denoted by a circle.
These are numbered or labeled.
Edges or links They represent the fl ow of control in a program. This is denoted by an arrow
on the edge. An edge must terminate at a node.
Decision node A node with more than one arrow leaving it is called a decision node.
Junction node A node with more than one arrow entering it is called a junction.
Regions Areas bounded by edges and nodes are called regions. When counting the regions,
the area outside the graph is also considered a region.
5.3.2 FLOW GRAPH NOTATIONS FOR DIFFERENT PROGRAMMING
CONSTRUCTS
Since a flow graph is prepared on the basis of control structure of a program, some
fundamental graphical notations are shown here (see Fig. 5.2) for basic programming
constructs.
Using the above notations, a flow graph can be constructed. Sequential statements having no conditions or loops can be merged into a single node. That is why the flow graph is also known as a decision-to-decision graph, or DD graph.

5.3.3 PATH TESTING TERMINOLOGY


Path A path through a program is a sequence of instructions or statements that starts at an
entry, junction, or decision and ends at another, or possibly the same, junction, decision, or
exit. A path may go through several junctions, processes, or decisions, one or more times.
Segment Paths consist of segments. The smallest segment is a link, that is, a single process that lies between two nodes (e.g., junction-process-junction, junction-process-decision, decision-process-junction, and decision-process-decision). A direct connection between two nodes, as in an unconditional GOTO, is also called a process by convention, even though no actual processing takes place.
Path segment A path segment is a succession of consecutive links that belongs to some
path.
Length of a path The length of a path is measured by the number of links in it and not by the
number of instructions or statements executed along the path. An alternative way to measure
the length of a path is by the number of nodes traversed. This method has some analytical and
theoretical benefits. If programs are assumed to have an entry and an exit node, then the
number of links traversed is just one less than the number of nodes traversed.
Independent path An independent path is any path through the graph that introduces at least
one new set of processing statements or new conditions. An independent path must move
along at least one edge that has not been traversed before the path is defined [9,28].
5.3.4 CYCLOMATIC COMPLEXITY
McCabe [24] has given a measure for the logical complexity of a program by considering its control flow graph. His idea is to measure the complexity by considering the number of paths in the control graph of the program. But even for simple programs, if they contain at least one cycle, the number of paths is infinite. Therefore, he considers only independent paths. For a single connected flow graph, the cyclomatic number is V(G) = d + 1, where d is the number of decision nodes. This is also known as Miller's theorem. We assume that a k-way decision point contributes k − 1 choice points.
The program may also contain several procedures. These procedures can be represented as separate flow graphs. The procedures can be called from any point, but the connections for calling are not shown explicitly. The cyclomatic number of the whole graph is then given by the sum of the numbers of each component graph. It is easy to demonstrate that, if p is the number of component graphs and e and n refer to the whole graph, the cyclomatic number is given by
V(G) = e − n + 2p
and Miller's theorem becomes V(G) = d + p.
Formulae Based on Cyclomatic Complexity
Based on cyclomatic complexity, the following formulae can be summarized. The cyclomatic complexity number can be derived through any of the following three formulae:
1. V(G) = e − n + 2p, where e is the number of edges, n is the number of nodes, and p is the number of connected components in the whole graph.
2. V(G) = d + p, where d is the number of decision nodes in the graph.
3. V(G) = number of regions in the graph
Calculating the number of decision nodes for Switch-Case/Multiple If-Else
When a decision node has exactly two arrows leaving it, we count it as a single decision node. However, switch-case and multiple if-else statements have more than two arrows leaving a decision node; in these cases, the number of decision nodes contributed is
d = k − 1, where k is the number of arrows leaving the node.
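For example, in a hypothetical C fragment, a switch with three cases and a default has k = 4 arrows leaving its decision node:

/* Hypothetical example: the switch node has k = 4 outgoing
   arrows (three cases plus default), so it contributes
   d = k - 1 = 3 decision nodes; V(G) = d + 1 = 4. */
int bonus_for_grade(char grade)
{
    switch (grade) {
    case 'A': return 10;
    case 'B': return 5;
    case 'C': return 2;
    default:  return 0;
    }
}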
Calculating the cyclomatic complexity number of the program having many connected
components
Let us say that a program P has three components: X, Y, and Z. We prepare the flow graph for P and for the components X, Y, and Z. The complexity number of the whole program is
V(G) = V(P) + V(X) + V(Y) + V(Z)
We can also calculate the cyclomatic complexity of the full program with the first formula, by counting the nodes and edges of all the components of the program collectively and then applying the formula
V(G) = e − n + 2p
The complexity number derived collectively will be the same as that calculated above. Thus,
V(P ∪ X ∪ Y ∪ Z) = V(P) + V(X) + V(Y) + V(Z)
Guidelines for Basis Path Testing
We can use the cyclomatic complexity number in basis path testing. The cyclomatic number, which defines the number of independent paths, can be utilized as an upper bound for the number of tests that must be conducted to ensure that all statements are executed at least once. Thus, independent paths are prepared according to this upper limit. The set of independent paths becomes the basis set for the flow graph of the program. Test cases can then be designed according to this basis set.
The following steps should be followed for designing test cases using path testing:
• Draw the flow graph from the code for which test cases are to be written.
• Determine the cyclomatic complexity of the flow graph.
• The cyclomatic complexity provides the number of independent paths. Determine a basis set of independent paths through the program control structure.
• The basis set is in fact the base for designing the test cases. For every independent path, choose data such that the path is executed.
While drawing the DD graph:
• Put the sequential statements in one node. For example, statements 1, 2, and 3 may be put inside one node.
• Put the edges between the nodes according to their flow of execution.
• Label each node alphabetically, like A, B, etc.
The DD graph of the program is shown in Figure 5.4.
(ii) V(G) = Number of predicate nodes + 1
= 3 (Nodes B, C, and F) + 1 = 4
(iii) V(G) = Number of regions
= 4(R1, R2, R3, R4)
(c) Independent paths
Since the cyclomatic complexity of the graph is 4, there will be 4 independent paths in the
graph as shown below:
(i) A-B-F-H
(ii) A-B-F-G-H
(iii) A-B-C-E-B-F-G-H
(iv) A-B-C-D-F-H
(d) Test case design from the list of independent paths
(a) Draw the DD graph for the program.
(b) Calculate the cyclomatic complexity of the program using all the methods.
(c) List all independent paths.
(d) Design test cases from independent paths.
(b) Cyclomatic complexity
(i) V(G) = e − n + 2p
= 10 − 8 + 2 = 4
(ii) V(G) = Number of predicate nodes + 1
= 3 (node B, plus node C which counts twice) + 1 = 4
Node C is a multiple IF-THEN-ELSE, so to find the number of predicate nodes it contributes, use the following formula:
Number of predicate nodes = number of links out of the node − 1 = 3 − 1 = 2 (for node C)
(iii) V(G) = Number of regions = 4
(c) Independent paths
Since the cyclomatic complexity of the graph is 4, there will be 4 independent paths in the
graph as shown below:
(i) A-B-H
(ii) A-B-C-D-G-B-H
(iii) A-B-C-E-G-B-H
(iv) A-B-C-F-G-B-H
(a) Draw the DD graph for the program.
(b) Calculate the cyclomatic complexity of the program using all the methods.
(c) List all independent paths.
(d) Design test cases from independent paths.
(b) Cyclomatic complexity
1. V(G) = e − n + 2p = 19 − 15 + 2 = 6
2. V(G) = Number of predicate nodes + 1 = 5 + 1 = 6
3. V(G) = Number of regions = 6
(c) Independent paths
Since the cyclomatic complexity of the graph is 6,
there will be 6 independent paths in the graph as shown below:
1. 1-2-3-2-4-5-6-7-8-9-6-10-11-4-12-13-14-13-15
2. 1-2-3-2-4-5-6-7-9-6-10-11-4-12-13-14-13-15
3. 1-2-3-2-4-5-6-10-11-4-12-13-14-13-15
4. 1-2-3-2-4-12-13-14-13-15 (path not feasible)
5. 1-2-4-12-13-15
6. 1-2-3-2-4-12-13-15 (path not feasible)
Example 5.5
Consider the program for calculating the factorial of a number. It consists of the main() program and the module fact(). Calculate the individual cyclomatic complexity numbers of main() and fact(), and then the cyclomatic complexity of the whole program.
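A minimal hypothetical version of such a program, with the complexity of each component worked out in comments:

#include <stdio.h>

/* fact(): one decision node (the if), so V(fact) = 1 + 1 = 2 */
long fact(int n)
{
    if (n <= 1)
        return 1;
    return n * fact(n - 1);
}

/* main(): one decision node (the if), so V(main) = 1 + 1 = 2 */
int main(void)
{
    int n;
    scanf("%d", &n);
    if (n < 0) {
        printf("invalid input\n");
        return 1;
    }
    printf("%d! = %ld\n", n, fact(n));
    return 0;
}

/* Whole program (p = 2 components):
   V(G) = V(main) + V(fact) = 2 + 2 = 4 */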
5.3.5 APPLICATIONS OF PATH TESTING
Path testing is better suited than many other testing methods. Some of its applications are discussed below.
Thorough testing / more coverage
Path testing provides the best code coverage, leading to thorough testing. Path coverage is considered better than statement or branch coverage because the basis path set ascertains the number of test cases that must be executed for full coverage. Generally, branch coverage or other criteria yield fewer test cases than path testing. Cyclomatic complexity, along with basis path analysis, provides a more comprehensive scrutiny of code structure and control flow, making it a far superior coverage technique.
Unit testing Path testing is mainly used for the structural testing of a module. In unit testing, there are chances of errors due to interaction of decision outcomes or control-flow problems which remain hidden with branch testing. Since each decision outcome is tested independently, path testing uncovers these errors in module testing and prepares the modules for integration.
Integration testing Since modules in a program may call other modules or be called by some other module, there may be chances of interface errors during the calling of modules. Path testing analyses all the paths on the interface and exposes such errors.
Maintenance testing Path testing is also necessary for a modified version of the software. If a unit test suite was prepared earlier, it should be run on the modified software, or selected path testing can be done as a part of regression testing. In any case, path testing is still able to detect problems on the interface with the called modules.
Testing effort is proportional to complexity of the software Cyclomatic complexity number
in basis path testing provides the number of tests to be executed on the software based on the
complexity of the software. It means the number of tests derived in this way is directly
proportional to the complexity of the software. Thus, path testing takes care of the complexity
of the software and then derives the number of tests to be carried out.
Basis path testing effort is concentrated on error-prone software Since basis path testing
provides us the number of tests to be executed as a measure of software cyclomatic
complexity, the cyclomatic number signifies that the testing effort is only on the error-prone
part of the software, thus minimizing the testing effort.
5.4 GRAPH MATRICES
A flow graph is an effective aid in path testing, as seen in the previous section. However, path tracing with the use of flow graphs may be a cumbersome and time-consuming activity. Moreover, as the size of the graph increases, manual path tracing becomes difficult and leads to errors: a link can be missed or covered twice. So the idea is to develop a software tool which will help in basis path testing.
A graph matrix, a data structure, is the solution which can assist in developing a tool for the automation of path tracing. The reason is that the properties of graph matrices are fundamental to test-tool building. Moreover, testing theory can be explained on the basis of graphs, and graph theorems can be proved easily with the help of graph matrices. So graph matrices are very useful for understanding testing theory.
5.4.1 GRAPH MATRIX
A graph matrix is a square matrix whose number of rows and columns equals the number of nodes in the flow graph. Each row and column identifies a particular node, and the matrix entries represent connections between the nodes.
The following points describe a graph matrix:
• Each cell in the matrix represents a direct connection or link from one node to another.
• If there is a connection from node 'a' to node 'b', it does not mean that there is a connection from node 'b' to node 'a'.
• Conventionally, to represent a graph matrix, digits are used for nodes and letter symbols for edges or connections.
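As a sketch, the graph matrix of a small hypothetical four-node graph can be held in a two-dimensional array, where a nonzero entry marks a link from the row node to the column node:

#include <stdio.h>

#define N 4   /* number of nodes in the hypothetical graph */

/* m[i][j] = 1 if there is a link from node i+1 to node j+1.
   Note the matrix need not be symmetric: a link from node 1
   to node 2 does not imply a link from node 2 to node 1.   */
int m[N][N] = {
    {0, 1, 0, 0},   /* 1 -> 2          */
    {0, 0, 1, 1},   /* 2 -> 3, 2 -> 4  */
    {0, 1, 0, 0},   /* 3 -> 2          */
    {0, 0, 0, 0},   /* 4 is the exit   */
};

int main(void)
{
    for (int i = 0; i < N; i++) {
        for (int j = 0; j < N; j++)
            printf("%d ", m[i][j]);
        printf("\n");
    }
    return 0;
}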
Example 5.6

Example 5.7
5.4.4 USE OF GRAPH MATRIX FOR FINDING SET OF ALL PATHS
Another purpose of developing graph matrices is to produce a set of all paths between all nodes. It may be of interest in path tracing to find k-link paths from one node; for example, how many 2-link paths are there from one node to another? This process is done for every node, resulting in the set of all paths. This set can be obtained with the help of matrix operations. The main objective is to use matrix operations to obtain the set of all paths between all nodes, which is easily expressed in terms of matrix operations.
The power operation on a matrix expresses the relation between each pair of nodes via intermediate nodes, under the assumption that the relation is transitive (most relations used in testing are transitive). For example, the square of the matrix represents path segments that are 2 links long; similarly, the cube of the matrix represents path segments that are 3 links long.
Generalizing, we can say that the mth power of the matrix represents path segments that are m links long.
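A sketch of the squaring step in C, reusing the array representation above: entry (i, j) of the square counts the 2-link paths from node i to node j, and repeated multiplication gives 3-link paths, and so on.

/* out = a * a for an n-by-n graph matrix: out[i][j] counts
   the 2-link paths from node i+1 to node j+1.              */
void matrix_square(int n, const int a[n][n], int out[n][n])
{
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++) {
            out[i][j] = 0;
            for (int k = 0; k < n; k++)
                out[i][j] += a[i][k] * a[k][j];
        }
}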
Example 5.8

Example 5.9
Consider the following graph. Derive its graph matrix and find 2-link and 3-link set of paths.
It can be generalized that for a graph with n nodes, we can get the set of all paths of up to (n − 1) links in length with the use of matrix operations. These operations can be programmed and utilized as a software testing tool.
5.5 LOOP TESTING
Loop testing can be viewed as an extension of branch coverage. Loops are important in software from the testing viewpoint: if loops are not tested properly, bugs can go undetected. This is the reason why loops are covered exclusively in this section. Loop testing can be done effectively while performing development testing (unit testing by the developer) on a module. Sufficient test cases should be designed to test every loop thoroughly. There are four different kinds of loops; how each kind is tested is discussed below.
Simple loops A simple loop means there is a single loop in the flow, as shown in Fig. 5.9. The following test cases should be considered for simple loops while testing them [9]:
• Check whether the loop can be bypassed. If the test case for bypassing the loop is executed and you still enter the loop, there is a bug.
• Check the behaviour when the loop control variable takes a negative value.
• Write one test case that executes the statements inside the loop.
• Write test cases for a typical number of iterations through the loop.
• Write test cases checking the boundary values of the minimum and maximum number of iterations defined (say min and max) in the loop. That is, test for min, min+1, min−1, max−1, max, and max+1 iterations through the loop.
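As a hypothetical illustration, consider a simple loop whose iteration count depends on an input n and is capped at 100:

#include <stdio.h>

#define MAX_ITER 100   /* hypothetical maximum iteration count */

void run(int n)
{
    for (int i = 0; i < n && i < MAX_ITER; i++)
        printf("iteration %d\n", i);
}

/* Test values for n following the guidelines above:
   n = 0             -> loop bypassed entirely
   n = 1, 2          -> min and min+1 iterations
   n = 50            -> a typical number of iterations
   n = 99, 100, 101  -> max-1, max, and max+1              */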
Nested loops
When two or more loops are embedded, they are called nested loops, as shown in Fig. 5.10. Nested loops in a program are difficult to test: if we adopt the approach of simple-loop testing for nested loops, the number of possible test cases grows geometrically. Thus, the strategy is to start with the innermost loop while holding the outer loops at their minimum values, and to continue outward in this manner until all loops have been covered [9].
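A sketch of this strategy on two hypothetical nested loops:

#include <stdio.h>

void scan(int n_outer, int n_inner)
{
    for (int i = 0; i < n_outer; i++)        /* outer loop */
        for (int j = 0; j < n_inner; j++)    /* inner loop */
            printf("cell (%d, %d)\n", i, j);
}

/* Per the strategy above: first exercise the inner loop by
   varying n_inner (0, 1, typical, max-1, max, max+1) while
   holding the outer loop at its minimum, e.g. scan(1, n);
   then vary n_outer with n_inner held at a typical value.  */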

Concatenated loops
The loops in a program may be concatenated (Fig. 5.11). Two loops are concatenated if it is possible to reach one after exiting the other, while still on a path from entry to exit. If the two loops cannot be on the same path, then they are not concatenated. Two loops on the same path may or may not be independent: if the loop control variable of one loop is used by the other, the loops are not independent and should be treated as nested loops.
Unstructured loops
Such loops are impractical to test; they must be redesigned, or at least converted into simple or concatenated loops.
5.6 DATA FLOW TESTING
In path coverage, the stress was on covering a path using statement or branch coverage. However, data and data integrity are as important as code and code integrity of a module. We have checked every possibility of the control flow of a module. But what about the data flow in the module? Has every data object been initialized prior to use? Have all defined data objects been used for something? These questions can be answered by considering data objects in the control flow of a module.
Data flow testing is a white-box testing technique that can be used to detect improper use of
data values due to coding errors. Errors may be unintentionally introduced in a program by
programmers. For instance, a programmer might use a variable without defining it. Moreover,
he may define a variable, but not initialize it and then use that variable in a predicate.
For example, int a; if(a == 67) { }
In this way, data flow testing gives a chance to look out for inappropriate data definitions and their use in predicates, computations, and termination. It identifies potential bugs by examining the patterns in which a piece of data is used. For example, if out-of-scope data is used in a computation, it is a bug. There may be several patterns like this which indicate data anomalies.
To examine the patterns, the control flow graph of a program is used. This test strategy selects paths in the module's control flow such that various sequences of data objects can be chosen. The major focus is on the points at which the data receives values and the places at which the data so initialized is referenced. Thus, we have to choose enough paths in the control flow to ensure that every data object is initialized before use and that all defined data objects have been used somewhere. Data flow testing closely examines the state of the data in the control flow graph, resulting in a richer test suite than the one obtained from control flow graph based path testing strategies like branch coverage, all-statement coverage, etc.

5.6.1 STATE OF A DATA OBJECT
A data object can be in the following states:
Defined (d) A data object is called defined when it is initialized, i.e. when it is on the left side of an assignment statement. The defined state can also mean that a file has been opened, a dynamically allocated object has been allocated, something has been pushed onto the stack, a record has been written, and so on [9].
Killed/Undefined/Released (k) A data object is killed when it is reinitialized, or when the scope of a loop control variable finishes (i.e. on exiting the loop), or when memory is released dynamically, or a file is closed.
Usage (u) A data object is used when it is on the right side of an assignment, or used as a control variable in a loop, in an expression used to evaluate the control flow of a case statement, as a pointer to an object, etc. In general, the usage is either a computational use (c-use) or a predicate use (p-use).
5.6.2 DATA-FLOW ANOMALIES
Data-flow anomalies represent the patterns of data usage which may lead to an incorrect execution of the code. An anomaly is denoted by a two-character sequence of actions. For example, 'dk' means a variable is defined and killed without any use, which is a potential bug. There are nine possible two-character combinations, out of which only four are data anomalies, as shown in Table 5.1.
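A hypothetical C fragment showing some of these anomalous patterns:

#include <stdio.h>

void anomalies(void)
{
    int a;               /* a: declared but never defined        */
    int b = 1;           /* b: defined (d)                       */
    b = 2;               /* b: defined again without any use in
                            between -> 'dd' anomaly              */
    if (a == 67)         /* a: used before being defined -> bug  */
        printf("a is 67\n");

    int c = 10;          /* c: defined (d)                       */
}                        /* c goes out of scope (killed) without
                            ever being used -> 'dk' anomaly      */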
5.6.3 TERMINOLOGY USED IN DATA FLOW TESTING
This section discusses some terminology [9, 20] that helps in understanding the concepts related to data flow testing. Suppose P is a program that has a graph G(P) and a set of variables V. The graph has a single entry node and a single exit node.
Definition node Defining a variable means assigning a value to the variable for the very first time in the program. Examples are input statements, assignment statements, loop control statements, procedure calls, etc.
Usage node It means the variable has been used in some statement of the program. Node n
that belongs to G(P) is a usage node of variable v, if the value of variable v is used at the
statement corresponding to node n. For example, output statements, assignment statements
(right), conditional statements, loop control statements, etc.
A usage node can be of the following two types:
(i) Predicate Usage Node: If usage node n is a predicate node, then n is a predicate usage node.
(ii) Computation Usage Node: If usage node n corresponds to a computation statement in the program other than a predicate, then it is called a computation usage node.
Loop-free path segment It is a path segment for which every node is visited at most once.
Simple path segment It is a path segment in which at most one node is visited twice. A simple path segment is either loop-free or, if there is a loop, only one node is involved.
Definition-use path (du-path) A du-path with respect to a variable v is a path between
the definition node and the usage node of that variable. Usage node can either be a p-
usage or a c-usage node.
Definition-clear path (dc-path)
A dc-path with respect to a variable v is a path between the definition node and the usage node such that no other node in the path is a defining node of variable v.
The du-paths which are not dc-paths are important from testing viewpoint, as these are
potential problematic spots for testing persons. Those du-paths which are definition-clear
are easy to test in comparison to du-paths which are not dc-paths. The application of data
flow testing can be extended to debugging where a testing person finds the problematic
areas in code to trace the bug. So the du-paths which are not dc-paths need more
attention.

5.6.4 STATIC DATA FLOW TESTING


With static analysis, the source code is analysed without executing it. Consider the example application given below.
Example 5.10
Consider the program given below for calculating the gross salary of an employee in an organization. If his basic salary is less than Rs 1500, then HRA = 10% of the basic salary and DA = 90% of the basic salary. If his salary is equal to or above Rs 1500, then HRA = Rs 500 and DA = 98% of the basic salary. Calculate his gross salary.
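A hypothetical version of such a program, seeded with a double-definition ('dd') anomaly for the variable hra of the kind the analysis below reports:

#include <stdio.h>

int main(void)
{
    float basic, hra, da, gross;

    hra = 0;                  /* hra: defined (line 1)            */
    scanf("%f", &basic);
    if (basic < 1500) {
        hra = basic * 0.10f;  /* hra: redefined with no use in
                                 between -> 'dd' anomaly          */
        da  = basic * 0.90f;
    } else {
        hra = 500;            /* same 'dd' anomaly on this branch */
        da  = basic * 0.98f;
    }
    gross = basic + hra + da;
    printf("Gross salary = %.2f\n", gross);
    return 0;
}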
From the above static analysis, it was observed that static data flow testing for the
variable ‘hra’ discovered one bug of double definition in line number 1.
Static Analysis is not Enough
It is not always possible to determine the state of a data variable by mere static analysis of the code. For example, if an array element is referenced through an index, we cannot determine the state of that element by static analysis: the index may be generated dynamically during execution, so we cannot guarantee which array element is referenced by it. Moreover, static data flow testing might flag a certain piece of code as anomalous even though that code is never executed, and hence is not truly anomalous. Thus, not all anomalies can be determined by static analysis; this problem is provably unsolvable.
5.6.5 DYNAMIC DATA FLOW TESTING
Dynamic data flow testing is performed with the intention of uncovering possible bugs in data usage during the execution of the code. The test cases are designed in such a way that every definition of a data variable is traced to each of its uses, and every use is traced to each of its definitions. Various strategies are employed for the creation of test cases; these strategies are defined below.
All-du Paths (ADUP)
It states that every du-path from every definition of every variable to every use of that definition should be exercised under some test. It is the strongest data flow testing strategy, since it is a superset of all other data flow testing strategies. As a result, this strategy requires the maximum number of paths for testing.
All-uses (AU)
This states that for every use of the variable, there is a path from the definition of that variable (the definition nearest to the use in the backward direction) to the use.
All-p-uses/Some-c-uses (APU + C)
This strategy states that for every variable and every definition of that variable, include at
least one dc-path from the definition to every predicate use. If there are definitions of the
variable with no p-use following it, then add computational use (c-use) test cases as
required to cover every definition.
All-c-uses/Some-p-uses (ACU + P)
This strategy states that for every variable and every definition of that variable, include at least one dc-path from the definition to every computational use. If there are definitions of the variable with no c-use following them, then add predicate use (p-use) test cases as required to cover every definition.
All-Predicate-Uses (APU)
It is derived from the APU+C strategy and states that for every variable, there is a path from
every definition to every p-use of that definition. If there is a definition with no p-use
following it, then it is dropped from contention.
All-Computational-Uses (ACU)
It is derived from the ACU+P strategy and states that for every variable, there is a path from every definition to every c-use of that definition. If there is a definition with no c-use following it, then it is dropped from contention.
All-Definitions (AD)
It states that every definition of every variable should be covered by at least one use of that variable, be it a computational use or a predicate use.
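To make these strategies concrete, here is a small hypothetical function annotated with the definition and use nodes of one variable:

#include <stdio.h>

int classify(int x)
{
    int y = x * 2;       /* node 1: definition of y (d)           */
    if (y > 10)          /* node 2: predicate use of y (p-use)    */
        y = y - 10;      /* node 3: c-use of y, then redefinition */
    printf("%d\n", y);   /* node 4: computational use (c-use)     */
    return y;            /* node 5: c-use                         */
}

/* For variable y: the du-paths from the definition at node 1
   include 1-2 (to the p-use) and 1-2-4 (to the c-use along the
   false branch); both are definition-clear. The path 1-2-3-4 is
   a du-path but not a dc-path, because y is redefined at node 3.
   All-uses (AU) requires at least one path from each definition
   of y to each of its uses.                                     */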
Example 5.11
Consider the program given below. Draw its control flow graph and the data flow graph for each variable used in the program, and derive data flow testing paths with all the strategies discussed above.
Data flow testing paths for each variable are shown in Table 5.3.
5.6.6 ORDERING OF DATA FLOW TESTING STRATEGIES
While selecting a test case, we need to analyse the relative strengths of the various data flow testing strategies. Figure 5.15 depicts the relative strength of the data flow strategies; the relative strength of the testing strategies reduces along the direction of the arrow. This means that all-du-paths (ADUP) is the strongest criterion for selecting test cases.
5.7 MUTATION TESTING
Mutation testing is the process of mutating some segment of code (seeding some error into the code) and then testing this mutated code with some test data. If the test data is able to detect the mutations in the code, then the test data is quite good; otherwise we must focus on the quality of the test data. Mutation testing thus helps the tester iteratively strengthen the quality of the test data.
During mutation testing, faults are introduced into a program by creating many versions of
the program, each of which contains one fault. Test data are used to execute these faulty
programs with the goal of causing each faulty program to fail. Faulty programs are called
mutants of the original program and a mutant is said to be killed when a test case causes it to
fail. When this happens, the mutant is considered dead and no longer needs to remain in the
testing process, since the faults represented by that mutant have been detected, and more
importantly, it has satisfied its requirement of identifying a useful test case. Thus, the main
objective is to select efficient test data which have error-detection power. The criterion for this test data is that it differentiates the initial program from its mutants. This distinguishability between the initial program and its mutants is based on the test results.
5.7.1 PRIMARY MUTANTS
Let us take an example of a C program to understand primary mutants.
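As a hypothetical illustration of primary mutants, each produced by a single change to an original program:

/* Original (hypothetical) program */
int max_of(int a, int b)
{
    if (a > b)
        return a;
    return b;
}

/* Primary mutants, each changing exactly one token:
   Mutant 1: if (a >= b) ...   -- an equivalent mutant: when
             a == b, both branches return the same value, so
             no test case can ever kill it.
   Mutant 2: if (a < b) ...    -- killed by e.g. max_of(2, 1),
             which then returns 1 instead of 2.
   Mutant 3: true branch becomes return b;
             -- also killed by max_of(2, 1).                  */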
5.7.2 SECONDARY MUTANTS
Mutants are called secondary mutants when multiple levels of mutation are applied to the initial program. In this case, it is very difficult to identify the initial program from its mutants.
Example 5.12
5.7.3 MUTATION TESTING PROCESS
The mutation testing process is discussed below:
• Construct the mutants of a test program.
• Add test cases to the mutation system and check the output of the program on each test case to see if it is correct.
• If the output is incorrect, a fault has been found; the program must be modified and the process restarted.
• If the output is correct, that test case is executed against each live mutant.
• If the output of a mutant differs from that of the original program on the same test case, the mutant is assumed to be incorrect and is killed.
• After each test case has been executed against each live mutant, each remaining mutant falls into one of the following two categories:
• One, the mutant is functionally equivalent to the original program. An equivalent mutant always produces the same output as the original program, so no test case can kill it.
• Two, the mutant is killable, but the set of test cases is insufficient to kill it. In this case, new test cases need to be created, and the process iterates until the test set is strong enough to satisfy the tester.
• The mutation score for a set of test data is the percentage of non-equivalent mutants killed by that data. If the mutation score is 100%, then the test data is called mutation-adequate.
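For example (with hypothetical figures), if 50 mutants are generated, of which 5 turn out to be equivalent, and the test data kills 36 of the remaining 45 non-equivalent mutants, the mutation score is 36/45 = 80%; the test set is therefore not yet mutation-adequate and must be strengthened.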
Static Testing
We have discussed dynamic testing techniques that execute the software being built with a number of test cases. However, dynamic testing suffers from the following drawbacks:
• It uncovers bugs at a later stage of the SDLC, and hence these bugs are costly to debug.
• It is expensive and time-consuming, as test cases need to be created, run, validated, and maintained.
• The efficiency of code coverage decreases as the size of the system increases.
• It provides information about bugs, but debugging is not always easy; it is difficult and time-consuming to trace a failure from a test case back to its root cause.
• It cannot detect all potential bugs.
While dynamic testing is an important aspect of any quality assurance program, it is not a universal remedy. It alone cannot guarantee a defect-free product, nor can it ensure a sufficiently high level of software quality.
In response to the above drawbacks, static testing is a complementary technique to dynamic testing for acquiring higher-quality software. Static testing techniques do not execute the software and do not require the bulk of test cases. This type of testing is also known as non-computer-based testing or human testing. Not all bugs can be caught by dynamic testing alone; static testing reveals errors which are not shown by dynamic testing. Static testing can be applied for most of the verification activities discussed earlier. Since verification activities are required at every stage of the SDLC up to coding, static testing can also be applied at all these phases.
Static testing techniques do not demonstrate that the software is operational or that one function of the software is working; rather, they check the software product at each SDLC stage for conformance with the required specifications or standards. Requirements, design specifications, test plans, source code, user manuals, and maintenance procedures are some of the items that can be statically tested.
Static testing has proved to be a cost-effective technique of error detection. An empirical comparison between static and dynamic testing [26, 27] demonstrates the effectiveness of static testing. Further, Fagan [28] reported that more than 60% of the errors in a program can be detected using static testing. Another advantage of static testing is that it provides the exact location of a bug, whereas dynamic testing gives no indication of the exact source-code location of the bug. In other words, static testing finds in-process errors before they become bugs.
Static testing techniques help to produce a better product. Given below are some of the benefits of adopting a static testing approach:
• As defects are found and fixed, the quality of the product increases.
• A more technically correct base is available for each new phase of development.
• The overall software life cycle cost is lower, since defects are found early and are easier and less expensive to fix.
• The effectiveness of the dynamic test activity is increased and less time needs to be devoted to testing the product.
• The author gets immediate evaluation and feedback from his/her peers, which brings about improvements in the quality of future products.
The objectives of static testing can be summarized as follows:
• To identify errors in any phase of the SDLC as early as possible
• To verify that the components of the software are in conformance with its requirements
• To provide information for project monitoring
• To improve software quality and increase productivity
Types of Static Testing
Static testing can be categorized into the following types:
• Software inspections
• Walkthroughs
• Technical reviews
6.1 INSPECTIONS
Software inspections were first introduced at IBM by Fagan in the early 1970s [43]. They can be used to tackle software quality problems because they allow the detection and removal of defects after each phase of the software development process. The inspection process is an in-process manual examination of an item to detect bugs. It may be applied to any product or partial product of the software development process, including requirements, design, code, project management plan, SQA plan, software configuration management plan (SCM plan), risk management plan, test cases, user manual, etc. Inspections are embedded in the process of developing products and are done in the early stages of each product's development.
This process does not require executable code or test cases. With inspection, bugs can be
found on infrequently executed paths that are not likely to be included in test cases. Software
inspection does not execute the code, so it is machine-independent, requires no target system
resources or changes to the program’s operational behaviour, and can be used much before
the target hardware is available for dynamic testing purposes.
The inspection process is carried out by a group of peers, who first inspect the product at the individual level. After this, they discuss the potential defects of the product in a formal meeting. The second important thing about the inspection process is that it is a formal process of verifying a software product. The documents which can be inspected are the SRS, SDD, code, and test plan.
An inspection process involves the interaction of the following elements:
• Inspection steps
• Roles for participants
• Item being inspected
The entry and exit criteria are used to determine whether an item is ready to be inspected. The entry criterion means that the item to be inspected is mature enough to be used; for example, for code inspection, the entry criterion is that the code compiles successfully. The exit criterion is that, once the item has been submitted for inspection, it should not be updated; otherwise it will not be known how many bugs were reported and corrected through the inspection process, and the whole purpose of the inspection is lost.
6.1.1 INSPECTION TEAM
For the inspection process, a minimum of the following four team members is required.
Author/Owner/Producer A programmer or designer responsible for producing the program or document. He is also responsible for fixing defects discovered during the inspection process.
Inspector A peer member of the team, i.e. not a manager or supervisor. He is not directly related to the product under inspection and may be concerned with some other product. He finds errors, omissions, and inconsistencies in programs and documents.
Moderator A team member who manages the whole inspection process. He schedules, leads, and controls the inspection session. He is the key person, responsible for the planning and successful execution of the inspection.
Recorder One who records all the results of the inspection meeting.
6.1.2 INSPECTION PROCESS
A general inspection process (see Fig. 6.1) has the following stages [29, 14]:
Planning During this phase, the following activities are carried out:
• The product to be inspected is identified.
• A moderator is assigned.
• The objective of the inspection is stated, i.e. whether the inspection is to be conducted for defect detection or something else. If the objective is defect detection, then the type of defect detection, like design error, interface error, or code error, must be specified. The aim is to define an objective for the meeting so that the effort spent on the inspection is properly utilized.
During planning, the moderator performs the following activities:
• Assures that the product is ready for inspection
• Selects the inspection team and assigns their roles
• Schedules the meeting venue and time
• Distributes the inspection material, like the item to be inspected, checklists, etc.
Overview In this stage, the inspection team is provided with the background information for
inspection. The author presents the rationale for the product, its relationship to the rest of the
products being developed, its function and intended use, and the approach used to develop it.
This information is necessary for the inspection team to perform a successful inspection.
The opening meeting may also be called by the moderator. In this meeting, the objective of
inspection is explained to the team members. The idea is that every member should be
familiar with the overall purpose of the inspection.
Individual preparation
After the overview, the reviewers individually prepare themselves for the inspection process
by studying the documents provided to them in the overview session. They point out potential
errors or problems found and record them in a log. This log is then submitted to the
moderator. The moderator compiles the logs of different members and gives a copy of this
compiled list to the author of the inspected item.
The inspector reviews the product for general problems as well as those related to their
specific area of expertise. Checklists are used during this stage for guidance on typical types
of defects that are found in the type of product being inspected. The product being inspected
is also checked against standard documents to assure compliance and correctness. After
reviewing, the inspectors record the defects found on a log and the time spent during
preparation. Completed preparation logs are submitted to the moderator prior to the
inspection meeting.
The moderator reviews the logs submitted by each inspector to determine whether the team is
adequately prepared. The moderator also checks for trouble spots that may need extra
attention during inspection, common defects that can be categorized quickly, and the areas of
major concern. If the logs indicate that the team is not adequately prepared, the moderator
should reschedule the inspection meeting. After this, the compiled log file is submitted to the author.
Inspection meeting
Once all the initial preparation is complete, the actual inspection meeting can start. The inspection meeting starts with the author of the inspected item. The author first discusses every issue raised by the different members in the compiled log file. After the discussion, all the members arrive at a consensus on whether the issues pointed out are in fact errors and, if they are errors, whether they should be admitted by the author. It may happen that during the discussion on an issue, another error is found; this new error is also discussed and recorded as an error by the author.
The basic goal of the inspection meeting is to uncover any bugs in the item. However, no effort is made in the meeting to fix the bugs; they are only notified to the author, who will fix them later. If any clarification regarding these bugs is needed, it should be discussed with the other members during the meeting.
Another fact regarding the inspection is that the members of the meeting should be sensitive to the feelings of the author. The author should not be attacked by other members in a way that makes him feel guilty about the product. Every activity in the meeting should be a constructive engagement so that more and more bugs can be discovered. It is the duty of the moderator to keep the meeting focused on its objective and to ensure that the author is not discouraged in any way.
At the end, the moderator concludes the meeting and produces a summary of the inspection
meeting. This summary is basically a list of errors found in the item that need to be resolved
by the author.
Rework The summary list of bugs that arose during the inspection meeting needs to be reworked by the author. The author fixes all these bugs and reports back to the moderator.
Follow-up
It is the responsibility of the moderator to check that all the bugs found in the last meeting
have been addressed and fixed. He prepares a report and ascertains that all issues have been
resolved. The document is then approved for release. If this is not the case, then the
unresolved issues are mentioned in a report and another inspection meeting is called by the
moderator.
6.1.3 BENEFITS OF INSPECTION PROCESS
Bug reduction
The number of bugs is reduced through the inspection process. L.H. Fenton [86] reported that, through the inspection process at IBM, the number of bugs per thousand lines of code was reduced by two-thirds. Thus, inspection helps reduce the bug injection rate and improves the detection rate. According to Capers Jones, 'Inspection is by far the most effective way to remove bugs.'
Bug prevention Inspections can also be used for bug prevention. Based on the experiences of
previous inspections, analysis can be made for future inspections or projects, thereby
preventing the bugs which have appeared earlier. Programmers must understand why bugs
appear and what can be done to avoid them in future. At the same time, they should provide
inspection results to the quality assurance team.
Productivity Since all phases of SDLC may be inspected without waiting for code
development and its execution, the cost of finding bugs decreases, resulting in an increase in
productivity. Moreover, the errors are found at their exact places, therefore reducing the need
of dynamic testing and debugging. In the article by Fagan [43], an increase of 23% in coding
productivity and a 25% reduction in schedules were reported.
Real-time feedback to software engineers
The inspections also benefit software engineers/developers because they get feedback on their products on a relatively real-time basis. Developers find out the types of mistakes they make and what their error density is. Since they get this feedback in the early stages of development, they may improve their capability. Thus, inspections benefit software engineers/developers in the sense that they can recognize their weaknesses and improve accordingly, which in turn benefits the cost of the project.
Reduction in development resource
The cost of rework is surprisingly high if inspections are not used and errors are found during development or testing. Therefore, techniques should be adopted such that errors are found and fixed as close to their place of origin as possible. Inspections reduce the effort required for dynamic testing and any rework during design and code, thereby causing an overall net reduction in the development resource. Without inspections, more resources may be required during design and dynamic testing; with inspection, the resource requirement is greatly reduced.
Quality improvement
We know that the direct consequence of testing is an improvement in the quality of software. Static testing likewise improves the quality of the final product. Inspections help to improve quality by checking for standard compliance, modularity, clarity, and simplicity.

Project management
A project needs monitoring and control, which depend on data obtained from the development team. However, such data cannot always be relied upon. Inspection is another effective tool for monitoring the progress of the project.
Checking coupling and cohesion
The coupling and cohesion of modules can be checked more easily through inspection than through dynamic testing. This also reduces the maintenance work.
Learning through inspection
Inspection also improves the capability of different team members, as they learn from the discussions on various types of bugs and the reasons why they occur. It is especially beneficial for new members, who can learn about the project in a very short time. This helps them in the later stages of development and testing.
Process improvement
There is always scope for learning from the results of one process. The inspection team members can analyse why the errors occurred, or the places where errors occur frequently. The analysis results can then be used to improve the inspection process so that the current as well as future projects can benefit. Discussed below are the issues of process improvement.
Finding the most error-prone modules
Through the inspection process, modules can be analysed based on the error density of each individual module, as shown in Table 6.1.
The cost estimate assumes that the inspection itself takes about an hour and that each member spends one to two hours preparing for it. Testing costs are variable and depend on the number of faults in the software. However, the effort required for program inspection is less than half the effort that would be required for equivalent dynamic testing. Further, it has been estimated that the cost of inspection can be 5-10% of the total cost of the project.
6.1.6 VARIANTS OF INSPECTION PROCESS
After Fagan's original formal inspection concept, many researchers proposed modifications to it. Table 6.3 lists some of the variants of formal inspection.
Formal Technical Asynchronous Review Method (FTArm)
In this process, the meeting phase of inspection is considered expensive and, therefore, the idea is to eliminate this phase. The inspection process is carried out without a meeting of the members. This is a type of asynchronous inspection [88] in which the inspectors never have to meet simultaneously. For this process, an online version of the document is made available to every member, where they can add their comments and point out bugs. This process consists of the following steps, as shown in Fig. 6.3.
Setup It involves choosing the members and preparing the document for an asynchronous inspection process. The document is prepared as a hypertext document.
Private review Each inspector reads the document individually and records comments and potential defects, without seeing the comments of the others.
Public review The comments of all inspectors are made visible to the whole team, and members may respond to or vote on the issues raised.
Consolidation In this step, the moderator analyses the results of the private and public reviews and lists the findings and unresolved issues, if any.
Group meeting If required, any unresolved issues are discussed in this step. But the
decision to conduct a group meeting is taken in the previous step only by the moderator.
Conclusion The final report of the inspection process along with the analysis is produced
by the moderator.
Gilb Inspection
Gilb and Graham [89] defined this process. It differs from Fagan inspection in that the defect detection is carried out by individual inspectors at their own level rather than in a group; therefore, a checking phase has been introduced.
Three different roles are defined in this type of inspection:
Leader Responsible for planning and running the inspection.
Author The author of the document.
Checker Responsible for finding and reporting the defects in the document.
The inspection process consists of the following steps, as shown in Fig. 6.4.

Checking Each checker works individually and finds defects.
Logging Potential defects are collected and logged.
Brainstorm In this stage, process improvement suggestions are recorded, based on the reported bugs.
Edit After all the defects have been reported, the author takes the list and works on it accordingly.
Follow-up The leader ensures that the edit phase has been executed properly.
Exit The inspection must pass the exit criteria fixed for the completion of the inspection process.
N-Fold Inspection
It is based on the idea that the effectiveness of the inspection process can be increased by replicating it [91]. If we increase the number of teams inspecting the item, the percentage of defects found may increase, as the simple model after this paragraph illustrates. But sometimes the cost of organizing multiple teams is high as compared to the value of the additional defects they find, so a proper evaluation of the situation is required. Originally, this process was used for inspecting requirement specifications, but it can also be used for any phase.
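The model below makes this trade-off concrete. It assumes that each team independently finds a given defect with probability p, so n teams miss it with probability (1 - p)^n; real teams overlap in what they find, so this is only an idealized sketch with illustrative numbers.

    # Idealized model of N-fold inspection: assumes each team detects a
    # given defect independently with the same probability p.
    p = 0.5  # illustrative single-team detection probability

    for n in range(1, 5):
        detection = 1 - (1 - p) ** n
        print(f"{n} team(s): {detection:.0%} of defects expected to be found")
    # Prints 50%, 75%, 88%, 94% -- the gain diminishes with every added
    # team, which is why the cost of extra teams must be weighed carefully.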
As discussed, this process consists of many independent inspection teams. It needs a coordinator who coordinates the various teams and collects and collates the inspection data received from them. For this purpose, the coordinator also meets the moderator of every inspection team. The process consists of the stages shown in Fig. 6.6.
6.1.7 READING TECHNIQUES
A reading technique can be defined as a series of steps or procedures whose purpose is to
guide an inspector to acquire a deep understanding of the inspected software product.
Thus, a reading technique can be regarded as a mechanism or strategy for the individual
inspector to detect defects in the inspected product. Most of the techniques found in the
literature support individual inspection work. The various reading techniques are
discussed below.
Ad hoc method
In this method, no directions or guidelines are provided for inspection. However, ad hoc does not mean that the inspection participants do not scrutinize the inspected product systematically. The word 'ad hoc' only refers to the fact that no technical support on how to detect defects in a software artifact is given to them. In this case, defect detection fully depends on the skills, knowledge, and experience of the inspector.
Checklists
A checklist is a list of items that focus the inspector’s attention on specific topics, such as
common defects or organizational rules, while reviewing a software document [43]. The
purpose of checklists is to gather expertise concerning the most common defects and thereby support inspections. The checklists are in the form of very general questions which the inspection team members use to verify the item.
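As a concrete illustration, a fragment of a code-inspection checklist might be represented as shown below; the questions are typical examples, not taken from any standard checklist.

    # A hypothetical fragment of a code-inspection checklist. Each entry
    # is a general question the inspectors answer while reviewing the item.
    code_checklist = [
        "Are all variables initialized before use?",
        "Can any loop fail to terminate?",
        "Are all array indexes within bounds?",
        "Is every opened file or resource closed on all paths?",
        "Does each function check the validity of its parameters?",
    ]

    # Findings can be logged against the question that uncovered them
    findings = {question: [] for question in code_checklist}
    findings[code_checklist[0]].append("parse(): 'count' used before initialization")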
However, checklists have some drawbacks too:
• The questions, being general in nature, are not sufficiently tailored to take into account a particular development environment.
• Instructions about using a checklist are often missing.
• There is a probability that some defects are not taken care of, particularly those types of defects which have not been detected previously.
Scenario-based reading
Checklists are general in nature and do not cover different types of bugs. Scenario-based reading [93] is another reading technique, which stresses finding different kinds of defects. It is based on scenarios, wherein inspectors are provided with more specific instructions than typical checklists. Additionally, they are provided with different scenarios, each focusing on a different kind of defect.
Basili et al. [94] define scenario-based reading as a high-level term, which they break down into more specific techniques. The original method of Porter & Votta [93] is also known as defect-based reading. By the definition of Basili et al., most reading techniques are scenario-based, in that the inspector has to actively work with the inspected documents instead of merely reading them straight through. The following methods have been developed based on the criteria given by scenario-based reading.
Perspective-based reading
The idea behind this method is that a software item should be inspected from the perspectives of different stakeholders [95,104,105]. Inspectors of an inspection team have to check the software quality as well as the software quality factors of a software artifact from different perspectives. The perspectives mainly depend upon the roles people have within the software development or maintenance process. For each perspective, either one or multiple scenarios are defined, consisting of repeatable activities an inspector has to perform and questions an inspector has to answer. For example, a testing expert first creates a test plan based on the requirements specification and then attempts to find defects from it.
Perspective-based reading was designed specifically for requirements inspections, and the inspectors who participate should have expertise in inspecting requirements from their own area of expertise. Later, it has also been applied to code and design inspections.
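As an illustration of the idea (the perspectives and questions below are assumed examples, not a standard set), the scenarios for one requirements document might differ by role as follows:

    # Hypothetical perspective-based reading scenarios: each perspective
    # brings its own questions to the same requirements document.
    perspectives = {
        "tester": [
            "Can a test case be derived for this requirement?",
            "Are the acceptance criteria measurable?",
        ],
        "designer": [
            "Is the requirement precise enough to design from?",
            "Are the interfaces to other components defined?",
        ],
        "user": [
            "Does the requirement describe a task the user actually performs?",
        ],
    }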
Usage-based reading
This method, proposed by Thelin et al. [96,97,98], is applied in design inspections. Design documentation is inspected based on use cases, which are documented in the requirements specification. Since use cases are the basis of inspection, the focus is on finding functional defects which are relevant from the users' point of view.
Abstraction-driven reading
This method, given by Dunsmore et al. [99,100,101], is designed for code inspections. In this method, an inspector reads a sequence of statements in the code and abstracts the function these statements compute. The inspector repeats this procedure until the final function of the inspected code artifact has been abstracted and can be compared to the specification. Thus, the inspector creates an abstraction-level specification based on the code under inspection, which ensures that he has really understood the code.
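A small, hypothetical fragment shows how an inspector might work: each statement sequence is annotated with the function it computes, bottom-up, until the whole routine can be stated in one line and compared with its specification.

    # Hypothetical fragment annotated in the style of abstraction-driven
    # reading; the comments record the abstractions the inspector builds.
    def f(values):
        total = 0
        count = 0
        for v in values:         # abstraction 1: visit every element once
            total += v           # abstraction 2: total = sum of the elements
            count += 1           # abstraction 3: count = number of elements
        return total / count     # final abstraction: f = arithmetic mean
        # Comparing the final abstraction with the specification exposes a
        # defect: the function fails (division by zero) for an empty list.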
Task-driven reading
This method, proposed by Kelly & Shepard [102], is also meant for code inspections. In this method, the inspector has to create a data dictionary, a complete description of the logic, and a cross-reference between the code and the specifications.
Function-point based scenarios
This technique is based on scenarios for defect detection in requirements documents [103]. The approach builds on function-point analysis (FPA), which defines a software system in terms of its inputs, files, inquiries, and outputs. The scenarios designed around function points are known as function-point scenarios. A function-point scenario consists of questions and directs the focus of the inspector to a specific function-point item within the inspected requirements document.
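For background, a minimal sketch of an unadjusted function-point count is given below. The item counts are hypothetical; the weights are the commonly used average-complexity FPA weights. A function-point scenario would then direct the inspector to each counted item, e.g., 'for every external input, does the requirements document specify its source and validation?'

    # Unadjusted function points (UFP) with the usual average weights;
    # the item counts are hypothetical, for illustration only.
    weights = {
        "external_inputs": 4,
        "external_outputs": 5,
        "inquiries": 4,
        "internal_files": 10,
        "external_interface_files": 7,
    }
    counts = {
        "external_inputs": 6,
        "external_outputs": 4,
        "inquiries": 3,
        "internal_files": 2,
        "external_interface_files": 1,
    }

    ufp = sum(counts[item] * weights[item] for item in weights)
    print(f"Unadjusted function points: {ufp}")  # 24 + 20 + 12 + 20 + 7 = 83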
6.1.8 CHECKLISTS FOR INSPECTION PROCESS
The inspection team must have a checklist against which they detect errors. The checklist depends on the item to be inspected. For example, the design document and the code of a module should have different checklists. Checklists can be prepared using the points mentioned in the verification of each phase. They should be prepared in consultation with experienced staff and updated regularly as more experience is gained from the inspection process.
The checklist may vary according to the environment and needs of the organization. Each organization should prepare its own checklists for every item.
6.2 STRUCTURED WALKTHROUGHS
The idea of structured walkthroughs was proposed by Yourdon [106]. It is a less formal and less rigorous technique as compared to inspection. The common term used for static testing is inspection, but inspection is a very formal process. If a less formal process, without the constraints of an organized meeting, is desired, then walkthroughs are a good option.
A typical structured walkthrough team consists of the following members:
Coordinator Organizes, moderates, and follows up the walkthrough activities.
Presenter/Developer Introduces the item to be inspected. This member is optional.
Scribe/Recorder Notes down the defects found and the suggestions proposed by the members.
Reviewer/Tester Finds the defects in the item.
Maintenance Oracle Focuses on long-term implications and future maintenance of the project.
Standards Bearer Assesses adherence to standards.
User Representative/Accreditation Agent Reflects the needs and concerns of the user.
Walkthroughs differ significantly from inspections. An inspection is a six-step, rigorous, formalized process. The inspection team uses the checklist approach for uncovering errors. A walkthrough is less formal, has fewer steps, and uses neither a checklist to guide the team nor a written report to document its work. Rather than simply reading the program or using error checklists, the participants 'play computer'. The person designated as the tester comes to the meeting armed with a small set of paper test cases (representative sets of inputs and expected outputs for the program or module). During the meeting, each test case is mentally executed, i.e., the test data are walked through the logic of the program, and the state of the program is monitored on paper or any other presentation medium. The walkthrough should have a follow-up process similar to that described for the inspection process. The steps of a walkthrough process are shown in Fig. 6.7.
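For example, the 'paper test cases' a tester brings to a walkthrough can be as simple as input/expected-output pairs; the grading module below is a hypothetical example.

    # Hypothetical paper test cases for a walkthrough of a module that
    # converts marks into a letter grade: (input marks, expected grade).
    paper_test_cases = [
        (95, "A"),    # typical value in the top band
        (60, "B"),    # boundary between two bands
        (0,  "F"),    # lower boundary
        (-5, None),   # invalid input -- the module should reject it
    ]
    # In the meeting these pairs are not run on a machine; each one is
    # traced by hand through the program logic, and the program state is
    # recorded on paper or a whiteboard after every step.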
6.3 TECHNICAL REVIEWS
A technical review is intended to evaluate the software in the light of development standards, guidelines, and specifications, and to provide the management with evidence that the development process is being carried out according to the stated objectives. A review is similar to an inspection or walkthrough, except that the review team also includes management. Therefore, it is considered a higher-level technique as compared to inspection or walkthrough.
A technical review team is generally comprised of management-level representatives and project management. Review agendas should focus less on technical issues and more on oversight than an inspection. The purpose is to evaluate the system relative to specifications and standards, recording defects and deficiencies. The moderator should gather and distribute the documentation to all team members for examination before the review. He should also prepare a set of indicators to measure the following points:
• Appropriateness of the problem definition and requirements
• Adequacy of all underlying assumptions
• Adherence to standards
• Consistency
• Completeness
• Documentation
The moderator may also prepare a checklist to help the team focus on the key points. The
result of the review should be a document recording the events of the meeting,
deficiencies identified, and review team recommendations. Appropriate actions should
then be taken to correct any deficiencies and address all recommendations.