STM Unit-I Notes
SOFTWARE TESTING:
Introduction, Evolution, Myths & Facts, Goals, Psychology, Definition, Model for testing,
Effective Vs Exhaustive Software Testing. Software Testing Terminology and Methodology:
Software Testing Terminology, Software Testing Life Cycle, relating test life cycle to
development life cycle Software Testing Methodology.
Introduction:
Software has pervaded our society, from modern households to spacecraft. It has become an
essential component of any electronic device or system.
This is why software development has turned out to be an exciting career for computer engineers
in the last 10–15 years. However, software development faces many challenges.
Software is becoming increasingly complex, and at the same time the demand for quality in software products has risen.
This rise in customer awareness for quality increases the workload and responsibility of the
software development team. That is why software testing has gained so much popularity in the
last decade. Job trends have shifted from development to software testing. Today, software
quality assurance and software testing courses are offered by many institutions. Organizations
have separate testing groups with proper hierarchy. Software development is driven with testing
outputs. If the testing team claims the presence of bugs in the software, then the development
team cannot release the product. However, there still is a gap between academia and the demand
of industries.
The practical demand is that passing graduates must be aware of testing terminologies, standards,
and techniques. But students are not aware in most cases, as our universities and colleges do not
offer separate software quality and testing courses. They study only software engineering. It can
be said that software engineering is a mature discipline today in industry as well as in academia.
On the other hand, software testing is mature in industry but not in academia. Thus, this gap must
be bridged with separate courses on software quality and testing so that students do not face
problems when they go for testing in industries. Today, the ideas and techniques of software
testing have become essential knowledge for software developers, testers, and students as well.
This book is a step forward to bridge this gap.
We cannot say that the industry is working smoothly, as far as software testing is concerned.
While many industries have adopted effective software testing techniques and the development is
driven by testing efforts, there are still some loopholes. Industries are dependent on automation
of test execution. Therefore, testers also rely on efficient tools. But there may be an instance
where automation will not help, which is why they also need to design test cases and execute
them manually. Are the testers prepared for this case? This requires testing teams to have
knowledge of testing tactics and procedures of how to design test cases.
How do industries measure their testing process? Since software testing is a complete process
today, it must be measured to check whether the process is suitable for projects. The CMM
(Capability Maturity Model) measures the development process on a scale of 1–5, and
companies strive for the highest level. On the same pattern, there should be a measurement
program for testing processes. Fortunately, the measurement technique for testing processes has
also been developed. But how many managers, developers, testers, and of course students, know
that we have a Testing Maturity Model (TMM) for measuring the maturity status of a testing
process? This book gives an overview of various test process maturity models and emphasizes
the need for these. Summarizing the above discussion, it is evident that industry and academia
should go parallel. Industries constantly aspire for high standards.
On the other side, organizations cannot run with the development team looking after every stage,
right from requirement gathering to implementation. Testing is an important segment of software
development and it has to be thoroughly done. Therefore, there should be a separate testing
group with divided responsibilities among the members.
In the early days of software development, software testing was considered only a debugging
process for removing errors after the development of software. By 1970, the term ‘software
engineering’ was in common use. But software testing was just a beginning at that time. In 1978,
G. J. Myers realized the need to discuss the techniques of software testing in a separate subject.
He wrote the book The Art of Software Testing [2] which is a classic work on software testing.
He emphasized that there is a requirement that undergraduate students must learn software
testing techniques so that they pass out with the basic knowledge of software testing and do not
face problems in the industry. Moreover, Myers discussed the psychology of testing and
emphasized that testing should be done with a mindset of finding errors and not to demonstrate
that errors are not present.
Gelperin and Hetzel [79] have characterized the growth of software testing with time. Based on
this, we can divide the evolution of software testing into the following phases.
Debugging-oriented Phase (Before 1957) This phase is the early period of testing. At that time,
testing basics were unknown. Programs were written and then tested by the programmers until
they were sure that all the bugs were removed. The term used for testing was checkout, focused
on getting the system to run. Debugging was a more general term at that time and it was not
distinguishable from software testing. Till 1956, there was no clear distinction between software
development, testing, and debugging. Demonstration-oriented Phase (1957–78) The term
‘debugging’ continued in this phase. However, in 1957, Charles Baker pointed out that the
purpose of checkout is not only to run the software but also to demonstrate the correctness
according to the mentioned requirements. Thus, the scope of checkout of a program increased
from program runs to program correctness. Moreover, the purpose of checkout was to show the
absence of errors. There was no stress on the test case design. In this phase, there was a
misconception that the software could be tested exhaustively.
Destruction-oriented Phase (1979–82) This phase can be described as the revolutionary turning
point in the history of software testing. Myers changed the view of testing from ‘testing is to
show the absence of errors’ to ‘testing is to find more and more errors.’ He separated debugging
from testing and stressed that test cases are valuable only if they uncover more bugs. This phase has
given importance to effective testing in comparison to exhaustive testing. The importance of
early testing was also realized in this phase.
Evaluation-oriented Phase (1983–87) With the concept of early testing, it was realized that if
the bugs were identified at an early stage of development, it was cheaper to debug them as
compared to the bugs found in implementation or post-implementation phases. This phase
stresses on the quality of software products such that it can be evaluated at every stage of
development. In fact, the early testing concept was established in the form of verification and
validation activities which help in producing better quality software. In 1983, guidelines by the
National Bureau of Standards were released to choose a set of verification and validation
techniques and evaluate the software at each step of software development.
Prevention-oriented Phase (1988–95) The evaluation model stressed on the concept of bug-
prevention as compared to the earlier concept of bug-detection. With the idea of early detection
of bugs in earlier phases, we can prevent the bugs in implementation or further phases. Beyond
this, bugs can also be prevented in other projects with the experience gained in similar software
projects. The prevention model includes test planning, test analysis, and test design activities
playing a major role, while the evaluation model mainly relies on analysis and reviewing
techniques other than testing.
Process-oriented Phase (1996 onwards) In this phase, testing was established as a complete
process rather than a single phase (performed after coding) in the software development life
cycle (SDLC). The testing process starts as soon as the requirements for a project are specified
and it runs parallel to SDLC. Moreover, the model for measuring the performance of a testing
process has also been developed like CMM. The model for measuring the testing process is
known as Testing Maturity Model (TMM). Thus, the emphasis in this phase is also on quantification of various parameters which decide the performance of a testing process.
The evolution of software testing was also discussed by Hung Q. Nguyen and Rob Pirozzi in a
white paper [81], in three phases, namely Software Testing 1.0, Software Testing 2.0, and
Software Testing 3.0. These three phases discuss the evolution in the earlier phases that we
described. According to this classification, the current state-of-practice is Software Testing 3.0.
These phases are discussed below.
Software Testing 1.0 In this phase, software testing was just considered a single phase to be
performed after coding of the software in SDLC. There was no separate test organization. A few testing
tools were present but their use was limited due to high cost. Management was not concerned
with testing, as there was no quality goal.
Software Testing 2.0 In this phase, software testing gained importance in SDLC and the
concept of early testing also started. Testing was evolving in the direction of planning the test
resources. Many testing tools were also available in this phase.
Software Testing 3.0 In this phase, software testing has evolved into a process
based on strategic effort. It means that there should be a process which gives us a
roadmap of the overall testing process. Moreover, it should be driven by quality goals so that all
controlling and monitoring activities can be performed by the managers. Thus, the management
is actively involved in this phase.
Myth ‘Software testing is easy.’
Truth This myth is more in the minds of students who have just passed out or are going to pass
out of college and want to start a career in testing. The general perception is that software
testing is an easy job, wherein test cases are executed with testing tools only. But in reality, tools
are there to automate the tasks and not to carry out all testing activities. Testers’ job is not easy,
as they have to plan and develop the test cases manually and it requires a thorough understanding
of the project being developed with its overall design. Overall, testers have to shoulder a lot of
responsibility which sometimes makes their job even harder than that of a developer.
Myth ‘Software development is worth more than software testing.’
Truth This myth prevails in the minds of every team member and even in freshers who are
seeking jobs. As freshers, we dream of a job as a developer. We get into the organization as a
developer and feel superior to other team members. At the managerial level also, we feel happy
about the achievements of the developers but not of the testers who work towards the quality of
the product being developed. Thus, we have this myth right from the beginning of our career,
and testing is considered a secondary job. But testing has now become an established path for
job-seekers. Testing is a complete process like development, so the testing team enjoys equal
status and importance as the development team.
Myth ‘Complete testing is possible.’
Truth This myth also exists at various levels of the development team. Almost every person who
has not experienced the process of designing and executing the test cases manually feels that
complete testing is possible. Complete testing at the surface level assumes that if we are giving
all the inputs to the software, then it must be tested for all of them. But in reality, it is not
possible to provide all the possible inputs to test the software, as the input domain of even a
small program is too large to test. Moreover, there are many things which cannot be tested
completely, as it may take years to do so. This will be demonstrated soon in this chapter. This is
the reason why the term ‘complete testing’ has been replaced with ‘effective testing.’ Effective
testing selects and runs those test cases that are most likely to uncover severe bugs first.
Myth ‘Testing starts after the coding is complete.’
Truth Most of the team members, who are not aware of testing as a process, still feel that testing
cannot commence before coding. But this is not true. As mentioned earlier, the work of a tester
begins as soon as we get the specifications. The tester performs testing at the end of every phase
of SDLC in the form of verification (discussed later) and plans for the validation testing
(discussed later). He writes detailed test cases, executes the test cases, reports the test results, etc.
Testing after coding is just a part of all the testing activities.
Myth ‘The purpose of testing is to check the functionality only.’
Truth Today, all the testing activities are driven by quality goals. Ultimately, the goal of testing
is also to ensure quality of the software. But quality does not imply checking only the
functionalities of all the modules. There are various things related to quality of the software, for
which test cases must be executed.
Myth ‘Anyone can be a tester.’
Truth This is the extension of the myth that ‘testing is easy.’ Most of us think that testing is an
intuitive process and it can be performed easily without any training. And therefore, anyone can
be a tester. As an established process, software testing as a career also needs training for various
purposes, such as to understand (i) various phases of software testing life cycle, (ii) recent
techniques to design test cases, (iii) various tools and how to work on them, etc. This is the
reason that various testing courses for certified testers are being run.
Short-term or immediate goals: These goals are the immediate results after performing testing.
These goals may be set in the individual phases of SDLC. Some of them are discussed below.
Bug discovery: The immediate goal of testing is to find errors at any stage of software
development. The more bugs discovered at an early stage, the better the success rate of
software testing will be.
Bug prevention: It is the consequent action of bug discovery. From the behavior and
interpretation of bugs discovered, everyone in the software development team gets to learn how
to code safely such that the bugs discovered should not be repeated in later stages or future
projects. Though errors cannot be reduced to zero, they can be minimized. In this sense, bug
prevention is a superior goal of testing.
Long-term goals These goals affect the product quality in the long run, when one cycle of the
SDLC is over. Some of them are discussed here.
Quality: Since software is also a product, its quality is primary from the users’ point of view.
Thorough testing ensures superior quality. Therefore, the first goal of understanding and
performing the testing process is to enhance the quality of the software product.
Though quality depends on various factors, such as correctness, integrity, efficiency, etc.,
reliability is the major factor to achieve quality. The software should be passed through a
rigorous reliability analysis to attain high quality standards. Reliability is a matter of confidence
that the software will not fail, and this level of confidence increases with rigorous testing. The
confidence in reliability, in turn, increases the quality.
Customer satisfaction From the users’ perspective, the prime concern of testing is customer
satisfaction only. If we want the customer to be satisfied with the software product, then testing
should be complete and thorough. Testing should be complete in the sense that it must satisfy the
user for all the specified requirements mentioned in the user manual, as well as for the
unspecified requirements which are otherwise understood. A complete testing process achieves
reliability, reliability enhances the quality, and quality in turn, increases the customer
satisfaction.
Risk management Risk is the probability that undesirable events will occur in a system. These
undesirable events will prevent the organization from successfully implementing its business
initiatives. Thus, risk is basically concerned with the business perspective of an organization.
Risks must be controlled to manage them with ease. Software testing may act as a control, which
can help in eliminating or minimizing risks.
Thus, managers depend on software testing to assist them in controlling their business goals. The
purpose of software testing as a control is to provide information to management so that they can
better react to risk situations [4]. For example, testing may indicate that the software being
developed cannot be delivered on time, or there is a probability that high priority bugs will not be
resolved by the specified time. With this advance information, decisions can be made to
minimize risk situation. Hence, it is the testers’ responsibility to evaluate business risks (such as
cost, time, resources, and critical features of the system being developed) and make the same a
basis for testing choices. Testers should also categorize the levels of risks after their assessment
(like high-risk, moderate-risk, low-risk) and this analysis becomes the basis for testing activities.
Thus, risk management becomes the long-term goal for software testing.
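The risk-based prioritization described above can be sketched in code. This is a minimal illustration, not a real tool: the test-case names and the three risk levels are assumptions made up for the example.

```python
# Sketch: ordering test cases so that high-risk features are exercised
# first. RISK_SCORE maps the assessed levels mentioned in the text
# (high, moderate, low) to sortable weights.

RISK_SCORE = {"high": 3, "moderate": 2, "low": 1}

def prioritize(test_cases):
    """Return test cases sorted so the highest-risk ones run first."""
    return sorted(test_cases, key=lambda tc: RISK_SCORE[tc["risk"]], reverse=True)

# Hypothetical suite after the tester's risk assessment:
suite = [
    {"name": "report_layout", "risk": "low"},
    {"name": "payment_flow", "risk": "high"},
    {"name": "user_profile", "risk": "moderate"},
]

ordered = prioritize(suite)
print([tc["name"] for tc in ordered])
# payment_flow is tested first, report_layout last
```

Within a constrained budget, the suite can then simply be truncated from the tail, dropping the lowest-risk cases first.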
Post-implementation goals These goals are important after the product is released. Some of
them are discussed here.
Reduced maintenance cost The maintenance cost of any software product is not its physical
cost, as the software does not wear out. The only maintenance cost in a software product is its
failure due to errors. Post-release errors are costlier to fix, as they are difficult to detect. Thus, if
testing has been done rigorously and effectively, then the chances of failure are minimized and in
turn, the maintenance cost is reduced.
Improved software testing process A testing process for one project may not be successful and
there may be scope for improvement. Therefore, the bug history and post-implementation results
can be analyzed to find out snags in the present testing process, which can be rectified in future
projects. Thus, the long-term post-implementation goal is to improve the testing process for
future projects.
Software testing is directly related to human psychology. Though software testing has not been
formally defined till now, it is most frequently defined as: Testing is the process of demonstrating
that there are no errors.
The purpose of testing is to show that the software performs its intended functions correctly. This
definition is correct, but only partially. If testing is performed keeping this goal in mind, then we
cannot achieve the desired goals (described above in the previous section), as we will not be able
to test the software as a whole. Myers first identified this approach of testing the software. This
approach is based on the human psychology that human beings tend to work according to the
goals fixed in their minds. If we have a preconceived assumption that the software is error-free,
then consequently, we will design the test cases to show that all the modules run smoothly. But it
may hide some bugs. On the other hand, if our goal is to demonstrate that a program has errors,
then we will design test cases having a higher probability to uncover bugs.
Thus, if the process of testing is reversed, such that we always presume the presence of bugs in
the software, then this psychology of being always suspicious of bugs widens the domain of
testing. It means, now we don’t think of testing only those features or specifications which have
been mentioned in documents like SRS (software requirement specification), but we also think in
terms of finding bugs in the domain or features which are understood but not specified. You can
argue that, being suspicious about bugs in the software is a negative approach. But, this negative
approach is for the benefit of constructive and effective testing. Thus, software testing may be
defined as, Testing is the process of executing a program with the intent of finding errors.
This definition has implications on the psychology of developers. It is very common that they
feel embarrassed or guilty when someone finds errors in their software. However, we should not
forget that humans are prone to error. We should not feel guilty for our errors. This psychology
factor brings the concept that we should concentrate on discovering and preventing the errors and
not feel guilt about them. Therefore, testing cannot be a joyous event unless you cast out your
guilt.
According to this psychology of testing, a successful test is that which finds errors. This can be
understood with the analogy of medical diagnostics of a patient. If the laboratory tests do not
locate the problem, then it cannot be regarded as a successful test. On the other hand, if the
laboratory test determines the disease, then the doctor can start an appropriate treatment. Thus,
with the destructive approach of software testing, the definitions of successful and unsuccessful
testing should also be modified.
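The error-finding mindset can be illustrated with a small sketch. The classify_triangle function below is a deliberately flawed, hypothetical example, not taken from the text: confirmation-style tests on typical inputs pass, while a test written to find errors probes the boundary of the input domain and exposes the bug.

```python
# Sketch: a hypothetical function with a boundary bug.
def classify_triangle(a, b, c):
    # Intentional flaw: never checks whether the sides form a
    # valid triangle at all (e.g. a + b must exceed c).
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

# Confirmation mindset: typical, valid inputs -- all pass.
assert classify_triangle(3, 3, 3) == "equilateral"
assert classify_triangle(3, 4, 5) == "scalene"

# Error-finding mindset: probe the edges of the input domain.
# (1, 2, 3) is not a triangle at all, yet the function labels it:
print(classify_triangle(1, 2, 3))   # "scalene" -- a bug uncovered
```

A tester convinced the function works would never have written the last case; a tester presuming bugs exist finds it immediately.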
Many practitioners and researchers have defined software testing in their own way. Some are
given below:
Testing is the process of executing a program with the intent of finding errors. Myers [2]
Testing can show the presence of bugs but never their absence. E. W. Dijkstra [125]
Program testing is a rapidly maturing area within software engineering that is receiving
increasing notice both by computer science theoreticians and practitioners. Its general aim
is to affirm the quality of software systems by systematically exercising the software in
carefully controlled circumstances. E. Miller[84]
Testing is a support function that helps developers look good by finding their mistakes
before anyone else does. James Bach [83]
Testing is a concurrent lifecycle process of engineering, using and maintaining testware
(i.e. testing artifacts) in order to measure and improve the quality of the software being
tested. Craig [117]
Since quality is the prime goal of testing and it is necessary to meet the defined quality
standards, software testing should be defined keeping in view the quality assurance terms. Here,
it should not be misunderstood that the testing team is responsible for quality assurance. But the
testing team must be well aware of the quality goals of the software so that they work towards
achieving them.
Moreover, testers these days are aware of the definition that testing is to find more and more
bugs. But the problem is that there are too many bugs to fix. Therefore, the recent emphasis is on
categorizing the more important bugs first. Thus, software testing can be defined as,
Software testing is a process that detects important bugs with the objective of having better
quality software.
Testing is not an intuitive activity; rather it should be learnt as a process. Therefore, testing
should be performed in a planned way. For the planned execution of a testing process, we need
to consider every element and every aspect related to software testing. Thus, in the testing model,
we consider the related elements and team members involved.
The software is basically a part of a system for which it is being developed. Systems consist of
hardware and software to make the product run. The developer develops the software in the
prescribed system environment considering the testability of the software. Testability is a major
issue for the developer while developing the software, as a badly written software may be
difficult to test. Testers are supposed to get on with their tasks as soon as the requirements are
specified. Testers work on the basis of a bug model which classifies the bugs based on the SDLC
phase in which the testing is to be performed. Based on the software type and the bug model,
testers decide a testing methodology which guides how the testing will be performed. With
suitable testing techniques decided in the testing methodology, testing is performed on the
software with a particular goal. If the testing results are in line with the desired goals, then the
testing is successful; otherwise, the software or the bug model or the testing methodology has to
be modified so that the desired results are achieved. The following describe the testing model.
Software is built after analysing the system in its environment. It is a complex entity which
deals with environment, logic, programmer psychology, etc. But complexity makes software very
difficult to test. Since the aim in this model of testing is to concentrate on the testing process,
the software under consideration should not be so complex that it cannot be tested. In fact, this
is a point of consideration for developers who design the software. They should design and code
the software such that it is testable at every point. Thus, the software to be tested should be
modeled such that it is testable, avoiding unnecessary complexities.
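Designing for testability can be made concrete with a small sketch. The example below is an illustrative assumption, not from the text: a function that reads the system clock directly is hard to test deterministically, whereas the same logic with the clock passed in as a parameter is trivially testable.

```python
import datetime

def is_late(due_date, now=None):
    """Return True if the current time is past the due date.

    The clock is injected as a parameter (dependency injection), so a
    test can supply a fixed time instead of the real system time.
    """
    now = now or datetime.datetime.now()
    return now > due_date

# A hard-wired datetime.now() inside the function would make the
# outcome depend on when the test runs; injecting the clock makes
# the behaviour repeatable:
due = datetime.datetime(2024, 1, 1)
print(is_late(due, now=datetime.datetime(2024, 6, 1)))   # True
print(is_late(due, now=datetime.datetime(2023, 6, 1)))   # False
```

The same principle applies to file systems, networks, and random numbers: isolating such dependencies behind parameters or interfaces is what makes software "testable at every point".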
Bug Model Bug model provides a perception of the kind of bugs expected. Considering the
nature of all types of bugs, a bug model can be prepared that may help in deciding a testing
strategy. However, every type of bug cannot be predicted. Therefore, if we get incorrect results,
the bug model needs to be modified.
Testing methodology and Testing Based on the inputs from the software model and the bug
model, testers can develop a testing methodology that incorporates both testing strategy and
testing tactics. Testing strategy is the roadmap that gives us well-defined steps for the overall
testing process. It prepares the planned steps based on the risk factors and the testing phase. Once
the planned steps of the testing process are prepared, software testing techniques and testing
tools can be applied within these steps. Thus, testing is performed on this methodology.
However, if we don’t get the required results, the testing plans must be checked and modified
accordingly.
Exhaustive or complete software testing means that every statement in the program and every
possible path combination with every possible combination of data must be executed. But soon,
we will realize that exhaustive testing is out of scope. That is why the questions arise: (i) When
are we done with testing? or (ii) How do we know that we have tested enough? There may be
many answers for these questions with respect to time, cost, customer, quality, etc. This section
will explore that exhaustive or complete testing is not possible. Therefore, we should concentrate
on effective testing which emphasizes efficient techniques to test the software so that important
features will be tested within the constrained resources.
The testing process should be understood as a domain of possible tests (see Fig. 1.7). There are
subsets of these possible tests. But the domain of possible tests becomes infinite, as we cannot
test every possible combination.
This combination of possible tests is infinite in the sense that the processing resources and time
are not sufficient for performing these tests. Computer speed and time constraints limit the
possibility of performing all the tests. Complete testing requires the organization to invest a long
time which is not cost-effective. Therefore, testing must be performed on selected subsets that
can be performed within the constrained resources. This selected group of subsets, but not the
whole domain of testing, makes effective software testing. Effective testing can be enhanced if
subsets are selected based on the factors which are required in a particular environment.
If we consider the input data as the only part of the domain of testing, even then, we are not able
to test the complete input data combination. The domain of input data has four sub-parts: (a)
valid inputs, (b) invalid inputs, (c) edited inputs, and (d) race condition inputs
Valid inputs It seems that we can test every valid input on the software. But look at a very
simple example of adding two two-digit numbers. Each ranges from –99 to 99 (199 values). So
the total number of test case combinations will be 199 × 199 = 39,601. Further, if we increase the
range from two digits to four digits, then the number of test cases will be 399,960,001. Most
addition programs accept 8 or 10 digit numbers or more. How can we test all these combinations
of valid inputs?
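The combinatorial growth quoted above can be reproduced with a few lines of code; this sketch simply counts the input pairs for signed n-digit addition.

```python
# Sketch: counting exhaustive test combinations for adding two
# n-digit signed integers, reproducing the figures in the text.

def combinations(digits):
    # n-digit signed range is -(10**n - 1) .. (10**n - 1),
    # i.e. 2 * 10**n - 1 distinct values (199 for two digits).
    values = 2 * (10 ** digits) - 1
    return values * values          # every ordered pair of inputs

print(combinations(2))    # 39601
print(combinations(4))    # 399960001
print(combinations(10))   # far beyond any practical test budget
```

Even before considering invalid, edited, or race-condition inputs, the valid-input domain alone is already unmanageable.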
Invalid inputs Testing the software with valid inputs is only one part of the input sub-domain.
There is another part, invalid inputs, which must be tested for testing the software effectively.
The important thing in this case is the behavior of the program as to how it responds when a user
feeds invalid inputs. The set of invalid inputs is also too large to test. If we consider again the
example of adding two numbers, then possibilities of invalid inputs include alphabetic or special characters in place of digits, more than two inputs, or no input at all.
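A few such invalid-input checks can be sketched in code. The add_numbers parser below is a hypothetical illustration, not any real API; the point is that a robust program must reject bad input gracefully rather than crash.

```python
# Sketch: probing a program with invalid inputs. add_numbers is a
# hypothetical routine that parses two integers from text and adds them.

def add_numbers(text):
    parts = text.split()
    if len(parts) != 2:
        raise ValueError("expected exactly two numbers")
    try:
        a, b = int(parts[0]), int(parts[1])
    except ValueError:
        raise ValueError("inputs must be integers")
    return a + b

# Valid input behaves as expected:
print(add_numbers("3 4"))           # 7

# Invalid inputs must be rejected gracefully, not crash the program:
for bad in ["3", "a b", "3 4 5", ""]:
    try:
        add_numbers(bad)
        print(repr(bad), "-> accepted (a bug!)")
    except ValueError as e:
        print(repr(bad), "-> rejected:", e)
```

Even this tiny parser admits an unbounded set of malformed inputs, which is why the invalid sub-domain can only ever be sampled, never exhausted.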
Edited inputs If we can edit inputs at the time of providing inputs to the program, then
many unexpected input events may occur. For example, you can add many spaces in the
input, which are not visible to the user. It can be a reason for non-functioning of the
program. In another example, a user may press a number key, then the Backspace key
continuously, and finally after some time press another number key and Enter. The input
buffer overflows and the system crashes. The behavior of users cannot be judged in
advance; they can behave in a number of ways, causing defects to escape testing. That is
why edited inputs also cannot be tested completely.
Race condition inputs The timing variation between two or more inputs is also one of
the issues that limit the testing. For example, there are two input events,
A and B. According to the design, A precedes B in most of the cases. But B can also
come first in rare and restricted conditions. This is the race condition: whenever B
precedes A, the program usually fails, because the possibility of B preceding A in
restricted conditions has not been taken care of, resulting in a race condition
bug. In this way, there may be many race conditions in a system, especially in
multiprocessing systems and interactive systems. Race conditions are among the least
tested.
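The A-before-B scenario can be sketched deterministically. The event names and the process function are illustrative assumptions: the handler is coded expecting A first, and the rare B-first ordering is exactly the case a guard (and a test) must cover.

```python
# Sketch: a handler written on the assumption that event A always
# arrives before event B, because B depends on state set up by A.

def process(events):
    """Consume an event sequence; raise if B arrives before A."""
    state = None
    for e in events:
        if e == "A":
            state = "ready"
        elif e == "B":
            if state != "ready":            # the race-condition guard
                raise RuntimeError("B arrived before A")
            state = "done"
    return state

print(process(["A", "B"]))    # the common ordering works fine
try:
    process(["B", "A"])       # the rare ordering exposes the race
except RuntimeError as e:
    print("caught:", e)
```

Without the guard, the B-first ordering would corrupt state silently; a test suite that never varies the event timing would never exercise it, which is why race conditions remain among the least tested behaviors.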
There are too Many Possible Paths through the Program to Test
A program path can be traced through the code from the start of a program to its
termination. Two paths differ if the program executes different statements in each, or
executes the same statements but in different order. A testing person thinks that if all the
possible paths of control flow through the program are executed, then possibly the
program can be said to be completely tested. However, there are two flaws in this
statement.
(i) The number of unique logic paths through a program is too large. This was
demonstrated by Myers [2] with an example shown in Fig. 1.9. It depicts a 10 to 20
statement program consisting of a DO loop that iterates up to 20 times. Within
the body of the DO loop is a set of nested IF statements. The number of all the
paths from point A to B is approximately 10^14. Thus, all these paths cannot be
tested, as it may take years of time.
See another example for the code fragment shown in Fig. 1.10 and its corresponding
flow graph in Fig. 1.11 (we will learn how to convert a program into a flow graph in
Chapter 5).
Now calculate the number of paths in this fragment. For calculating the number of paths,
we must know how many paths are possible in one iteration. Here in our example, there
are two paths in one iteration. Now the total number of paths will be 2^n + 1, where n is
the number of times the loop will be carried out, and 1 is added, as the for loop will exit
after its looping ends and it terminates. Thus, if n is 20, then the number of paths will be
2^20 + 1, i.e. 1,048,577. Therefore, all these paths cannot be tested, as it may take years.
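The path count in the argument above can be checked with a small sketch. The function name is illustrative; it simply evaluates the formula used in the text: with a fixed number of alternative paths inside each iteration and a loop executed n times, the number of distinct paths is (paths per iteration)^n + 1.

```python
def path_count(choices_per_iteration, n):
    """Number of distinct control-flow paths for a loop executed n times,
    with a fixed number of alternative paths inside each iteration, plus
    one extra path for the loop's final exit (as counted in the text)."""
    return choices_per_iteration ** n + 1

# For 2 paths per iteration and a loop carried out 20 times, as in the
# fragment above:
print(path_count(2, 20))  # 1048577
```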
(ii) Complete path testing, even if performed somehow, does not guarantee that there
will be no errors. For example, it does not ensure that a program matches its
specification. If one were asked to write an ascending-order sorting program but
the developer mistakenly produces a descending-order program, then exhaustive
path testing will be of little value. In another case, a program may be incorrect
because of missing paths; exhaustive path testing would not detect a missing path.
Every Design Error Cannot be Found
Manna and Waldinger [15] have mentioned the following fact: ‘We can never be sure
that the specifications are correct.’ How do we know that the specifications are
achievable? Its consistency and completeness must be proved, and in general, that is a
provably unsolvable problem [9]. Therefore, specification errors are one of the major
reasons that make the design of the software faulty. If the user requirement is to have
measurement units in inches and the specification says that these are in meters, then the
design will also be in meters. Secondly, many user interface failures are also design
errors.
The study of these limitations of testing shows that the domain of testing is infinite and
testing the whole domain is simply impractical. The moment we leave out even a single
test case, the concept of complete testing is abandoned. But this does not mean that we
should not focus on testing. Rather, we should shift our attention from exhaustive testing
to effective testing. Effective testing provides the flexibility to select only subsets of the
testing domain, based on project priority, such that the chances of failure in a particular
environment are minimized.
Software Testing Terminology and Methodology
Since software testing, or rather effective software testing, is an emerging field, many
related terms are either undefined or yet to be known. We tend to use the terms ‘error’,
‘bug’, ‘defect’, ‘fault’, ‘failure’, etc. interchangeably, but they do not mean the same
thing.
Bugs, in general, are the focus during software testing. Not all bugs have the same level
of severity. Some bugs are less important, as they are not going to affect the product
badly. On the other hand, bugs with a high severity level will affect the product to the
extent that it may not be released. Therefore, bugs need to be classified according to
their severity. A bug has a complete life cycle from its initiation to its death. Moreover,
bugs also affect the project cost at various levels of development. If there is a bug at the
specification level and it is caught there, then it is more economical to debug it than if
it is found at the implementation stage. If the bug propagates further into later stages of
the software development life cycle (SDLC), it becomes more costly to debug. As we
have discussed in the previous chapter, software testing today is recognized
as a complete process, and we need to identify various stages of software testing life
cycle. These stages, like SDLC stages, define various tasks to be done during the testing
of software in a hierarchy. Besides this, what should be our complete methodology to test
the software? How should we go for testing a complete project? What will be the
techniques used during testing? Should we use testing tools during testing? This chapter
defines all the terms related to software testing, some concepts related to these terms, and
then discusses the development of a testing methodology.
2.1 SOFTWARE TESTING TERMINOLOGY
2.1.1 DEFINITIONS
As mentioned earlier, terms like error, bug, failure, and defect are not synonymous and
hence should not be used interchangeably. All these terms are defined below.
Failure
When software is tested, failure is the first term to be used. It means the inability of a
system or component to perform a required function according to its specification. In
other words, a failure exists when the results or behaviour of the system under test
differ from the specified expectations.
Fault/Defect/Bug
Failure is the term used to describe a problem in a system on the output side, as shown
in Fig. 2.1. A fault is the condition that actually causes the system to produce a failure.
Fault is synonymous with the words defect and bug. Therefore, a fault is the reason,
embedded in any phase of the SDLC, that results in failures. It can be said that failures
are the manifestation of bugs. One failure may be due to one or more bugs, and one bug
may cause one or more failures. Thus, when a bug is executed, failures are generated.
But this is not always true. Some bugs are hidden in the sense that they are never
executed, as they do not get the required conditions in the system. So hidden bugs may
not always produce failures; they may execute only in certain rare conditions.
Error
Whenever a development team member makes a mistake in any phase of the SDLC,
errors are produced. It might be a typographical error, a misreading of a specification,
a misunderstanding of what a subroutine does, and so on. Error is a very general term
used for human mistakes. Thus, an error causes a bug, and the bug in turn causes
failures, as shown in Fig.
Suppose the module shown above is expected to print the value of x, which is critical
for the use of the software. But when this module is executed, the value of x is not
printed. This is a failure of the program. When we look for the reason of the failure, we
find that in Module A(), the while loop is not being executed; a condition is preventing
the body of the while loop from being executed. This is known as the bug/defect/fault.
On closer observation, we find a semicolon misplaced after the while loop, which is not
correct syntax and which does not allow the loop to execute. This mistake is known as
an error.
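The chain of error, fault, and failure described above can be sketched in Python with a hypothetical analogue of Module A(): here the developer's typo is a wrong comparison operator rather than a misplaced semicolon, but the chain is the same. The typo is the error, the never-executed loop body is the fault, and the missing value of x is the observable failure. The function names and condition are illustrative, not from the text.

```python
def module_a_buggy(n):
    """Faulty version: the developer typed '>' instead of '<' (the error).
    The condition is never true for i = 0, so the loop body never runs
    (the fault), and x is never computed (the observable failure)."""
    i, x = 0, None
    while i > n:          # error: should be i < n
        x = i * 2
        i += 1
    return x              # None signals that x was never produced

def module_a_fixed(n):
    """Corrected version after debugging the fault."""
    i, x = 0, None
    while i < n:
        x = i * 2
        i += 1
    return x
```

Running `module_a_buggy(5)` returns `None` (the failure seen by the tester), while `module_a_fixed(5)` returns the expected value.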
Test case
A test case is a well-documented procedure designed to test the functionality of a
feature in the system. A test case has an identity and is associated with a program
behaviour. The primary purpose of designing a test case is to find errors in the system.
A test case must specify a set of inputs and the corresponding expected outputs. A
sample test case is shown in Fig.
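The structure of a test case described above (an identity, a set of inputs, and expected outputs) can be sketched as a small record. The field names and the example function are hypothetical, not part of any standard test case format.

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    """A well-documented test case: an identity, the inputs supplied,
    and the expected output against which the actual output is compared."""
    case_id: str
    description: str
    inputs: tuple
    expected: object

    def run(self, function):
        """Execute the function under test and report pass/fail."""
        actual = function(*self.inputs)
        return actual == self.expected

# Example: a test case for the built-in absolute-value function.
tc = TestCase("TC-01", "abs of a negative number", (-3,), 3)
print(tc.run(abs))  # True
```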
Test Execution
In this phase, all test cases are executed including verification and validation.
Verification test cases are started at the end of each phase of SDLC. Validation test cases
are started after the completion of a module. It is the decision of the test team to opt for
automation or manual execution. Test results are documented in the test incident reports,
test logs, testing status, and test summary reports, as shown in Fig. 2.10.
Responsibilities at various levels for execution of the test cases are outlined in Table 2.1.
Table 2.1 Testing level vs responsibility
Unit: Developer of the module
Integration: Testers and developers
System: Testers, developers, end-users
Acceptance: Testers, end-users
Post-Execution/Test Review
As we know, after successful test execution, bugs are reported to the concerned
developers. This phase is meant to analyse bug-related issues and gather feedback so
that the maximum number of bugs can be removed. This is the primary goal of all the
test activities done earlier. As soon as the developer gets the bug report, he performs
the following activities.
Understanding the bug
The developer analyses the bug reported and builds an understanding of its whereabouts.
Reproducing the bug
Next, he confirms the bug by reproducing it and the failure that exists. This is
necessary to cross-check the failure. However, some bugs are not reproducible, which
increases the problems of developers.
Analysing the nature and cause of the bug
After examining the failures of the bug, the developer starts debugging its symptoms and
tracks back to the actual location of the error in the design. The process of debugging has
been discussed in detail in Chapter 17.
After fixing the bug, the developer reports to the testing team and the modified portion
of the software is tested once again.
After this the results from manual and automated testing can be collected.
The final bug report and associated metrics are reviewed and analysed for overall testing
process. The following activities can be done:
Reliability analysis can be performed to establish whether the software meets the
predefined reliability goals or not. If so, the product can be released and the
decision on a release date can be taken. If not, then the time and resources
required to reach the reliability goals are outlined.
Coverage analysis can be used as an alternative criterion to stop testing.
Overall defect analysis can identify risk areas and help focus our efforts on
quality improvement.
2.3 SOFTWARE TESTING METHODOLOGY
Software testing methodology is the organization of software testing by means of which the test
strategy and test tactics are achieved, as shown in Fig. 2.11. All the terms related to software
testing methodology and a complete testing strategy are discussed in this section.
2.3.1 SOFTWARE TESTING STRATEGY
Testing strategy is the planning of the whole testing process into a well-planned series of
steps. In other words, strategy provides a roadmap that includes very specific activities
that must be performed by the test team in order to achieve a specific goal.
In Chapter 1, risk reduction was described as one of the goals of testing. In fact, when a
test strategy is being developed, risk reduction is addressed first. The components of a
testing strategy are discussed below:
Test Factors
Test factors are risk factors or issues related to the system under development. Risk
factors need to be selected and ranked according to a specific system under development.
The testing process should reduce these test factors to a prescribed level.
Test Phase This is another component on which the testing strategy is based. It refers to
the phases of SDLC where testing will be performed. Testing strategy may be different
for different models of SDLC, e.g. strategies will be different for waterfall and spiral
models.
2.3.2 TEST STRATEGY MATRIX
A test strategy matrix identifies the concerns that will become the focus of test planning
and execution. In this way, this matrix becomes an input to develop the testing strategy.
The matrix is prepared using test factors and test phases (Table 2.2). The steps to prepare
this matrix are discussed below.
Select and rank test factors
Based on the test factors list, the most appropriate factors for the specific system are
selected and ranked from the most significant to the least. These form the rows of the
matrix.
Identify system development phases
Different phases according to the adopted development model are listed as the columns
of the matrix. These are called test phases.
Identify risks associated with the system under development
In each cell under a test phase, the test concern, along with the strategy used to address
it, is entered. The purpose is to identify the concerns that need to be addressed under a
test phase. The risks may include any events, actions, or
circumstances that may prevent the test program from being implemented or executed
according to a schedule, such as late budget approvals, delayed arrival of test equipment,
or late availability of the software application. As risks are identified, they must be
assessed for impact and then mitigated with strategies for overcoming them, because
risks may be realized despite all precautions having been taken. The test team must
carefully examine risks in order to derive effective test and mitigation strategies for a
particular application. Concerns should be expressed as questions so that the test strategy
becomes a high-level focus for testers when they reach the phase where it’s most
appropriate to address a concern.
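The matrix preparation steps above (ranked test factors as rows, test phases as columns, concerns expressed as questions in the cells) can be sketched as a simple data structure. The factor and phase names used here are hypothetical examples, not taken from Table 2.2.

```python
def build_strategy_matrix(factors, phases):
    """Rows are ranked test factors, columns are test phases; each cell
    holds the list of concerns (expressed as questions) to be addressed
    for that factor in that phase."""
    return {factor: {phase: [] for phase in phases} for factor in factors}

# Hypothetical ranked factors and SDLC-based test phases for illustration.
matrix = build_strategy_matrix(
    ["reliability", "portability"],
    ["requirements", "design", "coding"],
)

# Enter a concern (as a question) in the cell for one factor and phase.
matrix["reliability"]["design"].append(
    "Does the design meet the stated reliability goals?"
)
print(len(matrix["reliability"]["design"]))  # 1
```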
Creating a test strategy
Let us take a project as an example. Suppose a new operating system has to be
designed, for which a test strategy is needed. The following steps are used (Table 2.3).
The V&V process, in a nutshell, involves (i) verification of every step of SDLC (all
verification activities are explained in the next chapter) and (ii) validation of the verified
system at the end.
2.3.5 VALIDATION ACTIVITIES
Validation has the following three activities, which are also known as the three levels
of validation testing.
Unit Testing
It is a major validation effort performed on the smallest module of the system. If unit
testing is avoided, many bugs become latent and are released to the customer. Unit
testing is a basic level of testing which cannot be overlooked; it confirms the behaviour
of a single module according to its functional specifications.
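A unit test of the kind described above can be sketched with Python's unittest module. The module under test (a hypothetical pass/fail grading function with a pass mark of 40) is invented for illustration; the point is that the tests confirm a single small module against its functional specification.

```python
import unittest

def grade(score):
    """Hypothetical smallest module under test: map a score to pass/fail,
    with 40 as the (assumed) pass mark."""
    return "pass" if score >= 40 else "fail"

class TestGrade(unittest.TestCase):
    """Unit tests confirming the module's behaviour against its
    functional specification (pass at 40 and above, fail below)."""
    def test_boundary_pass(self):
        self.assertEqual(grade(40), "pass")

    def test_below_boundary(self):
        self.assertEqual(grade(39), "fail")

# Run the unit tests programmatically.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestGrade)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```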
Integration Testing
It is a validation technique which combines all unit-tested modules and performs a test
on their aggregation. One may ask: when we have tested all modules in unit testing,
where is the need to test them in aggregation? The answer is interfacing. Unit modules
are not independent; they are related to each other by the interface specifications
between them. When we unit test a module, its interfacing with other modules remains
untested. When one module is combined with another in an integrated environment,
the interfacing between the units must be tested. If data structures, messages, or other
items are shared between modules, then the standard format of these interfaces must be
checked during integration testing, otherwise the modules will not be able to interface
with each other.
But how do we integrate the units together? Is it a random process? It is actually a
systematic technique for combining modules. In fact, interfacing among modules is
represented by the system design. We integrate the units according to the design and
availability of units. Therefore, the tester must be aware of the system design.
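The interface checking described above can be sketched with two hypothetical unit-tested modules: each works correctly in isolation, but integration testing must confirm that the data format one produces matches what the other expects. Both function names and the record format are invented for illustration.

```python
def parse_record(line):
    """Unit-tested module A: turn a comma-separated 'name, qty' line
    into a dict -- the agreed interface format."""
    name, qty = line.split(",")
    return {"name": name.strip(), "qty": int(qty)}

def total_quantity(records):
    """Unit-tested module B: sum the 'qty' field across records. It works
    only if module A's output follows the agreed interface (a dict
    containing an integer 'qty')."""
    return sum(r["qty"] for r in records)

# Integration test: feed module A's actual output into module B and
# check the interface end to end, not just each module alone.
records = [parse_record("bolt, 3"), parse_record("nut, 5")]
print(total_quantity(records))  # 8
```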
System Testing
This testing level focuses on testing the entire integrated system. It incorporates many
types of testing, as the full system can have various users in different environments. The
purpose is to test the validity for specific users and environments. The validity of the
whole system is checked against the requirement specifications.
2.3.6 TESTING TACTICS
The ways to perform various types of testing under a specific test strategy are
discussed below.
Software Testing Techniques
In the previous sections, it has been observed that complete or exhaustive testing is not
possible. Instead, our effort should go into effective testing. However, effective testing
is a real challenge in the domain of testing. At this stage, testing can be defined as the
design of effective test cases such that most of the testing domain is covered and the
maximum number of bugs are detected. As Myers [2] said:
Given constraints on time, cost, computer time, etc., the key issue of testing becomes —
What subset of all possible test cases has the highest probability of detecting the most
errors?
Therefore, the next objective is to find a technique that meets both objectives of
effective test case design, i.e. coverage of the testing domain and detection of the
maximum number of bugs. The technique used to design effective test cases is called a
software testing technique.
Till now, we have discussed the overall strategy for testing methodology. V&V and the
levels of testing under this process describe only the organization and planning of
software testing. Actual methods for designing test cases, i.e. software testing techniques,
implement the test cases on the software. These techniques can be categorized into two
parts: (a) static testing and (b) dynamic testing.
Static Testing
It is a technique for assessing the structural characteristics of source code, design
specifications, or any notational representation that conforms to well-defined syntactic
rules [16]. It is called static because the code is never executed in this technique. For
example, the structure of the code is examined by teams, but the code is not executed.
Dynamic Testing
All the methods that execute the code to test software are known as dynamic testing
techniques. In this technique, the code is run on a number of inputs provided by the user
and the corresponding results are checked.
This type of testing is further divided into two parts: (a) black-box testing and (b) white-
box testing.
Black-box testing
This technique considers only the inputs given to a system and the outputs received
after processing by the system. What is being processed inside the system? How does
the system perform its operations? Black-box testing is not concerned with these
questions; it checks only the functionality of the system. That is why the term
black-box is used. It is also known as functional testing and is used for system testing
under validation.
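A black-box test, as described above, exercises a unit only through its inputs and outputs. In this sketch, the system under test is a hypothetical leap-year function; the test cases are derived purely from the specification (the calendar divisibility rules), and the tester never reads the function body.

```python
def is_leap_year(year):
    """Hypothetical system under test; a black-box tester does not
    look at this implementation."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Black-box test cases: chosen from the specification only (divisible
# by 4, except centuries unless divisible by 400), not from the code.
spec_cases = {2000: True, 1900: False, 2024: True, 2023: False}
results = {y: is_leap_year(y) == expected for y, expected in spec_cases.items()}
print(all(results.values()))  # True
```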
White-box testing
This technique complements black-box testing. Here, the system is not a black box.
Every design feature and its corresponding code is checked logically with every possible
path execution. So, it takes care of the structural paths instead of just outputs. It is also
known as structural testing and is used for unit testing under verification.
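White-box testing, by contrast, derives its test cases from the code's structure. A sketch, using a hypothetical function with three structural paths: the tester reads the if/elif/else structure and chooses one input per path so that every branch is executed at least once.

```python
def classify(x):
    """Hypothetical unit with three structural paths through its code."""
    if x < 0:
        return "negative"
    elif x == 0:
        return "zero"
    return "positive"

# White-box tests: one input per structural path, chosen by inspecting
# the branches, so every path through the code is executed.
path_inputs = [-1, 0, 1]
print([classify(x) for x in path_inputs])  # ['negative', 'zero', 'positive']
```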
Testing Tools
Testing tools provide the option to automate a selected testing technique. A tool is a
resource for performing a test process. The combination of tools and testing techniques
enables the test process to be performed. The tester should first understand the testing
techniques and then choose the tools that can be used with each of them.
2.3.7 CONSIDERATIONS IN DEVELOPING TESTING METHODOLOGIES
The considerations in developing a testing methodology are described below [4].
Determine Project Risks
A test strategy is developed with the help of another team familiar with the business risks
associated with the software. The major issue in determining the strategy is to identify the
risks associated with the project. What are the high priority risks? What are the
consequences, if risks are not handled?
Determine the Type of Development Project
The environment or methodology used to develop the software also affects the testing
risks. The risks associated with a new development project will be different from those
of a maintenance project or purchased software.
Identify Test Activities According to SDLC Phase
After identifying all the risk factors in an SDLC phase, testing can be started in that
phase. However, all the testing activities must be recognized at all the SDLC phases.
Build the Test Plan
A tactical test plan is required to start testing. This test plan provides:
Environment and pre-test background
Overall objectives
Test team
Schedule and budget showing detailed dates for the testing
Resource requirements including equipment, software, and personnel
Testing materials including system documentation, software to be tested, test
inputs, test documentation, test tools
Specified business functional requirements and structural functions to be tested
Type of testing technique to be adopted in a particular SDLC phase, or the specific
tests to be conducted in that phase
Since the internal design of software may be composed of many components, each of
these components or units must also have its own test plan. The idea is that units
should not be submitted without testing, so extra effort must be put into preparing
unit test plans as well.