Module - 1
Today, testing is seen as the best way to ensure the quality of any product.
The need for testing is increasing, as businesses face pressure to develop sophisticated
applications in shorter timeframes.
Software testing is an integral part of the software development life cycle: it
identifies the defects, flaws, and errors in the application. It is incremental and
iterative in nature.
The goal of testing, as described by Miller, is: “The general aim of testing is
to affirm the quality of software systems by systematically exercising the software in
carefully controlled circumstances”.
The objectives of software testing:
In software testing, a bug means not only an error but anything that affects
the quality of the software program.
Software bugs go by different names, such as defect, fault, problem, error,
incident, anomaly, failure, variance, inconsistency, and so on.
The following conditions result in a bug in software:
• The software does not respond or act in the way stipulated in the product
specification.
• The software behaves in a way that the product specification says it should not.
• The software responds or reacts in a way that is not mentioned in the product
specification.
• The software does not behave in the mandatory way a user expects, even though
this behavior may not be mentioned in the product specification.
Sometimes a fault occurs in software due to factors such as lack of resources or
failure to follow proper steps.
Correctness refers to the correct operation of an application and is always with reference to
some artifact. For a tester, correctness is with respect to the requirements; for a user, it is
often with respect to a user manual.
Completeness refers to the availability of all features listed in the requirements, or in the
user manual. Incomplete software is one that does not fully implement all features required.
Usability refers to the ease with which an application can be used. This is an area in itself
and there exist techniques for usability testing.
Performance refers to the time the application takes to perform a requested task.
Performance is considered as a non-functional requirement. It is specified in terms such as
“This task must be performed at the rate of X units of activity in one second on a machine
running at speed Y, having Z gigabytes of memory.”
Requirements, Behavior, and Correctness
Example 1.3: Here are two requirements, each of which leads to a different program.
Requirement 1: It is required to write a program that inputs two integers and outputs the
maximum of these.
Requirement 2: It is required to write a program that inputs a sequence of integers
and outputs the sorted version of this sequence.
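Requirement 1 can be sketched as follows, under the assumption that “maximum” means the larger of the two input values (the function name is a hypothetical choice):

```python
def max_of_two(x: int, y: int) -> int:
    """Return the maximum of two integers (Requirement 1)."""
    return x if x >= y else y

print(max_of_two(3, 7))    # 7
print(max_of_two(-2, -5))  # -2
```

Note that even this one-line requirement leaves room for interpretation, for example what to output when the two integers are equal.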
Testers are often faced with incomplete and/or ambiguous requirements. In such situations, a
tester may resort to a variety of ways to determine what behavior to expect from the program
under test.
a) Input domain:
A program is considered correct if it behaves as desired on all possible test inputs. Usually, the
set of all possible inputs is too large for the program to be executed on each input.
For example, suppose that the max program above is to be tested on a computer in which the
integers range from −32,768 to 32,767. To test max on all possible integers would require it to
be executed on all pairs of integers in this range.
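As a rough sketch of why exhaustive testing is infeasible even here, the number of input pairs for the max program on 16-bit integers can be counted directly:

```python
# Input domain of the max program on 16-bit integers.
lo, hi = -32768, 32767
n = hi - lo + 1   # 65,536 distinct integer values
pairs = n * n     # every ordered pair of two inputs
print(pairs)      # 4,294,967,296 executions for exhaustive testing
```

Over four billion executions for a two-line program; for realistic input domains the count grows far beyond what can be run.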
b) Specifying program behavior:
There are several ways to define and specify program behavior. The simplest way is
to specify the behavior in a natural language such as English.
However, this is more likely subject to multiple interpretations than a more formally
specified behavior.
Here, we explain how the notion of program “state” can be used to define program
behavior and how the “state transition diagram,” or simply “state diagram,” can be
used to specify program behavior.
c) Valid and invalid inputs
The input domains are derived from the requirements. However, due to the incompleteness
of requirements, one might have to think a bit harder to determine the input domain.
Identifying the set of invalid inputs and testing the program against these inputs is an
important part of the testing activity. Even when the requirements fail to specify the
program behavior on invalid inputs, the programmer does treat these in one way or another.
Testing a program against invalid inputs might reveal errors in the program.
Example: Requirement: It is required to write a program that inputs a sequence of integers and
outputs the sorted version of this sequence.
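As a sketch of how a programmer might treat invalid inputs that the requirement leaves unspecified, the following assumes non-integer items are invalid and rejects them explicitly (the function name and the chosen error behavior are assumptions, not part of the requirement):

```python
def sort_integers(seq):
    """Sort a sequence of integers.

    The requirement says nothing about invalid inputs; this sketch
    chooses to reject non-integer items with an explicit error.
    """
    if not all(isinstance(x, int) for x in seq):
        raise ValueError("invalid input: all items must be integers")
    return sorted(seq)

print(sort_integers([3, 1, 2]))  # [1, 2, 3]
```

A tester would then include invalid inputs such as [1, "a"] in the test set and check that the program signals the error rather than failing silently.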
Correctness versus Reliability:
To establish correctness via testing would imply testing a program on all elements in
the input domain. In most cases that are encountered in practice, this is impossible to
accomplish.
Thus, correctness is established via mathematical proofs of programs.
While correctness attempts to establish that the program is error free, testing attempts
to find if there are any errors in it.
Thus, completeness of testing does not necessarily demonstrate that a program is error
free.
Software reliability [ANSI/IEEE Std 729-1983] is the probability of failure-free
operation of software over a given time interval and under given conditions.
Alternatively, software reliability is the probability of failure-free operation of
software in its intended environment.
Testing and Debugging
When testing reveals an error, the process used to determine the cause of this
error and to remove it is known as debugging.
As illustrated in figure below, testing and debugging are often used as two related
activities in a cyclic manner.
Figure: A test and debug cycle
Testing and debugging are two distinct though intertwined activities. Testing generally
leads to debugging though both activities might not be always performed by the same
individual.
a) Preparing a test plan
The sample test plan is often augmented by items such as the method used for testing,
method for evaluating the adequacy of test cases, and method to determine if a program
has failed or not.
b) Constructing test data:
A test case is a pair of input data and the corresponding expected program output.
The test data are a set of values: one for each input variable.
A test set is a collection of test cases. A test set is sometimes referred to as a test suite.
Program requirements and the test plan help in the construction of test data.
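These notions can be sketched as a small test set for a hypothetical max program (the program, the test data, and the expected outputs are all illustrative choices):

```python
def max_of_two(x, y):  # hypothetical program under test
    return x if x >= y else y

# A test case is (test data, expected output); a test set collects test cases.
test_set = [
    ((3, 5), 5),
    ((-1, -7), -1),
    ((0, 0), 0),
]

for data, expected in test_set:
    assert max_of_two(*data) == expected
print("all test cases passed")
```

The program requirements guide the choice of test data; the test plan states how many such test cases are needed and how their adequacy is judged.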
c) Executing the program
Execution of a program under test is the next significant step in testing. Execution of this
step for the sort program is most likely a trivial exercise. However, this may not be so for
large and complex programs.
d) Assessing behavior correctness:
An important step in testing a program is the one wherein the tester determines
whether the observed behavior of the program under test is correct.
This step can be further divided into two smaller steps. In the first step one observes the
behavior and in the second step analyses the observed behavior to check if it is correct
or not.
e) Constructing an oracle:
An oracle is a mechanism, separate from the program under test, that determines the
expected correct behavior for a given input; the observed behavior is compared
against the oracle's verdict.
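A minimal sketch of an oracle, assuming a trusted reference implementation is available; here Python's built-in sorted serves as the oracle for a hypothetical sort program (the names my_sort and oracle are illustrative):

```python
def my_sort(seq):
    """Hypothetical program under test: insertion sort."""
    result = list(seq)
    for i in range(1, len(result)):
        j = i
        while j > 0 and result[j - 1] > result[j]:
            result[j - 1], result[j] = result[j], result[j - 1]
            j -= 1
    return result

def oracle(test_input, observed):
    """Decide whether the observed output is correct for this input,
    using the trusted built-in sorted() as the reference."""
    return observed == sorted(test_input)

test_input = [4, 1, 3, 2]
print(oracle(test_input, my_sort(test_input)))  # True
```

In practice a trusted reference is often unavailable, and constructing an oracle is one of the hard problems of testing.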
Software testing metrics are quantifiable indicators of the software testing process
progress, quality, productivity, and overall health.
The purpose of software testing metrics is to increase the efficiency and effectiveness
of the software testing process.
Test metrics help to determine what types of enhancements are required in order to
create a defect-free, high-quality software product.
A variety of metrics exist in software testing. The figure below shows a
classification of the various types of metrics briefly discussed in this section.
Metrics can be computed at the organizational, process, project, and product levels.
Each set of measurements has its value in monitoring, planning, and control.
Organizational metrics: Metrics at the level of an organization are useful in overall project
planning and management. Some of these metrics are obtained by aggregating
compatible metrics across multiple projects.
Process Metrics: Process metrics characterize a project's process and its execution.
These characteristics are critical to the improvement and maintenance of the Software
Development Life Cycle (SDLC). The goal of a process metric is to assess the
“goodness” of the process.
Project Metrics: Project Metrics are used to assess a project’s overall quality. It is used to
estimate a project’s resources and deliverables, as well as to determine costs, productivity,
and flaws.
• Project metrics relate to a specific project, for example, the I/O device testing project or a
compiler project. These are useful in the monitoring and control of a specific project.
Product Metrics: A product’s size, design, performance, quality, and complexity are
defined by product metrics. Developers can improve the quality of their software
development by utilizing these features.
The table below lists a sample of product metrics for object-oriented and other applications.
Figure: A sample plot of cumulative count of defects found over seven consecutive months in a software project
Software and Hardware Testing:
There are several similarities and differences between techniques used for testing software
and hardware.
Ex: To test whether there is a memory leak in an application, one performs a
combination of stress testing and code inspection.
• A variety of faults could lead to memory leaks
Higher-level differences between software testing and hardware testing:
• Software: faults present in the application remain, and no new faults creep in
unless the application is changed. Hardware: a VLSI chip might fail over time due
to a fault that did not exist at the time the chip was manufactured and tested.
• Software: built-in self-test, meant for hardware products, can rarely be applied
to software designs and code. Hardware: BIST (Built-In Self-Test) is intended to
test for the correct functioning of a circuit.
• Software: testing only detects faults that were present when the last change was
made. Hardware: hardware testers generate tests based on fault models.
Testing and Verification
Software testing is a process of examining the functionality and behavior of the software
through verification and validation.
Verification is a process of determining if the software is designed and developed as per the
specified requirements.
Validation is a process of checking whether the developed software meets the actual
needs and expectations of users. Together, verification and validation ensure that the
product meets the quality standards and expectations of all stakeholders involved.
Defect Management
Defect management is an integral part of a development and test process in many software
development organizations. It is a sub process of the development process. It entails the
following: defect prevention, discovery, recording and reporting, classification, resolution,
and prediction.
Defect prevention is achieved through a variety of processes and tools. For example, good
coding techniques, unit test plans, and code inspections are all important elements of any
defect prevention process.
Defects found are classified and recorded in a database. Classification becomes important
in dealing with the defects. For example, defects classified as “high severity” will likely
be attended to by the developers before those classified as “low severity.”
Defect classification assists an organization in measuring statistics such as the types of
defects, their frequency, and their location in the development phase and document.
Several tools exist for recording defects, and for computing and reporting defect-related
statistics. Bugzilla (open source) and FogBugz (commercially available) are two such tools.
They provide several features for defect management including defect recording,
classification, and tracking. Several tools that compute complexity metrics also predict
defects using code complexity.
Execution History
Execution history of a program, also known as execution trace, is an organized collection of
information about various elements of a program during a given execution.
For object-oriented languages such as Java, an execution history may include the
sequence of objects and the corresponding methods accessed.
A complete execution history is recorded from the start of a program's execution until
its termination; it is also possible to obtain a partial execution history covering only
some program elements.
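A minimal sketch of how an execution trace can be recorded, using Python's sys.settrace on a hypothetical gcd function; the trace collects a (function name, line number) pair for each line executed:

```python
import sys

trace = []  # ordered record of (function name, line number) events

def tracer(frame, event, arg):
    # Record every executed line of the traced frames.
    if event == "line":
        trace.append((frame.f_code.co_name, frame.f_lineno))
    return tracer

def gcd(a, b):
    while b:
        a, b = b, a % b
    return a

sys.settrace(tracer)  # start recording the execution history
result = gcd(12, 8)
sys.settrace(None)    # stop recording

print(result)         # 4
print(trace)          # the lines of gcd executed, in order
```

Real tracing tools record richer histories (values of variables, objects created, branches taken), but the principle is the same: instrument execution and collect an ordered log.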
Test Generation Strategies
One of the key tasks in any software test activity is the generation of test cases. The
program under test is executed against the test cases to determine whether or not it
conforms to the requirements. How to generate test cases?
Most test generation strategies use requirements as a base, directly or indirectly, to
generate tests.
Another set of strategies falls under the category model based test generation. These
strategies require that a subset of the requirements be modeled using a formal notation.
Languages based on predicate logic as well as algebraic languages are also used to
express subsets of requirements in a formal manner.
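A minimal sketch of model-based test generation, assuming the requirements for a hypothetical toggle switch have been modeled as a two-state finite-state machine; one test case is derived per modeled transition:

```python
# Model: transitions of a hypothetical toggle switch.
# Key: (current state, input event) -> next state.
transitions = {
    ("OFF", "press"): "ON",
    ("ON", "press"): "OFF",
}

def generate_tests(transitions):
    """Derive one test case per modeled transition:
    (start state, input event, expected next state)."""
    return [(state, event, nxt) for (state, event), nxt in transitions.items()]

for test in generate_tests(transitions):
    print(test)
```

Practical model-based tools derive longer input sequences that cover transition pairs or full paths, but transition coverage is the simplest criterion.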
There also exist techniques to generate tests directly from the code. Such techniques
fall under code-based test generation.
Static Testing:
Static testing is a software testing method performed to check for defects in a
software application without actually executing its code, whereas in dynamic testing
the code is executed to detect defects.
Errors that cannot be found using dynamic testing can often be found by static
testing.
Static testing is best carried out by an individual who did not write the code, or by a
team of individuals. A sample process of static testing is illustrated in figure.
There are mainly two types of techniques used in Static Testing:
1. Review: In static testing, a review is a process or technique performed to find potential
defects in the design of the software. It is a process to detect and remove errors and defects
from supporting documents such as software requirements specifications.
• Informal: In an informal review, the creator of the documents puts the contents in front of
an audience, and everyone gives their opinion; thus defects are identified at an early stage.
• Peer review: Peer review means checking one another's documents to detect and fix defects.
It is basically done in a team of colleagues.
• Inspection: Inspection is basically the verification of a document by a higher authority,
such as the verification of the software requirements specification (SRS).
2. Static Analysis: Static analysis includes the evaluation of the quality of the code written
by developers. Different tools are used to analyze the code and compare it with the standard.
It also helps identify the following defects:
(a) Unused variables
(b) Dead code
(c) Infinite loops
(d) Variables with undefined values
(e) Wrong syntax
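As a rough sketch of what a static-analysis tool does, the following uses Python's ast module to flag defect (a), unused variables, in a hypothetical code fragment without ever executing it (the fragment and the detection rule are illustrative; real tools handle scopes and many more defect classes):

```python
import ast

source = """
def f(x):
    unused = 10       # (a) an unused variable
    if False:
        print("dead") # (b) dead code (not detected by this sketch)
    return x * 2
"""

# Parse the source into a syntax tree; no code is executed.
tree = ast.parse(source)
assigned, used = set(), set()
for node in ast.walk(tree):
    if isinstance(node, ast.Name):
        if isinstance(node.ctx, ast.Store):
            assigned.add(node.id)   # name is written
        else:
            used.add(node.id)       # name is read

print("unused variables:", sorted(assigned - used))  # ['unused']
```

The same tree-walking idea underlies checks for dead code, undefined variables, and coding-standard violations.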
Static Analysis is of three types:
• Control Flow: how the statements or instructions of the program are executed.
• Data Flow: how the data variables are defined and used in the program.
• Cyclomatic Complexity: the number of linearly independent paths through the
program's control flow graph, a measure of program complexity.
Walkthroughs:
A walkthrough begins with a review plan agreed upon by all members of the team.
Each item of the document, for example a source code module, is reviewed with
clearly stated objectives in view.
A detailed report is generated that lists items of concern regarding the document
reviewed.
In requirements walkthrough, the test team must review the requirements document to
ensure that the requirements match user needs, and are free from ambiguities and
inconsistencies.
Inspections:
Code inspection is carried out by a team. The team works according to an inspection
plan that consists of the following elements:
(b) Work product to be inspected; this includes the code and associated documents
needed for inspection.
(e) Data collection forms where the team will record its findings such as defects
discovered, coding standard violations, and time spent in each task.
Members of the inspection team are assigned roles of moderator, reader, recorder, and
author. The moderator is in charge of the process and leads the review.
Members of inspection team
Reader: the actual code is read by the reader, perhaps with the help of a code browser
and with monitors so that everyone in the team can view the code.
Often a team must decide which of the several modules should be inspected
first.
A more complex module is likely to have more errors and must be accorded
higher priority for inspection than a module with lower complexity.
Static analysis tools often compute module complexity using one or more
complexity metrics. Such metrics can be used as a parameter in deciding
which modules to inspect first.
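A rough sketch of such a complexity computation, using a simplified cyclomatic complexity (decision points plus one) over Python source; real tools count more constructs and work on the control flow graph:

```python
import ast

def cyclomatic_complexity(source: str) -> int:
    """Approximate McCabe cyclomatic complexity: count branching
    constructs and add 1. Simplified: each if/for/while/boolean
    operator counts as one decision point."""
    decision_nodes = (ast.If, ast.For, ast.While, ast.BoolOp)
    tree = ast.parse(source)
    return sum(isinstance(node, decision_nodes) for node in ast.walk(tree)) + 1

simple = "def f(x):\n    return x + 1\n"
branchy = "def g(x):\n    if x > 0:\n        while x > 1:\n            x -= 1\n    return x\n"
print(cyclomatic_complexity(simple), cyclomatic_complexity(branchy))  # 1 3
```

A team could compute this metric for every module and schedule the highest-scoring modules for inspection first.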
Model-Based Testing and Model Checking:
The sensitivity principle says that we should try to make hard-to-detect faults easier
to detect by making them cause failures more often. It can be applied in three main ways:
1. at the design level, changing the way in which the program fails;
2. at the analysis and testing level, choosing a technique more reliable with respect to
Feedback - Feedback is another classic engineering principle that applies to analysis and
testing. Feedback applies both to the process itself (process improvement) and to
individual techniques (e.g., using test histories to prioritize regression testing).
Systematic inspection and walkthrough derive part of their success from feedback.
Participants in inspection are guided by checklists, and checklists are revised and refined
based on experience.