The document provides an overview of software testing, emphasizing its importance in ensuring software quality and identifying defects throughout the software development life cycle. It discusses various aspects of testing, including types of bugs, the relationship between errors and failures, software quality attributes, and the testing process itself, which includes planning, executing, and assessing program correctness. Additionally, it covers defect management and the significance of metrics in monitoring the testing process.

Module - 1

Basics of Software Testing


Introduction to Basics of Software Testing
 The success of any software product or application is greatly dependent on its
quality.

 Today, testing is seen as the best way to ensure the quality of any product.

 The need for testing is increasing, as businesses face pressure to develop sophisticated
applications in shorter timeframes.

 Testing is a method of investigation conducted to assess the quality of the software product or service. It involves checking the correctness of a product.

 According to IEEE 83a -

“Software testing is the process of exercising or evaluating a system or system component by manual or automated means to verify that it satisfies specified requirements, or to identify differences between expected and actual results.”

Software Testing

 Software testing is an integral part of the software development life cycle which
identifies the defects, flaws or the errors in the application. It is incremental and
iterative in nature.

 The goal of testing, as described by Miller, is: “The general aim of testing is to affirm the quality of software systems by systematically exercising the software in carefully controlled circumstances”.
The objectives of software testing:

 It ensures that the solution meets the business requirements, thereby enhancing customer confidence.

 It catches bugs, errors, and defects.

 It ensures that the system is stable and ready for use.

 It identifies areas of weakness in an application or product.

 It establishes the degree of quality.

 It determines user acceptability.


Bug

A bug, also known as a software bug, is an error in a software program that may produce an incorrect or undesired result, or prevent the program from working correctly.

In software testing, a bug means not only an error but anything that affects the quality of the software program.

Software bugs go by different names, such as defect, fault, problem, error, incident, anomaly, failure, variance, and inconsistency.
The following are conditions that result in a bug in software:

 The software does not respond or act in the way stipulated in the product specification.

 The software behaves in a way that contradicts the product specification.

 The software responds or reacts in a way that is not mentioned in the product specification.

 The software does not behave in the mandatory way expected by the user, even if this behavior is not mentioned in the product specification.

 The software is difficult to understand, has cumbersome steps to follow, or is slow in its reaction.
Types of Bugs:

 Bugs due to conceptual error: incorrect usage of syntax in the program, misspelled keywords, use of a wrong or improper design or concept.

 Math bugs: divide-by-zero errors, overflow or underflow, lack of precision in arithmetic values due to incorrect rounding or truncation of decimal values.

 Logical bugs: infinite loops, infinite recursion, applying wrong logic, incorrect usage of jump or break conditions.

 Resource bugs: stack or buffer overflow, access violations, use of variables that are not initialized.

 Team working bugs: out-of-date comments, mismatches between documentation and files, linking the program to incorrect files.
Humans, Errors, and Testing
 Errors are a part of our daily life. Humans make errors in their thoughts, in their actions, and in the products that result from their actions. Errors occur almost everywhere.
Errors, faults, and failures:
 An error is a mistake made by a developer, for example a coding mistake that occurs because the developer misunderstood a requirement or because the requirement was not defined correctly. Developers use the term error.

 A fault occurs in software due to factors such as lack of resources or not following proper steps; it is the manifestation of an error in the code.

 A failure occurs when a system or software program falls short of user expectations or intended requirements.
Errors, faults, and failures in the process of programming and testing
Software Quality
 Software Quality describes the desirable attributes of software products.
 There exist several measures of software quality. These can be divided into static and
dynamic quality attributes.
 Static quality attributes refer to the actual code and related documentation. Dynamic
quality attributes relate to the behavior of the application while in use.
 Static quality attributes include structured, maintainable, testable code as well as the
availability of correct and complete documentation. Dynamic quality attributes include
software reliability, correctness, completeness, consistency, usability, and performance.
 Reliability refers to the probability of failure-free operation.

 Correctness refers to the correct operation of an application and is always with reference to
some artifact. For a tester, correctness is with respect to the requirements; for a user, it is
often with respect to a user manual.

 Completeness refers to the availability of all features listed in the requirements, or in the
user manual. Incomplete software is one that does not fully implement all features required.

 Consistency refers to adherence to a common set of conventions and assumptions.

 Usability refers to the ease with which an application can be used. This is an area in itself
and there exist techniques for usability testing.

 Performance refers to the time the application takes to perform a requested task.
Performance is considered as a non-functional requirement. It is specified in terms such as
“This task must be performed at the rate of X units of activity in one second on a machine
running at speed Y, having Z gigabytes of memory.”
Requirements, Behavior, and Correctness

 Products are designed in response to requirements. Requirements specify the functions


that a product is expected to perform.
 Once the product is ready, it is the requirements that determine the expected behavior.
Of course, during the development of the product, the requirements might have
changed from what was stated originally. Regardless of any change, the expected
behavior of the product is determined by the tester’s understanding of the requirements
during testing.

Example 1.3 Here are two requirements, each of which leads to a different program.
Requirement 1: It is required to write a program that inputs two integers and outputs the
maximum of these.
Requirement 2: It is required to write a program that inputs a sequence of integers
and outputs the sorted version of this sequence.
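The two requirements above can be sketched as small programs. This is an illustrative sketch; the function names (`max_of_two`, `sort_sequence`) are not from the text.

```python
def max_of_two(x, y):
    """Requirement 1: return the maximum of two input integers."""
    return x if x >= y else y

def sort_sequence(seq):
    """Requirement 2: return the sorted version of an integer sequence."""
    return sorted(seq)

print(max_of_two(3, 5))        # 5
print(sort_sequence([3, 1, 2]))  # [1, 2, 3]
```

Note how each requirement fully determines the expected output for any given input, which is what makes testing against the requirement possible.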
 Testers are often faced with incomplete and/or ambiguous requirements. In such situations, a
tester may resort to a variety of ways to determine what behavior to expect from the program
under test.

a) Input domain:

 A program is considered correct if it behaves as desired on all possible test inputs. Usually, the
set of all possible inputs is too large for the program to be executed on each input.

 For example, suppose that the max program above is to be tested on a computer in which the
integers range from −32,768 to 32,767. To test max on all possible integers would require it to
be executed on all pairs of integers in this range.

 Testing a program on all possible inputs is known as exhaustive testing.

 A tester often needs to determine what constitutes “all possible inputs.”
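A quick calculation makes concrete why exhaustive testing of even the tiny max program is impractical on the 16-bit machine described above:

```python
# Count all input pairs for max when integers range from -32,768 to 32,767.
LOW, HIGH = -32768, 32767
values_per_input = HIGH - LOW + 1      # 65,536 distinct integers per input
total_pairs = values_per_input ** 2    # every (x, y) pair must be tried

print(total_pairs)  # 4,294,967,296 test cases for one two-input program
```

Over four billion executions for a one-line program is why testers sample the input domain rather than enumerate it.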


b) Specifying program behavior

 There are several ways to define and specify program behavior. The simplest way is
to specify the behavior in a natural language such as English.

 However, this is more likely subject to multiple interpretations than a more formally
specified behavior.

 Here, we explain how the notion of program “state” can be used to define program
behavior and how the “state transition diagram,” or simply “state diagram,” can be
used to specify program behavior.
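A state-transition specification can be sketched as a transition table; the states and events below are invented for illustration, not taken from the text:

```python
# A minimal sketch of behavior as states and transitions: the table defines
# the next state for each (state, event) pair the specification covers.
transitions = {
    ("IDLE", "start"): "RUNNING",
    ("RUNNING", "pause"): "PAUSED",
    ("PAUSED", "start"): "RUNNING",
    ("RUNNING", "stop"): "IDLE",
}

def next_state(state, event):
    # Assumption: an unspecified event leaves the state unchanged.
    return transitions.get((state, event), state)

state = "IDLE"
for event in ["start", "pause", "start", "stop"]:
    state = next_state(state, event)
print(state)  # IDLE: the diagram brings us back to the initial state
```

Specified this way, expected behavior for any input sequence can be computed mechanically, which is exactly what a tester needs.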
c) Valid and invalid inputs

 The input domains are derived from the requirements. However, due to the incompleteness
of requirements, one might have to think a bit harder to determine the input domain.

 Identifying the set of invalid inputs and testing the program against them is an
important part of the testing activity. Even when the requirements fail to specify the
program behavior on invalid inputs, the programmer does treat these in one way or another.
Testing a program against invalid inputs might reveal errors in the program.

Example: Requirement: It is required to write a program that inputs a sequence of integers and
outputs the sorted version of this sequence.
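The sort requirement above says nothing about non-integer inputs, so whatever the program does with them is an implementation decision; the sketch below assumes it raises an error, and testing reveals that choice:

```python
def sort_integers(seq):
    # Assumed invalid-input policy (not in the requirement): reject non-integers.
    if not all(isinstance(x, int) for x in seq):
        raise ValueError("input must contain only integers")
    return sorted(seq)

# Valid input: behavior is fixed by the requirement.
print(sort_integers([3, -1, 2]))  # [-1, 2, 3]

# Invalid input: the requirement is silent, but the program still does
# *something*; a test against invalid input exposes what that is.
try:
    sort_integers([3, "x", 2])
    handled = False
except ValueError:
    handled = True
```

Whether the right behavior is to raise, to skip, or to coerce is a question for the requirement's author; the tester's job is to make the actual behavior visible.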
Correctness versus Reliability:

 Though correctness of a program is desirable, it is almost never the objective of testing.

 To establish correctness via testing would imply testing a program on all elements in
the input domain. In most cases encountered in practice, this is impossible to
accomplish. Thus, correctness is established via mathematical proofs of programs.

 While correctness attempts to establish that the program is error free, testing attempts
to find if there are any errors in it.

 Thus, completeness of testing does not necessarily demonstrate that a program is error
free.
 Software reliability [ANSI/IEEE Std 729-1983]: is the probability of failure free
operation of software over a given time interval and under given conditions.

 Software reliability is the probability of failure free operation of software in its intended
environment.
Testing and Debugging

 Testing is the process of determining if a program behaves as expected. In the process,


one may discover errors in the program under test.

 However, when testing reveals an error, the process used to determine the cause of this
error and to remove it is known as debugging.

 As illustrated in figure below, testing and debugging are often used as two related
activities in a cyclic manner.
A test and debug cycle
 Testing and debugging are two distinct though intertwined activities. Testing generally
leads to debugging, though both activities might not always be performed by the same
individual. A test-and-debug cycle consists of the following steps:

a) Preparing a test plan

b) Constructing test data

c) Executing the program

d) Assessing program correctness

e) Constructing an oracle
a) Preparing a test plan

 A test plan is often augmented by items such as the method used for testing,
the method for evaluating the adequacy of test cases, and the method to determine
whether or not a program has failed.

b) Constructing test data

 A test case is a pair of input data and the corresponding program output. The test data
are a set of values: one for each input variable.

 A test set is a collection of test cases. A test set is sometimes referred to as a test suite.

 Program requirements and the test plan help in the construction of test data.

A test case is a pair of input data and the corresponding program output; a test set is a
collection of test cases.
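The definitions above can be sketched directly: a test case is an (input, expected output) pair, and a test set is a collection of such pairs. The data below is illustrative, for the sort requirement:

```python
# A test set for a sort program: each entry is a test case,
# i.e., an (input, expected output) pair.
test_set = [
    ([3, 1, 2], [1, 2, 3]),   # a typical case
    ([], []),                 # empty sequence
    ([5], [5]),               # single element
    ([2, 2, 1], [1, 2, 2]),   # duplicates
]

# Run the program under test (here, Python's sorted) against the test set.
failures = [(inp, exp) for inp, exp in test_set if sorted(inp) != exp]
print(len(failures))  # 0: every test case passes
```

Program requirements drive which pairs go into the set; the test plan governs how many and of what kind.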
c) Executing the program

 Execution of a program under test is the next significant step in testing. Execution of this
step for the sort program is most likely a trivial exercise. However, this may not be so for
large and complex programs.

d) Assessing program correctness

 An important step in testing a program is the one wherein the tester determines if the
observed behavior of the program under test is correct or not.

 This step can be further divided into two smaller steps. In the first step one observes the
behavior and in the second step analyses the observed behavior to check if it is correct
or not.

e) Constructing an oracle

 Construction of an automated oracle, such as one to check a matrix multiplication or
a sort program, requires the determination of the input–output relationship.
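For the sort program, the input–output relationship is precise enough to automate: the output is correct exactly when it is ordered and is a permutation of the input. A minimal oracle sketch:

```python
from collections import Counter

def sort_oracle(inp, out):
    # The output must be in nondecreasing order...
    ordered = all(out[i] <= out[i + 1] for i in range(len(out) - 1))
    # ...and must contain exactly the same elements as the input.
    permutation = Counter(inp) == Counter(out)
    return ordered and permutation

print(sort_oracle([3, 1, 2], [1, 2, 3]))  # True: correct output accepted
print(sort_oracle([3, 1, 2], [1, 2]))     # False: lost element rejected
print(sort_oracle([3, 1, 2], [3, 1, 2]))  # False: unsorted output rejected
```

Note that the oracle checks the output without re-implementing the sort, which keeps it independent of the program under test.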
Test Metrics
The term “metric” refers to a standard of measurement.

 Software testing metrics are quantifiable indicators of the software testing process
progress, quality, productivity, and overall health.

 The purpose of software testing metrics is to increase the efficiency and effectiveness
of the software testing process.

 Test metrics help to determine what types of enhancements are required in order to
create a defect-free, high-quality software product.

 In software testing, there exist a variety of metrics. Figure shows a classification of
various types of metrics briefly discussed in this section.
 Metrics can be computed at the organizational, process, project, and product levels.
 Each set of measurements has its value in monitoring, planning, and control.
 Organizational metrics: Metrics at the level of an organization are useful in overall project
planning and management. Some of these metrics are obtained by aggregating
compatible metrics across multiple projects.

 Process Metrics: A project’s characteristics and execution are defined by process metrics.
These features are critical to the improvement and maintenance of the SDLC (Software
Development Life Cycle) process. The goal of a process metric is to assess the “goodness” of the
process.

 Project Metrics: Project Metrics are used to assess a project’s overall quality. It is used to
estimate a project’s resources and deliverables, as well as to determine costs, productivity,
and flaws.

• Project metrics relate to a specific project, for example, the I/O device testing project or a
compiler project. These are useful in the monitoring and control of a specific project.
 Product Metrics: A product’s size, design, performance, quality, and complexity are
defined by product metrics. Developers can improve the quality of their software
development by utilizing these features.

 The table below lists a sample of product metrics for object-oriented and other applications.

A sample of product metrics


Progress monitoring and trends:
1. Metrics are often used for monitoring progress. This requires making measurements on a
regular basis over time. Such measurements offer trends.
2. For example, suppose that a browser has been coded, unit tested, and its components
integrated. It is now in the system testing phase. One could measure the cumulative
number of defects found and plot this over time.
3. Such a plot will rise over time. Eventually, it will likely show a saturation indicating that
the product is reaching a stability stage. Figure shows a sample plot of new defects found
over time.

A sample plot of cumulative count of defects found over seven consecutive months in a software project
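The saturation trend described above can be sketched numerically; the monthly defect counts below are hypothetical, chosen only to show the flattening cumulative curve:

```python
# New defects found per month during system testing (made-up data).
monthly_new_defects = [40, 35, 25, 15, 8, 4, 2]

# Cumulative count: the quantity one would plot over time.
cumulative = []
total = 0
for n in monthly_new_defects:
    total += n
    cumulative.append(total)

print(cumulative)  # [40, 75, 100, 115, 123, 127, 129]: rises, then flattens
```

The month-to-month increments shrink from 40 to 2, which is the saturation that signals the product is approaching stability.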
Software and Hardware Testing:

 There are several similarities and differences between techniques used for testing software
and hardware.

 Software testers generate tests to test for correct functionality.

 Sometimes such tests do not correspond to any general fault model.

Ex: To test whether there is a memory leak in an application, one performs a combination of
stress testing and code inspection.
• A variety of faults could lead to memory leaks.

 Hardware testers use a variety of fault models at different levels of abstraction, for
example transistor-level faults at the low level, and gate-level, circuit-level, and
function-level faults at higher levels.
Software Testing versus Hardware Testing:

 In software, a fault present in the application will remain, and no new faults will creep
in unless the application is changed. A VLSI chip, in contrast, might fail over time due to a
fault that did not exist at the time the chip was manufactured and tested.

 BIST (Built-in Self Test), intended to test the correct functioning of a circuit, is a
hardware technique that can rarely be applied to software designs and code.

 Software testing only detects faults that were present when the last change was made;
hardware testers generate tests based on fault models.
Testing and Verification

 Software testing is a process of examining the functionality and behavior of the software
through verification and validation.

 Verification is a process of determining whether the software is designed and developed as per the
specified requirements.

 Validation is a process of determining whether the software meets the actual needs and
expectations of the users.

 Software testing is incomplete until it undergoes verification and validation.


Verification and validation are the main elements of software testing workflow because they:

1. They ensure that the end product meets the design requirements.

2. They reduce the chances of defects and product failure.

3. They ensure that the product meets the quality standards and expectations of all stakeholders
involved.
Defect Management

 Defect management is an integral part of a development and test process in many software
development organizations. It is a sub process of the development process. It entails the
following: defect prevention, discovery, recording and reporting, classification, resolution,
and prediction.

 Defect prevention is achieved through a variety of processes and tools. For example, good
coding techniques, unit test plans, and code inspections are all important elements of any
defect prevention process.

 Defect discovery is the identification of defects in response to failures observed during


dynamic testing or found during static testing.

 Defects found are classified and recorded in a database. Classification becomes important
in dealing with the defects. For example, defects classified as “high severity” will likely be
attended to by the developers before those classified as “low severity.”
 Defect classification assists an organization in measuring statistics such as the types of
defects, their frequency, and their location in the development phase and document.

 Defect prediction is another important aspect of defect management. Organizations often do
source code analysis to predict how many defects an application might contain before it
enters the testing phase.

 Several tools exist for recording defects, and for computing and reporting defect-related statistics.
Bugzilla (open source) and FogBugz (commercially available) are two such tools.

 They provide several features for defect management including defect recording,
classification, and tracking. Several tools that compute complexity metrics also predict
defects using code complexity.
Execution History
 Execution history of a program, also known as execution trace, is an organized collection of
information about various elements of a program during a given execution.

 There are several ways to represent an execution history, for example:

 the sequence in which the functions in a given program are executed against a given test input;

 the sequence in which program blocks are executed;

 the sequence of objects and the corresponding methods accessed, for object-oriented
languages such as Java.
 A complete execution history recorded from the start of a program’s execution until

its termination represents a single execution path through the program.

 It is possible to get partial execution history also for some program elements.
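One kind of execution history, the sequence of functions entered during a run, can be recorded in Python with `sys.settrace`. This is a minimal sketch; the traced functions are invented for illustration:

```python
import sys

trace = []  # the execution history we are collecting

def tracer(frame, event, arg):
    # Record each Python function call as it happens.
    if event == "call":
        trace.append(frame.f_code.co_name)
    return None  # no per-line tracing needed

def helper():
    return 42

def main():
    return helper()

sys.settrace(tracer)   # start recording
main()
sys.settrace(None)     # stop recording

print(trace)  # ['main', 'helper']: the order in which functions were entered
```

A complete trace like this, taken from start to termination, corresponds to the single execution path mentioned above; restricting the tracer to selected functions yields a partial history.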
Test Generation Strategies
 One of the key tasks in any software test activity is the generation of test cases. The
program under test is executed against the test cases to determine whether or not it
conforms to the requirements. How to generate test cases?

 Most test generation strategies use requirements as a base, directly or indirectly, to
generate tests.

 Another set of strategies falls under the category model based test generation. These
strategies require that a subset of the requirements be modeled using a formal notation.

 Languages based on predicate logic as well as algebraic languages are also used to
express subsets of requirements in a formal manner.

 There also exist techniques to generate tests directly from the code. Such techniques,
fall under code-based test generation.
Static Testing:

 Static Testing is a type of a Software Testing method which is performed to check the
defects in software without actually executing the code of the software application.

 In Dynamic Testing, by contrast, the code is executed to detect the defects.

 Static testing is performed at an early stage of development to avoid errors, as it is
easier then to find sources of failure, and they can be fixed easily.

 Errors that cannot be found using Dynamic Testing can often be found by Static
Testing.

 Static testing is best carried out by an individual who did not write the code, or by a
team of individuals. A sample process of static testing is illustrated in figure.
 There are mainly two type techniques used in Static Testing:
1. Review: In static testing, a review is a process or technique performed to find potential
defects in the design of the software. It is a process to detect and remove errors and defects in
supporting documents such as the software requirements specification.

 A review is of four types:

• Informal: In an informal review, the creator of the documents puts the contents in front of an
audience, and everyone gives their opinion; thus defects are identified at an early stage.

• Walkthrough: A walkthrough is performed by an experienced person or expert to check for
defects, so that problems do not arise later in the development or testing phase.

• Peer review: Peer review means checking one another’s documents to detect and fix defects.
It is typically done in a team of colleagues.

• Inspection: Inspection is the verification of a document by a higher authority, for example the
verification of the software requirements specification (SRS).
 2. Static Analysis: Static Analysis includes the evaluation of the quality of the code written by
developers. Different tools are used to analyze the code and compare it with the standard.
It also helps in the identification of the following defects:

(a)Unused variables
(b) Dead code
(c) Infinite loops
(d) Variable with undefined value
(e) Wrong syntax
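Defect (a), an unused variable, can be found without running the program. The sketch below uses Python's `ast` module on a small made-up source snippet to show the idea:

```python
import ast

# A toy program with a variable (y) that is assigned but never used.
source = """
x = 1
y = 2
print(x)
"""

tree = ast.parse(source)
assigned, used = set(), set()
for node in ast.walk(tree):
    if isinstance(node, ast.Name):
        if isinstance(node.ctx, ast.Store):
            assigned.add(node.id)   # variable being written
        elif isinstance(node.ctx, ast.Load):
            used.add(node.id)       # variable being read

unused = assigned - used
print(unused)  # {'y'}: found purely by inspecting the code, not executing it
```

Production static analyzers apply the same principle with far more thorough data-flow tracking across scopes and modules.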
 Static Analysis is of three types:

• Data Flow: data-flow analysis tracks how data values are defined and used as they move
through the program.

• Control Flow: control-flow analysis examines the order in which the statements or
instructions are executed.

• Cyclomatic Complexity: cyclomatic complexity is the number of independent paths in the
control-flow graph built from the code or flowchart; it gives the minimum number of test
cases needed to cover each independent path.
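Cyclomatic complexity is conventionally computed from the control-flow graph as V(G) = E − N + 2P, where E is the number of edges, N the number of nodes, and P the number of connected components. A tiny sketch, with an illustrative (assumed) edge/node count:

```python
def cyclomatic_complexity(edges, nodes, components=1):
    """V(G) = E - N + 2P for a control-flow graph."""
    return edges - nodes + 2 * components

# Hypothetical example: a CFG with 9 edges and 8 nodes.
print(cyclomatic_complexity(9, 8))  # 3: at least 3 test cases are needed

# Sanity check: straight-line code (2 nodes, 1 edge) has complexity 1.
print(cyclomatic_complexity(1, 2))  # 1
```

The result is the minimum number of independent paths, and hence a lower bound on the test cases needed for path-level coverage.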
Walkthroughs:

 A walkthrough begins with a review plan agreed upon by all members of the team.
Each item of the document, for example, a source code module, is reviewed with
clearly stated objectives in view.

 A detailed report is generated that lists items of concern regarding the document
reviewed.

 In requirements walkthrough, the test team must review the requirements document to
ensure that the requirements match user needs, and are free from ambiguities and
inconsistencies.
Inspections:

 Code inspection is carried out by a team. The team works according to an inspection
plan that consists of the following elements:

(a) Statement of purpose,

(b)Work product to be inspected, this includes code and associated documents needed
for inspection,

(c) Team formation, roles, and tasks to be performed,

(d)Rate at which the inspection task is to be completed, and

(e) Data collection forms where the team will record its findings such as defects
discovered, coding standard violations, and time spent in each task.

 Members of the inspection team are assigned the roles of moderator, reader, recorder, and
author.
 Members of inspection team

 Moderator: in charge of the process and leads the review.

 Reader: actual code is read by the reader, perhaps with help of a code browser
and with monitors for all in the team to view the code.

 Recorder: records any errors discovered or issues to be looked into.

 Author: actual developer of the code.


Software Complexity and static testing:

 Often a team must decide which of the several modules should be inspected
first.

 A more complex module is likely to have more errors and must be accorded
higher priority for inspection than a module with lower complexity.

 Static analysis tools often compute code complexity using one or more
complexity metrics. Such metrics can be used as a parameter in deciding
which modules to inspect first.
Model-Based Testing and Model Checking:

 Model-based testing refers to the acts of modeling and the generation of


tests from a formal model of application behavior.
 Model checking refers to a class of techniques that allow the validation of
one or more properties from a given model of an application.
 The model checker attempts to verify whether the given properties are
satisfied by the given model.
 For each property, the checker can come up with one of three possible
answers:
o the property is satisfied;
o the property is not satisfied;
o the checker is unable to determine whether the property holds.
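A toy flavor of what a model checker does can be sketched as exhaustive exploration of a small state model to check a reachability property; the model below is invented for illustration, and real model checkers (e.g., SPIN, NuSMV) do this far more efficiently:

```python
# A small state model: each state maps to its possible successor states.
transitions = {
    "INIT": ["WORK"],
    "WORK": ["WORK", "DONE"],
    "DONE": [],
}

def reachable(start):
    """Exhaustively explore every state reachable from `start`."""
    seen, stack = set(), [start]
    while stack:
        s = stack.pop()
        if s not in seen:
            seen.add(s)
            stack.extend(transitions.get(s, []))
    return seen

# Property to check: "state DONE is reachable from INIT".
satisfied = "DONE" in reachable("INIT")
print(satisfied)  # True: the property is satisfied by this model
```

When the state space is too large to enumerate, the checker may return the third answer: unable to determine.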
Types of testing:

 One possible classification is based on the following five classifiers:

C1: Source of test generation.

C2: Lifecycle phase in which testing takes place

C3: Goal of a specific testing activity

C4: Characteristics of the artifact under test

C5: Test process


Classifier C1: Source of test generation.
Classifier C2: Life cycle phases: the various types of testing, depending on the phase in which the activity occurs, are shown in the
table below.
Classifier: C3: Goal-directed testing
Finding any hidden errors is the prime goal of testing; goal-oriented testing
looks for specific types of failures.
Classifier: C4: Artifact under test
 Testers often say “We do X-testing,” where X corresponds to an artifact under test. The table
below is a partial list of testing techniques named after the artifact that is being tested.
Classifier: C5: Test process models
Software testing can be integrated into the software development life cycle in a variety
of ways. This leads to various models for the test process listed in Table.
Basic Principles:
 Sensitivity - Human developers make errors, producing faults in software. Faults may lead
to failures, but faulty software may not fail on every execution. The sensitivity principle
states that it is better to fail every time than sometimes.

 The sensitivity principle says that we should try to make these faults easier to detect by
making them cause failure more often. It can be applied in three main ways:

1. at the design level, changing the way in which the program fails;

2. at the analysis and testing level, choosing a technique more reliable with respect to

the property of interest; and

3. at the environment level, choosing a technique that reduces the impact of

external factors on the results.


 Redundancy - Redundancy is the opposite of independence. If one part of a software
artifact (program, design document, etc.) constrains the content of another, then they
are not entirely independent, and it is possible to check them for consistency.

 Partition - Partition, often also known as “divide and conquer,” is a general


engineering principle. Dividing a complex problem into subproblems to be attacked
and solved independently is probably the most common human problem-solving
strategy.
 Visibility - Visibility means the ability to measure progress or status against goals. In
software engineering, one encounters the visibility principle mainly in the form of process
visibility, and then mainly in the form of schedule visibility: ability to judge the state of
development against a project schedule.

 Feedback - Feedback is another classic engineering principle that applies to analysis and
testing. Feedback applies both to the process itself (process improvement) and to
individual techniques (e.g., using test histories to prioritize regression testing).
 Systematic inspection and walkthrough derive part of their success from feedback.
Participants in inspection are guided by checklists, and checklists are revised and refined
based on experience.
