UNIT-4 Software Testing and Maintenance


UNIT 4

Software Testing
and Maintenance
Software Testing
• Testing is performed to make judgements about the quality or acceptability of software and to discover problems
• Test
 Testing is obviously concerned with errors, faults,
failures, and incidents.
 A test is the act of exercising software with test cases.
 A test has two distinct goals: to find failures
or to demonstrate correct execution.
• Test case
 A test case has an identity and is associated with a
program behaviour. It also has a set of inputs
and expected outputs.
Why should We Test?

• Although software testing is itself an expensive activity, launching software without testing may lead to costs potentially much higher than those of testing, especially in systems where human safety is involved.
• In the software life cycle, the earlier errors are discovered and removed, the lower the cost of their removal.
Who should Do the Testing?

• Testing requires developers to find errors in their software.
• It is difficult for a software developer to point out errors in his or her own creations.
• Many organisations draw a distinction between the development and testing phases by making different people responsible for each phase.
What should We Test?
We should test the program's responses to every possible input; that is, we should test for all valid and invalid inputs. Suppose a program requires two 8-bit integers as inputs. The total number of possible combinations is 2^8 x 2^8 = 65,536. If only one second is required to execute one set of inputs, it may take about 18 hours to test all combinations. In practice, there are more than two inputs and their size exceeds 8 bits. We have also not considered invalid inputs, for which many more combinations are possible. Hence, complete testing is just not possible, although we may wish to do so.
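As a quick check on the arithmetic above, a few lines of Python (illustrative only) reproduce the estimate:

```python
# Exhaustive testing of two 8-bit inputs: 2^8 * 2^8 combinations.
combinations = (2 ** 8) * (2 ** 8)          # 65,536 input pairs
seconds_per_test = 1                        # assumed cost of one test execution
total_hours = combinations * seconds_per_test / 3600
print(combinations, round(total_hours, 1))  # 65536 18.2
```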
Some Terminologies

Testing, Error, Mistake, Bug, Fault and Failure

 People make errors. A good synonym is mistake. This may be a syntax error or a misunderstanding of specifications. Sometimes there are logical errors.
 When developers make mistakes while coding, we call these mistakes "bugs".
 A fault is the representation of an error, where representation is the mode of expression, such as narrative text, data flow diagrams, ER diagrams, source code, etc.
 Defect is a good synonym for fault. A failure occurs when a fault executes. A particular fault may cause different failures, depending on how it has been exercised.
A test case describes an input description and an expected output description.
TEST CASE TEMPLATE
A model of the software testing
process
• Test cases are specifications of the inputs to the
test and the expected output from the system (the test
results), plus a statement of what is being tested.
• Test data are the inputs that have been devised to
test a system.
• Test data can sometimes be generated
automatically, but automatic test case generation is
impossible, as people who understand what the system
is supposed to do must be involved to specify the
expected test results.
• Test execution can be automated. The actual results are then automatically compared with the predicted results, so there is no need for a person to look for errors and anomalies in the test run.
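As a minimal sketch of automated test execution, Python's unittest framework runs test cases and compares actual against expected results automatically (the add function is a hypothetical system under test):

```python
import unittest

def add(a, b):
    # Hypothetical system under test.
    return a + b

class TestAdd(unittest.TestCase):
    def test_valid_inputs(self):
        # The expected result is specified by a person; comparison is automated.
        self.assertEqual(add(2, 3), 5)

    def test_negative_inputs(self):
        self.assertEqual(add(-2, -3), -5)

if __name__ == "__main__":
    unittest.main()
```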
Test Cases
• The essence of software testing is to determine a set of
test cases for the item to be tested.
• A test case is a recognized work product.
• A complete test case will contain a test case identifier, a
brief statement of purpose, a description
of preconditions, the actual test case inputs, the
expected outputs, a description of expected post
conditions, and an execution history
• Execution history may contain the date when the test
was run, the person who ran it, the version on which
it was run, and the pass/fail result.
• Test cases need to be developed, reviewed, used, managed and saved.
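These fields map naturally onto a record type; below is an illustrative Python sketch, with field names of our own choosing rather than any standard template:

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    identifier: str          # unique test case identifier
    purpose: str             # brief statement of purpose
    preconditions: list      # conditions that must hold before execution
    inputs: dict             # the actual test case inputs
    expected_outputs: dict   # the expected outputs
    postconditions: list     # expected post conditions
    execution_history: list = field(default_factory=list)  # date, tester, version, result

tc = TestCase(
    identifier="TC-001",
    purpose="Verify login with valid credentials",
    preconditions=["user account exists"],
    inputs={"username": "alice", "password": "secret"},
    expected_outputs={"status": "logged in"},
    postconditions=["session created"],
)
tc.execution_history.append(
    {"date": "2024-01-10", "tester": "QA", "version": "1.2", "result": "pass"}
)
```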
Types of Software Testing
Software Testing can be broadly classified into the
following types:
1. Functional testing: a type of software testing that validates the software system against the functional requirements. It is performed to check whether the application works as per the software's functional requirements. Various types of functional testing are unit testing, integration testing, system testing, smoke testing, and so on.
2. Non-functional testing: a type of software testing that checks the application for non-functional requirements like performance, scalability, portability, stress, etc. Various types of non-functional testing are performance testing, stress testing, usability testing, and so on.
Functional Testing
• Boundary value analysis
• Equivalence class testing
• Path testing
• Unit testing
• Integration and system testing
• Debugging
• Alpha & beta testing
• Testing tools & standards
• Black-box testing, also called
behavioral testing, focuses on the
functional requirements of the software
• Black-box testing techniques enable you to derive sets of input conditions that will fully exercise all functional requirements for a program
Equivalence Class Testing
• Equivalence partitioning is a black-box testing method
that divides the input domain of a program into classes
of data from which test cases can be derived
• Test-case design for equivalence partitioning is based
on an evaluation of equivalence classes for an input
condition
• If a set of objects can be linked by relationships that
are symmetric, transitive, and reflexive, an equivalence
class is present
• An equivalence class represents a set of valid or invalid
states for input conditions.
• Typically, an input condition is either a specific numeric
value, a range of values, a set of related values, or a
Boolean condition.
Equivalence class Guidelines
1. If an input condition specifies a range, one valid and two invalid equivalence classes are defined.
2. If an input condition requires a specific value, one valid and two invalid equivalence classes are defined.
3. If an input condition specifies a member of a set, one valid and one invalid equivalence class are defined.
4. If an input condition is Boolean, one valid and one invalid class are defined.
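As an illustration, suppose a field accepts an integer age in the range 18 to 60 (a hypothetical requirement). Guideline 1 gives one valid and two invalid equivalence classes, each covered by one representative test:

```python
import unittest

def is_eligible(age):
    # Hypothetical system under test: valid ages are 18..60 inclusive.
    return 18 <= age <= 60

class TestEquivalenceClasses(unittest.TestCase):
    def test_valid_class(self):
        self.assertTrue(is_eligible(35))    # representative of 18..60

    def test_invalid_below_range(self):
        self.assertFalse(is_eligible(10))   # representative of age < 18

    def test_invalid_above_range(self):
        self.assertFalse(is_eligible(75))   # representative of age > 60

if __name__ == "__main__":
    unittest.main()
```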
Boundary value Analysis
• A greater number of errors occurs at the boundaries
of the input domain rather than in the “center.” It is
for this reason that boundary value analysis (BVA)
has been developed as a testing technique.
• Boundary value analysis leads to a selection of
test cases that exercise bounding values.
• Boundary value analysis is a test-case design
technique that complements equivalence
partitioning. Rather than selecting any element of an
equivalence class, BVA leads to the selection of test
cases at the “edges” of the class
Guidelines for BVA
1. If an input condition specifies a range
bounded by values a and b, test cases
should be designed with values a and b and
just above and just below a and b.
2. If an input condition specifies a number of
values, test cases should be developed
that exercise the minimum and maximum
numbers. Values just above and below
minimum and maximum are also tested.
3. Apply guidelines 1 and 2 to output conditions.
4. If internal program data structures have prescribed boundaries, be certain to design a test case to exercise the data structure at its boundary.
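Continuing the hypothetical age range 18 to 60, guideline 1 selects test values at and just beyond each boundary:

```python
import unittest

def is_eligible(age):
    # Same hypothetical system under test: valid ages are 18..60 inclusive.
    return 18 <= age <= 60

class TestBoundaryValues(unittest.TestCase):
    def test_lower_boundary(self):
        self.assertFalse(is_eligible(17))  # just below a
        self.assertTrue(is_eligible(18))   # a
        self.assertTrue(is_eligible(19))   # just above a

    def test_upper_boundary(self):
        self.assertTrue(is_eligible(59))   # just below b
        self.assertTrue(is_eligible(60))   # b
        self.assertFalse(is_eligible(61))  # just above b

if __name__ == "__main__":
    unittest.main()
```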
Path testing
If the logical complexity of the function is high,
path testing (a white-box test-case design
method) can be used to ensure that every
independent path in the program has been
exercised.
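A small sketch of the idea: the hypothetical function below contains two decisions, so its basis set has three independent paths, and one test case is written per path:

```python
import unittest

def classify(x):
    # Two decisions => cyclomatic complexity 3 => three independent paths.
    if x < 0:
        return "negative"
    if x == 0:
        return "zero"
    return "positive"

class TestIndependentPaths(unittest.TestCase):
    def test_path_negative(self):
        self.assertEqual(classify(-5), "negative")

    def test_path_zero(self):
        self.assertEqual(classify(0), "zero")

    def test_path_positive(self):
        self.assertEqual(classify(7), "positive")

if __name__ == "__main__":
    unittest.main()
```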
Unit Testing
• Unit testing focuses verification effort on the
smallest unit of software design—the software
component or module.
• Using the component-level design description as
a guide, important control paths are tested to
uncover errors within the boundary of the
module
• The unit test focuses on the internal processing
logic and data structures within the boundaries of
a component. This type of testing can be
conducted in parallel for multiple components
Unit-test considerations
• The module interface is tested to ensure
that information properly flows into and out of the
program unit under test.
• Local data structures are examined to ensure
that data stored temporarily maintains its integrity
during all steps in an algorithm’s execution.
• All independent paths through the control
structure are exercised to ensure that all statements in
a module have been executed at least once.
• Boundary conditions are tested to ensure
that the module operates properly at boundaries
established to limit or restrict processing.
• And finally, all error-handling paths are tested.
• Selective testing of execution paths is an
essential task during the unit test.
• Test cases should be designed to uncover
errors due to erroneous computations,
incorrect comparisons, or improper
control flow
• Test cases that exercise data structure,
control flow, and data values just below,
and just above maxima and minima are
very likely to uncover errors
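To make these considerations concrete, here is a minimal unit-test sketch (the module and its boundary are hypothetical) that exercises values just below and at a maximum plus an error-handling path:

```python
import unittest

MAX_ITEMS = 100  # hypothetical prescribed boundary

def reserve(count):
    # Hypothetical component: error-handling path for out-of-range input.
    if count < 1 or count > MAX_ITEMS:
        raise ValueError("count out of range")
    return count

class TestReserve(unittest.TestCase):
    def test_just_below_and_at_maximum(self):
        self.assertEqual(reserve(MAX_ITEMS - 1), 99)
        self.assertEqual(reserve(MAX_ITEMS), 100)

    def test_error_handling_path(self):
        with self.assertRaises(ValueError):
            reserve(MAX_ITEMS + 1)   # just above the maximum

if __name__ == "__main__":
    unittest.main()
```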
Unit-test environment
• In most applications a driver is nothing more than
a “main program” that accepts test case data,
passes such data to the component (to be tested),
and prints relevant results
• Stubs serve to replace modules that are subordinate to (invoked by) the component to be tested.
• A stub or “dummy subprogram” uses the subordinate
module’s interface, may do minimal data manipulation,
prints verification of entry, and returns control to the
module undergoing testing
• Drivers and stubs represent testing “overhead.” That is,
both are software that must be written but that is not
delivered with the final software product
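A sketch of this overhead in Python (all names hypothetical): the driver feeds test-case data to the component and prints results, while the stub stands in for a subordinate module that is not yet integrated:

```python
# Stub: replaces the subordinate module invoked by the component under test.
def fetch_rate_stub(currency):
    print(f"stub entered with {currency}")  # prints verification of entry
    return 1.0                              # minimal data manipulation

# Component under test; its real subordinate module is not yet available.
def convert(amount, currency, fetch_rate):
    return amount * fetch_rate(currency)

# Driver: a "main program" that accepts test-case data and prints results.
if __name__ == "__main__":
    for amount, currency, expected in [(10, "EUR", 10.0), (0, "USD", 0.0)]:
        actual = convert(amount, currency, fetch_rate_stub)
        print(amount, currency, actual, "PASS" if actual == expected else "FAIL")
```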
Integration Testing
• Integration testing is a systematic technique for
constructing the software architecture while at
the same time conducting tests to uncover errors
associated with interfacing
• The objective is to take unit-tested components
and build a program structure that has been
dictated by design
• Construct the program using a “big bang”
approach. All components are combined in
advance. The entire program is tested as a whole
• Incremental integration is the antithesis of the big
bang approach. The program is constructed and
tested in small increments, where errors are
easier to isolate and correct
Top-down integration
• Top-down integration testing is an incremental
approach to construction of the software
architecture.
• Modules are integrated by moving
downward through the control hierarchy,
beginning with the main control module (main
program).
• Modules subordinate (and ultimately
subordinate) to the main control module are
incorporated into the structure in either a depth-
first or breadth-first manner
Top-down integration
• Depth-first integration integrates all components
on a major control path of the program structure.
Selection of a major path is somewhat arbitrary
and depends on application-specific
characteristics. For example, selecting the left-
hand path, components M1, M2 , M5 would be
integrated first. Next, M8 or (if necessary for
proper functioning of M2) M6 would be
integrated. Then, the central and right-hand
control paths are built.
• Breadth-first integration incorporates all
components directly subordinate at each level,
moving across the structure horizontally. From
the figure, components M2, M3, and M4 would
be integrated first. The next control level, M5,
M6, and so on, follows.
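The two orderings can be made concrete with a short sketch; the module hierarchy below is assumed from the slide's figure (M1 as the main control module):

```python
# Module hierarchy assumed from the slide's figure.
tree = {"M1": ["M2", "M3", "M4"], "M2": ["M5", "M6"], "M5": ["M8"]}

def depth_first(node, order=None):
    # Integrates all components on a major control path before moving on.
    order = [] if order is None else order
    order.append(node)
    for child in tree.get(node, []):
        depth_first(child, order)
    return order

def breadth_first(root):
    # Incorporates all components directly subordinate at each level.
    order, queue = [], [root]
    while queue:
        node = queue.pop(0)
        order.append(node)
        queue.extend(tree.get(node, []))
    return order

print(depth_first("M1"))    # ['M1', 'M2', 'M5', 'M8', 'M6', 'M3', 'M4']
print(breadth_first("M1"))  # ['M1', 'M2', 'M3', 'M4', 'M5', 'M6', 'M8']
```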
The integration process is performed in a series of five steps:
1. The main control module is used as a test driver, and stubs are substituted for all components directly subordinate to the main control module.
2. Depending on the integration approach selected (depth first or breadth first), subordinate stubs are replaced one at a time with actual components.
3. Tests are conducted as each component is integrated.
4. On completion of each set of tests, another stub is replaced with the real component.
5. Regression testing may be conducted to ensure that new errors have not been introduced.
Bottom-up integration
• Begins construction and testing with atomic modules
(i.e., components at the lowest levels in the program
structure)
• A bottom-up integration strategy may be implemented with the following steps:
1. Low-level components are combined into clusters (sometimes called builds) that perform a specific software subfunction.
2. A driver (a control program for testing) is written to coordinate test-case input and output.
3. The cluster is tested.
4. Drivers are removed and clusters are combined, moving upward in the program structure.
Components are combined to form clusters 1, 2, and 3. Each of the clusters is
tested using a driver (shown as a dashed block). Components in clusters 1 and 2
are subordinate to Ma. Drivers D1 and D2 are removed and the clusters are
interfaced directly to Ma. Similarly, driver D3 for cluster 3 is removed prior to
integration with module Mb. Both Ma and Mb will ultimately be integrated with
component Mc
Regression testing
• Each time a new module is added as part of
integration testing, the software changes
• New data flow paths are established, new
I/O may occur, and new control logic is
invoked. These changes may cause
problems with functions that previously worked
flawlessly
• In the context of an integration test strategy,
regression testing is the reexecution of some
subset of tests that have already been conducted
to ensure that changes have not propagated
unintended side effects
• Regression testing helps to ensure that
changes (due to testing or for other
reasons) do not introduce unintended
behavior or additional errors.
• Regression testing may be conducted
manually, by reexecuting a subset of all test
cases or using automated capture/playback
tools
• Capture/playback tools enable the software
engineer to capture test cases and results for
subsequent playback and comparison
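In the same spirit as capture/playback tools, here is a simplified sketch (the file name and format are our own) that captures baseline results on the first run and compares them on later runs:

```python
import json, os

def system_under_test(x):
    # Hypothetical behavior being protected by regression tests.
    return x * 2

GOLDEN = "golden_results.json"
inputs = [1, 2, 3, 10]

if not os.path.exists(GOLDEN):
    # Capture: record results for subsequent playback and comparison.
    with open(GOLDEN, "w") as f:
        json.dump([system_under_test(x) for x in inputs], f)
    print("captured baseline")
else:
    # Playback: rerun the same inputs and compare against captured results.
    with open(GOLDEN) as f:
        expected = json.load(f)
    actual = [system_under_test(x) for x in inputs]
    print("PASS" if actual == expected else f"REGRESSION: {actual} != {expected}")
```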
The regression test suite (the subset of tests to be executed) contains three different classes of test cases:
 A representative sample of tests that will exercise all software functions.
 Additional tests that focus on software functions that are likely to be affected by the change.
 Tests that focus on the software components that have been changed.
The regression test suite should be designed to include only those tests that address one or more classes of errors in each of the major program functions.
Smoke testing
• Smoke testing is an integration testing approach that is commonly used when product software is developed.
• It is designed as a pacing mechanism for time-critical projects, allowing the software team to assess the project on a frequent basis.
• The smoke-testing approach encompasses the following activities:
1. Software components that have been translated into code are integrated into a build that implements one or more product functions.
2. A series of tests is designed to expose errors that will keep the build from properly performing its function, with the intent of uncovering "show-stopper" errors.
3. The build is integrated with other builds, and the entire product (in its current form) is smoke tested daily.
Smoke testing provides a number of benefits
when it is applied on complex, time critical
software projects:
1. Integration risk is minimized
2. The quality of the end product is
improved
3. Error diagnosis and correction are
simplified
4. Progress is easier to assess
• Sandwich Testing- A combined approach
(sometimes called sandwich testing) that uses
top-down tests for upper levels of the program
structure, coupled with bottom-up tests for
subordinate levels may be the best compromise
• A critical module has one or more of the
following characteristics:
(1) addresses several software requirements
(2) has a high level of control (resides relatively
high in the program structure)
(3) is complex or error prone or
(4) has definite performance requirements
Integration test work products
• An overall plan for integration of the software
and a description of specific tests is
documented in a Test Specification
• This work product incorporates a test plan and
a test procedure and becomes part of the
software configuration
• Testing is divided into phases and builds
that address specific functional and
behavioral characteristics of the software
• For example, integration testing for the SafeHome security system might be divided into the following test phases: user interaction, sensor processing, communications functions, and alarm processing.
• The following criteria and corresponding tests are applied for all test phases: interface integrity, functional validity, information content, and performance.
Alpha and Beta Testing
• When custom software is built for one customer, a
series of acceptance tests are conducted to enable the
customer to validate all requirements
• Conducted by the end user rather than
software engineers, an acceptance test can range
from an informal “test drive” to a planned and
systematically executed series of tests
• Acceptance testing can be conducted over a period
of weeks or months, thereby uncovering
cumulative errors that might degrade the system over
time
• The alpha test is conducted at the developer’s site by a
representative group of end users
• The beta test is conducted at one or more end-user
sites
• Unlike alpha testing, the developer generally is
not present. Therefore, the beta test is a “live”
application of the software in an environment that
cannot be controlled by the developer
• The customer records all problems (real or imagined)
that are encountered during beta testing and reports
these to the developer at regular intervals
• As a result of problems reported during beta tests,
you make modifications and then prepare for release of
the software product to the entire customer base
• A variation on beta testing, called customer acceptance
testing, is sometimes performed when custom
software is delivered to a customer under contract.
System Testing
Software is incorporated with other system elements (e.g., hardware, people, information), and a series of system integration and validation tests are conducted. System testing includes:
 Recovery Testing
 Security Testing
 Stress Testing
 Performance Testing
 Deployment Testing
Recovery Testing
• A system failure must be corrected within a specified period of time, or severe economic damage will occur.
• Recovery testing is a system test that forces
the software to fail in a variety of ways and verifies
that recovery is properly performed
• If recovery is automatic (performed by the
system itself), reinitialization, checkpointing
mechanisms, data recovery, and restart are evaluated
for correctness
• If recovery requires human intervention, the
mean- time-to-repair (MTTR) is evaluated to
determine whether it is within acceptable limits
Security Testing
• Any computer-based system that manages sensitive
information or causes actions that can improperly harm (or
benefit) individuals is a target for improper or illegal
penetration
• Penetration spans a broad range of activities: hackers
who attempt to penetrate systems for sport,
disgruntled employees who attempt to penetrate for
revenge, dishonest individuals who attempt to penetrate for
illicit personal gain
• Security testing attempts to verify that protection
mechanisms built into a system will, in fact, protect it
from improper penetration
• During security testing, the tester plays the role(s) of the
individual who desires to penetrate the system
Stress Testing
• Stress tests are designed to confront programs
with abnormal situations. In essence, the
tester who performs stress testing asks: “How
high can we crank this up before it fails?”
• Stress testing executes a system in a manner
that demands resources in abnormal
quantity, frequency, or volume
• For example,
(1) special tests may be designed that generate ten interrupts per second, when one or two is the average rate,
(2) input data rates may be increased by an order of magnitude to determine how input functions will respond,
(3) test cases that require maximum memory or other resources are executed,
(4) test cases that may cause thrashing in a virtual operating system are designed,
(5) test cases that may cause excessive hunting for disk-resident data are designed.
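A minimal stress-test sketch, with the request rate and payload size as hypothetical abnormal quantities:

```python
import time

def handle_request(payload):
    # Hypothetical system under test.
    return len(payload)

def stress(requests_per_burst=10_000, payload_size=1_000_000):
    # Demand resources in abnormal quantity, frequency, and volume.
    payload = "x" * payload_size
    start = time.perf_counter()
    for _ in range(requests_per_burst):
        handle_request(payload)
    elapsed = time.perf_counter() - start
    print(f"{requests_per_burst} requests in {elapsed:.3f}s "
          f"({requests_per_burst / elapsed:.0f} req/s)")

if __name__ == "__main__":
    stress()
```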
Performance Testing
• Performance testing is designed to test the
run-time performance of software within the
context of an integrated system
• Performance testing occurs throughout
all steps in the testing process
• Even at the unit level, the performance of
an individual module may be assessed as
tests are conducted
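Even at the unit level, run-time performance can be measured as tests are conducted; a sketch using Python's timeit module (the timed function is hypothetical):

```python
import timeit

def compute(n=1000):
    # Hypothetical module whose run-time performance is being assessed.
    return sum(i * i for i in range(n))

# Time 1,000 executions and report the average cost per call.
total = timeit.timeit(compute, number=1000)
print(f"average {total / 1000 * 1e6:.1f} microseconds per call")
```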
Deployment Testing
• Deployment testing, sometimes called
configuration testing, exercises the software in
each environment in which it is to operate
• In addition, deployment testing examines all
installation procedures and specialized
installation software (e.g., “installers”) that
will be used by customers, and all
documentation that will be used to introduce
the software to end users
Debugging
• Debugging occurs as a consequence of successful
testing. That is, when a test case uncovers an
error, debugging is the process that results in the
removal of the error
• The debugging process begins with the execution
of a test case. Results are assessed and a lack of
correspondence between expected and actual
performance is encountered
• Debugging process outcome:
(1) The cause will be found and corrected or
(2) The cause will not be found
Characteristics of bugs
1. The symptom and the cause may be geographically
remote
2. The symptom may disappear (temporarily) when another
error is corrected
3. The symptom may actually be caused by nonerrors (e.g.,
round-off inaccuracies)
4. The symptom may be caused by human error that is not
easily traced
5. The symptom may be a result of timing problems, rather
than processing problems.
6. It may be difficult to accurately reproduce input
conditions (e.g., a real-time application in which
input ordering is indeterminate).
7. The symptom may be intermittent.
8. The symptom may be due to causes that are distributed
across a number of tasks running on different
processors
The debugging process
• Psychological Considerations
• Debugging Strategies
The objective is to find and correct the cause of a software error or defect.
Bradley describes the debugging approach in this way: Debugging is a straightforward application of the scientific method that has been developed over 2,500 years. The basis of debugging is to locate the problem's source [the cause] by binary partitioning, through working hypotheses that predict new values to be examined.
Three debugging strategies have been proposed [Mye79]:
(1) brute force, (2) backtracking, and (3) cause elimination
• The brute force category of debugging is probably the
most common and least efficient method for isolating
the cause of a software error. You apply brute force
debugging methods when all else fails
• Backtracking is a fairly common debugging approach that
can be used successfully in small programs. Beginning at
the site where a symptom has been uncovered, the
source code is traced backward (manually) until the
cause is found
• Cause elimination is manifested by induction or deduction and introduces the concept of binary partitioning.
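The binary-partitioning idea can be sketched as a bisection over candidate program versions (in the spirit of tools such as git bisect; the failure oracle below is hypothetical):

```python
def first_bad_version(is_bad, lo, hi):
    # Bisect: invariant is that version lo is good and version hi is bad.
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if is_bad(mid):
            hi = mid    # the cause lies at or before mid
        else:
            lo = mid    # the cause lies after mid
    return hi

# Hypothetical oracle: versions 1..6 are good, the defect appeared in version 7.
print(first_bad_version(lambda v: v >= 7, 1, 20))  # 7
```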
• Automated debugging
Each of these debugging approaches can be supplemented with debugging tools that provide semiautomated support as debugging strategies are attempted.
• Correcting the Error
Three simple questions that you should ask before making the "correction" that removes the cause of a bug:
1. Is the cause of the bug reproduced in another part of
the program?
2. What “next bug” might be introduced by the fix I’m
about to make?
3. What could we have done to prevent this bug in the
first place?
