Chapter 13
Software Testing Strategies
CHAPTER OVERVIEW AND COMMENTS
All S/W testing strategies provide the S/W developer with a template for testing,
and all have the following generic characteristics:
Conduct effective formal technical reviews; by doing this, many errors will
be eliminated before testing commences.
Testing begins at the component level and works “outward” towards the
integration of the entire computer-based system.
Different testing techniques are appropriate at different points in time.
Testing is conducted by the developer of the S/W and an independent test
group.
Testing and debugging are different activities, but debugging must be
accommodated in any testing strategy.
Verification refers to the set of activities that ensure that S/W correctly implements
a specific function.
Validation refers to the set of activities that ensure that the S/W that has been
built is traceable to customer requirements.
Testing does provide the last fortress from which quality can be assessed and,
more pragmatically, errors can be uncovered.
Testing should not be viewed as a safety net that will catch all errors that
occurred because of weak S/W engineering practices. Stress quality and error
detection throughout the S/W process.
For every S/W project, there is an inherent conflict of interest that occurs as
testing begins: the programmers who built the S/W are asked to test it.
An independent test group does not have the conflict that builders of the S/W
might experience.
The ITG and the S/W engineers work closely throughout a S/W project to ensure
that thorough tests are conducted.
Testing Strategy
State testing objectives explicitly: test effectiveness, test coverage, mean time to
failure, etc.
Understand the users of the software and develop a profile for each user category. Build
use-cases.
Develop a testing plan that emphasizes “rapid cycle testing.” Feedback generated
from rapid-cycle tests can be used to control quality levels and the corresponding
test strategies.
Conduct formal technical reviews to assess the test strategy and test cases themselves.
Develop a continuous improvement approach for the testing process. The test strategy
should be measured by using metrics.
Module interface is tested to ensure that information properly flows into and out
of the program unit under test.
Local data structures are examined to ensure that data stored temporarily
maintains its integrity.
All independent paths through the control structure are exercised to ensure that
all statements in a module have been executed at least once.
All error handling paths are tested.
If data do not enter and exit properly, all other tests are moot.
Comparison and control flow are closely coupled. Test cases should uncover
errors such as the following (a short sketch follows this list):
1. comparison of different data types
2. incorrect logical operators or precedence
3. expectation of equality when precision error makes equality unlikely
4. incorrect comparison of variables
5. improper loop termination
6. failure to exit when divergent iterations are encountered
7. improperly modified loop variables
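Item 3 is easy to demonstrate. The minimal Python sketch below (a hypothetical example, not from the text) shows why a test case that expects exact equality on a computed floating-point value is likely to fail, and why a tolerance-based comparison is the appropriate check.

    import math

    def accumulate_tenths(n):
        # Sum 0.1 n times; the result is only approximately n / 10.
        total = 0.0
        for _ in range(n):
            total += 0.1
        return total

    # Faulty expectation: exact equality fails because of rounding error.
    assert accumulate_tenths(10) != 1.0      # actually 0.9999999999999999
    # Correct expectation: compare within a tolerance instead.
    assert math.isclose(accumulate_tenths(10), 1.0, rel_tol=1e-9)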
Boundary testing is essential. S/W often fails at its boundaries. Test cases that
exercise data structure, control flow, and data values just below, at, and just
above maxima and minima are very likely to uncover errors.
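A minimal boundary-value sketch in Python (hypothetical fixed-capacity buffer, for illustration only): the test exercises values just below, at, and just above the maximum.

    MAX_ITEMS = 100

    def add_item(buffer, item):
        # Append item; reject additions beyond the fixed capacity.
        if len(buffer) >= MAX_ITEMS:
            raise OverflowError("buffer full")
        buffer.append(item)

    def test_boundaries():
        buf = []
        for i in range(MAX_ITEMS - 1):       # fill to just below the maximum
            add_item(buf, i)
        add_item(buf, "at-max")              # exactly at the maximum: allowed
        try:
            add_item(buf, "above-max")       # just above the maximum: rejected
        except OverflowError:
            pass
        else:
            raise AssertionError("overflow was not detected at the boundary")

    test_boundaries()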
Error handling: when error handling is evaluated, the following potential errors
should be tested (a short sketch follows this list):
1. error description is unintelligible
2. error noted does not correspond to error encountered
3. error condition causes O/S intervention prior to error handling
4. exception-condition processing is incorrect
5. error description does not provide enough information to assist the location
of the cause of the error.
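A small Python sketch of items 2 and 5 (read_config is a hypothetical function, not from the text): the test verifies that the error raised corresponds to the condition encountered and that its description identifies the offending input.

    def read_config(path):
        # Open a configuration file, translating the low-level error into a
        # message that names both the problem and the file involved.
        try:
            with open(path) as f:
                return f.read()
        except FileNotFoundError as exc:
            raise ValueError(f"configuration file not found: {path}") from exc

    def test_error_handling():
        try:
            read_config("/no/such/file.cfg")
        except ValueError as err:
            assert "not found" in str(err)           # intelligible description
            assert "/no/such/file.cfg" in str(err)   # enough detail to locate the cause
        else:
            raise AssertionError("missing file did not raise an error")

    test_error_handling()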
Unit Test Procedures
Because a component is not a stand-alone program, driver and/or stub S/W must
be developed for each unit test.
In most applications, a driver is nothing more than a “main program” that accepts
test case data, passes such data to the component, and prints relevant results.
Stubs serve to replace modules that are subordinate to the component to be
tested. A stub ("dummy program") uses the subordinate module's interface, may
do minimal data manipulation, provides verification of entry, and returns control
to the module undergoing testing.
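The following Python sketch puts a driver and a stub together (compute_invoice and the tax module are hypothetical names, used only for illustration): the driver supplies test case data and prints results, while the stub stands in for the subordinate module.

    def tax_stub(amount, region):
        # Stub for the subordinate tax module: verifies entry, does minimal
        # data manipulation, and returns control to the module under test.
        print(f"tax_stub called with amount={amount}, region={region}")
        return 0.10 * amount                 # fixed 10% rate for the test

    def compute_invoice(amount, region, tax_fn):
        # Component under test; the subordinate module is passed in so the
        # stub can replace it during unit testing.
        return amount + tax_fn(amount, region)

    def driver():
        # "Main program" that accepts test case data, passes it to the
        # component, and prints relevant results.
        for amount, region in [(100.0, "EU"), (0.0, "US"), (250.0, "APAC")]:
            total = compute_invoice(amount, region, tax_fn=tax_stub)
            print(f"amount={amount} region={region} -> total={total}")

    if __name__ == "__main__":
        driver()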
Top-down Integration
Modules subordinate to the main control module are incorporated into the
structure in either depth-first or breadth-first manner.
1. The main control module is used as a test driver, and stubs are substituted
for all components directly subordinate to the main control module.
2. Depending on the integration approach selected, subordinate stubs are
replaced one at a time with actual components.
3. Tests are conducted as each component is integrated.
4. On completion of each set of tests, another stub is replaced with the real
component.
5. Regression testing may be conducted to ensure that new errors have not
been introduced.
The process continues from step 2 until the entire program structure is built.
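A minimal Python sketch of the first two steps (hypothetical component names, not from the text): the main control module is exercised with stubs for its subordinates, then one stub is replaced by the real component and the test is repeated.

    def read_input_stub():
        return "fixed test record"           # stub: canned test data

    def process_stub(record):
        return record.upper()                # stub: minimal manipulation

    def main_control(read_fn, process_fn):
        # Main control module: the unit actually exercised at this stage.
        return process_fn(read_fn())

    # Step 1: main control module with all subordinate stubs in place.
    assert main_control(read_input_stub, process_stub) == "FIXED TEST RECORD"

    # Step 2: replace one stub with the real component and re-run the test.
    def process_real(record):
        return record.strip().upper()

    assert main_control(read_input_stub, process_real) == "FIXED TEST RECORD"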
Bottom-up Integration
Bottom-up integration testing begins construction and testing with atomic modules.
1. Low-level components are combined into clusters that perform a specific S/W
sub-function.
2. A driver is written to coordinate test case input and output.
3. The cluster is tested.
4. Drivers are removed and clusters are combined moving upward in the
program structure.
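A minimal Python sketch of steps 1 through 3 (hypothetical low-level components, for illustration): two atomic modules are combined into a cluster and a throwaway driver coordinates test case input and output.

    def parse_amount(text):
        return float(text)                       # low-level component 1

    def apply_discount(amount, percent):
        return amount * (1 - percent / 100.0)    # low-level component 2

    def cluster_price(text, percent):
        # Cluster performing one S/W sub-function: parse, then discount.
        return apply_discount(parse_amount(text), percent)

    def cluster_driver():
        # Driver that coordinates test case input and output for the cluster.
        for text, percent, expected in [("100.0", 10, 90.0), ("80", 0, 80.0), ("50", 50, 25.0)]:
            result = cluster_price(text, percent)
            print(f"{text!r} at {percent}% -> {result} (expected {expected})")
            assert abs(result - expected) < 1e-9

    cluster_driver()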
Regression testing
Regression testing is the re-execution of some subset of tests that have already
been conducted to ensure that changes have not propagated unintended side
effects.
Regression testing is the activity that helps to ensure that changes do not
introduce unintended behavior or additional errors.
The regression test suite contains three different classes of test cases: a
representative sample of tests that exercise all software functions; additional
tests that focus on functions likely to be affected by the change; and tests that
focus on the components that have been changed.
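A small Python sketch of how such a suite might be organized (the test names and components are hypothetical, for illustration only): each already-executed test is tagged with its regression class and the components it touches, and a subset is selected for re-execution after a change.

    REGRESSION_SUITE = [
        # (test name, regression class, components it touches)
        ("test_login",         "representative", {"auth"}),
        ("test_invoice_total", "representative", {"billing"}),
        ("test_tax_rounding",  "affected",       {"billing", "tax"}),
        ("test_billing_api",   "changed",        {"billing"}),
    ]

    def select_regression_tests(changed_components):
        # Re-run the representative sample plus every test that touches a
        # changed component.
        return [name for name, cls, touches in REGRESSION_SUITE
                if cls == "representative" or touches & changed_components]

    print(select_regression_tests({"tax"}))
    # ['test_login', 'test_invoice_total', 'test_tax_rounding']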
Smoke Testing
Smoke testing is an integration testing approach that is commonly used when
S/W products are being developed. It is a common approach for creating "daily
builds" for product software.
Smoke testing steps:
1. Software components that have been translated into code are integrated into a
“build.” A build includes all data files, libraries, reusable modules, and
engineered components that are required to implement one or more product
functions.
2. A series of tests is designed to expose errors that will keep the build from
properly performing its function. The intent should be to uncover “show
stopper” errors that have the highest likelihood of throwing the software
project behind schedule.
3. The build is integrated with other builds and the entire product (in its current
form) is smoke tested daily. The integration approach may be top down or
bottom up.
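A minimal Python sketch of the idea (the build contents and product functions are hypothetical): each daily build is assembled and then exercised by a few shallow tests designed to expose "show stopper" errors before deeper testing begins.

    def build():
        # Stand-in for assembling the day's components into a build.
        return {"search": lambda q: [q], "checkout": lambda cart: sum(cart)}

    def smoke_tests(product):
        # Each test exercises one product function end to end, shallowly.
        assert product["search"]("widget") == ["widget"], "search is broken"
        assert product["checkout"]([1.0, 2.5]) == 3.5, "checkout is broken"

    if __name__ == "__main__":
        todays_build = build()
        smoke_tests(todays_build)            # run against every daily build
        print("smoke test passed: build is stable enough for further testing")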
This section clarifies the differences between OOT and conventional testing with
regard to unit testing and integration testing. The key point to unit testing in an
OO context is that the lowest testable unit should be the encapsulated class or
object (not isolated operations) and all test cases should be written with this goal
in mind.
An encapsulated class is the focus of unit testing; however, operations within the
class and the state behavior of the class are the smallest testable units.
Class testing for OO S/W is analogous to module testing for conventional S/W. It
is not advisable to test operations in isolation.
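A minimal Python sketch (Account is a hypothetical class, not from the text): the test drives the encapsulated class through a sequence of states rather than calling any one operation in isolation.

    class Account:
        def __init__(self):
            self.balance = 0.0
            self.closed = False

        def deposit(self, amount):
            if self.closed:
                raise RuntimeError("account is closed")
            self.balance += amount

        def close(self):
            self.closed = True

    def test_account_state_behavior():
        acct = Account()
        acct.deposit(50.0)                   # empty -> funded
        acct.close()                         # funded -> closed
        try:
            acct.deposit(10.0)               # operation invalid in this state
        except RuntimeError:
            pass
        else:
            raise AssertionError("deposit allowed on a closed account")
        assert acct.balance == 50.0

    test_account_state_behavior()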
Use-based testing begins the construction of the system by testing those classes
(called independent classes) that use very few (if any) server classes.
Next, the dependent classes, which use independent classes, are tested.
This sequence of testing layers of dependent classes continues until the entire
system is constructed.
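A minimal Python sketch of the ordering only (Currency and Invoice are hypothetical classes): the independent class, which uses no server classes, is tested first; the dependent class that uses it is tested in the next layer.

    class Currency:                          # independent class
        def __init__(self, cents):
            self.cents = cents

    class Invoice:                           # dependent class: uses Currency
        def __init__(self):
            self.lines = []

        def add(self, currency):
            self.lines.append(currency)

        def total(self):
            return Currency(sum(c.cents for c in self.lines))

    # Layer 1: test the independent class on its own.
    assert Currency(250).cents == 250

    # Layer 2: test the dependent class once its server class is trusted.
    inv = Invoice()
    inv.add(Currency(100))
    inv.add(Currency(250))
    assert inv.total().cents == 350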
In this section validation testing is described as the last chance to catch program
errors before delivery to the customer. If the users are not happy with what they
see, the developers often do not get paid. The key point to emphasize is
traceability to requirements. In addition, the importance of alpha and beta testing
(in product environments) should be stressed.
Configuration Review:
It is important to ensure that the elements of the S/W configuration have been
properly developed.
Alpha/Beta testing:
The alpha test is conducted at the developer's site by end-users. The S/W is used
in a natural setting with the developer "looking over the shoulder" of typical users
and recording errors and usage problems. Alpha tests are conducted in a
controlled environment.
The beta test is conducted at end-users' sites. The developer is generally not
present. The beta test is a live application of the S/W in an environment that
cannot be controlled by the developer. The end-user records errors and all usage
problems encountered during the test, and the list is reported to the developer.
S/W engineers then make modifications and prepare for release of the S/W
product to the entire customer base.
System testing: the focus is on system integration. "Like death and taxes, testing
is both unpleasant and inevitable."
Recovery testing forces the software to fail in a variety of ways and verifies that
recovery (e.g., data recovery) is properly performed.
Security testing verifies that protection mechanisms built into a system will, in
fact, protect it from improper penetration.
Beizer: "The system's security must, of course, be tested for invulnerability from
frontal attack, but must also be tested for invulnerability from flank or rear
attack" (e.g., social engineering).
Stress testing executes a system in a manner that demands resources in abnormal
quantity, frequency, or volume. For example:
1) Special tests may be designed to generate 10 interrupts per second, when one
or two is the average rate,
2) Input data rates may be increased by an order of magnitude to determine
how input functions will respond,
3) Test cases that require maximum memory or other resources are executed,
4) Test cases that may cause memory management problems are designed,
5) Test cases that may cause excessive hunting for disk-resident data are created.
Performance tests are coupled with stress testing and usually require both H/W
and S/W instrumentation (e.g., to time processing cycles and log events).
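A minimal Python instrumentation sketch (the workload is hypothetical): one processing cycle is timed and events are logged while the input rate is raised well above its normal level.

    import logging
    import time

    logging.basicConfig(level=logging.INFO)
    log = logging.getLogger("perf")

    def process(record):
        return sum(ord(ch) for ch in record)     # stand-in for real processing

    def timed_run(records):
        # Time one processing cycle and log the event.
        start = time.perf_counter()
        for r in records:
            process(r)
        elapsed = time.perf_counter() - start
        log.info("processed %d records in %.4f s", len(records), elapsed)
        return elapsed

    normal = timed_run(["abc"] * 1_000)          # typical load
    stressed = timed_run(["abc"] * 100_000)      # load raised by two orders of magnitude
    log.info("stress/normal elapsed ratio: %.1f", stressed / normal)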