BalciSlides 21 VVTechniques
OSMAN BALCI
Professor
https://manta.cs.vt.edu/balci
Audit
Desk Checking
Documentation Checking
Face Validation
Inspections
Reviews
Turing Test
Walkthroughs
Cause-Effect Graphing
Control Analysis
Calling Structure Analysis
Acceptance Testing
Alpha Testing
Assertion Checking
Induction
Inductive Assertions
Inference
Audit
[Statechart example: a course section with states Initialization (do: Initialize course), Open (entry: Register student; exit: Increment count), Cancelled (do: Notify registered students), and Closed (do: Finalize course). "Add Student" with Set count = 0 enters Open; the guard [count = 10] moves the course to Closed; "Cancel" leads to Cancelled.]
Data Analysis: Data Dependency Analysis
Sources of dependency among instructions:
Control dependency: imposed by language constructs such as if-then-else
Resource dependency: sharing of resources among instructions
Data dependency: access/update of a common piece of data
Data Analysis: Data Dependency Analysis
A = 1        DEF = {A}   USE = { }
B = C + D    DEF = {B}   USE = { C, D }
Data dependence is largely a function of the DEF and USE sets.
S1: A = B + C            S1: A = B + C
S2: D = A + 10           S2: A = A - D
(Left: S2 uses the A defined by S1. Right: S2 both uses and redefines A.)
Data Analysis: Data Dependency Analysis
As information moves through flows in the model, it is modified by a series of transformations.
[Figure: a data flow graph — input flows A and B feed transformations F1 through F5, producing intermediate flows V, W, X, Y, and Z.]
Data Analysis: Data Flow Analysis
[Figure: data flow diagram — inputs "Monitored temperature" and "Temperature set-point" enter the process "Monitor and adjust temperature level", which produces "Corrected temperature".]
Testing is conducted in the same pattern as bottom-up development. Begin testing the modules at the lowest level and then proceed to testing the higher levels.
[Figure: module hierarchy — Module 1 (Class 1) through Module M (Class M) at the top level, Sub-Module 1 (Sub-Class 1) through Sub-Module K (Sub-Class K) below.]
As each module is completed, it is thoroughly tested. When the modules comprising a level have been coded and tested, the modules are integrated and integration testing is performed. This process is repeated until the entire model has been integrated and tested.
The integration of completed modules need not wait for all "same level" modules to be completed. Module integration and testing can be, and often is, performed incrementally.
Bottom-up Testing
Advantages:
Encourages extensive testing at the module level.
Since most well-structured models consist of many modules,
there is much to be gained by bottom-up testing.
The smaller the module and the more limited its function, the easier
and more complete its testing can be.
Particularly attractive for testing distributed systems.
Disadvantages:
The need for individual module drivers to test the modules.
These drivers, called test harnesses, simulate the calling of the
module and pass data necessary to exercise the module.
The effort of developing harnesses for every module can be quite large.
The harnesses may themselves contain errors.
Faces the same cost and complexity issues as does top-down
testing.
Partition Testing
Used for testing the model with the test data generated by
analyzing the model’s functional representatives
(partitions).
It is accomplished by:
1. decomposing both model specification and implementation
into functional representatives (partitions),
2. comparing the elements and prescribed functionality of
each partition specification with the elements and actual
functionality of corresponding partition implementation,
3. deriving test data to extensively test the functional behavior
of each partition, and
4. testing the model by using the generated test data.
Stress Testing
Used for testing the model under high workloads to probe its
viability under resource overload.
It is useful for time-dependent real-time systems.
Example parameters to change to perform Stress Testing:
Arrival rate of transactions (messages)
Amounts of resources used by transactions (messages) (e.g.,
CPU time, memory, disk storage, etc.)
Forces race conditions.
Totally distorts the normal order of processing, especially
processing that occurs at different priority levels.
Forces the exercise of all system limits, thresholds, or other
controls designed to deal with overload situations.
Greatly increases the number of simultaneous actions.
Depletes resource pools in extraordinary and unanticipated
sequences.
Structural (White-Box) Testing: Branch Testing
Example:
Test cases:
case 1 (some_exp = true, some_case = true) traverses path 1
case 2 (some_exp = true, some_case = false) traverses path 2
case 3 (some_exp = false, some_case = true) traverses path 3
case 4 (some_exp = false, some_case = false) traverses path 4
Structural (White-Box) Testing: Loop Testing
Nested Loops:
Things that need to be considered: end points, looping values.
Unstructured Loops:
Should be avoided in the first place:
hard to select a loop counter
break effects
cross-connected loops
hidden loops
Structural (White-Box) Testing: Path Testing
What is it?
• Test based on selecting sets of test paths through
the program
Generate tests from control flow
Criteria for selecting paths
Generate test cases
Determine path-forcing input values
Path Selection Criteria:
• Complete testing
Every path from entry to exit
Every statement or instruction
Every branch/case statement, in each direction
• Adaptive Strategy
Path prefix strategy
Essential branch strategy
Structural (White-Box) Testing: Path Testing
Control Flowgraph
[Figure: a program skeleton mapped onto its control flowgraph — node 1: "do while records remain / read record;", nested if statements closing at 7a (endif) and 7b (enddo), and exit node 8: end.]
Structural (White-Box) Testing: Path Testing
[Figure series: successive path selections over the same control flowgraph (decision nodes 1 through 8, each with T and F branches), with a coverage table marking (X) the branches each selected path exercises:
A) an initial path covering the first branches (details not recoverable)
B) after covering 1F and 1T: path 1T2F5F7T7F1F8T
C) after covering 1T2F and 1T2T: path 1T2T3T4F3F1T2F5F7T7F1F8T
Final table: every branch of every decision node covered in both directions.]
Structural (White-Box) Testing: Statement Testing
x = 5;                    x = 5;
y = 6;                    y = 6;
z = x++ + y++;            z = ++x + ++y;

Result: x = 6, y = 7, z = 11        Result: x = 6, y = 7, z = 13
NOTE:
• ++n increments n before its value is used.
• n++ increments n after its value is used.
Submodel / Module Testing
Testing is conducted in the same pattern as top-down development. Begin testing the global level and then proceed to testing the lower levels.
[Figure: module hierarchy — Module 1 (Class 1) through Module M (Class M) at the top level, Sub-Module 1 (Sub-Class 1) through Sub-Module K (Sub-Class K) below.]
When testing a given level, calls to sublevels are simulated by using "stubs" – a dummy component which has no other function but to let its caller complete the call.
Top-Down Testing
Advantages:
Model integration testing is minimized.
Early existence of a working model results.
Higher level interfaces are tested first.
A natural environment for testing lower levels is provided.
Errors are localized to new modules and interfaces.
Disadvantages:
Thorough module testing is discouraged (the entire model
must be executed to perform testing).
Testing can be expensive (since the whole model must be
executed for each test).
Adequate input data is difficult to obtain (because of the
complexity of the data paths and control predicates).
Integration testing is hampered (again, because of the size
and complexity induced by testing the whole model).
Visualization / Animation
Induction
Inductive Assertions
Inference
Lambda Calculus
Logical Deduction
Predicate Calculus
Predicate Transformation
Proof of Correctness