Chapter 3. Software Testing Methods and Techniques
I. SOFTWARE TESTING APPROACHES
STATIC TESTING
• Static testing is a type of software testing in which the software application is
examined without executing the code.
Costs of reviews
• Rough guide: 5%-15% of development effort
• a half-day per week is 10%
• Effort required for reviews
• planning (by leader / moderator)
• preparation / self-study checking
• meeting
• fixing / editing / follow-up
• recording & analysis of statistics / metrics
• process improvement (should!)
TYPES OF REVIEW
Types of review of documents
• Informal Review (undocumented)
• widely viewed as useful and cheap (but no one can prove it!). A helpful first step for chaotic organisations.
• Technical Review (or peer review)
• includes peer and technical experts, no management participation. Normally documented, fault-finding.
Can be rather subjective.
• Decision-making Review
• group discusses document and makes a decision about the content, e.g. how something should be done,
go or no-go decision, or technical comments
• Walkthrough
• author guides the group through a document and his or her thought processes, so all understand the
same thing, consensus on changes to make
• Inspection
• formal individual and group checking, using sources and standards, according to generic and specific rules
and checklists, using entry and exit criteria, Leader must be trained & certified, metrics required
Reviews in general 1
• Objectives / goals
• validation & verification against specifications & standards
• achieve consensus (excluding Inspection)
• process improvement (ideal, included in Inspection)
Reviews in general 2
• Activities
• planning
• overview / kickoff meeting (Inspection)
• preparation / individual checking
• review meeting (not always)
• follow-up (for some types)
• metrics recording & analysis (Inspections and sometimes reviews)
Reviews in general 3
• Roles and responsibilities
• Leader / moderator - plans the review / Inspection, chooses participants, helps &
encourages, conducts the meeting, performs follow-up, manages metrics
• Author of the document being reviewed / Inspected
• Reviewers / Inspectors - specialised fault-finding roles for Inspection
• Managers - excluded from some types of review, need to plan project time for review /
Inspection
• Others: e.g. Inspection/ review Co-ordinator
Reviews in general 4
• Deliverables
• Changes (edits) in review product
• Change requests for source documents (predecessor documents to product being
reviewed / Inspected)
• Process improvement suggestions
• to the review / Inspection process
• to the development process which produced the product just reviewed / Inspected
• Metrics (Inspection and some types of review)
Reviews in general 5
• Pitfalls (they don’t always work!)
• Lack of training in the technique (especially Inspection, the most formal)
• Lack of or quality of documentation - what is being reviewed / Inspected
• Lack of management support
• Failure to improve processes
Inspection is different
STATIC ANALYSIS
What can static analysis do?
• Remember: static techniques do not execute the code
n := 0
read (x)
n := 1          { data flow anomaly: n is re-defined without being used }
while x > y do  { data flow fault: y is used before it has been defined
begin             (first time around the loop) }
  read (y)
  write (n*y)
  x := x - n
end
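The anomalies above are exactly what a data-flow analyser looks for. As a rough illustration (not any particular tool), here is a minimal Python sketch that flags a variable re-defined without an intervening use; it is deliberately simplified and handles straight-line code only, with no branches or loops:

```python
import ast

def find_double_defines(source: str):
    """Flag names assigned twice with no intervening read.
    Simplified sketch: straight-line code only, no branches."""
    anomalies = []
    last_def = {}   # name -> line of its most recent assignment
    used = set()    # names read since their last assignment
    for stmt in ast.parse(source).body:
        loads = [n.id for n in ast.walk(stmt)
                 if isinstance(n, ast.Name) and isinstance(n.ctx, ast.Load)]
        stores = [n.id for n in ast.walk(stmt)
                  if isinstance(n, ast.Name) and isinstance(n.ctx, ast.Store)]
        used.update(loads)          # reads happen before the write
        for name in stores:
            if name in last_def and name not in used:
                anomalies.append((name, stmt.lineno))
            last_def[name] = stmt.lineno
            used.discard(name)
    return anomalies

# n is re-defined on line 3 without being used in between
print(find_double_defines("n = 0\nx = 1\nn = 1\n"))  # → [('n', 3)]
```

A real static analyser builds a full control flow graph and propagates define/use facts across branches; this sketch only shows the core define-without-use check.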
Control flow analysis
• Highlights:
• nodes not accessible from start node
• infinite loops
• multiple entry to loops
• whether code is well structured, i.e. reducible
• whether code conforms to a flowchart grammar
• any jumps to undefined labels
• any labels not jumped to
• cyclomatic complexity and other metrics
Cyclomatic complexity
• cyclomatic complexity is a measure of the complexity of a flow graph
• (and therefore the code that the flow graph represents)
• the more complex the flow graph, the greater the measure
• it can most easily be calculated as:
• complexity = number of decisions + 1
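As an illustrative sketch (not a production metric tool), the "decisions + 1" rule can be approximated for Python code with the standard ast module, counting each branch or loop as one decision and each extra boolean operand as another:

```python
import ast

def cyclomatic_complexity(source: str) -> int:
    """Estimate cyclomatic complexity as number of decisions + 1."""
    decisions = 0
    for node in ast.walk(ast.parse(source)):
        # each if/while/for adds one decision
        if isinstance(node, (ast.If, ast.While, ast.For)):
            decisions += 1
        # each extra operand in an and/or chain adds one more
        elif isinstance(node, ast.BoolOp):
            decisions += len(node.values) - 1
    return decisions + 1

sample = """
if a > 0:
    x = 1
while b > 0:
    b -= 1
"""
print(cyclomatic_complexity(sample))  # → 3 (two decisions + 1)
```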
Which flow graph is most complex?
[Figure: flow graphs for comparison. What is the cyclomatic complexity of each?]
Example of control flow graph
Pseudo-code:
  Result = 0
  Right = 0
  DO WHILE more Questions
    IF Answer = Correct THEN
      Right = Right + 1
    ENDIF
  END DO
  Result = (Right / Questions)
  IF Result > 60% THEN
    Print "pass"
  ELSE
    Print "fail"
  ENDIF
[Figure: corresponding control flow graph with nodes init, do, if, r=r+1, end, res, if, pass, fail, end]
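The same logic as runnable Python (names are illustrative). By the rule above, the loop plus the two IF decisions give three decisions, so the cyclomatic complexity is 3 + 1 = 4:

```python
def grade(answers, correct):
    """Runnable translation of the quiz-grading pseudo-code.
    Three decisions (loop + two ifs) -> cyclomatic complexity 4."""
    right = 0
    for answer, expected in zip(answers, correct):  # DO WHILE more Questions
        if answer == expected:                      # IF Answer = Correct
            right += 1
    result = right / len(correct)
    return "pass" if result > 0.60 else "fail"      # IF Result > 60%

print(grade(["a", "b", "c"], ["a", "b", "d"]))  # 2/3 > 60% → "pass"
```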
Other static metrics
• lines of code (LOC)
• operands & operators (Halstead’s metrics)
• fan-in & fan-out
• nesting levels
• function calls
• OO metrics: inheritance tree depth, number of methods, coupling &
cohesion
• symbolic evaluation
Limitations and advantages
• Limitations
• cannot distinguish "fail-safe" code from programming faults or anomalies (often
produces a flood of spurious warnings)
• does not execute the code, so findings are not related to actual operating conditions
• Advantages
• can find faults difficult to "see"
• gives objective quality assessment of code
Summary
• Reviews help to find faults in development and test documentation, and
should be applied early
• Types of review: informal, walkthrough, technical / peer review,
Inspection
• Static analysis can find faults and give information about code without
executing
DYNAMIC TESTING
• Dynamic testing is a software testing method used to test the dynamic
behaviour of software code.
• In dynamic testing, the code is executed.
• It checks the functional behaviour of the software system, memory/CPU usage,
and the overall performance of the system.
Dynamic Testing Example
• Suppose we are testing a Login Page with two fields, "Username" and
"Password", where the Username is restricted to alphanumeric characters.
• When the user enters the Username "Guru99", the system accepts it.
Whereas when the user enters "Guru99@123", the application throws an
error message. This result shows that the code is acting dynamically based
on the user input.
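A minimal sketch of that rule and its dynamic test; the validate_username helper and the alphanumeric pattern are hypothetical stand-ins for the login page's real validation logic:

```python
import re

def validate_username(username: str) -> bool:
    """Accept only alphanumeric usernames (hypothetical rule
    from the login-page example above)."""
    return re.fullmatch(r"[A-Za-z0-9]+", username) is not None

# dynamic test: execute the code and compare actual vs expected behaviour
assert validate_username("Guru99") is True       # accepted
assert validate_username("Guru99@123") is False  # rejected: '@' not allowed
```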
DYNAMIC TESTING
• Dynamic testing is when you work with the actual system, providing an input
and comparing the application's actual behaviour to its expected behaviour.
In other words, working with the system with the intent of finding errors.