ISTQB-CTAL-TTA Syllabus v4.0
Copyright Notice
© International Software Testing Qualifications Board (hereinafter called ISTQB®)
Copyright © 2021, the authors for the 2021 update: Adam Roman, Armin Born, Christian Graf, Stuart Reid
Copyright © 2019, the authors for the 2019 update: Graham Bath (vice-chair), Rex Black, Judy McKay, Kenji Onoshi, Mike Smith (chair), Erik van Veenendaal.
All rights reserved. The authors hereby transfer the copyright to the ISTQB®. The authors (as current
copyright holders) and ISTQB® (as the future copyright holder) have agreed to the following conditions
of use:
Extracts, for non-commercial use, from this document may be copied if the source is acknowledged.
Any Accredited Training Provider may use this syllabus as the basis for a training course if the authors
and the ISTQB® are acknowledged as the source and copyright owners of the syllabus and provided
that any advertisement of such a training course may mention the syllabus only after official
Accreditation of the training materials has been received from an ISTQB®-recognized Member Board.
Any individual or group of individuals may use this syllabus as the basis for articles and books, if the
authors and the ISTQB® are acknowledged as the source and copyright owners of the syllabus.
Any other use of this syllabus is prohibited without first obtaining the approval in writing of the ISTQB®.
Any ISTQB®-recognized Member Board may translate this syllabus provided they reproduce the
abovementioned Copyright Notice in the translated version of the syllabus.
Revision History
Version Date Remarks
v4.0 2021/06/30 GA release for v4.0 version
v4.0 2021/04/28 Draft updated based on feedback from Beta Review.
2021 v4.0 Beta 2021/03/01 Draft updated based on feedback from Alpha Review.
2021 v4.0 Alpha 2020/12/07 Draft for Alpha Review updated to:
• Improve text throughout
• Remove subsection associated with K3 TTA-2.6.1 (2.6
Basis Path Testing) and remove LO
• Remove subsection associated with K2 TTA-3.2.4 (3.2.4
Call Graphs) and remove LO
• Rewrite subsection associated with TTA-3.2.2 (3.2.2
Data Flow Analysis) and make it a K3
• Rewrite section associated with TTA-4.4.1 and TTA-4.4.2
(4.4. Reliability Testing)
• Rewrite section associated with TTA-4.5.1 and TTA-4.5.2
(4.5 Performance Testing)
• Add section 4.9 on Operational Profiles.
• Rewrite section associated with TTA-2.8.1 (2.7 Selecting
White-Box Test Techniques section)
• Rewrite TTA-3.2.1 to include cyclomatic complexity (no
impact on exam Qs)
• Rewrite TTA-2.4.1 (MC/DC) to make it consistent with
other white-box LOs (no impact on exam Qs)
Table of Contents
Copyright Notice ...................................................................................................................................... 2
Revision History ....................................................................................................................................... 3
Table of Contents .................................................................................................................................... 4
0. Introduction to this Syllabus ............................................................................................................ 7
0.1 Purpose of this Syllabus ......................................................................................................... 7
0.2 The Certified Tester Advanced Level in Software Testing ..................................................... 7
0.3 Examinable Learning Objectives and Cognitive Levels of Knowledge .................................. 7
0.4 Expectations of Experience.................................................................................................... 7
0.5 The Advanced Level Technical Test Analyst Exam ............................................................... 8
0.6 Entry Requirements for the Exam .......................................................................................... 8
0.7 Accreditation of Courses ........................................................................................................ 8
0.8 Level of Syllabus Detail .......................................................................................................... 8
0.9 How this Syllabus is Organized.............................................................................................. 9
1. The Technical Test Analyst's Tasks in Risk-Based Testing - 30 mins. ........................................ 10
1.1 Introduction........................................................................................................................... 11
1.2 Risk-based Testing Tasks .................................................................................................... 11
1.2.1 Risk Identification ............................................................................................................. 11
1.2.2 Risk Assessment ............................................................................................................. 11
1.2.3 Risk Mitigation.................................................................................................................. 12
2. White-Box Test Techniques - 300 mins. ....................................................................................... 13
2.1 Introduction........................................................................................................................... 14
2.2 Statement Testing ................................................................................................................ 14
2.3 Decision Testing ................................................................................................................... 15
2.4 Modified Condition/Decision Testing .................................................................................... 15
2.5 Multiple Condition Testing .................................................................................................... 16
2.6 Basis Path Testing ............................................................................................................... 17
2.7 API Testing ........................................................................................................................... 17
2.8 Selecting a White-Box Test Technique ................................................................................ 18
2.8.1 Non-Safety-Related Systems .......................................................................................... 19
2.8.2 Safety-related systems .................................................................................................... 19
3. Static and Dynamic Analysis - 180 mins. ..................................................................................... 21
3.1 Introduction........................................................................................................................... 22
3.2 Static Analysis ...................................................................................................................... 22
3.2.1 Control Flow Analysis ...................................................................................................... 22
3.2.2 Data Flow Analysis .......................................................................................................... 22
3.2.3 Using Static Analysis for Improving Maintainability ......................................................... 23
3.3 Dynamic Analysis ................................................................................................................. 24
3.3.1 Overview .......................................................................................................................... 24
3.3.2 Detecting Memory Leaks ................................................................................................. 24
3.3.3 Detecting Wild Pointers ................................................................................................... 25
3.3.4 Analysis of Performance Efficiency ................................................................................. 25
4. Quality Characteristics for Technical Testing - 345 mins. ........................................................... 27
4.1 Introduction........................................................................................................................... 28
4.2 General Planning Issues ...................................................................................................... 29
4.2.1 Stakeholder Requirements .............................................................................................. 29
4.2.2 Test Environment Requirements ..................................................................................... 29
4.2.3 Required Tool Acquisition and Training ........................................................................... 30
4.2.4 Organizational Considerations ......................................................................................... 30
4.2.5 Data Security and Data Protection .................................................................................. 30
4.3 Security Testing ................................................................................................................... 30
4.3.1 Reasons for Considering Security Testing ...................................................................... 30
4.3.2 Security Test Planning ..................................................................................................... 31
Acknowledgements
The 2019 version of this document was produced by a core team from the International Software Testing
Qualifications Board Advanced Level Working Group: Graham Bath (vice-chair), Rex Black, Judy
McKay, Kenji Onoshi, Mike Smith (chair), Erik van Veenendaal.
The following persons participated in the reviewing, commenting, and balloting of the 2019 version of
this syllabus:
This document was produced by a core team from the International Software Testing Qualifications
Board Advanced Level Working Group: Armin Born, Adam Roman, Stuart Reid.
The updated v4.0 version of this document was produced by a core team from the International Software
Testing Qualifications Board Advanced Level Working Group: Armin Born, Adam Roman, Christian Graf,
Stuart Reid.
The following persons participated in the reviewing, commenting, and balloting of the updated v4.0
version of this syllabus:
The core team thanks the review team and the National Boards for their suggestions and input.
This document was formally released by the General Assembly of the ISTQB® on 30 June 2021.
The ISTQB® may allow other entities to use this syllabus for other purposes, provided they seek and
obtain prior written permission.
The ISTQB® Technical Test Analyst Advanced Level Overview is a separate document
[CTAL_TTA_OVIEW] which includes the following information:
• Business Outcomes
• Matrix showing traceability between business outcomes and learning objectives
• Summary
The knowledge levels of the specific learning objectives at K2, K3 and K4 levels are shown at the
beginning of each chapter and are classified as follows:
• K2: Understand
• K3: Apply
• K4: Analyze
The format of the exam is multiple choice. There are 45 questions. To pass the exam, at least 65% of
the total points must be earned.
Exams may be taken as part of an accredited training course or taken independently (e.g., at an exam
center or in a public exam). Completion of an accredited training course is not a pre-requisite for the
exam.
The syllabus content is not a description of the entire knowledge area; it reflects the level of detail to be
covered in Advanced Level training courses. It focuses on material that can apply to any software project, using any software development lifecycle. The syllabus does not contain any specific learning
objectives relating to any particular software development models, but it does discuss how these
concepts apply in Agile software development, other types of iterative and incremental software
development models, and in sequential software development models.
• Chapter 1: The Technical Test Analyst's Tasks in Risk-Based Testing (30 minutes)
• Chapter 2: White-Box Test Techniques (300 minutes)
• Chapter 3: Static and Dynamic Analysis (180 minutes)
• Chapter 4: Quality Characteristics for Technical Testing (345 minutes)
• Chapter 5: Reviews (165 minutes)
• Chapter 6: Test Tools and Automation (180 minutes)
1.1 Introduction
The Test Manager has overall responsibility for establishing and managing a risk-based testing strategy.
The Test Manager will usually request the involvement of the Technical Test Analyst to ensure the risk-
based approach is implemented correctly.
Technical Test Analysts work within the risk-based testing framework established by the Test Manager
for the project. They contribute their knowledge of the technical product risks that are inherent in the
project, such as risks related to security, system reliability and performance. They should also contribute
to the identification and treatment of project risks associated with test environments, such as the
acquisition and set-up of test environments for performance, reliability, and security testing.
These tasks are performed iteratively throughout the project to deal with emerging risks and changing
priorities, and to regularly evaluate and communicate risk status.
Risks that might be identified by the Technical Test Analyst are typically based on the [ISO 25010]
product quality characteristics listed in Chapter 4 of this syllabus.
The likelihood of a product risk is usually interpreted as the probability of the occurrence of the failure in
the system under test. The Technical Test Analyst contributes to understanding the probability of each
technical product risk whereas the Test Analyst contributes to understanding the potential business
impact of the problem should it occur.
Project risks that become issues can impact the overall success of the project. Typically, the following
generic project risk factors need to be considered:
• Conflict between stakeholders regarding technical requirements
• Communication problems resulting from the geographical distribution of the development
organization
• Tools and technology (including relevant skills)
• Time, resource, and management pressure
• Lack of earlier quality assurance
Product risks that become issues may result in higher numbers of defects. Typically, the following
generic product risk factors need to be considered:
• Complexity of technology
• Code complexity
• Amount of change in the source code (insertions, deletions, modifications)
• Large number of defects found relating to technical quality characteristics (defect history)
• Technical interface and integration issues
Given the available risk information, the Technical Test Analyst proposes an initial risk likelihood
according to the guidelines established by the Test Manager. The initial value may be modified by the
Test Manager when all stakeholder views have been considered. The risk impact is normally determined
by the Test Analyst.
The Technical Test Analyst will often cooperate with specialists in areas such as security and
performance to define risk mitigation measures and elements of the test strategy. Additional information
can be obtained from ISTQB® Specialist syllabi, such as the Security Testing syllabus [CT_SEC_SYL]
and the Performance Testing syllabus [CT_PT_SYL].
2.6 Basis Path Testing (has been removed from version v4.0 of this syllabus)
TTA-2.6.1 has been removed from version v4.0 of this syllabus.
2.1 Introduction
This chapter describes white-box test techniques. These techniques apply to code and other structures
with a control flow, such as business process flow charts.
Each specific technique enables test cases to be derived systematically and focuses on a particular
aspect of the structure. The test cases generated by the techniques satisfy coverage criteria which are
set as an objective and are measured against. Achieving full coverage (i.e. 100%) does not mean that
the entire set of tests is complete, but rather that the technique being used no longer suggests any
useful further tests for the structure under consideration.
Test inputs are generated to ensure a test case exercises a particular part of the code (e.g., a statement
or decision outcome). Determining the test inputs that will cause a particular part of the code to be
exercised can be challenging, especially if the part of the code to be exercised is at the end of a long
control flow sub-path with several decisions on it. The expected results are identified based on a source
external to this structure, such as a requirements or design specification or another test basis.
The Foundation Syllabus [CTFL_SYL] introduces statement testing and decision testing. Statement
testing focuses on exercising the executable statements in the code, whereas decision testing exercises
the decision outcomes.
The modified condition/decision and multiple condition techniques listed above are based on decision
predicates containing multiple conditions and find similar types of defects. No matter how complex a
decision predicate may be, it will evaluate to either TRUE or FALSE, which will determine the path taken
through the code. A defect is detected when the intended path is not taken because a decision predicate
does not evaluate as expected.
Refer to [ISO 29119] for more details on the specification, and examples, of these techniques.
Applicability
Achieving full statement coverage should be considered as a minimum for all code being tested, although this is not always possible in practice due to constraints on the available time and/or effort.
Limitations/Difficulties
Even high percentages of statement coverage may not detect certain defects in the code’s logic. In
many cases achieving 100% statement coverage is not possible due to unreachable code. Although
unreachable code is generally not considered good programming practice, it may occur, for instance, if
a switch statement must have a default case, but all possible cases are handled explicitly.
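The following minimal C sketch (illustrative only; the identifiers are invented) shows how a default case required by a coding standard can become unreachable when all possible cases are handled explicitly, which makes 100% statement coverage unachievable:

    #include <stdio.h>

    typedef enum { RED, GREEN, BLUE } color_t;

    const char *color_name(color_t c) {
        switch (c) {
            case RED:   return "red";
            case GREEN: return "green";
            case BLUE:  return "blue";
            default:    return "unknown";  /* unreachable when only valid enum values are passed */
        }
    }

    int main(void) {
        printf("%s\n", color_name(GREEN));  /* exercises one case; the other cases need further tests */
        return 0;
    }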
Coverage is measured as the number of decision outcomes exercised by the tests divided by the total
number of decision outcomes in the test object, normally expressed as a percentage. Note that a single
test case may exercise several decision outcomes.
Compared to the modified condition/decision and multiple condition techniques described below,
decision testing considers the entire decision as a whole and evaluates only the TRUE and FALSE
outcomes, regardless of the complexity of its internal structure.
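As a minimal illustration (the function and values are invented for this sketch), the following C code contains one decision made up of two atomic conditions. Two tests, one making the whole decision TRUE and one making it FALSE, achieve 100% decision coverage without exercising all condition combinations:

    /* One decision with two atomic conditions (age >= 18, has_id). */
    int may_enter(int age, int has_id) {
        if (age >= 18 && has_id) {  /* the decision as a whole evaluates to TRUE or FALSE */
            return 1;               /* admitted */
        }
        return 0;                   /* refused */
    }

    /* 100% decision coverage with two tests:
       may_enter(21, 1) -> decision TRUE
       may_enter(16, 1) -> decision FALSE
       The condition combination (age >= 18 TRUE, has_id FALSE) is never exercised,
       which is why the stronger techniques described below exist. */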
Branch testing is often used interchangeably with decision testing, because covering all branches and
covering all decision outcomes can be achieved with the same tests. Branch testing exercises the
branches in the code, where a branch is normally considered to be an edge of the control flow graph.
For programs with no decisions, the definition of decision coverage above results in a coverage of 0/0,
which is undefined, no matter how many tests are run, while the single branch from entry to exit point
(assuming one entry and exit point) will result in 100% branch coverage being achieved. To address
this difference between the two measures, ISO 29119-4 requires at least one test to be run on code with
no decisions to achieve 100% decision coverage, so making 100% decision coverage and 100% branch
coverage equivalent for nearly all programs. Many test tools that provide coverage measures, including
those used for testing safety-related systems, employ a similar approach.
Applicability
This level of coverage should be considered when the code being tested is important or even critical
(see the tables in section 2.8.2 for safety-related systems). This technique can be used for code and for
any model that involves decision points, like business process models.
Limitations/Difficulties
Decision testing does not consider the details of how a decision with multiple conditions is made and
may fail to detect defects caused by combinations of the condition outcomes.
Each decision predicate is made up of one or more atomic conditions, each of which evaluates to a
Boolean value. These are logically combined to determine the outcome of the decision. This technique
checks that each of the atomic conditions independently and correctly affects the outcome of the overall
decision.
This technique provides a stronger level of coverage than statement and decision coverage when there
are decisions containing multiple conditions. Assuming N unique, mutually independent atomic
conditions, MC/DC for a decision can usually be achieved by exercising the decision N+1 times. Modified
condition/decision testing requires pairs of tests that show a change of a single atomic condition
outcome can independently affect the result of a decision. Note that a single test case may exercise
several condition combinations and therefore it is not always necessary to run N+1 separate test cases
to achieve MC/DC.
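As an illustrative sketch (the decision and test values are invented), consider a decision with three atomic conditions. The four tests below form one possible set achieving MC/DC (N+1 = 4), where each pair noted in the comments differs in exactly one atomic condition that flips the decision outcome:

    /* Decision with three atomic conditions a, b, c: (a && b) || c */
    int decision(int a, int b, int c) {
        return (a && b) || c;
    }

    /* One possible MC/DC test set (N+1 = 4 tests for N = 3 conditions):
       T1: a=1, b=1, c=0 -> TRUE
       T2: a=0, b=1, c=0 -> FALSE   (T1/T2 show that a independently affects the outcome)
       T3: a=1, b=0, c=0 -> FALSE   (T1/T3 show that b independently affects the outcome)
       T4: a=1, b=0, c=1 -> TRUE    (T3/T4 show that c independently affects the outcome)
       Multiple condition testing would instead require all 2^3 = 8 combinations. */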
Applicability
This technique is used in the aerospace and automotive industries, and other industry sectors for safety-
critical systems. It is used when testing software where a failure may cause a catastrophe. Modified
condition/decision testing can be a reasonable middle ground between decision testing and multiple condition testing (which requires a large number of combinations to be tested). It is more rigorous than decision
testing but requires far fewer test conditions to be exercised than multiple condition testing when there
are several atomic conditions in the decision.
Limitations/Difficulties
Achieving MC/DC may be complicated when there are multiple occurrences of the same variable in a
decision with multiple conditions; when this occurs, the conditions may be “coupled”. Depending on the
decision, it may not be possible to vary the value of one condition such that it alone causes the decision
outcome to change. One approach to addressing this issue is to specify that only uncoupled atomic
conditions are tested using modified condition/decision testing. The other approach is to analyze each
decision in which coupling occurs.
Some compilers and/or interpreters are designed such that they exhibit short-circuiting behavior when
evaluating a complex decision statement in the code. That is, the executing code may not evaluate an
entire expression if the final outcome of the evaluation can be determined after evaluating only a portion
of the expression. For example, if evaluating the decision “A and B”, there is no reason to evaluate B if
A has already been evaluated as FALSE. No value of B can change the final result, so the code may
save execution time by not evaluating B. Short-circuiting may affect the ability to attain MC/DC since
some required tests may not be achievable. Usually, it is possible to configure the compiler to switch off
the short-circuiting optimization for the testing, but this may not be allowed for safety-critical applications,
where the tested code and the delivered code must be identical.
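The following C fragment (illustrative only) shows short-circuiting: when the first condition evaluates to FALSE, the second condition is never evaluated, so a test aimed at that particular condition combination cannot be executed:

    #include <stddef.h>

    typedef struct { int length; } buffer_t;

    /* With the short-circuiting && operator, "p->length > 0" is only evaluated
       when "p != NULL" is TRUE. A coverage item requiring the combination
       (p == NULL, p->length > 0) is therefore infeasible, which is one way in
       which short-circuiting can affect the ability to attain MC/DC. */
    int has_content(const buffer_t *p) {
        return (p != NULL) && (p->length > 0);
    }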
Coverage is measured as the number of atomic condition combinations exercised by the tests divided by the total number of atomic condition combinations over all decisions in the test object, normally expressed as a percentage.
Applicability
This technique is used to test high-risk software and embedded software which are expected to run
reliably without crashing for long periods of time.
Limitations/Difficulties
Because the number of test cases can be derived directly from a truth table containing all the atomic
conditions, this level of coverage can easily be determined. However, the sheer number of test cases
required for multiple condition testing makes modified condition/decision testing more feasible for
situations where there are several atomic conditions in a decision.
If the compiler uses short-circuiting, the number of condition combinations that can be exercised will
often be reduced, depending on the order and grouping of logical operations that are performed on the
atomic conditions.
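As a small sketch (the decision is invented for illustration), the truth table for a two-condition decision shows the 2^2 = 4 combinations required for multiple condition coverage and the effect of short-circuiting:

    /* Decision with two atomic conditions: a && b */
    int both(int a, int b) {
        return a && b;
    }

    /* Multiple condition coverage requires all 2^2 = 4 combinations:
       a=1, b=1 -> TRUE
       a=1, b=0 -> FALSE
       a=0, b=1 -> FALSE  } with short-circuiting, b is not evaluated in these two
       a=0, b=0 -> FALSE  } rows, so the two combinations cannot be distinguished
                            by observing the evaluation of b. */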
API Testing is a type of testing rather than a technique. In certain respects, API testing is quite similar
to testing a graphical user interface (GUI). The focus is on the evaluation of input values and returned
data.
Negative testing is often crucial when dealing with APIs. Programmers that use APIs to access services
external to their own code may try to use API interfaces in ways for which they were not intended. That
means that robust error handling is essential to avoid incorrect operation. Combinatorial testing of many
interfaces may be required because APIs are often used in conjunction with other APIs, and because a
single interface may contain several parameters, where values of these may be combined in many ways.
APIs frequently are loosely coupled, resulting in the very real possibility of lost transactions or timing
glitches. This necessitates thorough testing of the recovery and retry mechanisms. An organization that
provides an API interface must ensure that all services have a very high availability; this often requires
strict reliability testing by the API publisher as well as infrastructure support.
Applicability
API testing is becoming more important for testing systems of systems as the individual systems become
distributed or use remote processing as a way of off-loading some work to other processors. Examples
include:
• Operating system calls
• Service-oriented architectures (SOA)
• Remote procedure calls (RPC)
• Web services
Software containerization results in the division of a software program into several containers which
communicate with each other using mechanisms such as those listed above. API testing should also
target these interfaces.
Limitations/Difficulties
Testing an API directly usually requires a Technical Test Analyst to use specialized tools. Because there
is typically no direct graphical interface associated with an API, tools may be required to set up the initial
environment, marshal the data, invoke the API, and determine the result.
Coverage
API testing is a description of a type of testing; it does not denote any specific level of coverage. At a
minimum, the API test should include making calls to the API with both realistic input values and
unexpected inputs for checking exception handling. More thorough API tests may ensure that callable
entities are exercised at least once, or all possible functions are called at least once.
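As a minimal, hypothetical sketch (the function parse_port and its behavior are invented for illustration and do not refer to any real API), an API test typically calls the interface with both realistic and unexpected input values and checks the returned data and the error handling:

    #include <assert.h>
    #include <stdlib.h>

    /* Hypothetical API under test: converts a string to a TCP port number,
       returning -1 for invalid input instead of failing in an uncontrolled way. */
    int parse_port(const char *s) {
        if (s == NULL) return -1;
        char *end;
        long value = strtol(s, &end, 10);
        if (*end != '\0' || value < 1 || value > 65535) return -1;
        return (int)value;
    }

    int main(void) {
        assert(parse_port("8080") == 8080);  /* realistic input */
        assert(parse_port("0") == -1);       /* boundary value */
        assert(parse_port("abc") == -1);     /* malformed input */
        assert(parse_port(NULL) == -1);      /* robustness / negative test */
        return 0;
    }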
Representational State Transfer is an architectural style. RESTful Web services allow requesting
systems to access Web resources using a uniform set of stateless operations. Several coverage criteria
exist for RESTful web APIs, the de-facto standard for software integration [Web-7]. They can be divided
into two groups: input coverage criteria and output coverage criteria. Among others, input criteria may
require the execution of all possible API operations, the use of all possible API parameters, and the
coverage of sequences of API operations. Among others, output criteria may require the generation of
all correct and erroneous status codes, and the generation of responses containing resources exhibiting
all properties (or all property types).
Types of Defects
The types of defects that can be found by testing APIs are quite disparate. Interface issues are common,
as are data handling issues, timing problems, loss of transactions, duplication of transactions and issues
in exception handling.
The selected white-box test technique is normally specified in terms of a required coverage level, which
is achieved by applying the test technique. For instance, a requirement to achieve 100% statement
coverage would typically lead to the use of statement testing. However, black-box test techniques are
normally applied first, coverage is then measured, and the white-box test technique only used if the
required white-box coverage level has not been achieved. In some situations, white-box testing may be
used less formally to provide an indication of where coverage may need to be increased (e.g., creating
additional tests where white-box coverage levels are particularly low). Statement testing would normally
be sufficient for such informal coverage measurement.
When specifying a required white-box coverage level it is good practice to only specify it at 100%. The
reason for this is that if lower levels of coverage are required, then this typically means that the parts of
the code that are not exercised by testing are the parts that are the most difficult to test, and these parts
are normally also the most complex and error-prone. So, by requesting and achieving, for example, 80% coverage, the code that includes the majority of the detectable defects may be left untested.
For this reason, when white-box coverage criteria are specified in standards, they are nearly always
specified at 100%. Strict definitions of coverage levels have sometimes made this level of coverage
impracticable. However, those given in ISO 29119-4 allow infeasible coverage items to be discounted
from the calculations, thus making 100% coverage an achievable goal.
When specifying the required white-box coverage for a test object, it is also only necessary to specify
this for a single coverage criterion (e.g., it is not necessary to require both 100% statement coverage
and 100% MC/DC). With exit criteria set at 100%, it is possible to relate coverage criteria in a subsumes hierarchy, where some coverage criteria are shown to subsume others. One coverage
criterion is said to subsume another if, for all components and their specifications, every set of test cases
that satisfies the first criterion also satisfies the second. For example, branch coverage subsumes
statement coverage because if branch coverage is achieved (to 100%), then 100% statement coverage
is guaranteed. For the white-box test techniques covered in this syllabus, we can say that branch and
decision coverage subsume statement coverage, MC/DC subsumes decision and branch coverage, and
multiple condition coverage subsumes MC/DC (if we consider branch and decision coverage to be the
same at 100%, then we can say that they subsume each other).
Note that when determining the white-box coverage levels to be achieved for a system, it is quite normal
to define different levels for different parts of the system. This is because different parts of a system
contribute differently to risk. For instance, in an avionics system, the subsystems associated with in-
flight entertainment would be assigned a lower level of risk than those associated with flight control.
Testing interfaces is common for all types of systems and is normally required for all integrity levels for
safety-related systems (see section 2.8.2 for more on integrity levels). The level of required coverage
for API testing will normally increase based on the associated risk (e.g., the higher level of risk
associated with a public interface may require more rigorous API testing).
The selection of which white-box test technique to use is generally based on the nature of the test object
and its perceived risks. If the test object is considered to be safety-related (i.e. failure could cause harm
to people or the environment) then regulatory standards are applicable and will define required white-
box coverage levels (see section 2.8.2). If the test object is not safety-related then the choice of which
white-box coverage levels to achieve is more subjective, but should still be largely based on perceived
risks, as described in section 2.8.1.
The following factors (in no particular order of priority) are typically considered when selecting white-box
test techniques for non-safety-related systems:
• Contract – If the contract requires a particular level of coverage to be achieved, then not
achieving this coverage level will potentially result in a breach of contract.
• Customer – If the customer requests a particular level of coverage, for instance as part of the
test planning, then not achieving this coverage level may cause problems with the customer.
• Regulatory standard – For some industry sectors (e.g., financial) a regulatory standard that
defines required white-box coverage criteria applies for mission-critical systems. See section
2.8.2 for coverage of regulatory standards for safety-related systems.
• Test strategy – If the organization’s test strategy specifies requirements for white-box code
coverage, then not aligning with the organizational strategy may risk censure from higher
management.
• Coding style – If the code is written with no multiple conditions within decisions, then it would
be wasteful to require white-box coverage levels such as MC/DC and multiple condition
coverage.
• Historical defect information – If historical data on the effectiveness of achieving a particular
coverage level suggests that it would be appropriate to use for this test object, it would be
risky to ignore the available data. Note that such data may be available within the project,
organization or industry.
• Skills and experience – If the testers available to perform the testing are not sufficiently experienced and skilled in a particular white-box technique, that technique may be misapplied and its selection may introduce unnecessary risk.
• Tools – White-box coverage can only be measured in practice by using coverage tools. If no tools are available that support a given coverage measure, then selecting that measure to be achieved would introduce a high level of risk.
When selecting white-box testing for non-safety-related systems, the Technical Test Analyst has more freedom to recommend the appropriate white-box coverage than for safety-related systems. Such choices are typically a compromise between the perceived risks and the
cost, resources and time required to treat these risks through white-box testing. In some situations, other
treatments, which might be implemented by other software testing approaches or otherwise (e.g.,
different development approaches) may be more appropriate.
IEC 61508 (functional safety of programmable, electronic, safety-related systems [IEC 61508]) is an
umbrella international standard used for such purposes. In theory, it could be used for any safety-related
system, however some industries have created specific variants (e.g., ISO 26262 [ISO 26262] applies
to automotive systems) and some industries have created their own standards (e.g., DO-178C [DO-
178C] for airborne software). Additional information regarding ISO 26262 is provided in the ISTQB®
Automotive Software Tester syllabus [CT_AuT_SYL].
IEC 61508 defines four safety integrity levels (SILs), each of which is defined as a relative level of risk-
reduction provided by a safety function, correlated to the frequency and severity of perceived hazards.
When a test object performs a function related to safety, the higher the risk of failure, the higher the reliability the test object should have. The following table shows the reliability levels associated with the
SILs. Note that the reliability level for SIL 4 for the continuous operation case is extremely high, since it
corresponds to a mean time between failures (MTBF) greater than 10,000 years.
The recommendations for white-box coverage levels associated with each SIL are shown in the following
table. Where an entry is shown as ‘Highly recommended’, in practice achieving that level of coverage is
normally considered mandatory. In contrast, where an entry is only shown as ‘Recommended’, many
practitioners consider this to be optional and avoid achieving it by providing a suitable rationale. Thus,
a test object assigned to SIL 3 is normally tested to achieve 100% branch coverage (it achieves 100%
statement coverage automatically, as shown by the subsumes ordering).
Note that the above SILs and requirements on coverage levels from IEC 61508 are different in ISO
26262, and different again in DO-178C.
Keywords
anomaly, control flow analysis, cyclomatic complexity, data flow analysis, definition-use pair, dynamic
analysis, memory leak, static analysis, wild pointer
Note: TTA-3.2.4 has been removed from version v4.0 of this syllabus.
3.1 Introduction
Static analysis (see section 3.2) is a form of testing that is performed without executing the software.
The quality of the software is evaluated by a tool or by a person based on its form, structure, content, or
documentation. This static view of the software allows detailed analysis without having to create the
data and preconditions necessary to execute test cases.
Apart from static analysis, static testing techniques also include different forms of review. The ones that
are relevant for the Technical Test Analyst are covered in Chapter 5.
Dynamic analysis (see section 3.3) requires the actual execution of the code and is used to find defects
which are more easily detected when the code is executing (e.g., memory leaks). Dynamic analysis, as
with static analysis, may rely on tools or on a person monitoring the executing system, watching for indicators such as a rapid increase in memory use.
Control flow analysis can be used to determine cyclomatic complexity. The cyclomatic complexity is a
positive integer which represents the number of independent paths in a strongly connected graph.
The cyclomatic complexity is generally used as an indicator of the complexity of a component. Thomas
McCabe's theory [McCabe76] was that the more complex the system, the harder it would be to maintain
and the more defects it would contain. Many studies have noted this correlation between complexity and
the number of contained defects. Any component that is measured with a higher complexity should be
reviewed for possible refactoring, for example division into multiple components.
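As an illustrative sketch (the function is invented), the following C code contains two decision points, giving a cyclomatic complexity of 3 (number of decisions + 1), which corresponds to three independent paths through the code:

    /* Cyclomatic complexity = number of decisions + 1 = 2 + 1 = 3 */
    int classify(int x) {
        if (x < 0) {        /* decision 1 */
            return -1;      /* path 1: negative */
        }
        if (x == 0) {       /* decision 2 */
            return 0;       /* path 2: zero */
        }
        return 1;           /* path 3: positive */
    }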
One common technique classifies the use of a variable as one of three atomic actions:
• when the variable is defined, declared, or initialized (e.g., x:=3)
• when the variable is used or read (e.g., if x > temp)
• when the variable is killed, destroyed, or goes out of scope (e.g., text_file_1.close, loop control
variable (i) on exit from loop)
Depending on the programming language, some of these anomalies may be identified by the compiler,
but a separate static analysis tool might be needed to identify the data flow anomalies. For instance, re-
definition with no intervening use is allowed in most programming languages, and may be deliberately
programmed, but it would be flagged by a data flow analysis tool as being a possible anomaly that
should be checked.
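The following C fragment (illustrative only) contains a typical data flow anomaly that most compilers accept but a data flow analysis tool would flag: the variable is defined twice with no intervening use:

    int total_price(int price, int discount) {
        int total;
        total = price;             /* definition */
        total = price - discount;  /* re-definition with no intervening use (possible anomaly) */
        return total;              /* use */
    }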
The use of control flow paths to determine the sequence of actions for a variable can lead to the reporting
of potential anomalies that cannot occur in practice. For instance, static analysis tools cannot always
identify if a control flow path is feasible, as some paths are only determined based on values assigned
to variables at run time. There is also a class of data flow analysis problems that are difficult for tools to
identify, when the analyzed data are part of data structures with dynamically assigned variables, such
as records and arrays. Static analysis tools also struggle with identifying potential data flow anomalies
when variables are shared between concurrent threads of control in a program as the sequence of
actions on data becomes difficult to predict.
In contrast to data flow analysis, which is static testing, data flow testing is dynamic testing in which test
cases are generated to exercise ‘definition-use pairs’ in program code. Data flow testing uses some of
the same concepts as data flow analysis as these definition-use pairs are control flow paths between a
definition and a subsequent use of a variable in a program.
Poorly written, uncommented, and unstructured code tends to be harder to maintain. It may require more
effort for developers to locate and analyze defects in the code, and the modification of the code to correct
a defect or add a feature is likely to result in further defects being introduced.
Static analysis is used to verify compliance with coding standards and guidelines; where non-compliant
code is identified, it can be updated to improve its maintainability. These standards and guidelines
describe required coding and design practices such as conventions for naming, commenting,
indentation and modularization. Note that static analysis tools generally raise warnings rather than
detect defects. These warnings (e.g., on level of complexity) may be provided even though the code
may be syntactically correct.
Modular designs generally result in more maintainable code. Static analysis tools support the
development of modular code in the following ways:
• They search for repeated code. These sections of code may be candidates for refactoring into
components (although the runtime overhead imposed by component calls may be an issue for
real-time systems).
• They generate metrics which are valuable indicators of code modularization. These include
measures of coupling and cohesion. A system that has good maintainability is more likely to
have a low measure of coupling (the degree to which components rely on each other during
execution) and a high measure of cohesion (the degree to which a component is self-contained
and focused on a single task).
• They indicate, in object-oriented code, where derived objects may have too much or too little
visibility into parent classes.
• They highlight areas in code or architecture with a high level of structural complexity.
The maintenance of a website can also be supported using static analysis tools. Here the objective is
to check if the tree-like structure of the site is well-balanced or if there is an imbalance that will lead to:
In addition to evaluating maintainability, static analysis tools can also be applied to the code used for
implementing websites to check for possible exposure to security vulnerabilities such as code injection,
cookie security, cross-site scripting, resource tampering, and SQL code injection. Further details are
provided in section 4.3 and in the Security Testing syllabus [CT_SEC_SYL].
Failures that are not immediately reproducible (intermittent) can have significant consequences on the
testing effort and on the ability to release or productively use software. Such failures may be caused by
memory or resource leaks, incorrect use of pointers and other corruptions (e.g., of the system stack)
[Kaner02]. Due to the nature of these failures, which may include the gradual worsening of system
performance or even system crashes, testing strategies must consider the risks associated with such
defects and, where appropriate, perform dynamic analysis to reduce them (typically by using tools).
Since these failures often are the most expensive failures to find and to correct, it is recommended to
start dynamic analysis early in the project.
Dynamic analysis may be performed at any test level and requires technical and system skills to do the
following:
• Specify the test objectives of dynamic analysis;
• Determine the proper time to start and stop the analysis;
• Analyze the results.
Dynamic analysis tools can be used even if the Technical Test Analyst has minimal technical skills; the
tools used usually create comprehensive logs which can be analyzed by those with the needed technical
and analytical skills.
In environments that use automatic garbage collection, memory is freed after use without the programmer's direct intervention. Isolating memory leaks can be very difficult in cases where allocated memory should be freed by the automatic garbage collector.
Memory leaks typically cause problems after some time – when a significant amount of memory has
leaked and become unavailable. When software is newly installed, or when the system is restarted,
memory is reallocated, and so memory leaks are not noticeable; testing, during which software is frequently re-installed and systems frequently restarted, is an example of a situation in which memory leaks can go undetected. For these reasons, the negative effects
of memory leaks may first be noticed when the program is in production.
The primary symptom of a memory leak is a steadily worsening system response time which may
ultimately result in system failure. While such failures may be resolved by re-starting (re-booting) the
system, this may not always be practical or even possible for some systems.
Many dynamic analysis tools identify areas in the code where memory leaks occur so that they can be
corrected. Simple memory monitors can also be used to obtain a general impression of whether
available memory is declining over time, although a follow-up analysis would still be required to
determine the exact cause of the decline.
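A minimal C sketch of a memory leak (illustrative only) that a dynamic analysis tool would typically report: memory is repeatedly allocated but never released, so the available memory declines the longer the program runs:

    #include <stdlib.h>
    #include <string.h>

    /* Called for every incoming message; the allocated copy is never freed,
       so each call leaks memory. The effect only becomes noticeable after
       the program has been running for some time. */
    void handle_message(const char *msg) {
        char *copy = malloc(strlen(msg) + 1);
        if (copy == NULL) return;
        strcpy(copy, msg);
        /* ... process the copy ... */
        /* missing: free(copy); */
    }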
Note that changes to the program’s memory usage (e.g., a new build following a software change) may
trigger any of the four consequences listed above. This is particularly critical where initially the program
performs as expected despite the use of wild pointers, and then crashes unexpectedly (perhaps even
in production) following a software change. Tools can help identify wild pointers as they are used by the
program, irrespective of their impact on the program’s execution. Some operating systems have built-in
functions to check for memory access violations during runtime. For instance, the operating system may
throw an exception when an application tries to access a memory location that is outside of that
application's allowed memory area.
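An illustrative C sketch of a wild (dangling) pointer: the pointer is used after the memory it references has been released, so the program may appear to work, may corrupt data, or may crash, depending on how that memory is subsequently reused:

    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        int *p = malloc(sizeof *p);
        if (p == NULL) return 1;
        *p = 42;
        free(p);             /* the memory is returned to the heap */
        printf("%d\n", *p);  /* wild pointer: use after free, behavior is undefined */
        return 0;
    }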
Dynamic analysis of program performance is often done while conducting system tests, although it may
also be done when testing a single sub-system in earlier phases of testing using test harnesses. Further
details are provided in the Performance Testing syllabus [CT_PT_SYL].
4.1 Introduction
In general, the Technical Test Analyst focuses testing on "how" the product works, rather than the
functional aspects of "what" it does. These tests can take place at any test level. For example, during
component testing of real-time and embedded systems, conducting performance efficiency
benchmarking and testing resource usage is important. During operational acceptance testing and
system testing, testing for reliability aspects, such as recoverability, is appropriate. The tests at this level
are aimed at testing a specific system, i.e., combinations of hardware and software. The specific system
under test may include various servers, clients, databases, networks, and other resources. Regardless
of the test level, testing should be performed according to the risk priorities and the available resources.
Both dynamic testing and static testing, including reviews (see Chapters 2, 3 and 5) may be applied to
test the non-functional quality characteristics described in this chapter.
The description of product quality characteristics provided in ISO 25010 [ISO 25010] is used as a guide
to the characteristics and their sub-characteristics. These are shown in the table below, together with
an indication of which characteristics/sub-characteristics are covered by the Test Analyst and the
Technical Test Analyst syllabi.
Note that a table is provided in Appendix A which compares the characteristics described in the now
cancelled ISO 9126-1 standard (as used in version 2012 of this syllabus) with those in the later ISO
25010 standard.
For all the quality characteristics and sub-characteristics discussed in this section, the typical risks must
be recognized so that an appropriate testing approach can be formed and documented. Quality
characteristic testing requires particular attention to lifecycle timing, required tools, required standards,
software and documentation availability and technical expertise. Without planning an approach to deal
with each characteristic and its unique testing needs, the tester may not have adequate planning,
preparation and test execution time built into the schedule.
Some of this testing, e.g., performance testing, requires extensive planning, dedicated equipment,
specific tools, specialized testing skills and, in most cases, a significant amount of time. Testing of the
quality characteristics and sub-characteristics must be integrated into the overall testing schedule with
adequate resources allocated to the effort.
While the Test Manager will be concerned with compiling and reporting the summarized metric
information concerning quality characteristics and sub-characteristics, the Test Analyst or the Technical
Test Analyst (according to the table above) gathers the information for each metric.
Measurements of quality characteristics gathered in pre-production tests by the Technical Test Analyst
may form the basis for Service Level Agreements (SLAs) between the supplier and the stakeholders
(e.g., customers, operators) of the software system. In some cases, the tests may continue to be
executed after the software has entered production, often by a separate team or organization. This is
usually seen for performance testing and reliability testing which may show different results in the
production environment than in the testing environment.
The following general factors are considered when performing these tasks:
• Stakeholder requirements
• Test environment requirements
• Required tool acquisition and training
• Organizational considerations
• Data security considerations
A common approach is to assume that if the customer is satisfied with the existing version of the system,
they will continue to be satisfied with new versions, as long as the achieved quality levels are maintained.
This enables the existing version of the system to be used as a benchmark. This can be a particularly
useful approach to adopt for some of the non-functional quality characteristics such as performance
efficiency, where stakeholders may find it difficult to specify their requirements.
It is advisable to obtain multiple viewpoints when capturing non-functional requirements. They must be
elicited from stakeholders such as customers, product owners, users, operations staff, and maintenance
staff. If key stakeholders are excluded some requirements are likely to be missed. For more details
about capturing requirements, refer to the Advanced Test Manager syllabus [CTAL_TM_SYL].
In agile software development non-functional requirements may be stated as user stories or added to
the functionality specified in use cases as non-functional constraints.
• Using a scaled-down version of the system, taking care that the test results obtained are
sufficiently representative of the production system
• Using cloud-based resources as an alternative to acquiring the resources directly
• Using virtualized environments
Test execution times must be planned carefully, and it is quite likely that some tests can only be executed
at specific times (e.g., at low usage times).
The development of a complex simulator may represent a development project in its own right and
should be planned as such. In particular the testing and documentation of the developed tool must be
accounted for in the schedule and resource plan. Sufficient budget and time should be planned for
upgrading and retesting the simulator as the simulated product changes. The planning for simulators to
be used in safety-critical applications must take into account the acceptance testing and possible
certification of the simulator by an independent body.
Data protection policies and laws may preclude the generation of any required test data based on
production data (e.g., personal data, credit card data). Making test data anonymous is a non-trivial task
which must be planned as part of test implementation.
• Software which exhibits unintended side-effects when performing its intended function. For
example, a media player which correctly plays audio but does so by writing files out to
unencrypted temporary storage exhibits a side-effect which may be exploited by software
pirates.
• Code inserted into a web page which may be exercised by subsequent users (cross-site
scripting or XSS). This code may be malicious.
• Buffer overflow (buffer overrun) which may be caused by entering strings into a user interface
input field which are longer than the code can correctly handle. A buffer overflow vulnerability
represents an opportunity for running malicious code instructions (see the sketch following this list).
• Denial of service, which prevents users from interacting with an application (e.g., by overloading
a web server with “nuisance” requests).
• The interception, mimicking and/or altering and subsequent relaying of communications (e.g.,
credit card transactions) by a third party such that a user remains unaware of that third party’s
presence (“man-in-the-middle” attack)
• Breaking the encryption codes used to protect sensitive data.
• Logic bombs (sometimes called Easter Eggs), which may be maliciously inserted into code and
which activate only under certain conditions (e.g., on a specific date). When logic bombs
activate, they may perform malicious acts such as the deletion of files or formatting of disks.
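As a minimal C sketch of the buffer overflow item above (illustrative only), input is copied into a fixed-size buffer without a length check, so input longer than the buffer overwrites adjacent memory:

    #include <string.h>

    /* Vulnerable: there is no check that the input fits into the 8-byte buffer.
       Input longer than 7 characters overwrites adjacent memory. A bounded copy
       (e.g., snprintf with a length limit) plus input validation removes the
       vulnerability. */
    void store_name(const char *input) {
        char buffer[8];
        strcpy(buffer, input);  /* unbounded copy - buffer overflow risk */
        /* ... use buffer ... */
    }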
Individual standards may apply when conducting security test planning, such as [IEC 62443-3-2], which
applies to industrial automation and control systems.
The Security Testing syllabus [CT_SEC_SYL] includes further details of key security test plan elements.
The ISO 25010 sub-characteristics of security [ISO 25010] also provide a basis from which security tests
may be specified. These focus on the following aspects of security:
• Confidentiality
• Integrity
• Non-repudiation
• Accountability
• Authenticity
Section 3.2 (static analysis) and the Security Testing syllabus [CT_SEC_SYL] include further details of
security testing.
Traditionally, maturity has been specified and measured for high-reliability systems, such as those
associated with safety-critical functions (e.g., an aircraft flight control system), where the maturity goal
is defined as part of a regulatory standard. A maturity requirement for such a high-reliability system may
be a mean time between failures (MTBF) of up to 10^9 hours (although this is practically impossible to
measure).
The usual approach to testing maturity for high-reliability systems is known as reliability growth
modelling, and it normally takes place at the end of system testing, after the testing for other quality
characteristics has been completed and any defects associated with detected failures have been fixed.
It is a statistical approach usually performed in a test environment as close to the operational
environment as possible. To measure a specified MTBF, test inputs are generated based on the
operational profile and the system is run and failures are recorded (and subsequently fixed). A reducing
frequency of failure allows the MTBF to be predicted using a reliability growth model.
Where maturity is used as a goal for lower reliability systems (e.g., non-safety-related), then the number
of failures observed during a defined period of expected operational use (e.g., no more than 2 high-
impact failures per week) may be used and may be recorded as part of the service level agreement
(SLA) for the system.
Measuring availability prior to operation (e.g., as part of making the release decision) is often performed
using the same tests used for measuring maturity; tests are based on an operational profile of expected
use over a prolonged period and performed in a test environment as close to the operational
environment as possible. Availability can be measured as MTTF/(MTTF + MTTR), where MTTF is the
mean time to failure and MTTR is the mean time to repair, which is often measured as part of
maintainability testing. Where a system is high-reliability and incorporates recoverability (see section
4.4.5) then we can substitute mean time to recover for MTTR in the equation when the system takes
some time to recover from a failure.
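For example (an illustrative calculation): with an MTTF of 500 hours and an MTTR of 2 hours, availability = MTTF / (MTTF + MTTR) = 500 / (500 + 2) ≈ 0.996, i.e., the system would be expected to be available approximately 99.6% of the time.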
A fault tolerant design will typically involve one or more duplicate subsystems, so providing a level of
redundancy in case of failure. In the case of software, such duplicate systems need to be developed
independently, to avoid common mode failures; this approach is known as N-version programming.
Aircraft flight control systems may include three or four levels of redundancy, with the most critical
functions implemented in several variants. Where hardware reliability is a concern, an embedded system
may run on multiple dissimilar processors, while a critical website may run with a mirror (failover) server
performing the same functions that is always available to take over in the event of the primary server
failing. Whichever approach to fault tolerance is implemented, both the detection of the failure and the
subsequent response to the failure typically need to be tested.
Fault injection testing evaluates the robustness of a system in the presence of defects in the system’s
environment (e.g., a faulty power supply, badly-formed input messages, a process or service not
available, a file not found, or memory not available) and defects in the system itself (e.g., a flipped bit
caused by cosmic radiation, a poor design, or bad coding). Fault injection testing is a form of negative
testing - we deliberately inject defects into the system to assure ourselves that it reacts in the way we
expected (i.e., safely for a safety-related system). Sometimes the defect scenarios we test should never
occur (e.g., a software task should never ‘die’ or get stuck in an infinite loop) and cannot be simulated
by traditional system testing, but with fault injection testing, we create the defect scenario and measure
the subsequent system behavior to ensure it detects and handles the failure.
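The following minimal sketch (hypothetical function and file names, not taken from the syllabus) shows
one way a "file not found" defect might be injected in Python using mocking, so that the system's handling
of the failure can be checked:

    # Minimal fault injection sketch: a FileNotFoundError is injected via mocking
    # and the test checks that the (hypothetical) system under test degrades
    # gracefully instead of crashing.
    from unittest import mock

    def load_config(path):
        """Hypothetical function of the system under test: falls back to
        defaults when the configuration file cannot be read."""
        try:
            with open(path) as f:
                return f.read()
        except FileNotFoundError:
            return "DEFAULTS"

    def test_missing_config_is_handled():
        # Inject the fault: every call to open() raises FileNotFoundError
        with mock.patch("builtins.open", side_effect=FileNotFoundError):
            assert load_config("settings.cfg") == "DEFAULTS"

    test_missing_config_is_handled()
    print("fault injection test passed")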
Backup and restore testing focuses on testing the procedures in place to minimize the effects of a failure
on the system data. Tests evaluate the procedures for both the backup and restoration of data. While
testing for data backup is relatively easy, testing for restoring a system from backed up data can be
more complex and often requires careful planning to ensure that disruption to the operational system is
minimized. Measures include the time taken to perform different types of backup (e.g., full and
incremental), the time taken to restore the data (the recovery time objective), and the level of data loss
that is acceptable (recovery point objective).
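As a simple illustration (all values and targets are hypothetical), such measurements might be checked
against the agreed recovery time and recovery point objectives as follows:

    # Illustrative check of backup/restore measurements against assumed targets
    measurements = {
        "restore_minutes": 42,     # compared against the recovery time objective
        "data_loss_minutes": 10,   # compared against the recovery point objective
    }
    targets = {"restore_minutes": 60, "data_loss_minutes": 15}

    for metric, target in targets.items():
        status = "OK" if measurements[metric] <= target else "FAIL"
        print(f"{metric}: measured {measurements[metric]} min, target {target} min -> {status}")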
Failover testing is performed when the system architecture comprises both a primary and a failover
system that will take over if the primary system fails. Where a system must be able to recover from a
catastrophic failure (e.g., a flood, terrorist attack, or a serious ransomware attack) then failover testing
is often known as disaster recovery testing and the failover system (or systems) may often be in another
geographical location. Performing a full disaster recovery test on an operational system needs extremely
careful planning due to the risks and disruption (often to senior managers’ free time, which is likely to
be taken up in managing the recovery). If a full disaster recovery test fails then we would immediately
revert to the primary system (as it hasn't really been destroyed!). Failover tests include testing the move
to the failover system and checking that, once it has taken over, it provides the required service level.
• Exit criteria – Reliability requirements should be set by regulatory standards for safety-related
applications.
• Failure – Reliability measures are very dependent on counting failures and so there must be
up-front agreement on what constitutes a failure.
• Developers – For maturity testing using reliability growth models, agreement must be reached
with developers that identified defects are fixed as soon as possible.
• Measuring operational reliability is relatively simple compared to measuring reliability prior to
release, as we only have to measure failures; this may need liaison with operations staff.
• Early testing - Achieving high reliability (as opposed to measuring reliability) requires testing to
start as early as possible, with rigorous reviews of early baseline documents and the static
analysis of code.
For testing fault tolerance and recoverability, it is often necessary to generate tests that replicate failures
in the environment and the system itself, to determine how the system reacts. Fault injection testing is
often used for this. Various techniques and checklists are available for the identification of possible
defects and corresponding failures (e.g., Fault Tree Analysis, Failure Mode and Effect Analysis).
For many systems, maximum response times for different system functions are specified as
requirements. In such cases the response time is the elapsed time plus the turnaround time. When a
system has to perform a number of steps (e.g., a pipeline) to complete an activity, it may be useful to
measure the time taken for each step and analyze the results to determine if one or more steps are
causing a bottleneck.
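A minimal sketch of this idea (the pipeline steps are hypothetical stand-ins) is shown below:

    # Illustrative sketch: timing each step of a processing pipeline to locate
    # a potential bottleneck (the steps are stand-ins for real processing).
    import time

    def timed(step):
        start = time.perf_counter()
        step()
        return time.perf_counter() - start

    pipeline = {
        "parse_input":  lambda: time.sleep(0.02),
        "transform":    lambda: time.sleep(0.15),
        "write_output": lambda: time.sleep(0.03),
    }

    durations = {name: timed(step) for name, step in pipeline.items()}
    bottleneck = max(durations, key=durations.get)
    print(durations, "-> bottleneck:", bottleneck)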
Dynamic analysis (see section 3.3.4) may be used to identify components causing a bottleneck,
measure the resources used for resource utilization testing, and measure the maximum limits for
capacity testing.
The performance test types described in sections 4.5.2, 4.5.3 and 4.5.4 can all be used to determine
whether the software under test meets the specified acceptance criteria. They are also used to measure
baseline
values that are used for later comparison when the system is changed. The following performance test
types are more often used to provide information to developers on how the system responds under
different operational conditions.
In the first approach, load tests are normally performed with the load initially set to the maximum expected and
then increased until the system fails (e.g., response times become unreasonably long or the system
crashes). Sometimes, instead of forcing the system to fail, a high load is used to stress the system and
then the load is reduced to a normal level and the system is checked to ensure its performance levels
have recovered to their pre-stress levels.
In the second approach, performance tests are run with the system deliberately compromised by reducing its
access to expected resources (e.g., reducing available memory or bandwidth). The results of stress
testing can provide developers with an insight into which aspects of the system are most critical (i.e.,
weak links) and so may need upgrading.
The Performance Testing syllabus [CT_PT_SYL] includes further performance test types.
for stress testing. Care should be taken to ensure that any tools acquired to support the testing
are compatible with the communications protocols used by the system under test.
The Performance Testing syllabus [CT_PT_SYL] includes further details of performance test planning.
For performance testing, it is often necessary to change the load on the system by modifying parts of
the operational profile to simulate a change from the expected operational use of the system. For
instance, in the case of capacity testing, it will normally be necessary to override the operational profile
in the area of the variable being capacity tested (e.g., increase the number of users accessing the
system until the system stops responding to determine the user access capacity). Similarly, the volume
of transactions may be gradually increased when load testing.
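A sketch of such a capacity-testing loop is given below; the send_requests function and the response
time requirement are hypothetical stand-ins for a real load generation tool and a real acceptance
criterion:

    # Illustrative capacity-test loop: the number of simulated users is increased
    # step by step until the system stops meeting its response time requirement.
    def send_requests(n_users):
        """Hypothetical stand-in: returns the observed average response time (s)."""
        return 0.2 + 0.01 * n_users        # pretend response time grows with load

    MAX_ACCEPTABLE_RESPONSE = 2.0          # seconds (assumed requirement)
    users = 0
    while True:
        users += 50
        response_time = send_requests(users)
        if response_time > MAX_ACCEPTABLE_RESPONSE:
            print(f"capacity limit reached at about {users} users "
                  f"({response_time:.2f} s average response time)")
            break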
The Performance Testing syllabus [CT_PT_SYL] includes further details of performance efficiency
test design.
Typical maintainability objectives of affected stakeholders (e.g., the software owner or operator)
include:
• Minimizing the cost of owning or operating the software
• Minimizing downtime required for software maintenance
Maintainability tests should be included in a test approach where one or more of the following factors
apply:
• Software changes are likely after the software enters production (e.g., to correct defects or
introduce planned updates)
• The benefits of achieving maintainability objectives over the software development lifecycle
are considered by the affected stakeholders to outweigh the costs of performing the
maintainability tests and making any required changes
• The risks of poor software maintainability (e.g., long response times to defects reported by
users and/or customers) justify conducting maintainability tests
Dynamic maintainability testing focuses on the documented procedures developed for maintaining a
particular application (e.g., for performing software upgrades). Maintenance scenarios are used as test
cases to ensure the required service levels are attainable with the documented procedures. This form
of testing is particularly relevant where the underlying infrastructure is complex, and support procedures
may involve multiple departments/organizations. This form of testing may take place as part of
operational acceptance testing.
Factors which influence these characteristics include the application of good programming practices
(e.g., commenting, naming of variables, indentation), and the availability of technical documentation
(e.g., system design specifications, interface specifications).
Modularity can be tested by means of static analysis (see section 3.2.3). Reusability testing may take
the form of architectural reviews (see Chapter 5).
Portability testing can start with the individual components (e.g., replaceability of a particular component
such as changing from one database management system to another) and then expand in scope as
more code becomes available. Installability may not be testable until all the components of the product
are functionally working.
Portability must be designed and built into the product and so must be considered early in the design
and architecture phases. Architecture and design reviews can be particularly productive for identifying
potential portability requirements and issues (e.g., dependency on a particular operating system).
• Validating that the software can be installed by following the instructions in an installation
manual (including the execution of any installation scripts), or by using an installation wizard.
This includes exercising installation options for different hardware/software configurations and
for various degrees of installation (e.g., initial or update).
• Testing whether failures which occur during installation (e.g., failure to load particular DLLs) are
dealt with by the installation software correctly without leaving the system in an undefined state
(e.g., partially installed software or incorrect system configurations)
• Testing whether a partial installation/de-installation can be completed
• Testing whether an installation wizard can identify invalid hardware platforms or operating
system configurations
• Measuring whether the installation process can be completed within a specified number of
minutes or within a specified number of steps
• Validating that the software can be downgraded or uninstalled
Functional suitability testing is normally performed after the installation test to detect any defects which
may have been introduced by the installation (e.g., incorrect configurations, functions not available).
Usability testing is normally performed in parallel with installability testing (e.g., to validate that users are
provided with understandable instructions and feedback/error messages during the installation).
Adaptability may relate to the ability of the software to be ported to various specified environments by
performing a predefined procedure. Tests may evaluate this procedure.
Replaceability tests may be executed in parallel with functional integration tests where more than one
alternative component is available for integration into the complete system. Replaceability may also be
evaluated by technical review or inspection at the architecture and design levels, where the emphasis
is placed on the clear definition of the interfaces of the component that may be used as a replacement.
conflicts). Coexistence testing should be performed when new or upgraded software will be rolled out
into environments which already contain installed applications.
Coexistence problems may arise when the application is tested in an environment where it is the only
installed application (where incompatibility issues are not detectable) and then deployed to another
environment (e.g., production) which also runs other applications.
Coexistence issues should be analyzed when planning the targeted production environment, but the
actual tests are normally performed after system testing has been completed.
The operational profile defines a pattern of use for the system, typically in terms of the users of the
system and the operations performed by the system. Users are typically specified in terms of how many
are expected to be using the system (and at which times), and perhaps their type (e.g., primary user,
secondary user). The different operations expected to be performed by the system are typically
specified, with their frequency (and probability of occurrence). This information may be obtained by
using monitoring tools (where the actual or a similar application is already available) or by predicting
usage based on algorithms or estimates provided by the business organization.
Tools can be used to generate test inputs based on the operational profile, often using an approach that
generates the test inputs pseudo-randomly. Such tools can be used to create "virtual" or simulated users
in quantities that match the operational profile (e.g., for reliability and availability testing) or exceed it
(e.g., for stress or capacity testing). See section 6.2.3 for more details on these tools.
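A minimal sketch of such pseudo-random generation (the operations and probabilities are hypothetical)
could look as follows:

    # Illustrative sketch: generating test operations pseudo-randomly according
    # to an operational profile (operation -> probability of occurrence).
    import random

    operational_profile = {
        "browse_catalogue": 0.60,
        "add_to_basket":    0.25,
        "checkout":         0.10,
        "cancel_order":     0.05,
    }

    random.seed(42)  # reproducible test input generation
    operations = list(operational_profile)
    weights = list(operational_profile.values())

    test_inputs = random.choices(operations, weights=weights, k=1000)
    print({op: test_inputs.count(op) for op in operations})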
Regardless of the type of review being performed, the Technical Test Analyst must be allowed adequate
time to prepare. This includes time to review the work product, time to check cross-referenced
documentation to verify consistency, and time to determine what might be missing from the work
product. Without adequate preparation time, the review can become an editing exercise rather than a
true review. A good review includes understanding what is written, determining what is missing, verifying
correctness with respect to technical aspects, and verifying that the described product is consistent with
other products that are either already developed or are in development. For example, when reviewing
an integration level test plan, the Technical Test Analyst must also consider the items that are being
integrated. Are they ready for integration? Are there dependencies that must be documented? Is there
data available to test the integration points? A review is not isolated to the work product being reviewed.
It must also consider the interaction of that item with other items in the system.
The most useful checklists are those gradually developed by an individual organization because they
reflect:
• The nature of the product
• The local development environment
o Staff
o Tools
o Priorities
• History of previous successes and defects
• Particular issues (e.g., performance efficiency, security)
Checklists should be customized for the organization and perhaps for the particular project. The
checklists in this chapter are examples.
Checklists1 used for architecture reviews of the time behavior of websites could, for example, include
verification of the proper implementation of the following items, which are quoted from [Web-2]:
• “Connection pooling - reducing the execution time overhead associated with establishing
database connections by establishing a shared pool of connections
• Load balancing – spreading the load evenly between a set of resources
• Distributed processing
• Caching – using a local copy of data to reduce access time
• Lazy instantiation
• Transaction concurrency
• Process isolation between Online Transactional Processing (OLTP) and Online Analytical
Processing (OLAP)
• Replication of data”
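As a simple illustration of one of these items (caching), the following Python sketch, with a hypothetical
fetch_product function, shows how a local cache avoids repeating an expensive lookup:

    # Illustrative only: a local cache (functools.lru_cache) avoids repeating an
    # expensive lookup, reducing access time on subsequent calls.
    from functools import lru_cache
    import time

    @lru_cache(maxsize=1024)
    def fetch_product(product_id):
        """Stand-in for an expensive database or web service call."""
        time.sleep(0.1)                    # simulated access time
        return {"id": product_id, "name": f"product-{product_id}"}

    start = time.perf_counter()
    fetch_product(42)                      # first call pays the access time
    fetch_product(42)                      # second call is served from the cache
    print(f"two lookups took {time.perf_counter() - start:.2f} s")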
Checklists1 used for code reviews could include the following items:
1. Structure
• Does the code completely and correctly implement the design?
• Does the code conform to any pertinent coding standards?
• Is the code well-structured, consistent in style, and consistently formatted?
• Are there any uncalled or unneeded procedures or any unreachable code?
• Are there any leftover stubs or test routines in the code?
• Can any code be replaced by calls to external reusable components or library functions?
• Are there any blocks of repeated code that could be condensed into a single procedure?
• Is storage use efficient?
• Are symbolics used rather than “magic number” constants or string constants?
• Are any modules excessively complex, such that they should be restructured or split into
multiple modules?
2. Documentation
• Is the code clearly and adequately documented with an easy-to-maintain commenting style?
• Are all comments consistent with the code?
• Does the documentation conform to applicable standards?
3. Variables
• Are all variables properly defined with meaningful, consistent, and clear names?
• Are there any redundant or unused variables?
4. Arithmetic Operations
• Does the code avoid comparing floating-point numbers for equality?
• Does the code systematically prevent rounding errors?
• Does the code avoid additions and subtractions on numbers with greatly different magnitudes?
• Are divisors tested for zero or noise?
1 The exam question will provide a subset of the checklist with which to answer the question
6. Defensive Programming
• Are indices, pointers, and subscripts tested against array, record, or file bounds?
• Are imported data and input arguments tested for validity and completeness?
• Are all output variables assigned?
• Is the correct data element operated on in each statement?
• Is every memory allocation released?
• Are timeouts or error traps used for external device access?
• Are files checked for existence before attempting to access them?
• Are all files and devices left in the correct state upon program termination?
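As an illustration only, the following Python fragment expresses two of the checklist items above
(avoiding floating-point equality comparisons, and checking for file existence before access) in code;
the file name is hypothetical:

    # Illustrative only: two checklist items expressed in code.
    import math, os

    # Arithmetic: compare floating-point numbers with a tolerance, never with ==
    total = 0.1 + 0.2
    assert math.isclose(total, 0.3, rel_tol=1e-9)   # instead of total == 0.3

    # Defensive programming: check for existence before attempting to access a file
    path = "results.csv"                            # hypothetical file name
    if os.path.exists(path):
        with open(path) as f:
            data = f.read()
    else:
        data = ""                                   # defined fall-back state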
A test automation project should be considered a software development project. This includes the need
for architecture documentation, detailed design documentation, design and code reviews, component
and component integration testing, as well as final system testing. Testing can be needlessly delayed
or complicated when unstable or inaccurate test automation code is used.
There are multiple tasks that the Technical Test Analyst can perform regarding test execution
automation. These include:
• Determining who will be responsible for the test execution (possibly in coordination with a Test
Manager)
• Selecting the appropriate tool for the organization, timeline, skills of the team, and maintenance
requirements (note this could mean deciding to create a tool to use rather than acquiring one)
• Defining the interface requirements between the automation tool and other tools, such as the
test management, defect management and continuous integration tools
• Developing adapters which may be required to create an interface between the test execution
tool and the software under test
• Selecting the automation approach, i.e., keyword-driven or data-driven (see section 6.1.1)
• Working with the Test Manager to estimate the cost of the implementation, including training. In
agile software development this aspect would typically be discussed and agreed in project/sprint
planning meetings with the whole team.
• Scheduling the automation project and allocating the time for maintenance
• Training the Test Analysts and Business Analysts to use and supply data for the automation
• Determining how and when the automated tests will be executed
• Determining how the automated test results will be combined with the manual test results
In projects with a strong emphasis on test automation, a Test Automation Engineer may be tasked with
many of these activities (see the Test Automation Engineer syllabus [CT_TAE_SYL] for details). Certain
organizational tasks may be taken on by a Test Manager according to project needs and preferences.
In agile software development the assignment of these tasks to roles is typically more flexible and less
formal.
These activities and the resulting decisions will influence the scalability and maintainability of the
automation solution. Sufficient time must be spent researching the options, investigating available tools
and technologies, and understanding the future plans for the organization.
One of the difficulties of testing through the GUI is the tendency for the GUI to change as the software
evolves. Depending on the way the test automation code is designed, this can result in a significant
maintenance burden. For example, using the capture/playback capability of a test automation tool may
result in automated test cases (often called test scripts) that no longer run as desired if the GUI changes.
This is because the recorded script captures interactions with the graphical objects when the tester
executes the software manually. If the objects being accessed change, the recorded scripts may also
need updating to reflect those changes.
Capture/playback tools may be used as a convenient starting point for developing automation scripts.
The tester records a test session and the recorded script is then modified to improve maintainability (e.g.,
by replacing sections in the recorded script with reusable functions).
When using this approach, in addition to the test scripts that process the supplied data, a harness and
infrastructure are needed to support the execution of the script or set of scripts. The actual data held in
the spreadsheet or database is created by Test Analysts who are familiar with the business function of
the software. In agile software development the business representative (e.g., Product Owner) may also
be involved in defining data, in particular for acceptance tests. This division of labor allows those
responsible for developing test scripts (e.g., the Technical Test Analyst) to focus on the implementation
of automation scripts while the Test Analyst maintains ownership of the actual test. In most cases, the
Test Analyst will be responsible for executing the test scripts once the automation is implemented and
tested.
Once the keywords and data to be used have been defined, the test automator (e.g., Technical Test
Analyst or Test Automation Engineer) translates the business process keywords and lower level actions
into test automation code. The keywords and actions, along with the data to be used, may be stored in
spreadsheets or entered using specific tools which support keyword-driven test automation. The test
automation framework implements the keyword as a set of one or more executable functions or scripts.
Tools read test cases written with keywords and call the appropriate test functions or scripts which
implement them. The executables are implemented in a highly modular manner to enable easy mapping
to specific keywords. Programming skills are needed to implement these modular scripts.
This separation of the knowledge of the business logic from the actual programming required to
implement the test automation scripts provides the most effective use of the test resources. The
Technical Test Analyst, in the role as the test automator, can effectively apply programming skills without
having to become a domain expert across many areas of the business.
Separating the code from the changeable data helps to insulate the automation from changes, improving
the overall maintainability of the code and improving the return on the automation investment.
Keywords are generally used to represent high-level business interactions with a system. For example,
“Cancel_Order” may require checking the existence of the order, verifying the access rights of the
person requesting the cancellation, displaying the order to be cancelled and requesting confirmation of
the cancellation. Sequences of keywords (e.g., “Login”, “Select_Order”, “Cancel_Order”), and the
relevant test data are used by the Test Analyst to specify test cases.
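A minimal sketch of this idea is shown below; the keywords, functions and test data are hypothetical
and not taken from the syllabus:

    # Illustrative keyword-driven sketch: keywords specified by the Test Analyst
    # are mapped to executable functions implemented by the test automator.
    def login(user):
        print(f"logging in as {user}")

    def select_order(order_id):
        print(f"selecting order {order_id}")

    def cancel_order(order_id):
        print(f"cancelling order {order_id}")

    KEYWORDS = {                    # keyword -> implementing function
        "Login": login,
        "Select_Order": select_order,
        "Cancel_Order": cancel_order,
    }

    # Test case as the Test Analyst might specify it (keyword, test data)
    test_case = [
        ("Login", "alice"),
        ("Select_Order", "4711"),
        ("Cancel_Order", "4711"),
    ]

    for keyword, data in test_case:
        KEYWORDS[keyword](data)     # the framework dispatches to the implementation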
• No matter how much analysis goes into the keyword language, there will often be times when
new and changed keywords will be needed. There are two separate domains to a keyword (i.e.,
the business logic behind it and the automation functionality to execute it). Therefore, a process
must be created to deal with both domains.
Keyword-based test automation can significantly reduce the maintenance costs of test automation. It
may be more costly to set up initially, but it is likely to be cheaper overall if the project lasts long enough.
The Test Automation Engineer syllabus [CT_TAE_SYL] includes further details of modelling business
processes for automation.
Note that detailed information about tools is provided by the following ISTQB® syllabi:
• Mobile Application Testing [CT_MAT_SYL]
• Performance Testing [CT_PT_SYL]
• Model-Based Testing [CT_MBT_SYL]
• Test Automation Engineer [CT_TAE_SYL]
Fault seeding tools are generally used by the Technical Test Analyst but may also be used by the
developer when testing newly developed code.
Fault injection tools are generally used by the Technical Test Analyst but may also be used by the
developer when testing newly developed code.
Load generation is performed by implementing a pre-defined operational profile (see section 4.9) as a
script. The script may initially be captured for a single user (possibly using a capture/playback tool) and
is then implemented for the specified operational profile using the performance test tool. This
implementation must take into account the variation of data per transaction (or sets of transactions).
Performance tools generate a load by simulating large numbers of users (“virtual” users)
following their designated operational profiles to accomplish tasks including generating specific volumes
of input data. In comparison with individual test execution automation scripts, many performance testing
scripts reproduce user interaction with the system at the communications protocol level and not by
simulating user interaction via a graphical user interface. This usually reduces the number of separate
"sessions" needed during the testing. Some load generation tools can also drive the application using
its user interface to more closely measure response times while the system is under load.
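The following sketch (with a hypothetical do_transaction stand-in; real tools work at the communications
protocol level) illustrates the idea of concurrent virtual users generating load and recording response
times:

    # Illustrative sketch: concurrent "virtual users" generating load and
    # recording response times for later analysis.
    import random
    import statistics
    import threading
    import time

    response_times = []
    lock = threading.Lock()

    def do_transaction():
        """Stand-in for one protocol-level request/response exchange."""
        time.sleep(random.uniform(0.05, 0.2))

    def virtual_user(transactions):
        for _ in range(transactions):
            start = time.perf_counter()
            do_transaction()
            with lock:
                response_times.append(time.perf_counter() - start)

    users = [threading.Thread(target=virtual_user, args=(20,)) for _ in range(10)]
    for user in users:
        user.start()
    for user in users:
        user.join()
    print(f"{len(response_times)} transactions, "
          f"mean response time {statistics.mean(response_times):.3f} s")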
A wide range of measurements are taken by a performance test tool to enable analysis during or after
execution of the test. Typical metrics taken and reports provided include:
• Number of simulated users throughout the test
• Number and type of transactions generated by the simulated users and the arrival rate of the
transactions
• Response times to particular transaction requests made by the users
• Reports and graphs of load against response times
• Reports on resource usage (e.g., usage over time with minimum and maximum values)
Performance test tools are typically acquired rather than developed in-house due to the effort required
to develop them. It may, however, be appropriate to develop a specific performance tool if technical
restrictions prevent an available product from being used, or if the required load profile and facilities
are simple compared to those provided by commercial tools. Further details of
performance testing tools are provided in the Performance Testing syllabus [CT_PT_SYL].
Some tools that include a web spider engine can also provide information on the size of the pages and
on the time necessary to download them, and on whether a page is present or not (e.g., HTTP error
404). This provides useful information for the developer, the webmaster, and the tester.
Test Analysts and Technical Test Analysts use these tools primarily during system testing.
MBT models (and tools) can be used to generate large sets of distinct execution threads. MBT tools can
also help to reduce the very large number of possible paths that can be generated in a model. Testing
using these tools can provide a different view of the software to be tested. This can result in the discovery
of defects that might have been missed by functional testing.
Further details of model-based testing tools are provided in the Model-Based Testing syllabus
[CT_MBT_SYL].
Component testing tools are often specific to the language that is used for programming a component.
For example, if Java was used as the programming language, JUnit might be used to automate the
component testing. Many other languages have their own special test tools; these are collectively called
xUnit frameworks. Such a framework generates test objects for each class that is created, thus
simplifying the tasks that the programmer needs to do when automating the component testing.
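As an illustration, a component test using Python's unittest (an xUnit framework) might look as follows;
the component under test (apply_discount) is hypothetical:

    # Illustrative xUnit-style component test using Python's unittest framework.
    import unittest

    def apply_discount(price, percent):
        """Hypothetical component under test."""
        if not 0 <= percent <= 100:
            raise ValueError("percent out of range")
        return round(price * (1 - percent / 100), 2)

    class ApplyDiscountTest(unittest.TestCase):
        def test_typical_discount(self):
            self.assertEqual(apply_discount(200.0, 25), 150.0)

        def test_invalid_percentage_rejected(self):
            with self.assertRaises(ValueError):
                apply_discount(100.0, 150)

    if __name__ == "__main__":
        unittest.main()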
Some build automation tools allow a new build to be automatically triggered when a component is
changed. After the build is completed, other tools automatically execute the component tests. This level
of automation around the build process is usually seen in a continuous integration environment.
When set up correctly, this set of tools can have a positive effect on the quality of builds being released
into testing. Should a change made by a programmer introduce regression defects into the build, it will
usually cause some of the automated tests to fail, triggering immediate investigation into the cause of
the failures before the build is released into the test environment.
6.2.7.1 Simulators
A mobile simulator models the mobile platform’s runtime environment. Applications tested on a simulator
are compiled into a dedicated version, which works in the simulator but not on a real device. Simulators
are used as replacements for real devices in testing, but are typically limited to initial functional testing
and simulating many virtual users in load testing. Simulators are relatively simple (compared to
emulators) and can run tests faster than an emulator. However, the application tested on a simulator
differs from the application that will be distributed.
6.2.7.2 Emulators
A mobile emulator models the hardware and utilizes the same runtime environment as the physical
hardware. Applications compiled to be deployed and tested on an emulator could also be used by the
real device.
However, an emulator cannot fully replace a device because the emulator may behave in a different
manner than the mobile device it tries to mimic. In addition, some features, such as (multi)touch or the
accelerometer, may not be supported. This is partly caused by limitations of the platform used to run the
emulator.
7. References
7.1 Standards
The following standards are mentioned in these respective chapters.
Chapter 2: [Web-7]
Chapter 4: [Web-1] [Web-4]
Chapter 5: [Web-2]
Chapter 6: [Web-3] [Web-5] [Web-6]
Maintainability    Maintainability
Modularity         New sub-characteristic
Reusability        New sub-characteristic
Analysability      Analysability
Modifiability      Stability    More accurate name combining changeability and stability
Testability        Testability
Portability        Portability
Adaptability       Adaptability
Installability     Installability
Coexistence        Moved to Compatibility
Replaceability     Replaceability
9. Index
accountability, 27
action word-driven, 48
adaptability, 27
adaptability testing, 40
analyzability, 27
anti-pattern, 43
API testing, 13, 17
Application Programming Interface (API), 17
architectural reviews, 43
atomic condition, 13, 15
attack, 32
authenticity, 27
availability, 27
benchmark, 29
capacity, 27
capture/playback, 46
capture/playback tool, 48
code reviews, 44
coexistence, 27
coexistence/compatibility testing, 40
cohesion, 23
compatibility, 27
confidentiality, 27
control flow, 13
control flow analysis, 21, 22
control flow coverage, 15
coupling, 23
cyclomatic complexity, 21, 22
data flow analysis, 21, 22
data security considerations, 30
data-driven, 48
decision predicates, 14
decision testing, 13, 15
definition-use pair, 21
dynamic analysis, 21, 24
   memory leaks, 24
   overview, 24
   performance efficiency, 25
   wild pointers, 25
dynamic maintainability testing, 38
emulator, 46, 52
fault injection, 34, 50
fault seeding, 46, 50
fault tolerance, 27
installability, 27
installability testing, 39
integrity, 27
keyword-driven, 48
load testing, 36
maintainability, 23, 27
maintainability testing, 38
master test plan, 29
maturity, 27
MC/DC, 15
memory leak, 21
metrics
   performance, 25
model-based testing, 46
modifiability, 27
modified condition/decision testing (MC/DC), 15
modularity, 27
multiple condition coverage, 16
multiple condition testing, 13, 16
non-repudiation, 27
operational profile, 27, 41
organizational considerations, 30
performance efficiency, 27
performance efficiency test planning, 37
performance efficiency test specification, 38
portability, 27
portability testing, 39, 40
product quality characteristics, 28
product risk, 10
quality attributes for technical testing, 27
quality characteristic, 27
recoverability, 27
reliability, 27
reliability growth model, 27
reliability testing, 32
remote procedure calls (RPC), 17
replaceability, 27
replaceability testing, 40
resource utilization, 27
reusability, 27
reviews, 42
   checklists, 43
risk assessment, 10, 11
risk identification, 10, 11