A4Q-SDET Sample Exam Answers EN
Authored by:
German Testing Board e. V. – Examination Panel
(SET A4Q_SDET_Sample-Exam-Answers_SetA_2022_EN)
Introduction
This is a sample exam that helps candidates prepare for the actual certification exam.
The questions included match a regular exam in structure, layout and format.
This version of the sample exam questions for A4Q-SDET has been compiled from the
following sources:
• ISTQB® CTFL CORE 2018 V3.1; SAMPLE EXAM SET A and SET B,
• CTAL-TTA V4.0; SAMPLE EXAM PAPER,
• and other supplemental questions created by a GTB working group.
The sample exam is protected by copyright. Joint holders of the copyright and
exclusive rights of use are the International Software Testing Qualifications Board
(ISTQB®) and the German Testing Board e. V. (GTB).
It is strictly forbidden to use the exam questions as content of a certification exam, to
reproduce, distribute or make them publicly available for this purpose.
The use, in particular reproducing, distributing, or making the sample exam available
to the public is permitted to any individual for the purpose of his or her own preparation
for the certification exam and to any training provider accredited according to the
ISTQB® standard for the purpose of conducting training. Any national board recognized
by ISTQB® is further permitted to translate the sample exam into other languages and
to reproduce, distribute, and make the translations available to the public, and to grant
rights of use of the sample exam and its translations to the extent provided above. Any
use is only permitted if ISTQB® and GTB are named as the source and copyright
holder.
In all other respects, the use of the sample exam - insofar as it is not permitted under
the Copyright and Related Rights Act of September 9, 1965 (UrhG) - is only permitted
with the express consent of ISTQB® and GTB.
The use of the information contained in this work is at the sole risk of the user. GTB
does not guarantee the completeness, technical correctness, compliance with legal
requirements or standards, or the commercial usage of the information. No product
recommendations are made in this document.
GTB's liability is otherwise limited to intent and gross negligence.
General information:
Number of questions: 40
Which of the following provides the definition of the term test case?
c) Work products produced during the test process for use in planning,
designing, executing, evaluating and reporting on testing
a) The test should start as late as possible so that development has enough time
to create a good product
b) To validate whether the test object works as expected by the users and other
stakeholders
c) To prove that all possible defects are identified
d) To prove that any remaining defects will not cause any failures
a) WRONG – Contradiction to principle 3: “Early testing saves time and money” (Syllabus,
Section 1.3)
c) WRONG – Principle #2 states that exhaustive testing is impossible, so one can never
prove that all defects were identified (Syllabus, Section 1.3)
d) WRONG – To make an assessment whether a defect will cause a failure or not, one must
detect the defect first. Saying that no remaining defect will cause a failure implicitly
means that all defects were found. This again contradicts principle #2 (Syllabus, Section
1.3)
a) Testing identifies the source of defects; debugging analyzes the defects and
proposes prevention activities
b) Dynamic testing shows failures caused by defects; debugging eliminates the
defects, which are the source of failures
c) Testing does not remove faults; but debugging removes defects that cause
the faults
d) Dynamic testing prevents the causes of failures; debugging removes the
failures
a) WRONG – Testing does not identify the source of defects, debugging identifies the
defects (Syllabus, Section 1.1.2)
b) CORRECT – Dynamic testing can show failures that are caused by defects in the
software. Debugging eliminates the defects, which are the source of failures, not the root
cause of the defects (Syllabus, Section 1.1.2)
c) WRONG – Testing does not remove faults, but debugging removes defects that cause the
faults (Syllabus, Section 1.1.2)
d) WRONG – Dynamic testing does not directly prevent the causes of failures (defects) but
detects the presence of defects (Syllabus, Section 1.1.2 and 1.3)
a) Through its results, testing can be used as a tool to evaluate the performance
of developers.
b) Testing can help prevent possible failures of the software during operation.
d) Testing always ensures that all requirements are fully and correctly met.
Justification
a) WRONG – This is not one of the reasons mentioned in CTFL syllabus Chapter 1.2; on the
contrary, according to the syllabus, testing should not be used to evaluate developers
(see A4Q SDET Syllabus 2022, Section 1.2).
b) CORRECT – A4Q SDET Syllabus 2022, Section 1.2 " Rigorous testing of components
and systems, and their associated documentation, can help reduce the risk of failures
occurring during operation."
c) WRONG – A4Q SDET Syllabus 2022, Section 1.2 Why is testing necessary?
"In addition, software testing may also be required to meet contractual or legal
requirements or industry-specific standards." --> it can be, but it is not always.
d) WRONG – A4Q SDET Syllabus 2022, Section 1.2.1 Testing’s Contributions to Success
“Throughout the history of computing, it is quite common for software and systems to be
delivered into operation and, due to the presence of defects, to subsequently cause
failures or otherwise not meet the stakeholders’ needs. However, using appropriate test
techniques can reduce the frequency of such problematic deliveries, when those
techniques are applied with the appropriate level of test expertise, in the appropriate test
levels, and at the appropriate points in the software development lifecycle.”
Testing reduces the frequency, but it is not possible to cover all problems and risks
completely. (See also 1.3 Seven Principles of Testing: Exhaustive testing is impossible).
d) The more test cases are executed, the higher the quality of the software.
FL-1.2.2 (K2) Describe why testing is part of quality assurance and give examples of how testing
contributes to higher quality
Justification:
a) CORRECT – Since quality assurance is concerned with the proper execution of the entire
process, quality assurance supports proper testing. (cf. A4Q SDET Syllabus 2022, section
1.2.2)
b) WRONG – For example, thoroughly testing all specified requirements and fixing all defects
found could still produce a system that is difficult to use, that does not fulfill the users’ needs
and expectations, or that is inferior compared to other competing systems. (cf. A4Q SDET
Syllabus 2022, section 1.3)
c) WRONG – (cf. A4Q SDET Syllabus 2022, Section 1.3 Seven Principles of Testing)
To find defects early, both static and dynamic test activities should be started as early as
possible in the software development lifecycle. Early testing is sometimes referred to as shift
left. Testing early in the software development lifecycle helps reduce or eliminate costly
changes (see section 3.1).
d) WRONG – (cf. A4Q SDET Syllabus 2022, Section 1.3 Seven Principles of Testing). Some
organizations expect that testers can run all possible tests and find all possible defects, but
principles 2 and 1, respectively, tell us that this is impossible. Further, it is a fallacy (i.e., a
mistaken belief) to expect that just finding and fixing a large number of defects will ensure the
success of a system. For example, thoroughly testing all specified requirements and fixing all
defects found could still produce a system that is difficult to use, that does not fulfil the users’
needs and expectations, or that is inferior compared to other competing systems.
b) WRONG – This is an example of a defect (something wrong in the code that may cause
a failure)
c) CORRECT – This is a deviation from the expected functionality - a cruise control system
should not be affected by the radio
a) Because the author of the requirements was unfamiliar with the domain of
fitness training. The author therefore wrongly assumed that users wanted
heartbeat in beats per hour.
b) The tester of the smartphone interface had not been trained in state transition
testing, so missed a major defect.
FL-1.2.4 (K2) Distinguish between the root cause of a defect and its effects
a) WRONG – The lack of familiarity of the requirements author with the fitness domain is a
root cause
b) WRONG – The lack of training of the tester in state transition testing was one of the root
causes of the defect (the developer presumably created the defect, as well)
c) CORRECT – The incorrect configuration data represents faulty software in the fitness
tracker (a defect), that may cause failures
d) WRONG – The lack of experience in designing user interfaces for wearable devices is a
typical example of a root cause of a defect
Mr. Test has been testing software applications on mobile devices for a
period of 5 years. He has a wealth of experience in testing mobile
applications and achieves better results in a shorter time than others.
Over several months, Mr. Test did not modify the existing automated test
cases and did not create any new test cases. This leads to fewer and fewer
defects being found by executing the tests. What principle of testing did
Mr. Test not observe?
b) WRONG – Exhaustive testing is impossible, regardless of the amount of effort put into
testing (principle #2)
c) CORRECT – Principle #5 says “If the same tests are repeated over and over again,
eventually these tests no longer find any new defects. To detect new defects, existing
tests and test data may need changing, and new tests may need to be written.”
Automated regression testing of the same test cases will not bring new findings
d) WRONG – “Defects cluster together” (principle #4). A small number of modules usually
contain most of the defects, but this does not mean that fewer and fewer defects will be
found
a) Testing to see if defects have been introduced into unchanged areas of the
software.
b) Testing the impact of a changed environment to an operational system.
d) Testing after fixing a defect to confirm that a failure caused by that defect no
longer occurs.
a) Decision testing
c) Code review
d) Equivalence partitioning
Justification:
a) CORRECT – Decision testing is a white-box test technique. (see A4Q SDET Syllabus
2022, Section 4.3 White-box Test Techniques).
c) WRONG – Code review is a static test type, and thus does not belong to white-box
testing. (see A4Q SDET Syllabus 2022)
a) The purpose of regression testing is to ensure that all previously run tests still
work CORRECTLY, while the purpose of confirmation testing is to ensure that
any fixes made to one part of the system have not adversely affected other
parts
b) The purpose of confirmation testing is to check that a previously found defect
has been fixed, while the purpose of regression testing is to ensure that no
other parts of the system have been adversely affected by the fix
c) The purpose of regression testing is to ensure that any changes to one part of
the system have not caused another part to fail, while the purpose of
confirmation testing is to check that all previously run tests still provide the
same results as before
d) The purpose of confirmation testing is to confirm that changes to the system
were made successfully, while the purpose of regression testing is to run tests
that previously failed to ensure that they now work CORRECTLY
FL-2.3.3 (K2) Compare the purposes of confirmation testing and regression testing
b) CORRECT – The descriptions of both confirmation and regression testing match the
intent of those in the syllabus, Section 2.2.4
b) WRONG – This is a trigger for maintenance testing: Operational tests of the new
environment as well as of the changed software, see Syllabus A4Q SDET 2021, Section
2.3.1
c) WRONG – This is the trigger for maintenance testing: testing restore/retrieve procedures
after archiving for long retention periods, see Syllabus A4Q SDET 2021, Section 2.3.1
a) CORRECT – Impact analysis may be used to identify those areas of the system that will
be affected by the fix, and so the extent of the impact (e. g. necessary regression testing)
can be used when deciding if the change is worthwhile, see Syllabus A4Q SDET 2021,
Section 2.3.2
b) WRONG – Although testing migrated data is part of maintenance testing (see conversion
testing), impact analysis does not identify how this is done
c) WRONG – Impact analysis shows which parts of a system are affected by a change, so it
can show the difference between different hot fixes in terms of the impact on the system,
however it does not give any indication of the value of the changes to the user
d) WRONG – Impact analysis shows which parts of a system are affected by a change; it
cannot provide an indication of the effectiveness of test cases
a) Review
b) Static Analysis
d) Pairwise Testing
b) WRONG – Here, the focus is not on evaluating the code based on its form, structure,
content, or documentation, as in static analysis.
c) WRONG – White-box testing is a dynamic test type and not a type of static testing; here,
however, we are talking about a static test that does not execute the code.
d) WRONG – Pairwise testing is a dynamic test procedure, but here we are talking about
static testing without executing the code.
a) CORRECT – This is the definition of the definition-use pair from the glossary.
b) WRONG – The "usage" in the name of the term does not refer to the statement but to the
variable.
c) WRONG – The "definition" in the name of the term does not refer to the comment but to
the code itself.
d) WRONG – The object of the definition-use pair is not the software but a variable.
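As an illustration of the glossary definition, the following small Python sketch (the function
and variable names are invented for illustration) shows one definition-use pair for a single
variable:

def apply_discount(price: float, is_member: bool) -> float:
    # Definition of the variable 'discount': it is assigned a value here.
    discount = 0.1 if is_member else 0.0

    # Use of the variable 'discount': its value is read here.  The
    # assignment above and this read together form one definition-use
    # pair for the variable 'discount'.
    return price * (1 - discount)


print(apply_discount(100.0, True))   # 90.0
print(apply_discount(100.0, False))  # 100.0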
Librarians can:
Borrowers can:
You have been assigned the checklist entry that requires you to review the
specification for inconsistencies between individual requirements (i. e.
conflicts between requirements).
b) 6-15, 9-11
d) 6-15, 7-12
• 6-10 – If librarians should get system responses within 5 seconds, it is NOT inconsistent
for borrowers to get system responses within 3 seconds.
• 6-15 - If librarians should get system responses within 5 seconds, it is inconsistent for all
users to get system responses within 3 seconds.
• 7-12 – If borrowers can borrow a maximum of 3 books at one time it is NOT inconsistent
for them to also reserve books (if they are on-loan).
• 9-11 – If a borrower can be fined for failing to return a book within 3 weeks, it is
inconsistent for them to also be allowed to borrow a book at no cost for a maximum of 4
weeks – the permitted loan periods differ.
Thus, of the potential inconsistencies, 6-15 and 9-11 are valid inconsistencies, and so option
b) IS CORRECT.
Justification:
b) WRONG
c) WRONG
d) WRONG
Below is the pseudo-code for a program that calculates and prints sales
commissions:
Justification:
a) CORRECT – Anomalies:
total: used at line 6 before it is defined.
commission_lo: defined at line 12 & no subsequent use
b) WRONG
c) WRONG
d) WRONG
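The pseudo-code of the commission program is not reproduced above. The following Python
sketch is therefore only an illustration of the two kinds of data flow anomalies named in the
justification (use before definition, definition without subsequent use); the function itself is
hypothetical, and a static data flow analysis would flag both anomalies without executing
the code:

# Hypothetical function, NOT the exam's commission program.  Merely
# defining it does not execute the body, so this module runs without
# error; static data flow analysis detects both anomalies without
# ever executing the function.
def calculate_commission(sales: list[float]) -> float:
    # Anomaly 1: 'total' is used (read) before it is defined (assigned)
    # on any path through the function.
    total = total + sum(sales)

    # Anomaly 2: 'commission_lo' is defined but never used afterwards.
    commission_lo = total * 0.05

    return total * 0.10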
Below you can see the pseudo-code for a program called TRICKY.
00 programme TRICKY
01 var1, var2, var3: integer
02 begin
03 read(var2)
04 read( var1 )
05 while var2 < 10 loop
06 var3 = var2 + var1
07 var2 = 4
08 var1 = var2 + 1
09 print ( var3 )
10 if var1 = 5 then
11 print ( var1 )
12 else
13 print ( var1+1 )
14 endif
15 var2 = var2 + 1
16 endloop
17 write ( " Wow – that was tricky!" )
18 write ( "But the answer is..." )
19 write ( var2+var1 )
20 end program TRICKY
How could the use of static analysis best improve the maintainability of
the program?
TTA-3.2.3 (K3) Propose ways to improve the maintainability of code by applying static
analysis
Justification:
a) WRONG – The code is well structured with controls (e. g. loop, if-then-else). It is unlikely
that static analysis can identify improvements to the control structure.
b) WRONG – No global variables are defined and no other programs are called. Coupling is
not an area of improvement.
c) CORRECT – Static analysis is used with tool support to improve code maintainability by
verifying compliance to coding standards and guidelines. This includes commenting (see
A4Q SDET Syllabus 2022, Section 3.3.3). Since the program has no comments at all,
this would be highlighted as an area to improve the maintainability of the code.
d) WRONG – Static analysis can apply indentation rules, but in the case of the TRICKY
program, sufficient indentation is present.
a) A test technique in which tests are derived based on the tester's knowledge of
past faults, or general knowledge of failures
a) WRONG – Exploratory testing is often carried out when timescales are short, so making
in-depth investigations of the background of the test object is unlikely
c) WRONG – Based on the Glossary definition of session-based testing, but with test
execution replaced by test analysis
Which of the following BEST matches the descriptions with the different
categories of test techniques?
FL-4.1.1 (K2) Explain the characteristics, commonalities, and differences between black-box
test techniques, white-box test techniques and experience-based test techniques
The correct pairing of descriptions with the different categories of test techniques is:
FL-4.1.1 (K2) Explain the characteristics, commonalities, and differences between black-box
test techniques, white-box test techniques, and experience-based test techniques
a) WRONG – Test conditions, test cases, and test data are derived from a test basis that
may include code, software architecture, detailed design, or any other source of
information regarding the structure of the software (see A4Q SDET Syllabus 2022;
Chapter 4.1.1 6th paragraph)
b) CORRECT – Test conditions, test cases, and test data are derived from a test basis that
may include the knowledge and experience of testers, developers, users and other
stakeholders (see A4Q SDET Syllabus 2022; Chapter 4.1.1, 6th paragraph) – experience
with former failures is exactly such knowledge.
d) WRONG – Test conditions, test cases, and test data are derived from a test basis that
may include software requirements, specifications, use cases, and user stories (see A4Q
SDET Syllabus 2022; Chapter 4.1.1 5th paragraph)
T1 1.5 v. low 10
T2 7.0 medium 60
T3 0.5 v. low 10
What is the minimum number of additional test cases that are needed to
ensure full coverage of ALL VALID INPUT equivalence partitions?
a) 1
b) 2
c) 3
d) 4
FL-4.2.1 (K3) Apply equivalence partitioning to derive test cases from given requirements
The given test cases cover the following valid input equivalence partitions:
T1 1.5 (1) Very low (4)
T2 7.0 (3) Medium (6)
T3 0.5 (1) Very low (4)
Thus, the missing valid input equivalence partitions are: (2), (5) and (7). These can be
covered by two test cases, as (2) can be combined with either (5) or (7).
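The concrete partitions (1) to (7) belong to the original question and are not reproduced
here. The following pytest sketch therefore uses an invented classification function with
three valid partitions only to illustrate how one representative value per valid partition
achieves full valid-input equivalence partition coverage:

import pytest

def speed_category(speed_kmh: float) -> str:
    # Hypothetical function: classify a speed into one of three categories.
    if 0 <= speed_kmh <= 30:
        return "slow"
    if 30 < speed_kmh <= 100:
        return "normal"
    if 100 < speed_kmh <= 130:
        return "fast"
    raise ValueError("invalid speed")

# One representative value per valid partition is enough for
# 100% coverage of the valid input equivalence partitions.
@pytest.mark.parametrize(
    "speed, expected",
    [(15, "slow"), (65, "normal"), (120, "fast")],
)
def test_valid_partitions(speed, expected):
    assert speed_category(speed) == expected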
A smart home app measures the average temperature in the house over
the previous week and provides feedback to the occupants.
The feedback for different average temperature ranges (to the nearest °C)
should be:
Using BVA (only Min- and Max values), which of the following sets of test
inputs provides the highest level of boundary coverage?
FL-4.2.2 (K3) Apply boundary value analysis to derive test cases from given requirements
For the input equivalence partitions given in the question, the 2-value boundary value
technique used here yields eight coverage items: the minimum and the maximum value of
each partition.
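The temperature ranges of the original question are likewise not reproduced here; the
pytest sketch below assumes four hypothetical feedback ranges purely to illustrate how
2-value boundary value analysis produces eight coverage items:

import pytest

# Hedged sketch only: these feedback ranges are assumed for illustration,
# they are NOT the ranges from the original exam question.
def feedback(avg_temp_c: int) -> str:
    if 10 <= avg_temp_c <= 17:
        return "too cold"
    if 18 <= avg_temp_c <= 21:
        return "comfortable"
    if 22 <= avg_temp_c <= 25:
        return "warm"
    if 26 <= avg_temp_c <= 30:
        return "too warm"
    raise ValueError("out of supported range")

# 2-value BVA: the minimum and maximum of each partition,
# i.e. 4 partitions x 2 boundaries = 8 coverage items.
@pytest.mark.parametrize(
    "temp, expected",
    [
        (10, "too cold"), (17, "too cold"),
        (18, "comfortable"), (21, "comfortable"),
        (22, "warm"), (25, "warm"),
        (26, "too warm"), (30, "too warm"),
    ],
)
def test_boundary_values(temp, expected):
    assert feedback(temp) == expected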
A company's employees are paid bonuses if they work more than a year
in the company and achieve a target which is individually agreed in
advance.
Test-ID                                         T1    T2    T3    T4
Condition 1:  Employment for more than 1 year?  YES   NO    NO    YES
Condition 2:  Agreed target?                    NO    NO    YES   YES
Condition 3:  Achieved target?                  NO    NO    YES   YES
Action:       Bonus payment                     NO    NO    NO    YES
Which of the following test cases represents a situation that can happen
in practice, and is missing in the above decision table?
FL-4.2.3(K3) Apply decision table testing to derive test cases from given requirements
b) WRONG – The test case is objectively wrong, since under these conditions no bonus is
paid because the agreed target was not reached
d) CORRECT – The test case describes the situation in which the too-short period of
employment and the non-fulfilment of the agreed target lead to non-payment of the
bonus. This situation can occur in practice but is missing in the decision table
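A minimal Python sketch of the bonus rule behind the decision table, with the rules T1–T4
(and the missing combination from answer d) as parameterized tests; the function and
parameter names are chosen for illustration:

import pytest

def bonus_paid(employed_over_1_year: bool, target_agreed: bool,
               target_achieved: bool) -> bool:
    # A bonus is paid only if all three conditions are fulfilled.
    return employed_over_1_year and target_agreed and target_achieved

# Rules T1-T4 from the decision table, plus the practically relevant
# combination identified in answer d) as missing from the table.
@pytest.mark.parametrize(
    "employed, agreed, achieved, expected",
    [
        (True,  False, False, False),  # T1
        (False, False, False, False),  # T2
        (False, True,  True,  False),  # T3
        (True,  True,  True,  True),   # T4
        (False, True,  False, False),  # missing case from answer d)
    ],
)
def test_bonus_decision_table(employed, agreed, achieved, expected):
    assert bonus_paid(employed, agreed, achieved) == expected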
[State diagram of a battery charger with the states OFF, WAIT, TRICKLE, CHARGE, LOW and
HIGH and ten numbered transitions: 1 = WAIT→OFF, 2 = OFF→WAIT, 3 = WAIT→TRICKLE,
4 = TRICKLE→WAIT, 5 = TRICKLE→CHARGE, 6 = CHARGE→TRICKLE, 7 = CHARGE→LOW,
8 = LOW→CHARGE, 9 = CHARGE→HIGH, 10 = HIGH→CHARGE]
FL-4.2.4 (K3) Apply state transition testing to derive test cases from given requirements
a) OFF (2) WAIT (1) OFF (2) WAIT (3) TRICKLE (5) CHARGE (9) HIGH (10) CHARGE (7)
LOW = 7 transitions (out of 10)
b) WAIT (3) TRICKLE (4) WAIT (1) OFF (2) WAIT (3) TRICKLE (5) CHARGE (7) LOW (8)
CHARGE = 7 transitions (out of 10)
c) HIGH (10) CHARGE (7) LOW (8) CHARGE (6) TRICKLE (4) WAIT (3) TRICKLE (4)
WAIT (3) TRICKLE (5) = 7 transitions (out of 10)
d) WAIT (3) TRICKLE (5) CHARGE (9) HIGH (10) CHARGE (6) TRICKLE (4) WAIT (1) OFF
(2) WAIT = 8 transitions (out of 10)
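A small Python sketch that counts how many of the ten numbered transitions a test
sequence covers; the transition table follows the state diagram above, and the state names
are used as plain strings:

# Transition numbers taken from the battery charger state diagram.
TRANSITIONS = {
    ("WAIT", "OFF"): 1,       ("OFF", "WAIT"): 2,
    ("WAIT", "TRICKLE"): 3,   ("TRICKLE", "WAIT"): 4,
    ("TRICKLE", "CHARGE"): 5, ("CHARGE", "TRICKLE"): 6,
    ("CHARGE", "LOW"): 7,     ("LOW", "CHARGE"): 8,
    ("CHARGE", "HIGH"): 9,    ("HIGH", "CHARGE"): 10,
}

def transition_coverage(states: list[str]) -> float:
    # Fraction of the ten transitions exercised by a sequence of states.
    covered = {TRANSITIONS[pair] for pair in zip(states, states[1:])}
    return len(covered) / len(TRANSITIONS)

# Option d) covers 8 of the 10 transitions and therefore gives the
# highest transition coverage of the four sequences.
option_d = ["WAIT", "TRICKLE", "CHARGE", "HIGH", "CHARGE",
            "TRICKLE", "WAIT", "OFF", "WAIT"]
print(transition_coverage(option_d))  # 0.8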
Which of the following statements BEST describes how test cases are
derived from a use case?
a) Test cases are created to exercise defined basic, exceptional and error
behaviors performed by the system under test in collaboration with actors
b) Test cases are derived by identifying the components included in the use case
and creating integration tests that exercise the interactions of these
components
c) Test cases are generated by analyzing the interactions of the actors with the
system to ensure the user interfaces are easy to use
d) Test cases are derived to exercise each of the decision points in the business
process flows of the use case, to achieve 100% decision coverage of these
flows
a) CORRECT – Syllabus Section 4.2.5 explains that each use case specifies some behavior
that a subject can perform in collaboration with one or more actors. It also (later) explains
that tests are designed to exercise the defined behaviors (basic, exceptional and errors)
b) WRONG – Use cases normally specify requirements, and so do not ‘include’ the
components that will implement them
c) WRONG – Tests based on use cases do exercise interactions between the actor and the
system, but they are focused on the functionality and do not consider the ease of use of
user interfaces
d) WRONG – Tests do cover the use case paths through the use case, but there is no
concept of decision coverage of these paths, and certainly not of business process flows
Based on this use case, the following abstract (logical) test cases were
created by the colleague. Which of these test cases fits to the use case
UC 3?
a) A sufficiently large parking space has been identified by the parking assistant.
b) The driver is informed on their display that the vehicle has been successfully
parked.
c) Parking is not possible because of a sudden obstacle; the parking process is
automatically aborted.
d) The parking space is not recognized, although the space is sufficient (sensors
dirty).
FL-4.2.5 (K2) Explain how to derive test cases from a use case
a) It is a metric, which is the percentage of test cases that have been executed
“When the code contains only a single ‘if’ statement and no loops or
CASE statements, and its execution is not nested within the test, any
single test case we run will result in 50% decision coverage.”
a) The statement is true. Any single test case provides 100% statement
coverage and therefore 50% decision coverage
b) The statement is true. Any single test case would cause the outcome of the
“if” statement to be either true or false
c) The statement is false. A single test case can only guarantee 25% decision
coverage in this case
d) The statement is false. The statement is too broad. It may be correct or not,
depending on the tested software
a) WRONG – While the given statement is true, the explanation is not. The relationship
between statement and decision coverage is misrepresented
b) CORRECT – Since any test case will cause the outcome of the “if” statement to be either
TRUE or FALSE, by definition we achieved 50% decision coverage
c) WRONG – A single test case can give more than 25% decision coverage; according to the
statement above, it always gives exactly 50% decision coverage
d) WRONG – The statement is specific and always true, because each single test case
achieves 50% decision coverage
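A minimal Python sketch of the situation described in the quoted statement, assuming
decision (branch) coverage is measured with a tool such as coverage.py in branch mode;
the example function and file name are invented for illustration:

# Measuring could be done e.g. with coverage.py in branch mode
# (assumption: pytest and coverage.py are installed):
#   coverage run --branch -m pytest test_absolute_value.py
#   coverage report
def absolute_value(x: int) -> int:
    if x < 0:        # the single 'if' decision: outcome TRUE or FALSE
        x = -x
    return x

# Any single test case drives the decision to exactly one of its two
# outcomes, i.e. 1 of 2 decision outcomes = 50% decision coverage.
def test_single_case():
    assert absolute_value(-3) == 3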
a) WRONG – A path through source code is one potential route through the code from the
entry point to the exit point that could exercise a range of decision outcomes. Two
different paths may exercise all but one of the same decision outcomes, and by just
changing a single decision outcome a new path is followed. Test cases that would
achieve decision coverage are typically a tiny subset of the test cases that would achieve
path coverage. In practice, most nontrivial programs (and all programs with
unconstrained loops, such as ‘while’ loops) have a potentially infinite number of possible
paths through them and so measuring the percentage covered is practically infeasible
b) WRONG – Coverage of business flows can be a focus of use case testing, but use cases
rarely cover a single component. It may be possible to cover the decisions within
business flows, but only if they were specified in enough detail; this option, however, only
suggests coverage of “business flows” as a whole. Even if business flows covered some
decisions, the measure “decision coverage” does not measure the percentage of
business flows, but the percentage of decision outcomes exercised by those flows
c) WRONG – Achieving full decision coverage does require all ‘if’ statements to be
exercised with both true and false outcomes, however, there are typically several other
decision points in the code (e. g. ‘case’ statements and the code controlling loops) that
also need to be taken into consideration when measuring decision coverage
b) WRONG – The statement is false because achieving 100% statement coverage does
not necessarily mean that decision coverage is also 100%
c) WRONG – The statement is false, because such conclusions can only be drawn for
100% coverage values
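As a small illustration of the point made in b), the following invented Python function
reaches 100% statement coverage with a single test case while achieving only 50%
decision coverage:

def apply_cap(value: int, limit: int) -> int:
    if value > limit:      # decision with two outcomes
        value = limit      # the only statement inside the 'if'
    return value

# One test case with value > limit executes every statement
# (100% statement coverage) ...
def test_statement_coverage():
    assert apply_cap(15, 10) == 10

# ... but only the TRUE outcome of the decision is exercised, so
# decision coverage is 50%, not 100%.  A second test case with
# value <= limit (e.g. apply_cap(5, 10)) is needed for 100% decision coverage.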
Below you find the pseudo code for the program EASY:
00 program EASY
01 var1, var2, var3: integer
02 easy: boolean
02 begin
03 read (var2)
04 read (var1)
05 read (easy)
06 If (easy = true) then
07 var3 = var2 + var1
08 print (var3)
09 if (var1 = 5) then
10 print (var1)
11 endif
12 var2 = var2 + 1
13 else
14 var2 = 0
15 write (“Wow – that was tricky!”)
16 endif
17 write (var2)
18 end program EASY
a) WRONG
b) WRONG
c) WRONG
d) CORRECT – All statements can be covered with two test cases and all decisions with
three test cases.
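A hedged Python rendering of the EASY program (the read statements are replaced by
parameters so the logic can be tested directly), together with the test cases behind
answer d):

def easy_program(var1: int, var2: int, easy: bool) -> int:
    # Python rendering of the EASY pseudo-code; returns the final var2
    # that the original program writes at the end.
    if easy:
        var3 = var2 + var1
        print(var3)
        if var1 == 5:
            print(var1)
        var2 = var2 + 1
    else:
        var2 = 0
        print("Wow - that was tricky!")
    return var2

# 100% statement coverage with two test cases:
#   TC1: easy=True, var1=5  -> 'then' branch including the inner 'if' body
#   TC2: easy=False         -> 'else' branch
assert easy_program(5, 7, True) == 8
assert easy_program(5, 7, False) == 0

# 100% decision coverage needs a third test case in which the inner
# decision (var1 == 5) is FALSE:
#   TC3: easy=True, var1 != 5
assert easy_program(4, 7, True) == 8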
Switch on machine
IF sufficient water THEN
Boil water
Add tea
Show message “milk?”
IF milk = yes THEN
Show message “low fat?”
IF low fat = yes THEN
Add low fat milk
ELSE
Add normal milk
ENDIF
ENDIF
Show message “sugar?”
IF sugar = yes THEN
Add sugar
ENDIF
Stir
Wait 3 minutes
Show message “please take your tea”
ELSE
Show message “please fill up water”
ENDIF
How many test cases would you design to achieve 100% statement
coverage for the tea-making machine?
a) 3
b) 2
c) 5
d) 6
TTA-2.2.1 (K3) Write test cases from a given specification item by applying the Statement
testing test technique to achieve a defined level of coverage.
Justification:
a) CORRECT – Three test cases are sufficient, for example with the following inputs:
1) sufficient water, milk = yes, low fat = yes, sugar = yes
2) sufficient water, milk = yes, low fat = no, sugar = no
3) insufficient water
b) WRONG
c) WRONG
d) WRONG
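A hedged Python rendering of the tea-making logic (inputs as parameters, actions collected
in a list), together with three test cases that jointly execute every statement:

def make_tea(sufficient_water: bool, milk: bool, low_fat: bool,
             sugar: bool) -> list[str]:
    # Python rendering of the tea-machine pseudo-code; the user dialogue
    # is simplified to boolean parameters, actions are collected for checking.
    actions = ["switch on machine"]
    if sufficient_water:
        actions += ["boil water", "add tea"]
        if milk:
            if low_fat:
                actions.append("add low fat milk")
            else:
                actions.append("add normal milk")
        if sugar:
            actions.append("add sugar")
        actions += ["stir", "wait 3 minutes", "please take your tea"]
    else:
        actions.append("please fill up water")
    return actions

# Three test cases together execute every statement:
assert "add low fat milk" in make_tea(True, True, True, True)          # TC1
assert "add normal milk" in make_tea(True, True, False, False)         # TC2
assert "please fill up water" in make_tea(False, False, False, False)  # TC3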
Statement P
IF A THEN
IF B THEN
Statement Q
ELSE
Statement R
ENDIF
ELSE
Statement S
IF C THEN
Statement T
ELSE
Statement U
ENDIF
ENDIF
Statement V
How many test cases would you design to achieve 100% decision
coverage?
a) 2
b) 3
c) 4
d) 5
TTA-2.3.1 (K3) Write test cases from a given specification item by applying the Decision
testing test technique to achieve a defined level of coverage.
Justification:
a) WRONG
b) WRONG
c) CORRECT – the following conditions ensure that all decision outcomes are tested:
1) A, B 2) A, not B 3) not A, C 4) not A, not C.
d) WRONG
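A hedged Python rendering of the nested IF structure (the statements P to V are recorded
as labels), together with the four test cases from the justification:

def nested(a: bool, b: bool, c: bool) -> list[str]:
    # Python rendering of the nested IF structure; returns the labels of
    # the statements that were executed, in order.
    executed = ["P"]
    if a:
        if b:
            executed.append("Q")
        else:
            executed.append("R")
    else:
        executed.append("S")
        if c:
            executed.append("T")
        else:
            executed.append("U")
    executed.append("V")
    return executed

# The four test cases from the justification cover all decision outcomes:
assert nested(True,  True,  False) == ["P", "Q", "V"]       # 1) A, B
assert nested(True,  False, False) == ["P", "R", "V"]       # 2) A, not B
assert nested(False, True,  True)  == ["P", "S", "T", "V"]  # 3) not A, C
assert nested(False, True,  False) == ["P", "S", "U", "V"]  # 4) not A, not C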