ISTQB Foundation Level Quick Guide
hr@kasperanalytics.com
kasper-analytics
kasperanalytics.com
9123820085
Testing and debugging are two different activities that complement each other.
• It helps to identify and fix defects and errors before the software product or
system is released to users.
• It improves the quality of the software product or system, leading to
increased user satisfaction.
• It reduces the risks associated with using the software product or system,
leading to increased user confidence.
• It helps to ensure that the software product or system meets the specified
requirements and works as expected.
• It helps to identify opportunities for improvement in the software
development process.
➢ A defect has a root cause, which is the underlying reason why the defect
occurred.
➢ The effect is the result of the defect.
➢ Identifying the root cause of a defect helps in preventing similar defects
from occurring in the future.
• For example, if a software product crashes when a user enters invalid data,
the root cause of the defect could be an inadequate data validation
routine. The effect of the defect is the software product crashing, which
results in user frustration and potentially lost revenue.
The seven testing principles are a set of fundamental concepts that guide the
testing process. These principles include:
The test process is a set of activities and tasks that are performed to ensure that
the software meets the requirements and is free of defects.
The test process needs to be adapted to the specific context of the project. The
context factors that influence the test process include the following:
Objectives of testing:
Test activities are a set of actions and tasks that are performed during the testing
process. Test activities can be divided into the following categories:
➢ Test tasks are specific activities that are performed during each test activity.
➢ Test work products are the deliverables that are created during the testing
process.
➢ Test work products can include:
1. Test plan
2. Test design specification
3. Test case specification
1.4.4 Traceability between the Test Basis and Test Work Products
➢ Testers and developers have different mindsets that can influence their
approach to testing.
➢ Testers tend to focus on finding defects, while developers tend to focus on
implementing functionality.
➢ Testers need to understand the developer’s mindset in order to effectively
communicate defects and ensure that they are fixed.
➢ Developers need to understand the tester’s mindset in order to effectively
prioritize and fix defects.
➢ Acceptance Testing is the process of verifying that the software meets the
user requirements and is acceptable for delivery.
➢ It is usually the final stage of testing and may include functional and non-functional testing types.
Test Types are the techniques used to verify and validate the software.
Different types of testing are required at different test levels to ensure complete
testing of the software.
1. Functional Testing,
2. Non-Functional Testing,
3. White-box Testing,
4. Change-related Testing, and others.
Functional Testing is the process of verifying that the software meets the
functional requirements. It is usually performed at the System and Acceptance
testing levels.
Non-Functional Testing is the process of verifying that the software meets the
non-functional requirements, such as:
• performance,
• security,
• usability
• The goal of white-box testing is to ensure that all the code paths are
tested, and all possible outcomes and results are checked.
• statement coverage,
• branch coverage,
• path coverage.
The goal of change-related testing is to ensure that the changes have not
introduced any new defects or issues, and that the existing functionality has not
been affected.
• component,
• integration,
• system testing
• regression testing,
• functional testing,
• non-functional testing.
Test types and test levels are related to each other, as different test types are
usually performed at different test levels.
It is important to choose the appropriate test types and test levels based on the
project's objectives, requirements, and risks.
Maintenance testing is a type of testing that is performed after the software has
been released and is in use by the end-users.
The purpose of maintenance testing is to identify and fix defects and issues that
arise during the software's maintenance phase.
• user feedback,
• changes in the software environment,
• the introduction of new hardware or software components.
➢ User feedback can include bug reports, feature requests, and usability issues.
➢ The purpose of impact analysis is to determine the scope of the change and
identify the areas of the software that may be affected.
• static analysis,
• dynamic analysis,
• traceability analysis.
Static analysis involves examining the source code and other software artifacts
without executing the software.
Dynamic analysis involves executing the software with test cases and analyzing
its behavior.
Traceability analysis involves tracing the requirements, design, and test artifacts
to identify the impact of a change.
➢ Static testing involves reviewing and analyzing documents, code, or other work products in a structured manner.
There are several work products that can be examined using static testing,
such as:
• requirements documents,
• design documents,
• test plans,
• test cases,
• code,
• user manuals, and other related documents.
Static testing has several benefits, including early detection of defects, improved software quality, cost-effectiveness, and increased efficiency.
It also helps in identifying defects that may be difficult to find during dynamic testing, such as ambiguities in requirements, design flaws, and deviations from coding standards.
Static testing differs from dynamic testing in terms of its approach and objectives.
This process can be formal or informal and can take place at different stages of
the software development lifecycle.
There are different roles involved in a formal review process, such as:
• the moderator,
• author,
• reviewer,
• scribe.
Each role has specific responsibilities, such as the moderator who ensures that
the review process is conducted in a structured manner and the author who
presents the work product being reviewed.
• informal reviews,
• formal reviews,
• technical reviews,
• walkthroughs.
Each type of review has its objectives and is conducted at different stages of the
software development lifecycle.
• walkthroughs,
• inspections,
• peer reviews.
These techniques can be applied to different types of work products and have
different objectives, such as identifying defects or improving software quality.
Success factors for reviews include having clear objectives, a structured review
process, adequate preparation, and ensuring that the right people are involved.
1. black-box techniques,
2. white-box techniques,
3. experience-based techniques.
Testers choose appropriate test techniques based on the software being tested,
the requirements, and other factors such as time and resources.
1. system's complexity,
2. risks associated with the system,
3. availability of documentation,
4. testing objectives,
5. skill level of the testing team.
Black-box test techniques are designed to test the system without knowledge of
its internal workings.
These techniques are useful in testing the functionality of the system and
ensuring that it meets the requirements.
The idea is to select a representative test case from each group, which should
provide similar testing results.
For example, if a system accepts inputs in the range of 1 to 100, then the input
domain can be divided into three equivalence classes: inputs less than 1, inputs
between 1 and 100, and inputs greater than 100.
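The 1-to-100 example above can be sketched in code. The `classify` validator below is hypothetical, invented only to illustrate choosing one representative value per equivalence partition:

```python
def classify(value):
    """Hypothetical validator for a field that accepts values 1..100."""
    if value < 1:
        return "rejected: too small"
    if value > 100:
        return "rejected: too large"
    return "accepted"

# One representative per equivalence partition:
# invalid low (<1), valid (1..100), invalid high (>100).
representatives = {
    "invalid_low": 0,
    "valid": 50,
    "invalid_high": 101,
}

for name, value in representatives.items():
    print(name, "->", classify(value))
```

Testing any other value from the same partition (e.g. 37 instead of 50) is expected to give the same result, which is why one representative per partition is considered sufficient.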
Boundary value analysis is a black-box test technique that involves testing the
boundaries of an input domain.
The idea is to test the values at the boundaries of the input domain, as these are
likely to be more prone to errors.
For example, if a system accepts inputs in the range of 1 to 100, then the boundary
values to be tested would be 1, 100, and values just below or above these limits.
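A minimal sketch of two-point boundary value analysis for the same 1-to-100 range; the helper function is illustrative, not a standard API:

```python
def two_point_boundaries(low, high):
    """Two-point boundary value analysis for a closed range [low, high]:
    each boundary plus its nearest invalid neighbour."""
    return sorted({low - 1, low, high, high + 1})

# For the range 1..100 this selects the values 0, 1, 100 and 101.
print(two_point_boundaries(1, 100))
```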
Decision table testing is a black-box test technique that involves creating a table
to represent the combinations of inputs and expected outputs for a given set of
business rules.
The tester can then use the decision table to derive test cases that cover all the
possible combinations of inputs and expected outputs.
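As an illustration, a decision table for an invented business rule (a discount that applies only to members placing an order of at least 100) might be exercised like this:

```python
# Decision table sketch: each row is one combination of conditions
# together with the expected action. The rule itself is invented.
decision_table = [
    # (is_member, order_total >= 100) -> discount applies?
    ((True,  True),  True),
    ((True,  False), False),
    ((False, True),  False),
    ((False, False), False),
]

def discount_applies(is_member, large_order):
    """Hypothetical implementation of the business rule under test."""
    return is_member and large_order

# Derive one test case per row, covering every combination.
for (is_member, large_order), expected in decision_table:
    actual = discount_applies(is_member, large_order)
    assert actual == expected, (is_member, large_order)
print("all", len(decision_table), "combinations pass")
```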
State transition testing is a black-box test technique that involves testing the
behaviour of a system as it moves from one state to another.
The tester identifies the various states of the system and the events that cause it
to transition from one state to another.
The tester then creates test cases to ensure that the system behaves correctly
during these transitions.
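A minimal sketch, assuming an invented card-handling state machine; each test sequence exercises one transition:

```python
# Valid transitions of a hypothetical system: (state, event) -> next state.
TRANSITIONS = {
    ("idle", "insert_card"): "card_inserted",
    ("card_inserted", "valid_pin"): "authenticated",
    ("card_inserted", "eject"): "idle",
    ("authenticated", "eject"): "idle",
}

def run(events, state="idle"):
    """Drive the machine through a sequence of events.
    A KeyError signals an invalid transition."""
    for event in events:
        state = TRANSITIONS[(state, event)]
    return state

# One test case per transition:
assert run(["insert_card"]) == "card_inserted"
assert run(["insert_card", "valid_pin"]) == "authenticated"
assert run(["insert_card", "eject"]) == "idle"
assert run(["insert_card", "valid_pin", "eject"]) == "idle"
print("all transitions covered")
```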
Use case testing is a black-box test technique that involves testing the system's
behavior from the user's perspective.
The tester identifies the various use cases for the system and creates test cases
to ensure that the system behaves correctly for each use case.
These techniques are primarily used for testing software at the unit, integration,
and system levels, and rely on the knowledge of the internal code and structure of
the software.
It involves creating test cases that execute each statement in the code, ensuring
that all statements are covered.
Statement coverage is a useful technique for detecting issues with the flow of
control in the code.
A decision is a point in the code where the program has a choice of two or more
possible paths.
Decision coverage involves creating test cases that ensure that all decisions in
the code have been exercised, both true and false.
Statement and decision testing are important techniques for verifying the
correctness of the code.
These techniques can help to identify issues with the flow of control in the code,
and ensure that all possible paths through the code have been tested.
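The difference between statement and decision coverage can be seen in a toy function, invented for illustration:

```python
def grant_discount(total):
    discount = 0            # statement 1
    if total > 100:         # decision point
        discount = 10       # statement 2 (executed on the True branch only)
    return discount         # statement 3

# A single test with total=150 executes every statement (100% statement
# coverage) but only the True outcome of the decision (50% decision
# coverage). Adding total=50 exercises the False outcome as well,
# reaching 100% decision coverage.
assert grant_discount(150) == 10   # True branch
assert grant_discount(50) == 0     # False branch
```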
Experience-based test techniques rely on the knowledge, skill, and intuition of the
testers to identify potential issues with the software being tested.
These techniques are typically used in situations where the requirements are
unclear or incomplete, or where the software is complex or difficult to test.
The tester then creates test cases that specifically target those types of errors.
This technique can be particularly effective in identifying issues that may not be
apparent from the requirements or design documentation.
The checklist can include items such as common errors, performance issues, or
usability issues. The tester then creates test cases that specifically target the
items on the checklist.
This technique can be particularly useful in ensuring that important areas of the
software are not overlooked during testing.
This separation can help to increase objectivity, improve quality, and reduce the
possibility of conflicts of interest.
The tasks of a test manager may include defining the test strategy, planning and
controlling the testing effort, allocating resources, and communicating with
stakeholders.
The tasks of a tester may include designing test cases, executing tests, reporting
defects, and analyzing test results.
A test plan is a document that describes the objectives, scope, approach, and
schedule of the testing effort.
It should include details on the test strategy, test approach, entry criteria, exit
criteria, and test execution schedule.
The test plan should be reviewed and approved by all relevant stakeholders.
Test estimation involves predicting the amount of time, effort, and resources
required to complete the testing effort.
1. expert judgment,
2. historical data,
3. metrics-based approaches.
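A metrics-based estimate can be illustrated with a toy calculation; all figures below are invented:

```python
# Metrics-based estimation sketch: derive effort from historical
# productivity data (all numbers are invented for illustration).
historical_cases_per_day = 20      # test cases executed per tester-day, from past projects
planned_test_cases = 300
testers = 3

total_days = planned_test_cases / historical_cases_per_day   # tester-days of effort
calendar_days = total_days / testers                         # elapsed time with 3 testers
print(total_days, "tester-days,", calendar_days, "calendar days")
```

An expert-based estimate, by contrast, would replace the historical figure with the judgment of experienced testers or task owners.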
Test monitoring involves collecting and analyzing data on the testing effort to
determine its status and progress.
Test control involves taking corrective action to address issues that arise during
testing.
Test metrics can be used to monitor and control the testing effort, such as defect
metrics, test coverage metrics, and test progress metrics.
The contents of a test report may vary depending on the audience, but should be
accurate, relevant, and timely.
Testers must ensure that changes to the software do not affect the testing effort
and that any changes made during the testing process are controlled and
traceable.
In the context of testing, risk refers to the probability of a failure occurring and the
impact of that failure on the software, its users, and the organization as a whole.
Risk-based testing involves identifying the risks that could affect the software and
using that information to determine the types of tests that should be performed.
Testers must understand the different types of risks and how they can impact
testing, and they must work with the project team to develop a risk management
plan.
1. risk identification,
2. analysis,
3. mitigation strategies.
Testers must use a defect management tool to log defects, track their status, and
communicate with the development team about their resolution.
Testers must also perform root cause analysis to determine the underlying cause
of the defect and to prevent similar defects from occurring in the future.
Defect metrics can be used to track and report on the effectiveness of the defect
management process.
1. High initial cost: automation tools can be expensive, and setting up the
automation environment can be time-consuming.
2. Maintenance overhead: automated tests require maintenance, and any
changes in the application can break the automated tests.
• Tool selection: selecting the right tool for the right problem.
• Tool customization: customizing the tool to fit the specific needs of the
organization.
• Training: providing adequate training to the users to effectively use the
tool.
Question #1 (1 Point)
Which one of the following is the BEST description of a test condition?
Justification
Question #2 (1 Point)
Which of the following statements is a valid objective for testing?
Justification
Question #3 (1 Point)
Which of the following statements correctly describes the difference between
testing and debugging?
Justification
Question #4 (1 Point)
Which one of the statements below describes a failure discovered during testing
or in production?
a) The product crashed when the user selected an option in a dialog box.
b) The wrong version of one source code file was included in the build.
c) The computation algorithm used the wrong input variables.
d) The developer misinterpreted the requirement for the algorithm.
Select one option.
Justification
Question #5 (1 Point)
Which of the following statements CORRECTLY describes one of the seven key
principles of software testing?
Justification
Question #6 (1 Point)
In what way can testing be part of Quality assurance?
FL-1.2.2 (K2) Describe the relationship between testing and quality assurance
and give examples of how testing contributes to higher quality
Justification
Question #7 (1 Point)
Which of the below tasks is performed during the test analysis activity of the test
process?
FL-1.4.2 (K2) Describe the test activities and respective tasks within the test
process
Justification
a) Not correct-this activity is performed during the test completion activity.
b) Not correct-this activity is performed during the test design activity.
c) Not correct-this activity is performed during the test implementation activity.
d) Correct-this activity is performed during the test analysis activity, Syllabus 1.4.2.
Question #8 (1 Point)
Differentiate the following test work products, 1-4, by mapping them to the right
description, A- D.
1. Test suite.
2. Test case.
3. Test script
4. Test charter.
A. A group of test scripts or test execution schedule.
B. A set of instructions for the automated execution of test procedures,
C. Contains expected results.
D. An event that could be verified.
a) 1A, 2C, 3B, 4D.
b) 1D, 2B, 3A, 4C.
c) 1A, 2C, 3D, 4B.
d) 1D, 2C, 3B, 4A.
FL-1.4.3 (K2) Differentiate the work products that support the test process
Justification
Test implementation work products also include test suites, which are groups of
test scripts, as well as a test execution schedule. (1A).
Test charter: Glossary, A statement of test objectives, and possibly test ideas
about how to test. Test charters are used in exploratory testing. (4D)
Thus
a) Correct
b) Not correct
c) Not correct
d) Not correct
Question #9 (1 Point)
How can white-box testing be applied during acceptance testing?
Justification
FL-2.2.1 (K2) Compare the different test levels from the perspective of
objectives, test basis, test objects, typical defects and failures, and approaches
and responsibilities
Justification
Justification
c) Correct-Syllabus 2.3.4
Justification
FL-3.2.2 (K1) Recognize the different roles and responsibilities in a formal review
Justification
a) Not correct-Tester and developer are NOT roles as per Syllabus, section
3.2.2
b) Not correct-Developer is NOT a role as per Syllabus, section 3.2.2.
c) Not correct-Designer is NOT a role as per Syllabus, section 3.2.2.
d) Correct-see Syllabus, section 3.2.2.
FL-3.2.1 (K2) Summarize the activities of the work product review process
Justification
a) Informal review.
b) Technical review.
c) Inspection.
d) Walkthrough.
FL-3.2.3 (K2) Explain the differences between different review types: informal
review, walkthrough, technical review and inspection
Justification
Justification
• Subscribers
• Technical support team
• Billing department
• Database administrator
Each type of user logs into the system through a different login interface (e.g. subscribers log in via a web page; technical support via an application).
Different reviewers were requested to review the system's login flow from the
perspective of the above user categories.
Which of the following review comments is MOST LIKELY to have been made by all
reviewers?
a) The login page on the web is cluttered with too much advertisement space.
As a result, it is hard to find the "forgot password?" link.
b) The login to access the billing information should also allow access to
subscribers' information and not force a second login session.
c) After logging in to the database application, there is no log-out function.
d) The login flow is unintuitive, since it requires entering the password first, before the user name can be keyed in.
Justification
a) Not correct-this impacts only the subscribers, and possibly others, but certainly not the technical support team, since they do not access the system via a web page.
b) Not correct-this comment would come from the review that took the perspective of the billing department, but not from other reviewers.
c) Not correct-this comment would come from the review that took the perspective of the database administrator, but not from other reviewers.
d) Correct-every type of user must be authenticated before accessing the system, so all users of the system would note (and suffer from) an unintuitive login flow.
a) A test technique in which tests are derived based on the tester's knowledge of past failures, or general knowledge of failure modes.
b) Procedure to derive and/or select test cases based on an analysis of the specification, either functional or non-functional, of a component or system, without reference to its internal structure.
c) An experience-based test technique whereby the experienced tester uses
a high-level list of items to be noted, checked, or remembered, or a set of
rules or criteria against which a product has to be verified
d) An approach to testing where the tester dynamically designs and executes
tests based on their knowledge, exploration of the test item and the results
of previous tests.
Justification
Justification
"When the code contains only a single if statement and no loops or CASE statements, any single test case we run will result in 50% decision coverage."
a) The sentence is true. Any single test case provides 100% statement
coverage and therefore 50% decision coverage.
b) The sentence is true. Any single test case would cause the outcome of the
"if" statement to be either true or false.
c) The sentence is false. A single test case can only guarantee 25% decision
coverage in this case.
d) The sentence is false. The statement is too broad. It may be correct or not,
depending on the tested software.
Justification
Which TWO of the following statements about the relationship between statement
coverage and decision coverage are true?
Justification
See Syllabus chapter 4.3.3: Achieving 100% decision coverage guarantees 100% statement coverage (but not vice versa).
Thus
Justification
a) Not correct-Syllabus 4.4.2: Exploratory testing is most useful when there are few or inadequate specifications, or significant time pressure on testing.
b) Not correct-exploratory testing can be used here.
c) Correct-exploratory tests should be performed by experienced testers with knowledge of similar applications and technologies. The tester constantly needs to make decisions during exploratory testing, e.g. what to test next.
d) Not correct-exploratory testing can be used at any location.
The categories are:
• less than or equal to 2 years,
• more than 2 years but less than 5 years,
• 5 or more years but less than 10 years,
• 10 years or longer.
What is the minimum number of test cases required to cover all valid equivalence
partitions for calculating the bonus?
a) 3.
b) 5
c) 2
d) 4
FL-4.2.1 (K3) Apply equivalence partitioning to derive test cases from given
requirements
Justification
d) Correct-Partitions as below.
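The four valid partitions can be sketched as follows; the representative test values are my own choice:

```python
# One representative value per valid partition of years of service.
partitions = {
    "<= 2 years": 1,
    "> 2 and < 5 years": 3,
    ">= 5 and < 10 years": 7,
    ">= 10 years": 12,
}

def partition_of(years):
    """Map a years-of-service value to its partition."""
    if years <= 2:
        return "<= 2 years"
    if years < 5:
        return "> 2 and < 5 years"
    if years < 10:
        return ">= 5 and < 10 years"
    return ">= 10 years"

for name, value in partitions.items():
    assert partition_of(value) == name
print(len(partitions), "test cases cover all valid partitions")
```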
A speed control and reporting system has the following characteristics:
• If you drive 50 km/h or less, nothing will happen.
• If you drive faster than 50 km/h, but 55 km/h or less, you will be warned.
• If you drive faster than 55 km/h, but not more than 60 km/h, you will be fined.
• If you drive faster than 60 km/h, your driving license will be suspended.
Which would be the most likely set of values (km/h) identified by two-point boundary value analysis?
b) 50, 55, 60.
FL-4.2.2 (K3) Apply boundary value analysis to derive test cases from given
requirements
Justification
Thus
a) Not correct-Does not include all two-point boundary values. Also includes
values not necessary for two-point boundary value analysis.
b) Not correct-Does not include all two-point boundary values.
c) Not correct-Does not include all two-point boundary values. Also includes
values not necessary for two-point boundary value analysis.
d) Correct-Includes all two-point boundary values
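The boundaries at 50, 55 and 60 km/h and their nearest neighbours can be checked against a sketch of the rules; the `outcome` function is illustrative:

```python
def outcome(speed):
    """Sketch of the speed control rules from the question."""
    if speed <= 50:
        return "nothing"
    if speed <= 55:
        return "warned"
    if speed <= 60:
        return "fined"
    return "suspended"

# Two-point BVA: each boundary plus its nearest neighbour on the other side.
boundary_values = [50, 51, 55, 56, 60, 61]
for v in boundary_values:
    print(v, "km/h ->", outcome(v))
```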
The application shall allow playing a video on the following display sizes:
1. 640x480.
2. 1280x720.
3. 1600x1200
4. 1920x1080.
Which of the following list of test cases is a result of applying the Equivalence
Partitioning test technique to test this requirement?
a) Verify that the application can play a video on a display of size 1920x1080 (1
test).
b) Verify that the application can play a video on a display of size 640x480
and 1920x1080 (2 tests).
c) Verify that the application can play a video on each of the display sizes in
the requirement (4 tests).
d) Verify that the application can play a video on any one of the display sizes
in the requirement (1 test).
FL-4.2.1 (K3) Apply Equivalence Partitioning technique to derive test cases from
given requirements
Justification
a. Not correct-See c)
b. Not correct-See c)
a) The test manager plans testing activities and chooses the standards to be followed, while the tester chooses the tools and controls to be used.
b) The test manager plans, organizes, and controls the testing activities, while
the tester specifies and executes tests.
c) The test manager plans, monitors, and controls the testing activities, while
the tester designs tests and decides about automation frameworks.
d) The test manager plans and organizes the testing and specifies the test
cases, while the tester prioritizes and executes the tests.
Justification
Justification
a) Correct-Syllabus 5.3.1: Test case execution (e.g. number of test cases run/not run, and test cases passed/failed).
b) Not correct-Should be monitored during test preparation
c) Not correct-Should be monitored during test preparation.
d) Not correct-Should be monitored during test preparation.
Which TWO of the following can affect and be part of test planning?
a) Budget limitations.
b) Test objectives.
c) Test log
d) Failure rate.
e) Use cases.
Justification
a) Correct-when you are planning the test and there are budget limitations, prioritization is needed to decide what should be tested and what should be omitted.
b) Correct-See syllabus 5.2.1.
c) Not correct-it is a part of test monitoring and control.
d) Not correct-it is a part of test monitoring and control.
e) Not correct-it is a part of test design.
Justification
FL-5.3.2 (K2) Summarize the purposes, content, and audiences for test reports
Justification
1. Analytical.
2. Methodical.
3. Model-based.
4. Consultative.
Justification
Analytical: Syllabus 5.2.2, this type of test strategy is based on an analysis of some factor (e.g. requirement or risk). (1B)
Methodical: Syllabus 5.2.2, this type of test strategy relies on making systematic use of some predefined set of tests or test conditions. (2C)
Model-based: Syllabus 5.2.2, in this test strategy, tests are designed based on some model of some required aspect of the product. (3A)
Consultative (or directed): Syllabus 5.2.2, this type of test strategy is driven primarily by the advice, guidance, or instructions of stakeholders, business domain experts, or technology experts, who may be outside the test team or outside the organization itself. (4D)
Thus:
a) Not correct.
b) Not correct.
c) Not correct.
d) Correct.
FL-5.2.6 (K2) Explain the difference between two estimation techniques: the
metrics- based technique and the expert-based technique
Justification
Now you are writing a defect report with the following information
Short summary: When you select coffee with milk, the time for preparing
coffee is too long and the temperature of the beverage is too low (less than
40 °C)
Priority: Normal
What valuable information is MOST likely to be omitted in the above defect report?
FL-5.6.1 (K3) Write a defect report, covering defects found during testing.
Justification
Justification
a) Not correct-the benefits come not from creating regression tests, but from executing them.
b) Not correct-this is done by configuration management tools.
c) Not correct-this needs specialized tools.
d) Correct-Syllabus 6.1.2: Reduction in repetitive manual work (e.g. running regression tests, environment set-up/tear-down tasks, re-entering the same test data, and checking against coding standards), thus saving time.
A. Coverage tools
C. Review tools
D. Monitoring tools
FL-6.1.1 (K2) Classify test tools according to their purpose and the test activities
they support
Justification
Support for static testing: Syllabus 6.1.1, tools that support reviews. (2C)
Support for test execution and logging: Syllabus 6.1.1, coverage tools. (3A)
Thus
a) Not correct
b) Not correct
c) Not correct
d) Correct