ISTQB Foundation summary notes


ISTQB FOUNDATION LEVEL

Answering tips


Answers that are usually correct | Answers that are usually wrong
Should be | Must be, have to
May be | Only
Can be | All, full
 | Prove

1.1.1 Test objectives


#  | Objective | Phase
1  | Evaluate the work products (URD, SRS, design, test documents, code) | Review
2  | Verify all requirements | All phases
3  | Validate the test object | Acceptance test
4  | Build confidence |
5  | Prevent defects | Early phases: review documents, design test cases
6  | Find defects and failures | Development phase
7  | Ensure that no new defects are introduced (regressions) by changes | Maintenance phase
8  | Provide information | Smoke test (entry check), final test
9  | Reduce risks | All phases
10 | Comply with contractual or legal requirements | When required by contract

Test methods

Verify (verification) | Validate (validation)
- Compare with the original documents | - Compare with users' needs or expectations
- "Did we build the product right?" | - "Did we build the right product?"

Technique for reviewing requirements and raising Q&A: 5W1H
Technique for designing test cases: successful and unsuccessful (valid and invalid)
Changed requirements and bug fixes both trigger regression testing of the affected areas.

1.1.2 Testing and Debugging


Debugging is the development activity that finds, analyzes, and fixes defects (removes or repairs them)

1.2 Why is Testing Necessary?


- helps to reduce the risk of problems (risk: failures or poor non-functional quality that could happen in the future and result in negative consequences)
- contributes to the quality of the software
- helps meet contractual or legal requirements
1.2.2 Quality Assurance and Testing

Quality Assurance (QA) | Testing (Quality Control)
Purpose: prevent defects and provide confidence | Purpose: find defects
Actions: | Actions:
- Build and follow processes | - Review
- Training | - Design test cases
- Measurement | - Run tests

Quality Management encompasses both QA and quality control (testing).

1.2.3 Errors, Defects, and Failures

Error (mistake): a human action that produces an incorrect result.
Defect = Bug = Fault: an incorrectness in documents or source code.
Failure: the system deviates from its expected behavior during execution.
1.2.4 Defects, Root Causes and Effects
Identifying the root causes of failures (errors/mistakes) can:

 Reduce the occurrence of similar defects in the future


 Lead to process improvements that prevent a significant number of future defects

1.3 Seven Principles of Testing


1. Testing can show that defects are present, but cannot prove that there are no defects
(testing proves nothing about the absence of defects)
2. Exhaustive testing is impossible
Testing everything (all combinations of inputs and preconditions) is not feasible, except in
trivial cases
3. Early testing
Performing test design and review activities early finds defects early on, when they are
cheap to find and fix
4. Defect clustering (defect density)
A small number of modules usually contains most of the defects
5. Pesticide paradox
If the same tests are repeated over and over again, no new defects can be found
6. Testing is context dependent
Testing is done differently in different contexts

7. Absence-of-errors fallacy

Verifying all specified requirements and fixing all the defects found does not help if the
system does not fulfill the users' needs and expectations

1.4 Test Process (test activities)

Test planning
- Create and update a test plan: scope, objectives, approach, schedule
- Define entry & exit criteria

Test monitoring
- Compare actual progress with the plan
- Measurement: progress, quality, ...
- Create test reports

Test control
- Make decisions (corrective actions)
- Evaluate exit criteria for test execution

Test analysis
- Review and evaluate the test basis (raise Q&A)
- Identify test conditions (list of features to be tested)

Test design
- Design and prioritize test cases
- Identify test data and the test environment (tools, infrastructure)

Test implementation
- Automation: test procedures, test scripts, test suites
- Build or verify the test environment
- Prepare test data

Test execution
- Run tests
- Analyze anomalies (discrepancies)
- Report defects
- Log test results
- Retest and regression test

Test completion
- Check whether all defect reports are closed
- Create change requests for unresolved defects
- Create a test summary report
- Collect and hand over testware
- Analyze lessons learned
- Improve the test process


1.4.4 Traceability between the Test Basis and Test Work Products
- Analyzing the impact of changes
- Calculating requirement coverage (see the sketch below)
- Making testing auditable
- Meeting IT governance criteria
- Improving the understandability of test reports
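For the "calculating requirement coverage" point above, a minimal sketch (with hypothetical requirement and test case IDs) of how coverage falls out of a requirement-to-test traceability mapping:

```python
# Hypothetical traceability mapping: requirement ID -> test cases covering it.
traceability = {
    "REQ-001": ["TC-01", "TC-02"],
    "REQ-002": ["TC-03"],
    "REQ-003": [],  # no test case traces to this requirement
}

# A requirement counts as covered if at least one test case traces to it.
covered = sum(1 for tcs in traceability.values() if tcs)
coverage = covered / len(traceability) * 100
print(f"Requirement coverage: {coverage:.0f}%")  # -> 67%
```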

1.5.1 Good communication

- Start with collaboration rather than battles.

- Remind everyone of the common goal of better quality systems

- Emphasize the benefits of testing

- Communicate test results and other findings in a neutral (objective), fact-focused way

1.5.2 Tester’s and Developer’s Mindsets

Item | Developer | Tester
Objective | Design and build a product | Verify and validate the product; find defects prior to release
Mindset | More interested in designing and building solutions than in contemplating what might be wrong with those solutions; it is difficult to find mistakes in one's own work, although developers should be able to test their own code | Curiosity, professional pessimism, a critical eye, attention to detail, and a motivation for good and positive communication and relationships
CHAPTER 2. Testing Throughout The Software Development Lifecycle

2.1 Software Development Lifecycle Models


several characteristics of good testing:

- For every development activity, there is a corresponding test activity


- Each test level has test objectives
- Test analysis and design for a given test level begin during the corresponding
development activity
- Reviewing documents as soon as drafts are available

2 models:

1. Sequential development models: V-model
- Integrates the test process throughout the development process
- Early testing
- Includes a test level associated with each corresponding development phase
- In practice, test levels may overlap
2. Iterative and incremental development models
- Establishing requirements, designing, building, and testing a system in pieces
- series of cycles
- Involve changes
- Regression testing is increasingly important

Software development lifecycle models must be selected and adapted to the context of the
project and the product characteristics

2.2 Test levels

Component Testing (Unit test)
- Objective: focuses on components that are separately testable
- Special: requires mock objects, stubs, and drivers
- Test basis: detailed design, code
- Environment: development environment with frameworks, debug tools, ... (cannot find operational defects)
- Test types: functionality, non-functional characteristics, structure-based (white-box) testing
- Approach: test-first approach, test-driven development (TDD)

Integration Testing
- Objective: focuses on interactions and interfaces
- Two different levels: component integration testing, system integration testing
- Test basis: global design, interface specifications
- Environment: specific integration environment
- Test types: functional and non-functional characteristics
- Approach: big-bang integration; incremental integration (top-down, bottom-up)

System Testing
- Objective: focuses on the whole system
- Test basis: requirement specifications, use cases
- Environment: corresponds to the production environment
- Test types: functional and non-functional, and data quality characteristics

Acceptance Testing
- Objectives: establishing confidence; validating the system
- Four forms:
  + User acceptance testing (UAT), by end users
  + Operational acceptance testing (OAT), by administrators
  + Contractual and regulatory acceptance testing
  + Alpha and beta testing: alpha testing is performed at the developing organization's site; beta testing is performed at customers' locations
- Test basis: user requirements, use cases
- Test types: all

2.3 Test Types

1. Software characteristics (ISO/IEC 25010 standard; the old ISO 9126): functional and non-functional testing
2. Change-related testing:
   - Confirmation testing (retest): confirms whether the original defect has been successfully fixed
   - Regression testing: re-running tests to detect unintended side effects (introduced issues)
3. White-box testing (structure-based): covered in Chapter 4

Functional (suitability): what the system does. Includes:
- Completeness
- Correctness (accuracy)
- Appropriateness (usefulness)

Non-functional (quality attributes): how the system does it. Includes:
- Performance (time behavior: load test, stress test, volume test)
- Compatibility
- Usability (how easy is it to use?)
- Reliability (recoverability)
- Security
- Maintainability (how easy is it to modify?)
- Portability (how easy is it to install?)
2.4 Maintenance Testing

All maintenance triggers (modification, migration, retirement) require regression testing.

Maintenance testing focuses on:


+ testing the changed parts
+ testing unchanged parts that might have been affected by the changes (regression test)

2.4.1 Triggers for Maintenance

- Modification: enhancements, corrective and emergency changes, changes of the


operational environment, upgrades, and patches for defects
- Migration: data conversion (e.g., when the old application is retired)
- Retirement

2.4.2 Impact Analysis for Maintenance


- identify the areas in the system that will be affected by the change

- identify the scope of existing tests for regression test.

- Impact analysis may be done before a change is made, to help decide if the change
should be made

Chapter 3. Static Testing techniques

3.1.2 Benefits of Static Testing (Review and Static Analysis)

- Detecting defects prior to dynamic testing


- Identifying defects which are not easily found by dynamic testing
- Preventing defects
- Increasing productivity (velocity)
- Reducing cost and time
- Improving communication

3.1.3 Differences between Static and Dynamic Testing


Static testing
- Finds defects without executing code
- Includes: review (manual) and static analysis (with tools, e.g., a compiler, Jenkins)
- Typical defects found:
  + Review: requirement defects, design defects, incorrect interface specifications
  + Static analysis: coding defects, deviations from standards (coding conventions), security vulnerabilities, maintainability defects

Dynamic testing
- Finds failures by executing code (running the package)
- Includes techniques to design test cases, test data, test inputs, and expected results
- Also covers: retest, regression testing, automation testing, dynamic analysis
- Problems found: failures, poor non-functional quality (performance, security), code coverage gaps, memory leaks

Code metric from static analysis:
cyclomatic complexity = number of single conditions + 1
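To illustrate the metric above on example code (not from the syllabus): the function below contains three single conditions, so its cyclomatic complexity is 3 + 1 = 4.

```python
def shipping_fee(weight, express, member):
    """Example function with cyclomatic complexity 4 (3 conditions + 1)."""
    fee = 5
    if weight > 10:   # condition 1
        fee += 10
    if express:       # condition 2
        fee *= 2
    if member:        # condition 3
        fee -= 3
    return fee
```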
3.2 Review Process
3.2.1 & 3.2.2 Review Process & Responsibility
Planning
- Main tasks: define the scope and objectives; estimate time and effort; select people and roles; plan the review; define entry & exit criteria; check the entry criteria
- Management: decides & monitors; assigns staff, budget, and time
- Facilitator (moderator): leads the review; creates the review plan

Initiate review (kick-off meeting)
- Main tasks: run the meeting; distribute work products; explain; answer any questions
- Facilitator (moderator)

Individual review (preparation)
- Main tasks: self-review; note comments and defects
- Reviewers, review leader
- Scribe (recorder): collects defects

Issue communication and analysis (review meeting)
- Main tasks: communicating; analyzing; evaluating
- Facilitator (moderator): runs the meeting
- Scribe (recorder): records new potential defects

Fixing (rework)
- Main tasks: create & update defect reports; fix defects; communicate defects to reviewers
- Author

Reporting (follow-up)
- Main tasks: gather metrics; check that the exit criteria are met; accept the work product
- Facilitator (moderator)

3.2.3 Review Types & 3.2.4 Applying Review Techniques

Informal review (pair review)
- Purpose: find minor problems; a cheap way to find defects
- Leader: none
- Process: no process; may be undocumented
- Used in Agile projects

Walkthrough (demo)
- Purpose: exchange ideas; training
- Leader: the author
- Process: follows a documented review process
- Optional: individual preparation; review report and defect reports

Technical review (peer review)
- Purpose: gain consensus; evaluate the work product; consider alternative solutions
- Leader: ideally a trained moderator
- Process: follows a documented review process
- Optional: review meeting; management participation

Inspection
- Purpose: find as many defects as possible
- Leader: a trained moderator
- Process: follows the most formal review process
- Mandatory: based on rules and checklists; entry and exit criteria; metrics are collected; process improvement

3.2.4 Review techniques
- Ad hoc: little or no guidance; results depend on reviewer skills
- Checklist-based: a list of questions derived from past defects or standards; may miss defects outside the checklist
- Scenarios and dry runs: better guidelines than a bare checklist; may miss other defect types (e.g., missing features)
- Role-based or perspective-based: based on different stakeholder viewpoints
3.2.5 Success Factors for Reviews

Process | People
- Clear objectives | - The right people are involved
- Suitable review types | - Testers are seen as valued reviewers
- Suitable review techniques | - Adequate time and attention to detail
- Checklists are used | - Defects found are handled objectively
- Review in small chunks | - The meeting is well-managed
- Adequate time | - An atmosphere of trust
- Adequate notice | - Avoid (negative) body language
- Management support | - Adequate training

Chapter 4. Test design techniques


A test case includes inputs, expected results, step-by-step instructions, and pre-conditions.

Categories of test cases (or test data):
Valid (the system works) | Invalid (the system doesn't work)
Successful | Unsuccessful
Happy | Unhappy
Normal | Abnormal
Constructive | Negative

High-level TC | Detailed (low-level) TC
- No concrete test data (inputs), outputs, or detailed steps | - Concrete test data (inputs), outputs, and step-by-step instructions
- Early phase, poor requirements | - Detailed requirements
- Experienced testers | - Inexperienced testers
4.1.2 Categories of Test Techniques and Their Characteristics
Black-box testing (specification-based or requirement-based)
- Design tests from documents
- Formal or systematic
- Techniques:
  1. Equivalence partitioning
  2. Boundary value analysis
  3. Decision table
  4. State transition testing
  5. Use case testing

White-box testing (structure-based)
- Design tests from how the software is constructed
- Measure code coverage
- Formal or systematic
- Techniques:
  1. Statement coverage
  2. Decision coverage
  3. Path coverage
  4. LCSAJ
  5. Condition coverage
  6. Condition/decision coverage
  7. Condition determination coverage
  8. Multiple condition coverage

Experience-based testing
- Design tests from knowledge or experience
- Finds defects missed by black-box and white-box techniques
- Informal
- Techniques:
  1. Error guessing
  2. Exploratory testing
  3. Checklist-based testing

4.2 Black-box Test Techniques (Specification based/ Requirement based)

4.2.1 Equivalence partitioning/ class (EP)


- Divide (partition) the inputs, outputs, etc. into areas
- Pick one value from each area; test both valid and invalid areas
4.2.2 Boundary value analysis (BVA)
- Test at the edges of each equivalence partition
- Two-point boundary: the minimum and maximum values
- Three-point boundary: before, at, over (see the sketch below)
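A minimal sketch of EP and three-point BVA, assuming a hypothetical "age" field that accepts values 18-65 (partitions: <18 invalid, 18-65 valid, >65 invalid):

```python
import pytest

def is_valid_age(age):
    """Stand-in for the system under test (hypothetical rule: 18-65 valid)."""
    return 18 <= age <= 65

@pytest.mark.parametrize("age,expected", [
    (10, False),  # EP: one value from the invalid lower partition
    (40, True),   # EP: one value from the valid partition
    (80, False),  # EP: one value from the invalid upper partition
    (17, False), (18, True), (19, True),  # BVA: before, at, over the lower edge
    (64, True), (65, True), (66, False),  # BVA: before, at, over the upper edge
])
def test_age_boundaries(age, expected):
    assert is_valid_age(age) == expected
```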

4.2.3 Decision tables


- combinations of inputs, situations or events
- expressing the input conditions by TRUE or FALSE

Example: Gmail login

Full decision table = all combinations of inputs = 2*2*2 = 8

Input conditions
Valid username?  F F F F T T T T
Valid password?  F F T T F F T T
Space is enough? F T F T F T F T
Output
Login success    F F F F F F T T

Collapsed decision table ("-" = don't care)

Input conditions
Valid username?  F T T T
Valid password?  - F T T
Space is enough? - - F T
Output
Login success      F F T T
Restricted mode on - - T F
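A short sketch that enumerates the full table above programmatically; the rule (login succeeds only with valid credentials, restricted mode when space is not enough) is read directly off the table:

```python
from itertools import product

# Enumerate all 2*2*2 = 8 combinations of the three input conditions.
for user_ok, pass_ok, space_ok in product([False, True], repeat=3):
    login = user_ok and pass_ok           # success needs both credentials
    restricted = login and not space_ok   # restricted mode when space is low
    print(user_ok, pass_ok, space_ok, "->", login, restricted)
```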

Question 3: You have been given the following conditions and the results from those condition
combinations. You can only have one form of payment. A PIN is only needed for a debit card.
Given this information, using the decision table technique, what is the minimum number of
test cases you would need to test these conditions?

a. 7   b. 13   c. 15   d. 18

4.2.4 State transition testing


Four basic parts:
- State, transition, event, action (the action may or may not be present)
State transition testing is widely used for embedded software and automotive systems.
Two test case types:
- A typical scenario (a normal situation, start to end) to cover every state / every transition
- Specific sequences of transitions: N-1 switch coverage corresponds to sequences of N
transitions (0-switch = 1 transition); see the sketch below

4.2.5 Use case testing


- Tests the whole system
- Used from the system test level upward
- Describes interactions between actors (users, systems) and the system
- Useful to uncover these defect types:
  + integration defects caused by interaction and interference
  + defects in the process flows during real-world use of the system
One use case includes:
  + 1 basic flow (mainstream)
  + n alternate flows (exceptions)
  + some errors

Test case types

GUI
- Purpose: test each field or item on a screen; confirm that data is in the correct format
- Test level: integration test
- Techniques: equivalence partitioning or boundary value analysis; checklist

Function
- Purpose: test combinations of inputs, events, pre-conditions
- Test level: integration test
- Techniques: decision table; state transition testing

Flow
- Purpose: test the system end to end, in a test environment corresponding to the production environment
- Test level: system test or system integration test; acceptance test
- Techniques: use case testing; experience-based

4.3 White-box Test Design (Structure-based)

4.3.1 Statement coverage (statement testing)


Lines of code: statements, comments (//, /* */), blank lines
Statement coverage = percentage of executable statements exercised

4.3.2 Decision coverage/ Decision testing (Branch coverage)


True/False are the decision outcomes
Decision coverage = percentage of decision outcomes exercised (see the sketch below)
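A classic illustration (example code) of why decision coverage is stronger than statement coverage: an `if` without an `else` lets one test reach 100% statement coverage while exercising only one of the two decision outcomes.

```python
def withdraw(balance, amount):
    if amount > balance:   # decision with two outcomes: True, False
        amount = balance   # statement only reached on the True outcome
    return amount

withdraw(100, 150)  # every statement executed -> 100% statement coverage,
                    # but only the True outcome -> 50% decision coverage
withdraw(100, 50)   # adds the False outcome   -> 100% decision coverage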

4.3.3 Path coverage (path testing)


Percentage of paths exercised.

4.3.4 LCSAJ coverage (Linear Code Sequence And Jump)

Summary: white-box coverage criteria

Statement/decision/path based | Condition based
Statement coverage | Condition coverage: % of condition outcomes exercised
Decision coverage | Condition/decision coverage
Path coverage | Condition determination coverage
LCSAJ | Multiple condition coverage

4.4 Experience-based techniques

4.4.1 Error Guessing


Design tests from past failures or common developer mistakes

4.4.2 Exploratory Testing ( Free test, Monkey test, Random test)


- Informal tests (no process, no documents) are designed, executed, logged, and evaluated
dynamically
- Session-based approach: write a test charter containing guidelines for the test within a
defined time-box (see the example below)
- Most useful when there are few or inadequate specifications, when there is time pressure,
and when experienced testers are available
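A minimal example charter (hypothetical): "Explore the checkout flow with invalid and boundary-length coupon codes, in a 60-minute session, to discover input-handling defects; log all anomalies found."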

4.4.3 Checklist-based Testing


A list of questions used as reminders and checked off (questions derived from standards or common defects)
Chapter 5: Test Management

5.1 Test independence

Levels of independence, from lowest to highest:
1. No independent testers: authors (developers) test their own code
2. Other developers or testers within the same development team
3. An independent test team or group within the organization
4. Independent testers from the business organization
5. Testers external to the organization (outsourced)

Benefits of test independence include:

- Recognize different kinds of failures (unbiased)


- Verify assumptions

Drawbacks of test independence include:

- Isolation from the development team


- Developers may lose a sense of responsibility for quality
- Independent testers may be seen as a bottleneck or blamed for delays in release
- Independent testers may lack some important information

5.1.2 Tasks of a Test Manager and Tester


Planning
- Test manager: write and update the test plan; coordinate; create the detailed schedule
- Tester: review and contribute to test plans; share testing perspectives

Analysis and design / Implementation and execution
- Test manager: initiate and support the activities; choose tools; set up configuration management; make decisions; prioritize
- Tester: analyze, review, and assess the test basis; identify test conditions; design, set up, and verify test environment(s); design test cases and test procedures; prepare and acquire test data; execute tests and evaluate the results; automate tests (decide, implement); evaluate non-functional characteristics; review tests developed by others

Monitoring and control
- Test manager: monitor test progress and results, and check the status of exit criteria; create test progress reports; adapt planning; take corrective actions (decisions)
- Tester: use test management and monitoring tools

5.2 Test Planning and Estimation


5.2.1 Purpose and Content of a Test Plan ( IEEE 829)
- scope, objectives, and risks
- test approach: test activities (test process), test levels, test types, test techniques,…
- resources ( people, tool, environment)
- Schedule
- Selecting metrics
- Define entry & exit criteria
- Budgeting for the test activities
- Determining the level of detail for test documentation

5.2.2 Test Strategy and Test Approach


1. Analytical: requirement based or risk based
2. Model based: based on aspect of product (model, embedded)
3. Methodical based: error guessing or checklist from standard (ISO 25010)
4. Process compliant: follow Agile process (rules, user story, acceptance criteria)
5. Consultative: guide by expert, user
6. Regression averse: highly automation test
7. Reactive/ Dynamic/ Heuristic: exploratory testing

Entry criteria (smoke test)
- When to start a test level?
- Check availability (readiness) of:
  + Documents
  + Prior test levels met their exit criteria
  + Test environment
  + Test tools
  + Test data

Exit criteria
- When to stop a test level? When to release? How much testing is enough?
- Check 5 criteria:
  + Coverage (thoroughness)
  + Defects (functional, non-functional: reliability)
  + Cost / effort
  + Time
  + (most important) Residual risks: open serious defects, untested areas

5.2.5 Factors Influencing the Test Effort (man-hours, man-days, man-months)

Test effort estimation predicts the amount of test-related work, based on:
+ Product characteristics
+ Development process characteristics
+ People characteristics
+ Test results / test outcomes
5.2.6 Test Estimation Techniques
1. Metrics-based: based on historical data from similar projects, or on typical values
2. Expert-based: predictions by the owners or experts (test manager, PM), e.g., using a
Work Breakdown Structure (WBS)
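Worked example for the metrics-based technique (hypothetical numbers): if a similar past project needed 40 person-days to design and execute 400 test cases (0.1 person-day per test case), a new project with 600 comparable test cases is estimated at roughly 60 person-days.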
5.3 Test Monitoring and Control
Test monitoring
- Compare actuals with the plan to assess test progress, quality, cost, time
- Create test reports

Test control
- Take decisions (corrective actions):
  + Re-prioritizing
  + Changing
  + Re-evaluating
  + Setting an entry criterion for bug fixing
5.3.1 Metrics Used in Testing
Metrics can be collected during and at the end of test activities in order to assess:
+ Progress
+ Quality
+ Approach
+ Effectiveness
5.3.2 Contents for Test Reports ( IEEE 829)

- Summary of testing
- Analysis
- (Variances) Deviations from plan

- Metrics
- (Evaluation) Residual risks

5.4 Configuration Management


The purpose is to establish and maintain the integrity of the system.
All test items are uniquely identified, version controlled, tracked for changes, and related
to each other.
5.5 Risks and Testing
Risk: an event that could happen in the future and would have negative consequences
Risk level = likelihood (probability) x Impact (harm)
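Worked example (hypothetical 1-5 scales): a failure with likelihood 2 and impact 4 has risk level 2 x 4 = 8, so it would be addressed before a risk scoring 1 x 5 = 5.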
Project risk
- A risk to the project's capability: scope, cost, time
- Related to management and control:
  + People
  + Tools
  + Customer
  + Technical issues
  + Schedules
  + Budget
  + Work products: SRS, code, design, test documents
- Actions (PM and test manager): mitigate or reduce the risk

Product risk (quality risk)
- A risk to the quality of the product
- Directly related to the test object:
  + Failures in the software delivered
  + The potential that the software/hardware could cause harm to an individual or company
  + Poor software characteristics (e.g., functionality, reliability, usability, performance)
  + Poor data integrity and quality
  + Software that does not perform its intended functions
- Actions (tester): risk-based testing (5.5.3), which drives:
  + test techniques
  + levels and types of testing
  + the extent of testing
  + prioritization of testing
  + any activities in addition to testing
5.6 Defect Management
Incidents: the discrepancies between actual and expected outcomes
An incident must be investigated and may turn out to be a defect
Incident reports have the following objectives:
- Provide information to enable fixing defects.
- Provide a means of tracking quality
- Provide ideas for test process improvement
A defect report includes the fields:

- A title and a short summary


- Date
- Identification of the test item (version) and environment
- A description including logs, database dumps, screenshots, or recordings
- Expected and actual results
- Severity (impact)
- Priority (business importance)
- State
Chapter 6: Tool Support for Testing
6.1 Test Tool Category

Support for management of testing and testware
- Test management tools
- Requirements management tools
- Defect management tools
- Configuration management tools
- Continuous integration tools (D)

Support for static testing
- Tools that support reviews
- Static analysis tools (D)

Support for test execution and logging (test automation)
- Test execution tools
- Coverage tools
- Test harnesses (D)
- Unit test framework tools (D)

Support for performance measurement and dynamic analysis
- Performance testing tools
- Monitoring tools
- Dynamic analysis tools (D)

Support for test design and implementation
- Test design tools
- Test data preparation tools

Support for specialized testing needs
- Usability testing
- Security testing
- Portability testing

(D) marks tools more likely to be used by developers.

6.1.2 Benefits and Risks of Test Automation


Potential benefits of using tools:
- Reduction in repetitive manual work
- Greater consistency and repeatability
- More objective assessment
- Easier access to information
Potential risks of using tools
- Expectations may be unrealistic
- The time, cost and effort may be under-estimated
- Version control of test assets may be neglected
- Risk from Vendor, open source, new platform

6.1.3 Special Considerations for Test Automation tool

Data-driven scripting technique | Keyword-driven scripting technique

Data files store test inputs and expected results in a table or spreadsheet | Data files store test inputs, expected results, and keywords (actions) in a table or spreadsheet

Supports capture/playback tools | Scripts are written manually
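A minimal sketch contrasting the two techniques (the file names, column names, and keyword registry are hypothetical):

```python
import csv

# Data-driven: one fixed script; inputs and expected results come from a table.
def run_data_driven(script, table="login_data.csv"):
    with open(table) as f:
        for row in csv.DictReader(f):  # e.g. columns: user, password, expected
            script(row["user"], row["password"], row["expected"])

# Keyword-driven: the table also names the action (keyword) to perform.
KEYWORDS = {}  # keyword name -> implementing function, registered elsewhere

def run_keyword_driven(table="login_steps.csv"):
    with open(table) as f:
        for row in csv.DictReader(f):  # e.g. columns: keyword, arg1, arg2
            KEYWORDS[row["keyword"]](row["arg1"], row["arg2"])
```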


