Software Testing Unit-3


Unit III: Test Management (14 Marks)
• Test Planning: Preparing a Test Plan, Deciding a Test Approach,
Setting up Criteria for Testing, Identifying
Responsibilities, Staffing, Resource Requirements,
Test Deliverables, Testing Tasks

• Test Management : Test Infrastructure Management, Test People Management.

• Test Process : Base Lining a Test Plan, Test case Specification.

• Test Reporting : Executing Test Case, Preparing Test Summary Report.


Introduction

• Test management most commonly refers to the activity of managing the computer software testing process.
• The general goal of test management is to allow teams to plan,
develop, execute and assess all testing activities within the overall s/w
development effort.
Test Planning
• Test planning in software testing is like creating a roadmap for testing a
computer program.
• It involves making a plan that says what needs to be tested, how it will be
tested, who will do the testing, and when it will happen.
• This plan helps ensure that the testing is well-organized, and all the
important parts of the program are checked to make sure they work
correctly.
• It's like making a to-do list before you start testing to make sure nothing
gets missed, and everything goes smoothly.
Test Planning(Conti…)
• A plan is a strategic document which describes how to perform a task in an effective, efficient and optimized way.
• A test plan is a document describing the scope, approach, objectives,
resources and schedule of a s/w testing effort.
• A test plan identifies the items to be tested, items not to be tested, who will do the testing, the test approach to be followed, the pass/fail criteria, and training needs for the team.
• The goal of test planning is to take into account the important issues of testing
strategy, resource utilization, responsibilities, risk and priorities.
• Test planning issues are reflected in the overall s/w project planning.
• The output of test planning is the test plan document. Test plans are developed for each level of testing.
Test Planning(Conti…)
• Preparing a test plan in software testing is like making a detailed recipe
before cooking a meal.
• It's a step-by-step document that explains how you'll test a computer
program.

• Ingredients (Test Objectives): Start by listing what you want to achieve with your testing. What parts of the software need checking? What should work correctly? These are your "ingredients."
• Recipe (Test Strategy): Think about how you'll do the testing. Will you test
one piece at a time or everything together? This is your "recipe" for testing.
Test Planning(Conti…)
• Cooking Steps (Test Cases): Write down the specific things you'll do to test
the software, like clicking buttons, entering data, or checking results. These
are your "cooking steps."
• Chef (Testing Team): Decide who will do the testing. These are your "chefs"
who follow the recipe.
• Kitchen (Test Environment): Make sure you have the right tools and settings
for testing, like the right pots and pans in a kitchen.
• Taste Test (Quality Standards): Define what "good" looks like for your
software. What's the right taste? This helps you know when it's ready.
• Cooking Time (Test Schedule): Plan when you'll start and finish testing. Just
like cooking, you need to know when it'll be done.
• Serve (Test Deliverables): Decide what reports and documents you'll create
to show how the testing went. It's like serving the meal to others.
Test Planning(Conti…)
• Check for Problems (Risk Assessment): Think about what could go wrong
during testing and how you'll handle it. It's like preparing for accidents while
cooking.
• Leftovers (Test Closure): After testing, make a note of what you've learned
and what worked well. It's like remembering what you'd change for next
time.
• Enjoy the Meal (Approval): Finally, get approval from your team or boss to
start testing. It's like getting the green light to start cooking.
• A test plan is your "recipe" for ensuring the software works as expected. It
keeps your testing organized, efficient, and helps you serve up a reliable
software product.
Test Planning(Conti…)
• Preparing a Test Plan
• Testing any project should be driven by a plan.
• The test plan acts as the anchor for the execution, tracking and reporting of the
entire testing project and covers-
• What needs to be tested- the scope of testing, including clear identification of
what will be tested and what will not be tested.
• How the testing is going to be performed- breaking down the testing into small
and manageable tasks and identifying the strategies to be used for carrying out
the tasks.
• What resources are needed for testing- computer as well as human resources.
• The time lines by which the testing activities will be performed.
• Risks that may be faced in all of the above, with appropriate mitigation and
contingency plan.
Test Planning(Conti…)
• Scope Management: Deciding features to be tested / not tested
• Scope describes the activities included and excluded from quality
assurance/ quality control activities.
• The scope of testing is to be decided at the test planning phase and is
an important part of the master test plan.
• The scope of tests includes what items, features, procedures, functions, objects, clusters and subsystems will be tested.
Test Planning(Conti…)
• For testing, scope management involves-
• Understanding what constitutes a release of a product.
• Breaking down the release into features
• Prioritizing the features for testing
• Deciding which features will be tested and which will not be
• Gathering details to prepare for estimation of resources for testing.
• It is always good to decide the scope and priority of testing.
• During the planning stages of release, the features that constitute the
release are identified.
• The Following Factors Drive the Choice and Prioritization of Features to be
Tested-
• 1. Features that are new and critical for the release:
• The new features of a release set the expectations of the customers and must perform properly.
• These new features result in new program code and thus have a higher susceptibility and exposure to defects.

• 2. Features whose failures can be disastrous:
• Any feature whose failure produces an adverse business impact has to be high on the list of features to be tested. E.g. database recovery mechanism.
• 3. Features that are expected to be complex to test:
• Start the work on these features early and line up appropriate resources in time.
• 4. Features which are extensions of earlier features that have been defect prone:
• Old defects can creep in again. Such defect-prone features should be included ahead of more stable features for testing.
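A minimal sketch of how these prioritization factors might be combined into a numeric ranking. The features, weights, and 0-3 ratings below are illustrative assumptions, not from any standard method:

```python
# Hypothetical ranking of release features for testing, based on the four
# factors above: newness, failure impact, test complexity, defect history.

def priority_score(feature):
    # Higher score = test earlier. Weights are assumed, not prescribed.
    return (3 * feature["new"] + 3 * feature["failure_impact"]
            + 2 * feature["complexity"] + 2 * feature["defect_history"])

features = [
    {"name": "Report export",    "new": 0, "failure_impact": 1, "complexity": 1, "defect_history": 0},
    {"name": "DB recovery",      "new": 1, "failure_impact": 3, "complexity": 2, "defect_history": 1},
    {"name": "New payment flow", "new": 3, "failure_impact": 3, "complexity": 2, "defect_history": 0},
]

ranked = sorted(features, key=priority_score, reverse=True)
print([f["name"] for f in ranked])
# ['New payment flow', 'DB recovery', 'Report export']
```

In practice the ranked list feeds the scope decision: features at the bottom are candidates for "will not be tested" in this cycle.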
Deciding Test Approach
• Deciding Test Approach:
• Once the feature list is prioritized, the next step is to go into more details of what is to be tested, to enable estimation of size, effort and schedule. This includes identifying-
• 1. What type of testing would you use for testing the functionality?
• UT, IT, AT, ST, PT, Security Testing

• 2. What are the configurations or scenarios for testing the features?
• Positive Test Scenarios, Negative Test Scenarios, Boundary Test Scenarios, Concurrency Test Scenarios, Load and Performance Test Scenarios

• 3. What integration testing would you do to ensure these features work together?
• Bottom-up, Top-Down, Bidirectional etc.
Deciding Test Approach(Conti…)
• 4. What localization validations would be needed?
• Language Verification, Date and Time Formats, Currency and
Monetary Values, Phone Number Formats,

• 5. What "non-functional" tests would you need to do?
• Compatibility, Endurance, Integrity, Documentation etc.
• The test approach/strategy part of the test plan identifies the right
type of testing to effectively test a given feature or combination.
• There should also be objective criteria for measuring the success of a
test.
Setting up Criteria for Testing
• Setting up Criteria for Testing:
• The test strategies for the various features and combinations determine how these features and combinations will be tested.
• Ideally, tests must run as early as possible so that the last-minute pressure of running tests after development delays is minimized.
• The entry criteria for a test specify threshold criteria for each phase or type
of test.
• The completion/ exit criteria specify when a test cycle or a testing activity
can be complete.
• A test cycle or a test activity may not be an isolated, continuous activity that can be carried out in one go. It may be suspended at various points of time because it is not possible to proceed further.
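A small sketch of how entry and exit criteria might be expressed as executable gates. The metric names and threshold values (smoke pass, zero showstoppers, 95% pass rate) are illustrative assumptions:

```python
# Hypothetical entry/exit gates for a test cycle.

def entry_criteria_met(build):
    # Entry: the build passes smoke tests and has no open showstopper defects.
    return build["smoke_passed"] and build["open_showstoppers"] == 0

def exit_criteria_met(cycle):
    # Exit: all planned tests executed and the pass rate is at least 95%.
    all_executed = cycle["executed"] == cycle["planned"]
    pass_rate = cycle["passed"] / cycle["planned"]
    return all_executed and pass_rate >= 0.95

build = {"smoke_passed": True, "open_showstoppers": 0}
cycle = {"planned": 200, "executed": 200, "passed": 192}

print(entry_criteria_met(build))  # True: the cycle may start
print(exit_criteria_met(cycle))   # True: 192/200 = 0.96 >= 0.95
```

Suspension criteria can be modelled the same way, as a predicate checked during the run rather than before or after it.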
Setting up Criteria for Testing
• Scope management identifies what needs to be tested and test strategy
outlines how to do it.
• Identifying responsibilities, staffing and training needs addresses the "who" part.
• A testing project requires different people to play different roles.
• These are the roles of the test engineers, test leads and test manager.
• There are also role definitions along the dimensions of the module being tested or the type of testing.
Identifying Responsibilities, Staffing, and Training
Needs:
• The different role definitions should—
• 1. Ensure there is clear accountability for a given task, so that each person
knows what he or she has to do.
• 2. Clearly list the responsibilities for various functions to various people, so
everyone knows how his or her work fits into the entire project.
• 3. Complement each other, ensuring that no one steps on another's toes.
• 4. Supplement each other so that no task is left unassigned.
• Role definitions should not only address technical roles but also list the management and reporting responsibilities.
• Staffing is done based on estimation of the effort involved and the availability of time for the release.
• In case there are gaps between required and available skills, they should be addressed with appropriate training programs.
Identifying Resource Requirements:

• As a part of planning for a testing project, the project manager should provide
estimates for the various hardware and software resources required.
• Some of the following factors need to be considered.
• 1. Machine configuration(RAM, processor, disk etc) needed to run the product
under test.
• 2. Overheads required by the test automation tool, if any.
• 3. Supporting tools such as compilers, test data generators, configuration
management tools and so on.
• 4. The different configurations of the supporting s/w that must be present.
• 5. Special requirements for running machine intensive tests such as load tests and
performance tests.
• 6. Appropriate number of licenses of all the s/w.
• In addition to this there are also other requirements such as office space,
support functions like HR and so on.
• Underestimation of these resources can considerably slow down the testing effort, which can lead to a delayed product release and a demotivated testing team.
Identifying Test Deliverables
• Test Deliverables are the artifacts which are given to the stakeholders of software
project during the software development lifecycle.
• There are different test deliverables at every phase of the software development
lifecycle.
• Some test deliverables are provided before the testing phase, some are provided during the testing phase and some after the testing cycle is over.
• The different types of Test deliverables are:
⮚ Test cases Documents
⮚ Test Plan
⮚ Testing Strategy
⮚ Test Scripts
⮚ Test Data
⮚ Test Traceability Matrix
⮚ Test Results/reports
Identifying Test Deliverables(Conti…)
⮚ Test summary report
⮚ Install/config guides
⮚ Defect Reports
⮚ Release notes
• The test plan describes the overall method to be used to verify that the software
meets the product specification and the customer's needs. It includes the quality
objectives, resource needs, schedules, assignments, methods, and so forth.
• Test cases list the specific items that will be tested and describe the detailed steps
that will be followed to verify the software.
• Bug reports describe the problems found as the test cases are followed. These
could be done on paper but are often tracked in a database.
• Test tools and automation are listed and described which are used to test the
software. If the team is using automated methods to test software, the tools used,
either purchased or written in-house, must be documented
Identifying Test Deliverables(Conti…)

• Metrics, statistics, and summaries convey the progress being made as the test work progresses. They take the form of graphs, charts, and written reports.
• Milestones: milestones are the dates of completion given for various
tasks to be performed in testing. These are thoroughly tracked by the
test manager and are kept in the documents such as Gantt charts, etc.
Different types of Test Deliverables
• Following are different types of deliverables that are
generated at every phase of SDLC:
Testing Tasks: Size and Effort Estimation
• Testing Tasks: Size and Effort Estimation
• Testing tasks identify the set of tasks necessary to prepare for and perform testing activities.
• Estimation happens broadly in three phases: size, effort and schedule estimation.
• The size estimate quantifies the amount of testing that needs to be done.
• Several factors contribute to the size and effort estimate of a testing project-
• 1. Size of product under test:
• This determines the amount of testing that needs to be done.
• Lines of code represent a size estimate only for the coding phase.
• Function points provide a representation of application size, independent of programming language. (inputs, outputs, interfaces, external data files)
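The function point idea can be sketched numerically. The sketch below uses the classic five component types with commonly quoted average-complexity weights; the component counts themselves are made-up example values:

```python
# Hypothetical unadjusted function point count for a product under test.
# Weights are the usual textbook average-complexity values for FPA.

WEIGHTS = {"inputs": 4, "outputs": 5, "inquiries": 4,
           "internal_files": 10, "external_interfaces": 7}

def unadjusted_fp(counts):
    # Sum weight * count over the five component types.
    return sum(WEIGHTS[k] * counts.get(k, 0) for k in WEIGHTS)

counts = {"inputs": 10, "outputs": 6, "inquiries": 4,
          "internal_files": 3, "external_interfaces": 2}

print(unadjusted_fp(counts))  # 10*4 + 6*5 + 4*4 + 3*10 + 2*7 = 130
```

The resulting number is language-independent, which is why function points suit test-effort estimation better than lines of code.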
Testing Tasks: Size and Effort Estimation(Conti…)
• 2. Extent of automation required:
• When automation is involved, the size of the testing work increases.
• Automation requires basic test case design, as well as scripting the test cases in the programming language of the automation tool.

• 3. Number of platforms and interoperability environments to be tested:
• If the product is to be tested under several different platforms or several different configurations, then the size of the testing task increases.
Testing Tasks: Size and Effort Estimation(Conti…)
• 4. Productivity data:
• It refers to the speed at which the various activities of testing can be carried
out.

• 5. Reuse opportunity:
• If the test architecture has been designed keeping reuse in mind, then the
effort required to cover a given size of testing can come down.

• 6. Robustness of processes:
• Reuse is a specific example of process maturity of an organization.
• Existence of well defined processes will go a long way in reducing the effort
involved in any activity.
Test Management

• It is a method of organizing test assets and artifacts such as test requirements, test cases and test results to enable accessibility and reuse.
• Choice of Standards:
• Standards comprise an important part of planning in any organization.
• Standards are of two types- External and Internal standards.
• External standards are standards that a product should comply with; they are externally visible and are usually set by an external group. (tests supplied by an external group, acceptance tests supplied by the customer)
• Internal standards are those that standardize the processes and methods of working within the organization.
Test Management(Conti…)

• Some of the internal standards include-
• 1. Naming and storage conventions for test artifacts
• 2. Document standards
• 3. Test coding standards and
• 4. Test reporting standards
Test Management(Conti…)
• 1. Naming and storage conventions for test artifacts
• Every test artifact (test specification, test case, test results and so on) has to be named appropriately and meaningfully.
• It enables
• a) Easy identification of the product functionality.
• b) Reverse mapping to identify the functionality corresponding to a given
set of tests.
• E.g. modules may be named M01, M02; file types can be .sh, .sql.
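A naming convention like the M01/M02 example can be enforced mechanically. The exact pattern below (module prefix, test number, extension) is an assumption for illustration, not a standard:

```python
import re

# Hypothetical validator for test artifact file names of the assumed form
# M<module-number>_T<test-number>.<sh|sql>, e.g. M01_T001.sh

NAME_PATTERN = re.compile(r"^M\d{2}_T\d{3}\.(sh|sql)$", re.IGNORECASE)

def is_valid_artifact_name(name):
    # True if the file name follows the internal naming convention.
    return bool(NAME_PATTERN.match(name))

print(is_valid_artifact_name("M01_T001.sh"))    # True: follows the convention
print(is_valid_artifact_name("login_test.py"))  # False: does not
```

Running such a check in a pre-commit hook is one way an internal standard becomes self-enforcing rather than advisory.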
Test Management(Conti…)
• 2. Documentation standards:
• a) Appropriate header-level comments at the beginning of a file that outline the functions to be served by the test.
• b) Sufficient inline comments, spread throughout the file.
• c) Up-to-date change history information, recording all the changes made to the test file.

• 3. Test coding standards:
• a) Enforce right type of initialization
• b) Stipulate ways of naming variables.
• c) Encourage reusability of test artifacts
• d) Provide standard interfaces to external entities like operating system, hardware
and so on.
Test Management(Conti…)

• 4. Test reporting standards:
• All the stakeholders must get a consistent and timely view of the progress of tests.
• These provide guidelines on the level of detail that should be present in the test reports, their standard formats and contents.
Test Management(Conti…)
• 5.External Standards:
• These are the standards made by an entity external to an organization.
These standards are standards that a product should comply with, are
externally visible and are usually stipulated by external parties.
• The three types of external standards are:
• 1. Customer standard: refer to something defined by the customer as per
his/her business requirement for the given product.
• 2. National Standards: refer to something defined by the regulatory entities of the country where the supplier/customer resides.
• 3. International Standard: are defined at international level and these are
applicable to all customers across the globe.
Testing Infrastructure Management

• It is made up of three essential elements-
• 1. A test case database (TCDB)
• 2. A defect repository
• 3. Software Configuration management repository and tool
Testing Infrastructure Management (Conti…)
• A test case database captures all relevant information about test cases in an
organization.
• Some of the entities and attributes in each of the entities in such a TCDB are—
• A defect repository captures all the relevant details of defects reported for a
product.
• The information that a defect repository includes is given in the table—
Testing Infrastructure Management (Conti…)
• A Software Configuration Management (SCM) repository also known as CM
repository keeps track of change control and version control of all the files/
entities that make up a software product.
• Change control ensures that —
• 1. Changes to test files are made in a controlled fashion and only with
proper approvals.
• 2. Changes made by one test engineer are not accidentally lost or overwritten by other changes.
• 3. Each change produces a distinct version of the file that is recreatable at
any point of time.
• 4. At any point of time, everyone gets access to only the most recent version
of test files.
Testing Infrastructure Management (Conti…)

• The TCDB, defect repository and SCM repository should complement each other and work together in an integrated fashion as shown in the figure.
• E.g. the defect repository links the defects, fixes and tests. The files for all of these will be in the SCM repository.
• The metadata about the modified test files will be in the TCDB.
• Starting with a given defect, one can trace all the test cases that test the defect (from the TCDB) and then find the corresponding test case files and source files from the SCM repository.
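The TCDB / defect repository / SCM linkage can be sketched as three small lookup structures. All record IDs, fields, and versions below are illustrative assumptions:

```python
# Hypothetical integrated view: trace a defect to its test cases (TCDB)
# and to the versioned script files under configuration management (SCM).

tcdb = {  # test case id -> metadata, including the script file name
    "TC001": {"module": "M01", "file": "M01_T001.sh"},
    "TC002": {"module": "M01", "file": "M01_T002.sh"},
}
defects = {  # defect id -> severity and linked test cases
    "D042": {"severity": "high", "test_cases": ["TC001", "TC002"]},
}
scm = {  # file name -> ordered list of versions under change control
    "M01_T001.sh": ["1.0", "1.1"],
    "M01_T002.sh": ["1.0"],
}

def trace_defect(defect_id):
    """Return (test case id, script file, latest version) for each linked test."""
    result = []
    for tc in defects[defect_id]["test_cases"]:
        script = tcdb[tc]["file"]
        result.append((tc, script, scm[script][-1]))
    return result

print(trace_defect("D042"))
```

The point of the integration is exactly this traversal: any one repository alone cannot answer "which file versions verify this defect's fix?"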
Testing People Management

• People management is an integral part of any project management


and test planning.
• People management also requires the ability to hire, motivate, and
retain the right people.
• These skills are seldom(rarely) formally taught.
• Testing projects present several additional challenges.
• We believe that the success of a testing organization depends vitally
on judicious people management skills.
Testing People Management
❑Test Lead Responsibilities and Activities:
• Identify how the test teams are formed and aligned within the organization.
• Decide the roadmap for the project
• Identify the scope of testing using SRS documents.
• Discuss test plan, review and approve by management/ development team.
• Identify required metrics
• Calculate size of project and estimate efforts and corresponding plan.
• Identify skill gaps, balance resources, and identify the need for training and education.
• Identify the tools for test reporting, test management, and test automation.
• Create healthy environment for all resources to gain maximum throughput.
Testing People Management(Conti…)
❑Test Team Responsibilities and Activities:
• Initiate the test plan for test case design
• Conduct review meetings
• Monitor test progress; check resource balancing and allocation.
• Check for delays in the schedule; discuss and resolve risks, if any.
Test Process : Base Lining a Test Plan
• The format and content of a software test plan vary depending on the
processes, standards, and test management tools being implemented.
• Nevertheless, the following format, which is based on IEEE standard
for software test documentation, provides a summary of what a test
plan can/should contain.
Test Process : Base Lining a Test Plan(Conti…)

❑Test plan template:
• Test Plan Identifier: A Test Plan Identifier is a unique identifier or code
assigned to a test plan document or test plan within a testing project or
quality assurance process. It helps in tracking and managing test plans,
especially in larger and more complex projects.
• Example - ACME-TP001-V2.0
• ACME: This is the prefix that represents the organization or project name.
In this case, it's "ACME."
• TP001: This is the sequential number assigned to the test plan; 001 indicates the first test plan for the project or organization, 002 the second, and so on.
• V2.0: This part indicates the version or revision of the test plan. It shows
that this is the second version, denoted as "V2.0."
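A small sketch that parses identifiers of the ACME-TP001-V2.0 form shown above into their three parts. The exact pattern is an assumption based only on this example:

```python
import re

# Hypothetical parser for test plan identifiers like "ACME-TP001-V2.0":
# <ORG>-TP<3-digit sequence>-V<major.minor version>

def parse_plan_id(plan_id):
    m = re.fullmatch(r"([A-Z]+)-TP(\d{3})-V(\d+\.\d+)", plan_id)
    if not m:
        raise ValueError(f"not a valid test plan identifier: {plan_id}")
    org, seq, version = m.groups()
    return {"org": org, "sequence": int(seq), "version": version}

print(parse_plan_id("ACME-TP001-V2.0"))
# {'org': 'ACME', 'sequence': 1, 'version': '2.0'}
```

A test management tool can use such parsing to sort plans by sequence and to reject malformed identifiers at creation time.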
Test Process : Base Lining a Test Plan (Conti…)
❑Introduction:
• Provide an overview of the test plan.
• Specify the goals/objectives.
• Specify any constraints.
• Example - The purpose of this document is to create an application test plan for the EDIT option of Notepad. The purpose of testing this program is to check the correct operation of its functionality and its ease of use.

❑ References: List the related documents, with links to them if available, including
the following:
• 1. Project Plan
• 2. Configuration Management Plan
Test Process : Base Lining a Test Plan (Conti…)

❑Test Items: List the test items (software/products) and their versions
▪ Example - Working with the document edit functions (select, cut, copy etc.)

❑Features to be Tested:
• 1. List the features of the software/product to be tested.
• 2. Provide references to the Requirements and/or Design specifications of the features to
be tested
▪ Example - Select all text , Cut some text, Paste the text, delete, copy, rename

❑ Features Not to Be Tested:
• 1. List the features of the software/product which will not be tested.
• 2. Specify the reasons these features won’t be tested.
▪ Example - Working with Help, Time and date option
Test Process : Base Lining a Test Plan (Conti…)
❑Approach:
• 1. Mention the overall approach to testing.
• 2. Specify the testing levels [if it’s a Master Test Plan], the testing types, and
the testing methods [Manual/Automated; White Box/Black Box/Gray Box]
• It provides an overview of how testing will be conducted, the scope of
testing, and the testing methodologies and techniques that will be used

❑Item Pass/Fail Criteria:
• 1. Specify the criteria that will be used to determine whether each test
item (software/product) has passed or failed testing.
• Pass Criteria: A test case is considered to have passed if all the steps within
it are executed successfully, and the actual outcomes match the expected
outcomes as specified in the test case.
Test Process : Base Lining a Test Plan (Conti…)
• Fail Criteria: A test case is considered to have failed if any of the following
occur:
• Any step within the test case fails to execute successfully.
• The actual outcomes differ from the expected outcomes.
• The test case encounters a critical defect, causing the entire test case to
be marked as failed.

❑Suspension Criteria and Resumption Requirements:
• 1. Specify criteria to be used to suspend the testing activity.
• 2. Specify testing activities which must be redone when testing is resumed.
• Testing may be suspended under the following conditions:
• Critical Defects, Infrastructure Failures, Resource Constraints
Test Process : Base Lining a Test Plan (Conti…)
❑Test Deliverables: List test deliverables, and links to them if available, including
the following:
• – Test Plan (this document itself)
• – Test Cases
• – Test Scripts
• – Defect/Enhancement Logs
• – Test Reports

❑Test Environment:
• 1. Specify the properties of test environment: hardware, software, network etc.
• 2. List any testing or related tools.
• Example – Notepad, Computer, Windows OS
Test Process : Base Lining a Test Plan(Conti…)
❑Estimate: Provide a summary of test estimates (cost or effort) and/or
provide a link to the detailed estimation.

❑Schedule: Provide a summary of the schedule, specifying key test milestones, and/or provide a link to the detailed schedule.
• Example - The deadline for completion of all works and delivery of the
project is 06/12/2019 by 5.00pm

❑Staffing and Training Needs:
• 1. Specify staffing needs by role and required skills.
• 2. Identify training that is necessary to provide those skills, if not already
acquired.
Test Process : Base Lining a Test Plan(Conti…)

• To perform the tasks, you need to have the following knowledge and skills:
⮚ knowledge and practical application of Notepad;
⮚ knowledge and the ability to apply in practice the basic techniques of test design;
⮚ knowledge of various types of testing, including functional and non-functional.
❑Responsibilities: List the responsibilities of each
team/role/individual.
Test Process : Base Lining a Test Plan(Conti…)

❑Risks:
• 1. List the risks that have been identified.
• 2. Specify the mitigation plan and the contingency plan for each risk.
• Example –
• Possible risks during testing:
⮚ Insufficient human resources for testing the application in deadlines.
⮚ Changing the requirements for the product.
Test Process : Base Lining a Test Plan(Conti…)
❑Assumptions and Dependencies:
• 1. List the assumptions that have been made during the preparation of this
plan.
• 2. List the dependencies.

❑Approvals:
• 1. Specify the names and roles of all persons who must approve the plan.
• 2. Provide space for signatures and dates. (If the document is to be printed.)
• Team Lead
• Test engineer 1
• Test engineer 2
• Test engineer 3
• Test engineer 4
Test case Specification.
• The test case specifications should be developed from the test plan
and are the second phase of the test development life cycle.
• The test specification should explain "how" to implement the test
cases described in the test plan.
• Test case specifications are useful as they list the specification details of the items.
• Test Specification Items: each test specification should contain the following items:
• 1. Case No.: The test case number should be a three-digit identifier of the form c.s.t, where c is the chapter number, s is the section number, and t is the test case number.
• Example - ACME-TCS-001
▪ TC001

• 2. Title: is the title of the test.
• Example - Test Case Specification for Login Functionality

• 3. Program: typically refers to the specific software program, application, or system component that is being
tested.
• Example - "ACME E-commerce Website"

• 4. Author: is the person who wrote the test specification.

• 5. Date: is the date of the last revision to the test case.

• 6. Background: In a Test Case Specification document, the "Background" section provides context and
background information about the system, feature, or scenario being tested
Test case Specification(Conti…)

• 7. Expected Error(s): Describes any errors expected.
• 8. Reference(s): Lists reference documentation used to design the specification.
• 9. Data (Tx Data, Predicted Rx Data): In a Test Case Specification, you can include a section that specifies the test data to be used, as well as the expected or predicted results.
Test case Specification(Conti…)

• Test Case Specification for Login Functionality
• Author: John Smith
• Program: ACME E-commerce Website
• Test Case Numbers:
• TC001 - Verify Guest User Login:
• Test Objective: To verify that a guest user can log in successfully.
• Preconditions: User is not logged in.
• TC002 - Verify Registered User Login:
• Test Objective: To verify that a registered user can log in successfully.
• Preconditions: User is registered and not logged in.
• Tx Data (Input Data): Provide the data to be entered into the registration form.
• Username: john_doe
• Email: john@example.com
• Password: P@ssw0rd
• Confirm Password: P@ssw0rd
• Address: 123 Main St, Cityville
• Postal Code: 12345
• Phone Number: 555-555-5555
• Predicted Rx Data (Expected Results): Specify the expected results or outcomes.
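The Tx Data above can be made executable. The sketch below feeds the specification's input data through a hypothetical validation routine; the validation rules (username format, password match, 5-digit postal code) are illustrative assumptions, not from the specification:

```python
import re

# Tx Data taken from the specification above.
tx_data = {
    "username": "john_doe",
    "email": "john@example.com",
    "password": "P@ssw0rd",
    "confirm_password": "P@ssw0rd",
    "postal_code": "12345",
}

def validate_registration(data):
    """Return a list of error messages; an empty list means success."""
    errors = []
    if not re.fullmatch(r"[a-z0-9_]{3,20}", data["username"]):
        errors.append("invalid username")
    if "@" not in data["email"]:
        errors.append("invalid email")
    if data["password"] != data["confirm_password"]:
        errors.append("passwords do not match")
    if not re.fullmatch(r"\d{5}", data["postal_code"]):
        errors.append("invalid postal code")
    return errors

print(validate_registration(tx_data))  # [] -> matches the predicted Rx data: success
```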
Test Reporting
• Test reporting is a means of achieving communication through the
testing cycle.
• There are 3 types of test reporting.
• 1. Test incident report:
• A test incident report is communication that happens through the testing cycle as and when defects are encountered.
• A test incident report is an entry made in the defect repository; each defect has a unique ID to identify the incident.
• High-impact test incidents are highlighted in the test summary report.
Test Reporting(Conti…)
• 2. Test cycle report:
• A test cycle entails planning and running certain tests in cycles, each cycle using a different build of the product. As the product progresses through the various cycles, it is expected to stabilize.
• A test cycle report gives
• 1. A summary of the activities carried out during that cycle.
• 2. Defects that were uncovered during that cycle, based on severity and impact.
• 3. Progress from the previous cycle to the current cycle in terms of defects fixed.
• 4. Outstanding defects that are yet to be fixed in this cycle.
• 5. Any variation observed in effort or schedule.
Test Reporting(Conti…)
• Test Summary Report:
• The final step in a test cycle is to recommend the suitability of a product for
release. A report that summarizes the result of a test cycle is the test
summary report.
• There are two types of test summary report:
• 1. Phase wise test summary, which is produced at the end of every phase
• 2. Final test summary report.
• A summary report should present
• 1. Test summary report identifier
• 2. Description: - Identify the test items being reported in this report, with test IDs.
• 3. Variances: - Mention any deviations from test plans and test procedures, if any.
Test Reporting(Conti…)

• 4. Summary of results: - All the results are mentioned here, with the resolved incidents and their solutions.
• 5. Comprehensive assessment and recommendation for release: - should include a fit-for-release assessment and a release recommendation.
Preparing Test Summary Report:
• At the completion of a test cycle, a test summary report is produced.
• This report gives insights to the senior management about the fitness of the product for release.
• The test summary report is prepared after testing is completed.
• The summary report template provided is a useful guideline for what goes into such a report.
• The test summary report template is as given below---
• Test summary report identifier
• Summary
• Variances
• Comprehensive assessment
• Evaluation
• Summary of activities
• Summary of results
• Approval
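A minimal sketch of filling the template above from raw cycle results. The field names mirror the template; the result records, identifier, and default texts are illustrative assumptions:

```python
# Hypothetical test summary report generator for one cycle's results.

results = [
    {"id": "TC001", "status": "pass"},
    {"id": "TC002", "status": "fail"},
    {"id": "TC003", "status": "pass"},
]

def summarize(results, variances="None", recommendation="Fit for release"):
    passed = sum(1 for r in results if r["status"] == "pass")
    return {
        "identifier": "TSR-001",  # assumed report identifier
        "summary": f"{len(results)} tests run, {passed} passed",
        "variances": variances,
        "comprehensive_assessment": recommendation,
    }

report = summarize(results)
print(report["summary"])  # 3 tests run, 2 passed
```

Generating the report from the same records used during execution keeps the summary consistent with the defect repository, rather than being retyped by hand.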
Executing Test Cases
• Executing Test Cases:
• The prepared test cases have to be executed at the appropriate times during a project.
• For example, smoke testing is done on a daily basis, while system test cases are run during system testing.
• • As the test cases are executed during a test cycle, the defect repository is
updated with-
• 1. Defects from the earlier test cycles that are fixed in the current build and
• 2. New defects that get uncovered in the current run of the tests.
Executing Test Cases(Conti…)
• 3. The defect repository should be the primary vehicle of communication
between the test team and development team. All stakeholders should be
referring to the defect repository for knowing the current status of all the
defects.
• This communication can be augmented by other means like emails,
conference calls and so on.
• 4. A test may have to be suspended during its run because of showstopper defects; the test should then wait until the resumption criteria are satisfied.
• Likewise a test should be run only when entry criteria are satisfied and
should be considered complete only when exit criteria are satisfied.
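The two kinds of defect repository updates described above can be sketched together. The defect IDs, statuses, and summaries are illustrative assumptions:

```python
# Hypothetical defect repository update after a test cycle: close defects
# whose fixes were verified in the current build, and log new defects found.

defect_repository = {
    "D010": {"status": "open", "summary": "crash on save"},
    "D011": {"status": "open", "summary": "wrong total"},
}

def update_after_cycle(repo, fixed_and_verified, newly_found):
    for defect_id in fixed_and_verified:
        repo[defect_id]["status"] = "closed"  # fix confirmed by retest
    for defect_id, summary in newly_found.items():
        repo[defect_id] = {"status": "open", "summary": summary}

update_after_cycle(defect_repository,
                   fixed_and_verified=["D010"],
                   newly_found={"D012": "layout broken on resize"})

print(defect_repository["D010"]["status"])  # closed
print(sorted(defect_repository))            # ['D010', 'D011', 'D012']
```

Because both teams read and write the same repository, its state after each cycle is the shared source of truth that emails and calls only supplement.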
End of Chapter 3
