Manual Testing Notes
1. Introduction
2. Principle of Testing
7. Project Management
8. Quality Management
9. Risk Management
❖ Software Testing
Introduction
1. Product development
2. Project development
Product development is done assuming a wide range of customers and their needs.
This type of development involves customers from many domains, and requirements
are collected from many different environments.
Testing is a necessary stage in the software life cycle: it gives the programmer and
user some sense of correctness, though never "proof of correctness." With effective
testing techniques, software is more easily debugged, less likely to "break," more
"correct," and, in summary, better.
Developers often convince themselves that the overlooked errors can be rectified in
subsequent releases.
The definition of testing is not well understood. People often use a totally incorrect
definition of the word testing, and this is a primary cause of poor program
testing.
Testing the product means adding value to it by raising the quality or reliability of
the product. Raising the reliability of the product means finding and removing errors.
Hence one should not test a product to show that it works; rather, one should start
with the assumption that the program contains errors and then test the program to
find as many of the errors as possible.
Definitions of Testing:
Why Software Testing?
Software testing helps to deliver quality software products that satisfy users'
requirements, needs and expectations. If done poorly:
➢ defects are found during operation
➢ maintenance costs rise and users are dissatisfied
➢ the mission may fail
➢ operational performance and reliability suffer
In the fall of 1994, the Disney company released its first multimedia CD-ROM game for
children, The Lion King Animated Storybook. This was Disney's first venture into the
market and it was heavily promoted and advertised. Sales were huge. It was "the
game to buy" for children that holiday season. What happened, however, was a huge
debacle. On December 26, the day after Christmas, Disney's customer support
phones began to ring, and ring, and ring. Soon the phone support technicians were
swamped with calls from angry parents with crying children who couldn't get the
software to work. Numerous stories appeared in newspapers and on TV news. The
problem, it was later found, was due to software testing not having been performed
for all conditions.
Software Bug: A Formal Definition
Calling any and all software problems bugs may sound simple enough, but doing so
hasn't really addressed the issue. To keep from running in circles with definitions,
there needs to be a definitive description of what a bug is.
A software bug occurs when one or more of the following five rules are true:
From the above examples you have seen how nasty bugs can be, you know what the
definition of a bug is, and you can imagine how costly they can be. So the main goal
of a tester is:
As a software tester you shouldn’t be content at just finding bugs, you should think
about how to find them sooner in the development process, thus making them
cheaper to fix.
“The goal of a Software Tester is to find bugs, and find them as early as
possible.”
More completely: “The goal of a Software Tester is to find bugs, find them as early
as possible, and make sure they get fixed.”
2
Principle of Testing
Test cases must be written for invalid and unexpected, as well as for valid and expected
input conditions. A necessary part of a test case is a definition of the expected output or
result. A good test case is one that has high probability of detecting an as-yet
undiscovered error.
The probability of locating more errors in any one module is directly proportional to
the number of errors already found in that module.
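As a hedged sketch of these principles (the validator and its accepted range are invented for illustration), a good set of test cases pairs each input, valid or not, with a defined expected result:

```python
def is_valid_percentage(value):
    """Hypothetical function under test: accepts whole-number percentages 0-100."""
    return isinstance(value, int) and 0 <= value <= 100

# Each test case defines the expected output, and invalid and unexpected
# inputs are exercised alongside valid, expected ones.
test_cases = [
    (50, True),     # valid, expected input
    (0, True),      # valid boundary
    (101, False),   # invalid: out of range
    (-5, False),    # invalid: negative
    ("50", False),  # unexpected: wrong type
]

for value, expected in test_cases:
    actual = is_valid_percentage(value)
    assert actual == expected, f"{value!r}: expected {expected}, got {actual}"
print("all cases passed")
```

Note that most of the cases here deliberately probe invalid and unexpected conditions; a suite that only confirms the "happy path" has a low probability of detecting an as-yet undiscovered error.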
Best Testing Practices to be followed during testing
3
Let us look at the traditional software development life cycle versus the presently,
most commonly used life cycle.
Fig A (Traditional):   Requirements → Design → Development → Testing →
                       Implementation → Maintenance

Fig B (Recommended):   Requirements → Design → Development → Implementation →
                       Maintenance, with TESTING carried out in every phase
In Fig A above, the Testing phase comes after the development or coding is
complete, before the product is launched and goes into the Maintenance phase. This
model has some disadvantages: the cost of fixing errors is high because errors are
not found until coding is complete. If there is an error in the Requirements phase,
then all subsequent phases must be changed, so the total cost becomes very
high.
The Fig B shows the recommended Test Process involves testing in every phase of
the life cycle. During the Requirements phase, the emphasis is upon validation to
determine that the defined requirements meet the needs of the organization. During
Design and Development phases, the emphasis is on verification to ensure that the
design and program accomplish the defined requirements. During the Test and
Installation phases, the emphasis is on inspection to determine that the implemented
system meets the system specification. During the maintenance phases, the system
will be re-tested to determine that the changes work and that the unchanged portion
continues to work.
Design phase
High-level design gives the overall system design in terms of functional
architecture and database design. This is very useful for the developers to
understand the flow of the system. In this phase the design team, the review team
(testers) and the customers play a major role. The entry criteria for this phase are
the requirements document, that is, the SRS, and the exit criteria are the HLD,
project standards, the functional design documents, and the database design document.
During the detailed design phase, the view of the application developed during the
high-level design is broken down into modules and programs. Logic design is done for
every program and then documented as program specifications. For every
program, a unit test plan is created.
The entry criteria for this will be the HLD document, and the exit criteria will be the
program specifications and unit test plans (LLD).
Development Phase
This is the phase where the actual coding starts. After the preparation of the HLD
and LLD, the developers know what their roles are, and they develop the project
according to the specifications. This stage produces the source code, executables, and
database. The output of this phase is the subject of subsequent testing and
validation.
The inputs for this phase are the physical database design document, project
standards, program specifications, unit test plans, program skeletons, and utility
tools. The outputs will be test data, source code, executables, and code reviews.
Testing phase
This phase is intended to find defects that can be exposed only by testing the entire
system. This can be done by static testing or dynamic testing. Static testing means
testing the product without executing it; we do this by examining it and conducting
reviews. Dynamic testing is what you would normally think of as testing: we test
the executing parts of the project.
A series of different tests are done to verify that all system elements have been
properly integrated and the system performs all its functions.
Note that system test planning can occur before coding is completed; indeed, it
is often done in parallel with coding. The input for this phase is the requirements
specification document, and the outputs are the system test plan and the test results.
Implementation phase or the Acceptance phase
Maintenance phase
This phase is for all modifications needed when the system does not meet the
customer's requirements, or for anything to be appended to the present system. All
types of corrections for the project or product take place in this phase. The cost of
risk is very high in this phase. This is the last phase of the software development life
cycle. The input to it is the project to be corrected, and the output is the modified
version of the project.
4
The process used to create a software product from its initial conception to its public
release is known as the software development lifecycle model.
There are many different methods that can be used for developing software, and no
model is necessarily the best for a particular project. There are four frequently used
models:
The Big-Bang Model is the one in which a huge amount of matter (people and
money) is put together, a lot of energy is expended – often violently – and out
comes the perfect software product, or it doesn't.
The beauty of this model is that it's simple. There is little planning, scheduling, or
formal development process. All the effort is spent developing the software and
writing the code. It's an ideal process if the product requirements aren't well
understood and the final release date is flexible. It's also important to have flexible
customers, because they won't know what they're getting until the very end.
Waterfall Model
A project using waterfall model moves down a series of steps starting from an initial
idea to a final product. At the end of each step, the project team holds a review to
determine if they're ready to move to the next step. If the project isn't ready to
progress, it stays at that level until it's ready. Each phase requires well-defined
input information, utilizes well-defined processes, and results in well-defined outputs.
Resources are required to complete the process in each phase, and each phase is
accomplished through the application of explicit methods, tools and techniques.
The Waterfall model is also called the Phased model because of the sequential move
from one phase to another, the implication being that systems cascade from one level
to the next in smooth progression. It has the following seven phases of development:
Requirement phase
Analysis phase
Design phase
Development phase
Testing phase
Implementation phase
Maintenance phase
Prototype model
The Prototyping model, also known as the Evolutionary model, came into the SDLC
because of certain failures in first versions of application software. A failure in the
first version of an application inevitably leads to the need to redo it. To avoid such
failures, the concept of prototyping is used. The basic idea of prototyping is that
instead of fixing requirements before design and coding can begin, a prototype is
built to understand the requirements. The prototype is built using known requirements.
By viewing or using the prototype, the user can actually feel how the system will
work.
Prototyping Process
• The developer and the user work together to define the specifications of the
critical parts of the system.
• The developer constructs a working model of the system.
• The resulting prototype is a partial representation of the system.
• The prototype is demonstrated to the user.
• The user identifies problems and redefines the requirements.
• The designer uses the validated requirements as a basis for designing the
actual or production software
Spiral model
The traditional software process models don't deal with the risks that may be faced
during project development. One of the major causes of project failure in the past
has been negligence of project risks; because of this, nobody was prepared when
something unforeseen happened. Barry Boehm recognized this and tried to
incorporate project risk as a factor in a life cycle model. The result is the Spiral
model, which was first presented in 1986. The new model aims at incorporating the
strengths and avoiding the weaknesses of the other models by shifting the management
emphasis to risk evaluation and resolution.
Each phase in the spiral model is split into four sectors of major activities.
Objective setting:
This activity involves specifying the project and process objectives in terms of their
functionality and performance.
Risk analysis:
Engineering:
Customer evaluation:
During this phase, the customer evaluates the product for any errors and
modifications.
5
6
Verification and validation are often used interchangeably but have different
definitions. These differences are important to software testing.
Types of Reviews
• In-process Reviews:
They look at the product during a specific time period of the life cycle,
such as during the design activity. They are usually limited to a
segment of a project, with the goal of identifying defects as work
progresses, rather than at the close of a phase or even later, when
they are more costly to correct.
• Post-implementation Reviews:
They look at the overall process after release and identify any opportunities for
process improvements.
Classes of Reviews
7
Project Management
• Project Planning
• Project Scheduling
• Iterative Code/Test/Release Phases
• Production Phase
• Post Mortem
Project planning
Project scheduling
This activity involves splitting the project into tasks and estimating the time and
resources required to complete each task. Tasks are organized to run concurrently to
make optimal use of the workforce, and task dependencies are minimized to avoid
delays caused by one task waiting for another to complete. The Project Manager has
to take into consideration various aspects such as scheduling and estimating
manpower resources, so that the cost of developing a solution stays within limits.
The Project Manager also has to allow for contingency in planning.
Iterative Code/Test/Release Phases
After the planning and design phases, the client and the development team have to
agree on the feature set and the timeframe in which the product will be delivered.
This includes iterative releases of the product, so as to let the client see fully
implemented functionality early and to allow the developers to discover performance
and architectural issues early in development. Each iterative release is treated as if
the product were going to production. Full testing and user acceptance are performed
for each iterative release. Experience shows that iterations should be spaced at least
2-3 months apart; if iterations are closer than that, more time is spent on
convergence and the project timeframe expands. During this phase, code reviews
must be done weekly to ensure that the developers are delivering to specification,
and all source code is put under source control. Also, full installation routines are to
be used for each iterative release, as would be done in production.
Deliverables
• Triage
• Weekly Status with Project Plan and Budget Analysis
• Risk Assessment
• System Documentation
• User Documentation (if needed)
• Test Signoff for each iteration
• Customer Signoff for each iteration
Production Phase
Once all iterations are complete, the final product is presented to the client for a final
signoff. Since the client has been involved in all iterations, this phase should go very
smoothly.
Deliverables
Post Mortem
The post mortem phase allows the team to step back and review the things that went
well and the things that need improvement. Post mortem reviews cover processes that
need adjustment, highlight the most effective processes, and provide action items that
will improve future projects.
To conduct a post mortem review, announce the meeting at least a week in advance
so that everyone has time to reflect on the project issues they faced. Everyone has
to be asked to come to the meeting with the following:
During the meeting, collect the information listed above. As each person offers their
input, categorize it so that all comments are collected. This lets one see how many
people had the same observations during the project. At the end of the observation
review, there will be a list of the items that were mentioned most often. Peruse the
list, allowing the team to prioritize the importance of each item; this draws a
distinction around the most important items. Finally, make a list of action items that
will be used to improve the process, and publish the results. When the next project
begins, everyone on the team should review the Post Mortem Report from the prior
release so as to improve the next release.
8
Quality Management
The project quality management knowledge area comprises the set of processes
that ensure the result of a project meets the needs for which the project was
executed. Processes such as quality planning, assurance, and control are included in
this area. Each process has a set of inputs and a set of outputs. Each process also has
a set of tools and techniques that are used to turn the inputs into the outputs.
Definition of Quality:
• Fitness for use. (Is the product or service capable of being used?)
• Fitness for purpose. (Does the product or service meet its
intended purpose?)
• Customer satisfaction. (Does the product or service meet the
customer's expectations?)
Quality Management Processes
Quality Planning:
The process of identifying which quality standards is relevant to the project and
determining how to satisfy them.
Quality Assurance
Quality Control
Quality Policy
The overall quality intentions and direction of an organization with regard to quality,
as formally expressed by top management.
Total Quality Management (TQM)
Quality Concepts
• Zero Defects
• The Customer is the Next Person in the Process
• Do the Right Thing Right the First Time (DTRTRTFT)
• Continuous Improvement Process (CIP) (from the Japanese word Kaizen)
• Pareto Chart
1. Ranks defects in order of frequency of occurrence to depict
100% of the defects. (Displayed as a histogram)
2. Defects with most frequent occurrence should be targeted for
corrective action.
3. 80-20 rule: 80% of problems are found in 20% of the work.
4. Does not account for severity of the defects
• Histograms
1. Shows frequency of occurrence of items within a range of
activity.
2. Can be used to organize data collected for measurements done
on a product or process.
• Scatter diagrams
1. Used to determine the relationship between two or more pieces
of corresponding data.
2. The data are plotted on an "X-Y" chart to determine correlation
(highly positive, positive, no correlation, negative, and highly
negative)
1. Graphs
2. Check sheets (tic sheets) and check lists
3. Flowcharts
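The Pareto ranking described in the Quality Concepts list above can be sketched in code; the defect categories and counts here are invented for illustration:

```python
# Hypothetical defect counts per category, as might be collected during testing.
defects = {
    "UI layout": 12,
    "Validation": 45,
    "Crash": 8,
    "Performance": 5,
    "Data corruption": 30,
}

# Rank categories by frequency of occurrence (most frequent first), then
# compute the cumulative percentage each category contributes toward 100%
# of the defects -- the most frequent ones are targets for corrective action.
total = sum(defects.values())
ranked = sorted(defects.items(), key=lambda kv: kv[1], reverse=True)

cumulative = 0
for category, count in ranked:
    cumulative += count
    print(f"{category:16} {count:3}  cumulative {100 * cumulative / total:5.1f}%")
```

With these made-up numbers, two of the five categories account for 75% of the defects, which is the 80-20 pattern the Pareto chart is meant to expose. Note, as the list says, that frequency ranking does not account for defect severity.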
9
Risk Management
Risk management must be an integral part of any project, since everything does not
always happen as planned. Project risk management contains the processes for
identifying, analyzing, and responding to project risk. Each process has a set of inputs
and a set of outputs. Each process also has a set of tools and techniques that are
used to turn the inputs into the outputs.
Risk Management Planning
Used to decide how to approach and plan the risk management activities for a
project.
• Input includes: The project charter, risk management policies, and the WBS all
serve as input to this process
• Methods used: Many planning meetings will be held in order to generate the
risk management plan
• Output includes: The major output is the risk management plan, which does
not include the responses to specific risks. However, it does include the
methodology to be used, budgeting, timing, and other information
Risk Identification
Determining which risks might affect the project and documenting their
characteristics
• Input includes: The risk management plan is used as input to this process
• Methods used: Documentation reviews should be performed in this process.
Diagramming techniques can also be used
• Output includes: Risk and risk symptoms are identified as part of this
process. There are generally two types of risks. They are business risks that
are risks of gain or loss. Then there are pure risks that represent only a risk
of loss. Pure risks are also known as insurable risks
Risk Monitoring and Control
Used to monitor risks, identify new risks, execute risk reduction plans, and
evaluate their effectiveness throughout the project life cycle.
• Input includes: Input to this process includes the risk management plan,
risk identification and analysis, and scope changes
• Methods used: Audits should be used in this process to ensure that risks are
still risks as well as discover other conditions that may arise.
• Output includes: Output includes work-around plans, corrective action,
project change requests, as well as other items
Decision Trees
A diagram that depicts key interactions among decisions and associated chance
events as understood by the decision maker. Can be used in conjunction with EMV
since risk events can occur individually or in groups and in parallel or in sequence.
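A decision tree combined with EMV can be sketched numerically: each branch's expected monetary value is the sum of its probability-weighted outcomes, and the decision maker picks the branch with the best EMV. The options, probabilities and payoffs below are invented for illustration.

```python
# Hypothetical decision: build a component in-house vs. buy it.
# Each option leads to chance events with probabilities and monetary outcomes.
options = {
    "build": [(0.6, 80000), (0.4, -30000)],  # (probability, payoff)
    "buy":   [(0.9, 40000), (0.1, -10000)],
}

def emv(branches):
    """Expected monetary value: sum of probability-weighted payoffs."""
    return sum(p * payoff for p, payoff in branches)

for name, branches in options.items():
    print(name, emv(branches))

# Pick the option with the highest expected monetary value.
best = max(options, key=lambda name: emv(options[name]))
print("choose:", best)
```

Here "build" has an EMV of 36,000 against 35,000 for "buy", so the tree favors building despite the larger downside risk; EMV alone does not capture risk tolerance, which is why the spiral model pairs such analysis with explicit risk evaluation.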
10
Configuration Management
• Version control.
• Changes made in the project.
Changes made in the project
This is one of the most useful ways of configuring the system. All changes that were
made to previous versions of the software have to be maintained. This is most
important when the system fails or does not meet the requirements: by keeping note
of the changes, one can recover the original functionality. This can include documents,
data, or simulations.
Configuration Management Planning
This starts in the early phases of the project and must define the documents, or
document classes, which are to be managed. Documents that might be required
for future system maintenance should be identified and included as managed
documents. It defines:
➢ the document-naming scheme
Change management
Software systems are subject to continual change requests from users, from
developers, and from market forces. Change management is concerned with keeping
track of and managing changes, and with ensuring that they are implemented in the
most cost-effective way.
A change request records the change required, the reason why the change was
suggested, and the urgency of the change (from the requestor of the change). It also
records the change evaluation, impact analysis, change cost and recommendations
(from the system maintenance staff). A major problem in change management is
tracking change status. Change-tracking tools keep track of the status of each change
request and automatically ensure that change requests are sent to the right people at
the right time; integrated with email systems, they allow electronic distribution of
change requests.
A group which decides whether or not changes are cost-effective from a strategic,
organizational and technical viewpoint should review the changes. This group is
sometimes called a change control board and includes members from the project team.
11
❖ Types of Software Testing
Testing
• Static
• Dynamic
    • Structural Testing
    • Functional Testing
Static Testing
Static testing refers to testing something that's not running: examining and
reviewing it. The specification is a document, not an executing program, so it's
considered static. It is also something that was created using written or graphical
documents, or a combination of both.
Dynamic Testing
Structural tests verify the structure of the software itself and require complete
access to the source code. This is known as ‘white box’ testing because you see into
the internal workings of the code.
White-box tests make sure that the software structure itself contributes to proper
and efficient program execution. Complicated loop structures, common data areas,
100,000 lines of spaghetti code and nests of ifs are evil. Well-designed control
structures, sub-routines and reusable modular programs are good.
White-box testing's strength is also its weakness. The code needs to be examined by
highly skilled technicians, which means the tools and skills are highly specialized to
the particular language and environment. Also, large or distributed system
execution goes beyond one program, so a correct procedure might call another
program that provides bad data. In large systems, it is the execution path as
defined by the program calls, their input and output and the structure of common
files that is important. This gets into a hybrid kind of testing that is often employed
in intermediate or integration stages of testing.
Functional or black box tests better address the modern programming paradigm. As
object-oriented programming, automatic code generation and code re-use become
more prevalent, analysis of the source code itself becomes less important and
functional tests become more important. Black box tests also better attack the
quality target.
Since only the people paying for an application can determine if it meets their needs,
it is an advantage to create the quality criteria from this point of view from the
beginning.
Black box tests have a basis in the scientific method. Like the process of
science, Black box tests must have a hypothesis (specifications), a defined method
or procedure (test plan), reproducible components (test data), and a standard
notation to record the results. One can re-run black box tests after a change to make
sure the change only produced intended results with no inadvertent effects.
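That hypothesis/procedure/data structure can be sketched as a tiny table-driven black-box harness. The function and its spec are invented for illustration:

```python
def shipping_cost(weight_kg):
    """Implementation under test. Hypothetical spec (the "hypothesis"):
    flat 5.00 up to 2 kg, then 2.50 per additional kg."""
    if weight_kg <= 2:
        return 5.0
    return 5.0 + 2.5 * (weight_kg - 2)

# Reproducible components: the test data lives in one table, in a standard
# (input, expected) notation, and can be re-run unchanged after every change.
TEST_DATA = [
    (1, 5.0),
    (2, 5.0),
    (4, 10.0),
]

def run_suite():
    """Defined method (the test plan): apply every case, record every result."""
    results = []
    for weight, expected in TEST_DATA:
        actual = shipping_cost(weight)
        results.append((weight, expected, actual, actual == expected))
    return results

for row in run_suite():
    print(row)
```

Because the table and the procedure never change, re-running `run_suite()` after a code change shows immediately whether the change produced only its intended results.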
12
Testing levels
There are several types of testing in a comprehensive software test process, many of
which occur simultaneously.
• Unit Testing
• Integration Testing
• System Testing
• Performance / Stress Test
• Regression Test
• Quality Assurance Test
• User Acceptance Test and Installation Test
Unit Testing
Testing each module individually is called unit testing. This follows a white-box
testing approach. In some organizations, a peer review panel performs the design
and/or code inspections. Unit or component tests usually involve some combination of
structural and functional tests, performed by programmers in their own systems.
Component tests often require building some kind of supporting framework that
allows components to execute.
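A minimal sketch of a unit test for a single module, using Python's `unittest` as the supporting framework (the module and function are invented for illustration):

```python
import unittest

# Module under test: one small unit exercised in isolation.
def apply_discount(price, percent):
    """Return price reduced by percent; reject out-of-range discounts."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (100 - percent) / 100

class ApplyDiscountTest(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(200, 10), 180)

    def test_zero_discount(self):
        self.assertEqual(apply_discount(99, 0), 99)

    def test_invalid_percent_rejected(self):
        # Unit tests also check invalid conditions, per the testing principles.
        with self.assertRaises(ValueError):
            apply_discount(100, 150)

# Run this unit's tests directly, without any higher-level caller.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(ApplyDiscountTest)
result = unittest.TextTestRunner().run(suite)
```

The `TestCase` class is the "supporting framework" mentioned above: it lets the component execute and be checked before any of its eventual callers exist.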
Integration testing
The individual components are combined with other components to make sure that
necessary communications, links and data sharing occur properly. It is not truly
system testing because the components are not implemented in the operating
environment. The integration phase requires more planning and some reasonable
sub-set of production-type data. Larger systems often require several integration
steps.
• All-at-once
• Bottom-up
• Top-down
The all-at-once method provides a useful solution for simple
integration problems, involving a small program possibly using a few
previously tested modules.
System Testing
The system test phase begins once modules are integrated enough to perform tests
in a whole system environment. System testing can occur in parallel with integration
test, especially with the top-down method.
A drawback of performance testing is it confirms the system can handle heavy loads,
but cannot so easily determine if the system is producing the correct information.
Regression Testing
Regression tests confirm that the implementation of changes has not adversely
affected other functions. Regression testing is a type of test, as opposed to a phase
in testing: regression tests apply at all phases, whenever a change is made.
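Regression testing as a type rather than a phase can be sketched as one fixed suite that is re-run after every change (the function and its suite are invented for illustration):

```python
# A saved suite of (input, expected) pairs captured before any change was
# made; re-running it after each change confirms nothing else broke.
REGRESSION_SUITE = [
    ((2, 3), 5),
    ((0, 0), 0),
    ((-1, 1), 0),
]

def add(a, b):
    """Module that just received a change."""
    return a + b

def run_regression(fn, suite):
    """Return every case whose result no longer matches the saved expectation."""
    return [(args, expected, fn(*args))
            for args, expected in suite
            if fn(*args) != expected]

print("regressions:", run_regression(add, REGRESSION_SUITE))
```

An empty list means the change did not adversely affect the recorded behavior; any entry is a regression to investigate before the change is accepted.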
Some organizations maintain a Quality Group that provides a different point of view,
uses a different set of tests, and applies the tests in a different, more complete test
environment. The group might look to see that organization standards have been
followed in the specification, coding and documentation of the software. They might
check to see that the original requirement is documented, verify that the software
properly implements the required functions, and see that everything is ready for the
users to take a crack at it.
Traditionally, this is where the users ‘get their first crack’ at the software.
Unfortunately, by this time, it's usually too late. If the users have not seen
prototypes, been involved with the design, and understood the evolution of the
system, they are inevitably going to be unhappy with the result. If one can perform
every test as a user acceptance test, there is a much better chance of a successful
project.
13
White box testing examines the basic program structure and it derives the test data
from the program logic, ensuring that all statements and conditions have been
executed at least once.
White box tests verify that the software design is valid and also whether it was built
according to the specified design.
Statement coverage – executes all statements at least once. (each and every line)
Condition coverage – executes each and every condition in the program with all
possible outcomes at least once.
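The difference between the two coverage measures can be sketched on a function with one compound condition (the function and cases are invented for illustration):

```python
def can_withdraw(balance, amount, overdraft_ok):
    """Decision with two conditions combined in one 'if'."""
    if amount <= balance or overdraft_ok:
        return True
    return False

# Statement coverage: these two cases execute every line at least once
# (both return statements are reached).
statement_cases = [(100, 50, False), (100, 200, False)]

# Condition coverage: each individual condition must take both True and
# False outcomes, so overdraft_ok must also be exercised as True -- a case
# statement coverage alone never demands.
condition_cases = statement_cases + [(100, 200, True)]

for balance, amount, flag in condition_cases:
    print(balance, amount, flag, "->", can_withdraw(balance, amount, flag))
```

Two cases suffice to run every statement, but only the third case forces the `overdraft_ok` condition to its True outcome, which is exactly what condition coverage adds over statement coverage.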
Black Box Testing Technique
Three successful techniques for managing the amount of input data required include:
• Equivalence Partitioning
• Boundary Analysis
• Error Guessing
Equivalence Partitioning:
For example
A program that edits credit limits within a given range ($20,000-$50,000) would
have three equivalence classes: values below $20,000 (invalid), values from $20,000
through $50,000 (valid), and values above $50,000 (invalid).
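Those partitions can be sketched in code: one representative value stands in for every member of its class, so three cases replace testing every possible dollar amount (the validator is a hypothetical stand-in for the program's edit):

```python
def credit_limit_ok(limit):
    """Hypothetical edit: accept credit limits in the $20,000-$50,000 range."""
    return 20000 <= limit <= 50000

# Equivalence partitioning: one representative per class.
partitions = {
    "below range (invalid)": 10000,
    "within range (valid)":  35000,
    "above range (invalid)": 75000,
}

for name, representative in partitions.items():
    print(name, "->", credit_limit_ok(representative))
```

If the program handles $35,000 correctly, equivalence partitioning assumes it handles $21,417 and $49,999 the same way; boundary analysis, below, covers the edges that assumption leaves exposed.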
Boundary value analysis:
If one can safely and confidently walk along the edge of a cliff without falling off, one
can almost certainly walk in the middle of a field. If software can operate on the
edge of its capabilities, it will almost certainly operate well under normal conditions.
This technique consists of developing test cases and data that focus on the input and
output boundaries of a given function. In the same credit limit example, boundary
analysis would test values at and immediately around $20,000 and $50,000.
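For the same hypothetical range check, boundary analysis concentrates the test data at the edges:

```python
def credit_limit_ok(limit):
    """Hypothetical edit: accept credit limits in the $20,000-$50,000 range."""
    return 20000 <= limit <= 50000

# Boundary value analysis: test exactly on, just below, and just above
# each boundary of the range.
boundary_cases = [
    (19999, False),  # just below the lower boundary
    (20000, True),   # on the lower boundary
    (20001, True),   # just above the lower boundary
    (49999, True),   # just below the upper boundary
    (50000, True),   # on the upper boundary
    (50001, False),  # just above the upper boundary
]

for limit, expected in boundary_cases:
    assert credit_limit_ok(limit) == expected, limit
print("all boundary cases passed")
```

Off-by-one mistakes (writing `<` instead of `<=`, for example) are invisible to mid-range values but fail immediately on the $20,000 and $50,000 cases.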
Error Guessing
This is based on the theory that test cases can be developed from the intuition
and experience of the test engineer.
Example: where one of the inputs is a date, a test may try
February 29, 2000 or 9/9/99.
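Those guessed dates can be probed against Python's own calendar handling to show why a tester would pick them (the choice of dates is the error guess; `datetime` here just acts as the oracle):

```python
from datetime import date

# Error guessing: dates an experienced tester suspects software may mishandle.
# February 29, 2000 IS a real date (2000 is a leap year, being divisible by
# 400), which is exactly why testers guessed implementations might reject it.
guesses = [
    (2000, 2, 29),  # leap day in a century year: valid, often mishandled
    (1999, 2, 29),  # not a leap year: must be rejected
    (1999, 9, 9),   # "9/9/99", once used as a sentinel value in old systems
]

for year, month, day in guesses:
    try:
        date(year, month, day)
        print(year, month, day, "valid")
    except ValueError:
        print(year, month, day, "invalid")
```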
Incremental testing
Top-down: This begins testing from the top of the module hierarchy and works
down to the bottom, using interim stubs to simulate lower interfacing modules
or programs. Modules are added in descending hierarchical order.
Bottom-up: This begins testing from the bottom of the hierarchy and works
up to the top. Modules are added in ascending hierarchical order. Bottom-up
testing requires the development of driver modules, which provide the test
input, call the module or program being tested, and display the test output.
There are procedures and constraints associated with each of these methods,
although bottom-up testing is often thought to be easier to use. Drivers are often
easier to create than stubs, and can serve multiple purposes. Output is also often
easier to examine in bottom-up testing, as the output always comes from the
module directly above the module under test.
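The stub/driver distinction above can be sketched as follows (all module names and rates are invented for illustration):

```python
# --- Top-down: a stub simulates a lower-level module that isn't built yet.
def tax_service_stub(amount):
    """Stands in for the real tax module; returns a canned answer."""
    return 0.10 * amount

def checkout_total(amount, tax_fn=tax_service_stub):
    """High-level module under test; calls downward through the stub."""
    return amount + tax_fn(amount)

# --- Bottom-up: a driver supplies the test input to a low-level module,
# calls it, and displays the output, since no higher-level caller exists yet.
def real_tax_module(amount):
    return 0.0825 * amount

def driver():
    for amount in (100, 250):
        print("tax on", amount, "=", real_tax_module(amount))

print(checkout_total(100))   # exercised top-down via the stub
driver()                     # exercised bottom-up via the driver
```

The driver is plainly the easier artifact: it only feeds input and prints output, and the same driver can later exercise other low-level modules, whereas each stub must fake the specific behavior of one missing module.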
Thread testing
This test technique, which is often used during early integration testing,
demonstrates key functional capabilities by testing a string of units that accomplish a
specific function in the application. Thread testing and incremental testing are
usually utilized together: for example, units can undergo incremental testing until
enough units are integrated and a single business function can be performed, and
threading can then be used to demonstrate that function.
[Figure: release life cycle phases: Design, Development, Internal Testing,
Release, Closure]
Release process
In this phase the installation and release plan is prepared. The phase is
important as it ensures that the code delivered meets the customer's criteria in all
respects, both technical and non-technical.
User Acceptance
This phase focuses on functionality testing to check whether the system
meets user acceptance criteria or not.
Maintenance
This phase focuses on post-delivery support provided at the client site. This includes
handling change requests and documentation support.
Closure
This process is essential to capture the learnings gained at the end of the
project. This phase is important as it ensures that the project resources are released
and the metrics are analyzed.
14
Defect Tracking
The software test plan is the primary means by which software testers communicate
to the product development team what they intend to do. The purpose of the
software test plan is to prescribe the scope, approach, resources, and schedule of the
testing activities; to identify the items being tested, the features to be tested, the
testing tasks to be performed, the personnel responsible for each task, and the risks
associated with the plan.
The test plan is simply a by-product of the detailed planning process that’s
undertaken to create it. It’s the planning that matters, not the resulting documents.
The ultimate goal of the test planning process is communicating the software test
team’s intent, its expectations, and its understanding of the testing that’s to be
performed.
The following are the important topics which help in the preparation of a test plan.
• High-Level Expectations
The first topics to address in the planning process are the ones that
define the test team's high-level expectations. They are fundamental
topics that must be agreed to by everyone on the project team, but
they are often overlooked. They might be considered "too obvious" and
assumed to be understood by everyone, but a good tester knows never
to assume anything.
The test plan needs to identify the people working on the project, what
they do, and how to contact them. The test team will likely work with all
of them, and knowing who they are and how to contact them is very
important.
• Inter-Group Responsibilities
• Test phases
To plan the test phases, the test team will look at the proposed
development model and decide whether unique phases, or stages, of
testing should be performed over the course of the project. The test
planning process should identify each proposed test phase and make
each phase known to the project team. This process often helps the
entire team form and understand the overall development model.
• Test strategy
The test strategy describes the approach that the test team will use to
test the software, both overall and in each phase. Deciding on the
strategy is a complex task, one that needs to be made by very
experienced testers, because it can determine the success or failure of
the test effort.
• Bug Reporting
Metrics and statistics are the means by which the progress and the
success of the project, and the testing, are tracked. The test planning
process should identify exactly what information will be gathered, what
decisions will be made with them, and who will be responsible for
collecting them.
The test case design specification refines the test approach and identifies the
features to be covered by the design and its associated tests. It also identifies the
test cases and test procedures, if any, required to accomplish the testing, and
specifies the feature pass/fail criteria. The purpose of the test design specification
is to organize and describe the testing that needs to be performed on a specific
feature. The following topics address this purpose and should be part of the test
design specification that is created:
• Unique identifier
A unique identifier that can be used to reference and locate the test
design specification. The specification should also reference the overall
test plan and contain pointers to any other plans or specifications that it
references.
• Input
This is the data to be fed to the test case. The input may be in any
form. Different inputs can be tried for the same test case to check
whether the data entered is handled correctly.
• Expected result
After test case design, each test case is executed and the actual result is obtained.
The actual result is then compared with the expected result recorded at the design
stage; if the actual and expected results are the same, the test is passed, otherwise
it is treated as failed.
A test log is then prepared, which records whether each test passed or failed, for
every test case, so that it will be useful at the time of revision.
Example
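As a hypothetical illustration of this pass/fail comparison and the resulting test log (the function under test and all case data are invented for the sketch):

```python
# Hypothetical sketch: executing test cases, comparing actual vs. expected
# results, and recording a test log. The function and data are invented.

def discount(total):
    """Toy function under test: 10% discount on orders of 100 or more."""
    return total * 0.9 if total >= 100 else total

test_cases = [
    {"id": "TC-01", "input": 50,  "expected": 50.0},
    {"id": "TC-02", "input": 100, "expected": 90.0},
    {"id": "TC-03", "input": 200, "expected": 180.0},
]

test_log = []
for case in test_cases:
    actual = discount(case["input"])
    status = "PASS" if actual == case["expected"] else "FAIL"
    test_log.append((case["id"], case["input"], case["expected"], actual, status))

for row in test_log:
    print("%-6s input=%-4s expected=%-6s actual=%-6s %s" % row)
```

Each row of the log captures the test case ID, the input, the expected and actual results, and the verdict, which is exactly the information needed at revision time.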
15
Defect Tracking
A defect can be defined in one of two ways. From the producer's viewpoint, a defect
is a deviation from specifications, whether something is missing, wrong, etc. From
the customer's viewpoint, a defect is anything that causes customer dissatisfaction,
whether in the requirements or not; this is known as "fit for use". It is critical that
defects identified at each stage of the project life cycle be tracked to resolution.
Most project teams utilize some type of tool to support the defect tracking process.
This tool could be as simple as a white board or a table created and maintained in a
word processor, or one of the more robust tools available on the market today, such
as Mercury's TestDirector. Tools marketed for this purpose usually come with
some number of customizable fields for tracking project-specific data in addition to
the basics. They also provide advanced features such as standard and ad-hoc
reporting, e-mail notification to developers and/or testers when a problem is
assigned to them, and graphing capabilities.
At a minimum, the tool selected should support the recording and communication of
significant information about a defect. For example, a defect log could include:
• Defect ID number
• Descriptive defect name and type
• Source of defect -test case or other source
• Defect severity
• Defect priority
• Defect status (e.g. open, fixed, closed, user error, design, and so on)
-more robust tools provide a status history for the defect
• Date and time tracking for either the most recent status change, or
for each change in the status history
• Detailed description, including the steps necessary to reproduce the
defect
• Component or program where defect was found
• Screen prints, logs, etc. that will aid the developer in resolution
process
• Stage of origination
• Person assigned to research and/or correct the defect
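A minimal sketch of such a defect record in Python; the field names and the timestamped status history mirror the list above, but the exact fields and severity scale are illustrative assumptions rather than any particular tool's schema:

```python
# Illustrative defect record; field names and severity scale are assumptions.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Defect:
    defect_id: int
    name: str
    source: str                # e.g. a test case ID, or "ad-hoc"
    severity: int              # 1 = most severe (crash, data corruption, ...)
    priority: int              # order in which defects should be fixed
    status: str = "open"       # open, fixed, closed, user error, design, ...
    description: str = ""
    component: str = ""
    assigned_to: str = ""
    history: list = field(default_factory=list)

    def set_status(self, new_status):
        # Record a timestamped status-history entry, as the more robust
        # tools do, then apply the new status.
        self.history.append((datetime.now(), self.status, new_status))
        self.status = new_status

d = Defect(1, "Crash on empty cart checkout", "TC-17", severity=1, priority=1)
d.set_status("fixed")
print(d.status, len(d.history))
```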
The severity of a defect should be assigned objectively by the test team based on
predefined severity descriptions. For example, a "severity one" defect may be
defined as one that causes data corruption, a system crash, security violations, etc.
In a large project, it may also be necessary to assign a priority to the defect, which
determines the order in which defects should be fixed. The priority assigned to a
defect is usually more subjective, based upon input from users regarding which
defects are most important to them and therefore should be fixed first.
It is recommended that severity levels be defined at the start of the project so that
they are consistently assigned and understood by the team. This foresight can help
test teams avoid the common disagreements with development teams about the
criticality of a defect.
Some general principles
• Defect prevention
• Deliverable base-lining
• Defect discovery/defect naming
• Defect resolution
• Process improvement
• Management reporting
Management Reporting
16
Test Reports
A final test report should be prepared at the conclusion of each test activity.
The test reports are designed to document the results of testing as defined in the
test plan. Without a well-developed test plan, which has been executed in
accordance with its criteria, it is difficult to develop a meaningful test report.
The test report may be a combination of electronic data and hard copy. For example,
if the function test matrix is maintained electronically, there is no reason to print
it, as the paper report will summarize that data, draw the appropriate
conclusions, and present recommendations.
The test report has one immediate and three long-term purposes. The immediate
purpose is to provide information to the customers of the software system so that
they can determine whether the system is ready for production; and if so, to assess
the potential consequences and initiate appropriate actions to minimize those
consequences.
The first of the three long-term uses is for the project to trace problems in the event
the application malfunctions in production. Knowing which functions have been
correctly tested and which ones still contain defects can assist in taking corrective
action.
The second long-term purpose is to use the data to analyze the rework process for
making changes to prevent defects from occurring in the future. This is done by
accumulating the results of many test reports to identify which components of the
rework process are defect-prone. These defect-prone components identify
tasks/steps that, if improved, could eliminate or minimize the occurrence of
high-frequency defects.
These reports focus on individual projects (e.g., software system). When different
testers test individual projects, they should prepare a report on their results.
Integration testing tests the interfaces between individual projects. A good test plan
will identify the interfaces and institute test conditions that will validate interfaces.
Given this, the interface report follows the same format as the individual Project Test
report, except that the conditions tested are the interfaces.
A system test plan standard identifies the objectives of testing, what is to be
tested, how it is to be tested, and when tests should occur. The System Test report
should present the results of executing that test plan. If the plan is maintained
electronically, it need only be referenced, not included in the report.
There are two primary objectives for testing. The first is to ensure that the system as
implemented meets the real operating needs of the user or customer. If the defined
requirements are those true needs, the testing should have accomplished this
objective. The second objective is to ensure that the software system can operate in
the real-world user environment, which includes people skills and attitudes, time
pressures, changing business conditions, and so forth.
Eight Interim Reports:
Functional Testing Status report
This report will show percentages of the functions, which have been:
• Fully Tested
• Tested With Open Defects
• Not Tested
Functions Working Timeline report
This report will show the actual plan to have all functions working versus the current
status of functions working. An ideal format could be a line graph.
Expected vs. Actual Defects Detected report
This report will provide an analysis between the number of defects being generated
and the number of defects expected at the planning stage.
Defects Detected vs. Corrected Gap report
This report, ideally in a line graph format, will show the number of defects uncovered
versus the number of defects corrected and accepted by the testing group. If
the gap grows too large, the project may not be ready when originally planned.
Average Age of Detected Defects by Type report
This report will show the average outstanding defects by type (severity 1, severity 2,
etc.). In the planning stage, it is beneficial to determine the acceptable number of
open days by defect type.
Defect Distribution report
This report will show the defect distribution by function or module. It can also include
items such as the number of tests completed.
Relative Defect Distribution report
This report will take the previous report (Defect Distribution) and normalize the level
of defects. For example, one application might be more in-depth than another, and
would probably have a higher number of defects; however, when normalized over
the number of functions or lines of code, it would show a more accurate level of
defects.
Testing Action report
This report can show many different things, including possible shortfalls in testing.
Examples of data to show might be the number of severity defects, tests that are
behind schedule, and other information that would present an accurate testing
picture.
17
Software Metric
• Process Metric
• Product Metric
The metrics for the test process would include the status of test activities against
the plan and the test coverage achieved so far, among others. An important metric
is the number of defects found in internal testing compared to the defects found in
customer tests, which indicates the effectiveness of the test process itself.
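One common way to quantify this comparison is defect removal efficiency; a minimal sketch, with made-up defect counts:

```python
# Defect removal efficiency (DRE): the share of all known defects that
# internal testing caught before the customer did. Counts are made up.
def defect_removal_efficiency(found_in_testing, found_by_customers):
    total = found_in_testing + found_by_customers
    return found_in_testing / total * 100 if total else 0.0

print(defect_removal_efficiency(90, 10))  # 90.0 (percent)
```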
Test Metrics
• Cost per defect = Test Cost / No. of defects located in testing
• Production defects
• Test automation
18
❖ Other Testing Terms
Usability Testing
Determines how well the user will be able to understand and interact with the
system. It identifies areas of poor human-factors design that may make the system
difficult to use. Ideally this test is conducted on a system prototype before
development actually begins. If a navigational or operational prototype is not
available, screen prints of all of the application's screens or windows can be used to
walk the user through various business scenarios.
Conversion Testing
Specifically designed to validate the effectiveness of the conversion process. This test
may be conducted jointly by developers and testers during integration testing, or at
the start of system testing, since system testing must be conducted with the
converted data. Field-to-field mapping and data translation are validated, and a full
copy of production data may be used in the test.
Verifies that the functionality of contracted or third party software meets the
organization's requirements, prior to accepting it and installing it into a production
environment. This test can be conducted jointly by the software vendor and the test
team, and focuses on ensuring that all requested functionality has been delivered.
Stress / Load Testing
Conducted to validate that the application, database, and network can handle
projected volumes of users and data effectively. The test is conducted jointly by
developers, testers, DBAs, and network associates after system testing. During
the test, the complete system is subjected to environmental conditions that exceed
expectations, to answer questions such as:
Performance Testing
Usually conducted in parallel with stress and load testing in order to measure
performance against specified service-level objectives under various conditions. For
instance, one may need to ensure that batch processing will complete within the
allocated amount of time, or that on-line response times meet performance
requirements.
Recovery Testing
Evaluates the contingency features built into the application for handling
interruptions and for returning to specific points in the application processing cycle.
Any restoration and restart capabilities are also tested here. This test may be
conducted by the test team during system test, or by another team gathered
specifically for this purpose.
Configuration Testing
Benefits Realization Test
With the increased focus on the value of business returns obtained from investments
in information technology, this type of test or analysis is becoming more critical. The
Benefits Realization Test is a test or analysis conducted after an application is moved
into production, in order to determine whether the application is likely to deliver the
originally projected benefits. The analysis is usually conducted by the business user
or client group who requested the project, and the results are reported back to
executive management.
19
Test Standards
External Standards - Familiarity with and adoption of industry test standards from
external organizations.
IEEE
10. 1028-1997 IEEE Standard for Software Reviews
Other Standards:
• DoD-Department of Defense
Internal Standards
• Simplifies communication
• Promotes consistency and uniformity
• Eliminates the need to invent yet another solution to the same
problem
• Provides continuity
• Presents a way of preserving proven practices
• Supplies benchmarks and framework
20
Web Testing
Introduction
• Usability
• Functionality
• Server side Interface
• Client side Compatibility
• Performance
• Security
Usability
One of the reasons the web browser is being used as the front end to applications is
its ease of use. Users who have been on the web before will probably know how to
navigate a well-built web site. While concentrating on this portion of testing, it is
important to verify that the application is easy to use. Many believe that this is the
least important area to test, but the site should still be usable and easy to navigate.
Even if the web site is simple, there will always be someone who needs some
clarification. Additionally, the documentation also needs to be verified, so that the
instructions are correct.
The following are the some of the things to be checked for easy navigation through
website:
Site map or navigational bar
Does the site have a map? Sometimes power users know exactly where they want
to go and don't want to go through lengthy introductions. Or new users get lost
easily. Either way a site map and/or ever-present navigational map can guide the
user. The site map needs to be verified for its correctness. Does each link on the
map actually exist? Are there links on the site that are not represented on the
map? Is the navigational bar present on every screen? Is it consistent? Does each
link work on each page? Is it organized in an intuitive manner?
• Content
• Colors/backgrounds
Ever since the web became popular, everyone thinks they are a graphic designer.
Unfortunately, some developers are more interested in their new backgrounds,
than ease of use. Sites will have yellow text on a purple picture of a fractal pattern.
This may seem "pretty neat", but it's not easy to use. Usually, the best idea is to
use little or no background. If there is a background, it might be a single color on
the left side of the page, containing the navigational bar. But, patterns and pictures
distract the user.
• Images
Whether it's a screen grab or a little icon that points the way, a picture is worth a
thousand words. Sometimes, the best way to tell the user something is to simply
show them. However, bandwidth is precious to the client and the server, so you
need to conserve bandwidth usage. Do all the images add value to each page, or do
they simply waste bandwidth? Can a different file type (.GIF, .JPG) be used to make
an image 30k smaller? In general, one doesn't want large pictures on the front page,
since most users who abandon a page load will do it on the front page. If the front
page loads quickly, it will increase the chance they will stay.
• Tables
It has to be verified that tables are set up properly. Does the user constantly have
to scroll right to see the price of the item? Would it be more efficient to put the
price closer to the left and put miniscule details to the right? Are the columns wide
enough or does every row have to wrap around? Are certain rows excessively high
because of one entry? These are some of the points to be taken care of.
• Wrap-around
Finally, it has to be verified whether the wrap-around occurs properly. If the text
refers to a picture on the right, make sure the picture is on the right. Make sure
that widow and orphan sentences and paragraphs don't layout in an awkward
manner because of pictures.
Functionality
The functionality of the web site is why the company hired a developer and not just
an artist. This is the part that interfaces with the server and actually "does stuff".
• Links
A link is the vehicle that gets the user from page to page. Two things have to be
verified for each link: that the link brings the user to the page it said it would, and
that the page being linked to exists. It may sound a little silly, but many web sites
exist with internal broken links.
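A broken-link check can be sketched with only the Python standard library; the HTML snippet and URLs here are illustrative, and a real checker would also handle retries, rate limits, and robots.txt:

```python
# Minimal broken-link checker sketch using only the standard library.
from html.parser import HTMLParser
from urllib.request import urlopen
from urllib.parse import urljoin
from urllib.error import URLError

class LinkExtractor(HTMLParser):
    """Collects the href of every <a> tag on a page."""
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def check_links(page_url, html):
    """Return (url, error) pairs for links that fail to load."""
    parser = LinkExtractor()
    parser.feed(html)
    broken = []
    for link in parser.links:
        target = urljoin(page_url, link)   # resolve relative links
        try:
            urlopen(target, timeout=10).close()
        except (URLError, ValueError) as exc:
            broken.append((target, str(exc)))
    return broken

# Parsing works offline; check_links would need network access.
parser = LinkExtractor()
parser.feed('<a href="/about">About</a> and <a href="pricing.html">Pricing</a>')
print(parser.links)  # ['/about', 'pricing.html']
```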
• Forms
When a user submits information through a form, it needs to work properly.
The submit button needs to work. If the form is for an online registration, the user
should be given login information (that works) after successful completion. If the
form gathers shipping information, it should be handled properly and the customer
should receive their package. In order to test this, you need to verify that the
server stores the information properly and that systems down the line can interpret
and use that information.
• Data verification
If the system verifies user input according to business rules, then that needs
to work properly. For example, a State field may be checked against a list of valid
values. If this is the case, you need to verify that the list is complete and that the
program actually calls the list properly (add a bogus value to the list and make
sure the system accepts it).
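A minimal sketch of such a list-of-valid-values check, assuming a hypothetical State field (the state list is deliberately truncated for illustration):

```python
# Validate user input against a list of valid values. The state list is an
# illustrative subset; a complete list would have all 50 entries.
VALID_STATES = {"AL", "AK", "AZ", "CA", "NY", "TX"}

def validate_state(value):
    """Return True if the input matches a valid state code."""
    return value.strip().upper() in VALID_STATES

print(validate_state("ca"))   # True
print(validate_state("ZZ"))   # False
```

Testing the rule means probing both sides: valid values must be accepted and invalid ones rejected, and (as suggested above) planting a bogus value in the list confirms the program really reads the list.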
• Cookies
Most users only like the kind with sugar, but developers love web cookies. If the
system uses them, you need to check them. If they store login information, make
sure the cookies work and make sure it's encrypted in the cookie file. If the cookie
is used for statistics, verify that totals are being counted properly. And you'll
probably want to make sure those cookies are encrypted too, otherwise people can
edit their cookies and skew your statistics.
Most importantly, one may want to verify the application-specific functional
requirements. Try to perform all the functions a user would: place an order, change
an order, cancel an order, check the status of an order, change shipping information
before an order is shipped, pay online, ad nauseam. This is why users will show up
on the developer's doorstep, so one needs to make sure that they can do what is
advertised.
Server side Interface
Many times, a web site is not an island. The site will call external servers for
additional data, verification of data, or fulfillment of orders.
• Server interface
The first interface to test is the interface between the browser and the server.
Transactions should be attempted, and then the server logs viewed and verified
that what is seen in the browser is actually happening on the server. It's also a
good idea to run queries on the database to make sure the transaction data is
being stored properly.
• External interfaces
Some web systems have external interfaces. For example, a merchant might verify
credit card transactions real-time in order to reduce fraud. Several test
transactions may have to be sent using the web interface. Try credit cards that are
valid, invalid, and stolen. If the merchant only takes Visa and MasterCard, try using
a Discover card. (A simple client-side script can check 3 for American Express, 4
for Visa, 5 for MasterCard, or 6 for Discover, before the transaction is sent.)
Basically, it has to be ensured that the software can handle every possible message
returned by the external server.
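The simple prefix check mentioned above can be sketched as follows, in Python rather than a client-side script, and omitting the length and Luhn-checksum checks a real validator would add:

```python
# Card-type lookup by first digit, as described in the text. Real
# validation would also check number length and the Luhn checksum.
CARD_PREFIXES = {
    "3": "American Express",
    "4": "Visa",
    "5": "MasterCard",
    "6": "Discover",
}

def card_type(number):
    """Return the card brand implied by the first digit, or 'unknown'."""
    return CARD_PREFIXES.get(number[:1], "unknown")

print(card_type("4111111111111111"))  # Visa
print(card_type("371234567890123"))   # American Express
```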
• Error handling
One of the areas most often left untested is interface error handling. Usually we try
to make sure our system can handle all our errors, but we never plan for the other
systems' errors or for the unexpected. Try leaving the site mid-transaction - what
happens? Does the order complete anyway? Try losing the Internet connection
from the user to the server. Try losing the connection from the server to the credit
card verification server. Is there proper error handling for all these situations? Are
charges still made to credit cards? If the interruption is not user-initiated, does the
order get stored so customer service reps can call back if the user doesn't come
back to the site?
Client side Compatibility
It has to be verified that the application can work on the machines the customers
will be using. If the product is going to the web for the world to use, every operating
system, browser, video setting, and modem speed has to be tried in various
combinations.
• Operating systems
Does the site work for both Mac and IBM compatibles? Some fonts are not
available on both systems, so make sure that secondary fonts are selected. Make
sure that the site doesn't use plug-ins only available for one OS, if users use
both.
• Browsers
Does the site work with Netscape? Internet Explorer? Lynx? Some HTML
commands or scripts only work for certain browsers. Make sure there are alternate
tags for images, in case someone is using a text browser. If SSL security is used, it
has to be checked that it works with browsers 3.0 and higher, and it has to be
verified that there is a message for those using older browsers.
• Video settings
Does the layout still look good at 640x480 or 800x600? Are fonts too small to
read? Are they too big? Does all the text and graphic alignment still work?
• Modem/connection speeds
Does it take 10 minutes to load a page with a 28.8 modem, even though it was only
tested over high-speed connections? Users will expect long download times
when they are grabbing documents or demos, but not on the front page. It has to
be ensured that the images aren't too large. Make sure that marketing doesn't put
50k of font size -6 keywords for search engines.
• Printers
Users like to print. The concept behind the web should save paper and reduce
printing, but most people would rather read on paper than on the screen. So, you
need to verify that the pages print properly. Sometimes images and text align on
the screen differently than on the printed page. It has to be verified that order
confirmation screens can be printed properly.
• Combinations
Different combinations have to be tried. Maybe 800x600 looks good on the Mac but
not on the IBM. Maybe IBM with Netscape works, but not with Lynx. If the web
site will be used internally, it might make testing a little easier. If the company has
an official web browser choice, then it has to be verified that it works for that
browser. If everyone has a high-speed connection, load times need not be
checked. (But it has to be kept in mind that some people may dial in from home.)
With internal applications, the development team can make disclaimers about
system requirements and only support those system setups. But, ideally, the site
should work on all machines, without limiting growth and changes in the future.
Performance Testing
It needs to be verified that the system can handle a large number of users at the
same time, a large amount of data from each user, and a long period of continuous
use.
Accessibility is extremely important to users. If they get a "busy signal", they hang
up and call the competition. Not only must the system be checked so that customers
can gain access; many times hackers will attempt to gain access to a system by
overloading it. For the sake of security, the system needs to know what to do when
it's overloaded, not simply blow up.
• Concurrent users at the same time
If the site just put up the results of a national lottery, it had better handle millions
of users right after the winning numbers are posted. A load test tool would be able
to simulate concurrent users accessing the site at the same time.
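A toy sketch of simulating concurrent users with threads; the simulated_user function is a stand-in for issuing a real HTTP request, and commercial load tools scale far beyond this:

```python
# Toy concurrency sketch: 100 simulated users "hit" the site at once.
# A real load test would issue HTTP requests and record response times.
import threading

results = []
lock = threading.Lock()

def simulated_user(user_id):
    # Stand-in for "request the results page".
    response = f"results page for user {user_id}"
    with lock:
        results.append(response)

threads = [threading.Thread(target=simulated_user, args=(i,))
           for i in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(results))  # 100 — every simulated user got a response
```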
• Large amounts of data from each user
Most customers may only order 1-5 books from your new online bookstore, but
what if a university bookstore decides to order 5000 copies of Intro to Psychology?
Or what if one user wants to send a gift to a large number of his/her friends for
Christmas (separate mailing addresses for each, of course)? Can the system handle
large amounts of data from a single user?
• Long periods of continuous use
If the site is intended to take orders for a specific occasion, then it had better stay
up for the whole period leading up to the occasion. If the site offers web-based
email, it had better be able to run for months or even years without downtime. An
automated test tool will probably be required to implement these types of tests,
since they are difficult to do manually. Imagine coordinating 100 people to hit the
site at the same time. Now try 100,000 people. Generally, the tool will pay for itself
the second time it is used. Once the tool is set up, running another test is just a
click away.
Security
Even if credit card payments are not accepted, security is very important. The web
site may be some customers' only exposure to the company, and if that exposure is
a hacked page, those customers won't feel safe doing business with the company
over the internet.
• Directory setup
The most elementary step of web security is proper setup of directories. Each
directory should have an index.html or main.html page so a directory listing
doesn't appear.
• SSL (Secured Socket Layer)
Many sites use SSL for secure transactions. While entering an SSL site, there will
be a browser notification, and the HTTP in the location field on the browser will
change to HTTPS. If the development group uses SSL, it has to be ensured that
there is an alternate page for browsers with versions below 3.0, since SSL is not
compatible with those browsers. Sufficient warnings on entering and leaving the
secured site are to be provided. It also needs to be checked whether there is a
time-out limit, and what happens if the user tries a transaction after the timeout.
• Logins
In order to validate users, several sites require customers to log in. This makes it
easier for the customer, since they don't have to re-enter personal information
every time. You need to verify that the system does not allow invalid
usernames/passwords and that it does allow valid logins. Is there a maximum
number of failed logins allowed before the server locks out the current user? Is the
lockout based on IP? What happens after the maximum number of failed login
attempts, and what are the rules for password selection? These need to be checked.
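The lockout rule under test can be sketched as follows; the three-attempt threshold and the in-memory counter are illustrative assumptions, not a recommended implementation:

```python
# Toy failed-login lockout: three failures lock the account. The threshold
# and the in-memory counter are illustrative only.
MAX_ATTEMPTS = 3
failed = {}

def attempt_login(user, credentials_ok):
    if failed.get(user, 0) >= MAX_ATTEMPTS:
        return "locked out"
    if credentials_ok:
        failed[user] = 0          # success resets the failure counter
        return "logged in"
    failed[user] = failed.get(user, 0) + 1
    return "denied"

for _ in range(3):
    print(attempt_login("alice", False))   # denied, denied, denied
print(attempt_login("alice", True))        # locked out - even with a
                                           # correct password
```

A test would probe exactly the boundary: the third failure must trigger the lockout, and a valid login after lockout must still be refused.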
• Log files
Behind the scenes, it needs to be verified that server logs are working properly.
Does the log track every transaction? Does it track unsuccessful login attempts?
Does it only track stolen credit card usage? What does it store for each
transaction? IP address? User name?
• Scripting languages
Scripting languages are a constant source of security holes. The details are
different for each language. Some allow access to the root directory. Others only
allow access to the mail server, but a resourceful hacker could mail the server's
username and password files to themselves. Find out what scripting languages are
being used and research the loopholes. It might also be a good idea to subscribe to
a security newsgroup that discusses the language that is being tested.
Conclusion
21
Testing Terms
Application: A single software product that may or may not fully support a
business function.
Audit: An inspection/assessment activity that verifies compliance with plans,
policies, and procedures, and ensures that resources are conserved.
Boundary Value Analysis: A data selection technique in which test data is chosen
from the "boundaries" of the input or output domain classes, data structures, and
procedure parameters. Choices often include the actual minimum and maximum
boundary values, the maximum value plus or minus one, and the minimum value
plus or minus one.
Certification: Acceptance of software by an authorized agent, usually after the
software has been validated by the agent or after its validity has been demonstrated
to the agent.
Condition Coverage: A white-box testing technique that measures the number of
- or percentage of - decision outcomes covered by the test cases designed. 100%
condition coverage would indicate that every possible outcome of each decision had
been executed at least once during testing.
Cost of Quality (COQ): Money spent above and beyond expected production costs
(labor, materials, equipment) to ensure that the product the customer receives is a quality
(defect free) product The Cost of Quality includes prevention, appraisal, and correction or
repair costs.
Conversion Testing: Validates the effectiveness of data conversion processes,
including field-to-field mapping and data translation.
Decision Coverage: A white-box testing technique that measures the number of -
or percentage of - decision directions executed by the test cases designed. 100%
decision coverage would indicate that all decision directions had been executed at
least once during testing. Alternatively, each logical path through the program can
be tested. Often, paths through the program are grouped into a finite set of classes,
and one path from each class is tested.
Defect: Operationally, it is useful to work with two definitions of a defect: (1) From
the producer's viewpoint: a product requirement that has not been met, or a product
attribute that is not in the statement of requirements that define the product; (2)
From the customer's viewpoint: anything that causes customer dissatisfaction,
whether in the statement of requirements or not.
Driver: Code that sets up an environment and calls a module for test.
Defect Tracking Tools: Tools for documenting defects as they are found during
testing and for tracking their status through to resolution.
Desk Checking: The most traditional means for analyzing a system or a program.
Desk checking is conducted by the developer of the system or program. The process
involves reviewing the complete product to ensure that it is structurally sound and
that the standards and requirements have been met. This tool can also be used on
artifacts other than code.
Entrance Criteria: Required conditions and standards for work product quality that
must be present or met for entry into the next stage of the software development
process.
Equivalence Partitioning: A data selection technique in which one value is
selected to represent each value of the larger class of data. For example, a business
rule that indicates that a program should edit salaries within a given range
($10,000 - $15,000) might have three equivalence classes: less than $10,000
(invalid), within the range (valid), and greater than $15,000 (invalid).
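Boundary value analysis applied to this salary-range rule can be sketched as follows (the validation function itself is assumed for illustration):

```python
# Boundary value analysis of the $10,000-$15,000 salary rule: test at,
# just below, and just above each boundary. The edit function is assumed.
def salary_is_valid(salary):
    return 10_000 <= salary <= 15_000

boundary_cases = {
    9_999: False,    # just below the minimum
    10_000: True,    # the minimum itself
    10_001: True,    # just above the minimum
    14_999: True,    # just below the maximum
    15_000: True,    # the maximum itself
    15_001: False,   # just above the maximum
}
for value, expected in boundary_cases.items():
    assert salary_is_valid(value) == expected
print("all boundary cases pass")
```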
Error Guessing: A data selection technique for picking values that seem likely to
cause defects. This technique is based upon the theory that test cases and test data
can be developed based on the intuition and experience of the tester - for example,
zero, blank, or out-of-range values for program variables.
Exit Criteria: Standards for work product quality which block the promotion of
incomplete or defective work products to subsequent stages of the software
development process.
Functional Testing: Application of test data derived from the specified functional
requirements without regard to the final program structure.
Inspection: A formal assessment of a work product conducted by one or more
qualified independent reviewers to detect defects and verify that required
deliverables exist. An inspection identifies defects, but does not attempt to correct
them.
Integration Testing: This test begins after two or more programs or application
components have been successfully unit tested. It is conducted by the development
team to validate the technical quality or design of the application. It is the first level
of testing in which a set of programs that communicate with one another via
messages or files (a client and its server(s), a string of batch programs, or a set of
on-line programs) is formally tested together.
Life Cycle Testing: The process of verifying the consistency, completeness, and
correctness of software at each stage of the development life cycle.
Performance Test: Validates that both the on-line response times and batch run times
meet the defined performance requirements.
Quality: From the producer's viewpoint, a product is a quality product if it meets or
conforms to the statement of requirements that defines the product. This statement is
usually shortened to: quality means meets requirements. From the customer's viewpoint,
quality means fit for use.
Quality Assurance (QA): The set of support activities (including facilitation, training,
measurement, and analysis) needed to provide adequate confidence that processes are
established and continuously improved to produce products that meet specifications and
are fit for use.
Quality Control (QC): The process by which product quality is compared with
applicable standards, and the action taken when nonconformance is detected. Its focus is
defect detection and removal. This is a line function; that is, the performance of these
tasks is the responsibility of the people working within the process.
Recovery Test: Evaluates the contingency features built into the application for handling
interruptions and for returning to specific points in the application processing cycle,
including checkpoints, backups, restores, and restarts. This test also assures that disaster
recovery is possible.
Regression Testing: Testing of a previously verified program or application, following
program modification, to detect errors that may have been caused by program changes.
The technique requires the use of a set of test cases that have been developed to exercise
all of the software's functions.
Stress Testing: Testing conducted to evaluate a system at or beyond the limits of its
specified workload. The intention of stress testing is to identify constraints and to ensure
that there are no performance problems.
Structural Testing: A testing method in which the test data are derived solely from
knowledge of the internal program structure.
Stub: Special code segments that, when invoked by a code segment under test,
simulate the behavior of designed and specified modules not yet constructed.
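The driver and stub definitions above can be sketched together (all names and values here are illustrative assumptions): the module under test depends on a tax calculation that is not yet built, the stub simulates it, and the driver sets up the call and checks the result.

```python
def tax_stub(salary):
    # Stub: returns a canned value instead of real, not-yet-built tax logic.
    return 1000

def net_salary(salary, tax_fn):
    # Module under test: depends on an external tax calculation.
    return salary - tax_fn(salary)

def driver():
    # Driver: sets up the environment, calls the module for test,
    # and checks the result against the expected value.
    result = net_salary(12000, tax_stub)
    assert result == 11000
    return result

driver()
```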
System Test: During this event, the entire system is tested to verify that all
components work together. This in-house testing is done to satisfy management that the
system meets specifications. System testing verifies the functional quality of the system
in addition to all external interfaces, manual procedures, restart and recovery, and the
human-computer interface. It also verifies that interfaces between the application and the
open environment work correctly, that JCL functions correctly, and that the application
functions appropriately with the interfacing systems.
Test Case: A test case is a document that describes an input, action, or event and an
expected response, to determine whether a feature of an application is working correctly. It
should contain particulars such as test case identifier, test case name, objective, test
conditions/setup, input data, steps, and expected results.
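The particulars listed above can be captured as a simple record; a sketch, with field names and values assumed for illustration:

```python
# Hypothetical test case record; the fields mirror the particulars a test
# case document should contain.
test_case = {
    "id": "TC-001",
    "name": "Salary range edit - lower boundary",
    "objective": "Verify salaries below $10,000 are rejected",
    "input": 9000,
    "action": "Submit the salary on the salary-edit screen",
    "expected_result": "Salary rejected as invalid",
}

assert test_case["id"] == "TC-001"
```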
Test Case Specification: An individual test condition, executed as part of a larger test that
contributes to the test's objectives. Test cases document the input, expected results, and
execution conditions of a given test item. Test cases are broken down into one or more
detailed test scripts.
Test Data Set: Set of input elements used in the testing process
Test Design Specification: A document that specifies the details of the test approach
for a software feature or a combination of features, and identifies the
associated tests.
Test Log: A chronological record of relevant details about the execution of tests.
Test Plan: A document describing the intended scope, approach, resources, and
schedule of testing activities. It identifies test items, the features to be tested, the
testing tasks, the personnel performing each task, and any risks requiring
contingency planning.
Test Summary Report: A document that describes testing activities and results, and
evaluates the corresponding test items.
Testing: The process of evaluating a program by executing it on sample data sets to
verify that it satisfies specified requirements.
Test Scripts: A tool that specifies an order of actions that should be performed
during a test session. The script also contains expected results. Test scripts may be
manual or automated.
Usability Test: The purpose of this event is to review the application user interface
and other human factors of the application with the people who will be using the
application. This is to ensure that the design (layout and sequence, etc.) enables the
users to perform their work effectively and efficiently. It also
includes assuring that the user interface adheres to documented User Interface
standards.
Ideally, an application prototype is used to walk the client group through various
business scenarios, although paper copies of screens, windows, menus, and reports
can be used.
User Acceptance Test: User Acceptance Testing (UAT) is conducted to ensure that
the system meets the needs of the organization and the end user/customer. It
validates that the system will work as intended by the user in the real world, and is
based on real-world business scenarios, not system requirements. Essentially, this
test validates the fitness for use of the system
produced from a development project with respect to the user's needs and
requirements.
Verification:
(1) The process of determining whether the products of a given phase of the software
development cycle fulfill the requirements established during the previous phase.
(2) The act of reviewing, inspecting, or otherwise checking whether products conform
to requirements. Its focus is error detection, not correction, and it will usually use a
formal set of standards or criteria.
White-box Testing: A testing technique that assumes that the path of the logic in a
program unit or component is known. It is usually used during tests executed by the
development team, such as Unit or Component testing.
Technical Questions
Black box testing and White box testing are the basic types of testing that testers
perform. Apart from these, they also perform many other tests, such as Ad-Hoc
testing, Cookie Testing, CET (Customer Experience Test), Client-Server Testing,
Configuration Testing, Compatibility Testing, and Conformance Testing.
The primary need is to verify that the requirements are satisfied by the functionality,
and also to answer two questions:
• Is the system doing what it is supposed to do?
• Is the system not doing what it is not supposed to do?
6. What are the entry criteria for Functionality and Performance testing?
Functional testing: The application should be stable, with a clear design and flow, and
a baseline document is needed that builds the tester's understanding of the application
before actual testing starts: the Functional Specification and Business Requirement
Document.
7. Why do you go for White box testing, when Black box testing is available?
11. Tell the names of some testing types which you have learnt or experienced?
Any 5 or 6 types related to the company's profile are good to mention in the
interview, for example:
• Ad - Hoc testing
• Cookie Testing
• CET (Customer Experience Test)
• Depth Test
• Event-Driven
• Performance Testing
• Recovery testing
• Sanity Test
• Security Testing
• Smoke testing
• Web Testing
13. After completing testing, what would you deliver to the client?
Test deliverables, namely:
• Test Plan
• Test Data
• Test Design Documents (Conditions/Cases)
• Defect Reports
• Test Closure Documents
• Test Metrics
Before starting the actual testing, the elements that support the testing activity, such
as test data and data guidelines, are collectively called the test bed.
Data guidelines are used to specify the data required to populate the test bed and
prepare test scripts. They include all data parameters that are required to test the
conditions derived from the requirement/specification. The documents which support
preparing test data are called data guidelines.
When a test condition is executed, its result should be compared to the expected test
result; since test data is needed for this, here comes the role of the test bed, where
test data is made ready.
17. Can Automation testing replace manual testing? If it so, how?
Automated testing can never fully replace manual testing, as these tools follow the
GIGO (garbage in, garbage out) principle of computer tools and lack creativity and
innovative thinking. But automation speeds up the process, follows a clear process
which can be reviewed easily, and is better suited for regression testing of a manually
tested application and for performance testing.
"Quality is giving the user more confidence to use the system with all its expected
characteristics." It is usually described as a journey towards excellence.
• SQA is responsible for prevention of defects, while Testing detects defects.
• SQA is concerned with the process used to develop the product, whereas Testing is
concerned with the product developed.
• SQA involves Verification, while Testing involves Validation.
19. Why do we prepare test condition, test cases, test script (Before
Starting Testing)?
These are the test design documents used to execute the actual testing, without
which execution of testing is impossible. Finally, this execution is going to find the
bugs to be fixed, so we have to prepare these documents.
20. Is it not waste of time in preparing the test condition, test case & Test
Script?
No document prepared in any process is a waste of time, least of all test design
documents, which play a vital role in test execution; they can never be called a waste
of time, as without them proper testing cannot be done.
To approach web application testing, the first attack on the application should be
on its performance behavior, as that is very important for a web application, and then
on the transfer of data between the web server and front-end server, security server,
and back-end server.
22. What kind of Document you need for going for a Functional testing?
No. The system as a whole can be tested only if all modules are integrated and all
modules work correctly. System testing should be done before UAT (User Acceptance
Testing) and after Unit Testing.
Mutation testing is a powerful fault-based testing technique for unit level testing.
Since it is a fault-based testing technique, it is aimed at testing and uncovering some
specific kinds of faults, namely simple syntactic changes to a program. Mutation
testing is based on two assumptions: the competent programmer hypothesis and the
coupling effect. The competent programmer hypothesis assumes that competent
programmers tend to write nearly "correct" programs. The coupling effect states that
a set of test data that can uncover all simple faults in a program is also capable of
detecting more complex faults. Mutation testing injects faults into code to determine
optimal test inputs.
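The idea above can be sketched in a few lines (the function, the mutant, and the data are illustrative assumptions): a simple syntactic change is injected, and a set of test data is judged by whether it "kills" the mutant, i.e. distinguishes it from the original program.

```python
def original(salary):
    return salary >= 10000

def mutant(salary):
    # Injected fault: boundary operator mutated from >= to >,
    # a simple syntactic change of the kind mutation testing targets.
    return salary > 10000

test_data = [9999, 10000, 10001]

# The mutant is killed if any input yields a result different from the original.
killed = any(original(x) != mutant(x) for x in test_data)
assert killed  # the boundary value 10000 distinguishes the mutant
```

Test data that fails to kill mutants like this one is missing boundary values, which is exactly the signal mutation testing uses to improve test inputs.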
With any software other than the smallest and simplest program, there are too many
inputs, too many outputs, and too many path combinations to fully test. Also,
software specifications can be subjective and be interpreted in different ways.
Test Automation:
Automation testing tools are used for Regression and Performance testing.
Several problems are encountered while working with test automation tools.
Planning is the most important task in test automation; the Test Automation Plan
should cover all the required task items.
30. Can test automation improve test effectiveness?
Yes, definitely. Test automation plays a vital role in improving test effectiveness in
various ways.
Data-driven automation is an important part of test automation, where the
requirement is to execute the same test cases for different sets of test input data, so
that the test can be executed for pre-defined iterations with a different set of test
input data for each iteration.
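A minimal sketch of the data-driven pattern (the function under test and the data rows are assumptions for illustration): the same test logic runs once per row, each iteration with its own input and expected result.

```python
# Hypothetical function under test.
def salary_is_valid(salary):
    return 10000 <= salary <= 15000

# Each row drives one iteration: (test input, expected result).
test_rows = [
    (9000, False),
    (10000, True),
    (15000, True),
    (16000, False),
]

# Same test case, executed once per data row.
for salary, expected in test_rows:
    actual = salary_is_valid(salary)
    assert actual == expected, f"failed for input {salary}"
```

In practice the rows would come from an external source such as a spreadsheet or database, which is what lets testers add iterations without changing the script.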
Here are some of the attributes of test automation that can be measured,
Maintainability
• Definition: The effort needed to update the test automation suites for each
new release.
• Possible measurements: The possible measurements can be e.g. the average
work effort in hours to update a test suite.
Reliability
Flexibility
• Definition: The ease of working with all the different kinds of automation
testware.
• Possible measurements: The time and effort needed to identify, locate,
restore, combine, and execute the different test automation testware.
Efficiency
• Definition: The total cost related to the effort needed for the automation.
• Possible measurements: Monitoring over time the total cost of automated
testing, i.e. resources, material, etc.
Portability
• Definition: The ability of the automated test to run on different environments.
• Possible measurements: The effort and time needed to set-up and run test
automation in a new environment.
Robustness
Usability
We cannot actually replace manual testing 100% using automation, but it can
definitely replace almost 90% of the manual test effort if the automation is done
efficiently.
35. How one will evaluate the tool for test automation?
f. Tool’s Compatibility with our Application Architecture and Development
Technologies.
g. Tool Configuration & Deployment Requirements.
h. Tools Limitations Analysis.
While using test automation, there are various factors that can affect the testing
process.
39. What testing activities one may want to automate?
In test automation we come across several problems, of which I would like to
highlight a few, as given below:
41. What are the types of scripting techniques for test automation ?
d. Techniques to Generalize the Scripts.
e. Increasing the factor of Reusability of the Script.
43. What tools are available for support of testing during software
development life cycle?
Test Director for test management and Bugzilla for bug tracking and notification are
examples of tools that support testing.
44. Can the activities of test case design be automated?
Yes. Test Director is one such tool, which has features for test case design and
execution.
45. What are the limitations of automating software testing?
To mention a few limitations of automating software testing:
a. Automation needs a lot of time in the initial stage.
b. Every tool has its own limitations with respect to protocol support,
supported technologies, object recognition, supported platforms, etc.,
due to which not 100% of the application can be automated; there is
always something beyond the tool's capability, which we have to
overcome with R&D.
c. The tool's memory utilization is also an important factor: it blocks
the application's memory resources and creates problems for the
application in a few cases, such as Java applications.