Software Testing

Software testing notes

UNIT -1

SOFTWARE TESTING METHODOLOGIES


Software Testing: Software testing is the process of verifying and validating
that a software application is bug-free, meets the technical requirements set
by its design and development, and meets the user requirements effectively
and efficiently by handling all exceptional and boundary cases.
The process of software testing aims not only at finding faults in the existing
software but also at finding measures to improve the software in terms of
efficiency, accuracy, and usability. It mainly aims at measuring the specification,
functionality, and performance of a software program or application.
Software testing can be divided into two steps:
1. Verification: It refers to the set of tasks that ensure that the software
correctly implements a specific function.
2. Validation: It refers to a different set of tasks that ensure that the software
that has been built is traceable to customer requirements.
Myths:
Myth 1: Testing is Too Expensive
Reality − There is a saying: pay less for testing during software development, or pay more for
maintenance and correction later. Early testing saves both time and cost in many respects;
however, cutting costs by skipping testing may result in the improper design of a software
application, rendering the product useless.

Myth 2: Testing is Time-Consuming


Reality − During the SDLC phases, testing itself is never the time-consuming part. However,
diagnosing and fixing the errors identified during proper testing is a time-consuming but
productive activity.

Myth 3: Only Fully Developed Products are Tested


Reality − No doubt, testing depends on the source code, but reviewing requirements and
developing test cases is independent of the developed code. However, an iterative or
incremental development life cycle model may reduce the dependency of testing on the
fully developed software.

Myth 4: Complete Testing is Possible


Reality − It becomes an issue when a client or tester believes that complete testing is possible.
The team may have tested every path it identified, but complete testing is never achievable.
There may be scenarios that are never executed by the test team or the client during the
software development life cycle and are executed only once the project has been deployed.

Myth 5: A Tested Software is Bug-Free


Reality − This is a very common myth that clients, project managers, and the management
team believe in. No one can claim with absolute certainty that a software application is 100%
bug-free, even if a tester with superb testing skills has tested the application.

Goals of Software Testing:


The goals of software testing may be classified into three major categories as
follows:
1. Immediate Goals
2. Long-term Goals
3. Post-Implementation Goals

1.Immediate Goals: These objectives are the direct outcomes of testing.


These objectives may be set at any time during the SDLC process. Some of
these are covered in detail below:
 Bug Discovery: The immediate goal of software testing is to find errors at
any stage of software development. Most bugs should be discovered in the
early stages of testing: the higher the number of issues detected at an
early stage, the higher the software testing success rate.

 Bug Prevention: This follows directly from bug discovery. From the
behavior and analysis of the issues detected, everyone in the software
development team learns how to code better, ensuring that bugs are not
repeated in subsequent phases or future projects.
2. Long-Term Goals: These objectives have an impact on product quality in the long
run after one cycle of the SDLC is completed. Some of these are covered in detail
below:
 Quality: This goal enhances the quality of the software product. Because
software is also a product, the user’s priority is its quality. Superior
quality is ensured by thorough testing. Correctness, integrity, efficiency,
and reliability are all aspects that influence quality. To attain quality, you
must achieve all of the above-mentioned quality characteristics.
 Customer Satisfaction: This goal verifies the customer’s satisfaction with
a developed software product. The primary purpose of software testing,
from the user’s standpoint, is customer satisfaction. Testing should be
extensive and thorough if we want the client and customer to be happy
with the software product.
 Reliability: It is a matter of confidence that the software will not fail. In
short, reliability means gaining the confidence of the customers by
providing them with a quality product.
 Risk Management: Risk is the probability of occurrence of uncertain
events in the organization and the potential loss that could result in
negative consequences. Risk management must be done to reduce the
failure of the product and to manage risk in different situations.
3. Post-Implemented Goals: After the product is released, these objectives become
critical. Some of these are covered in detail below:
 Reduce Maintenance Cost: Post-released errors are costlier to fix and
difficult to identify. Because effective software does not wear out, the
maintenance cost of any software product is not the same as the physical
cost. The failure of a software product due to faults is the only expense of
maintenance. Because they are difficult to discover, post-release
mistakes always cost more to rectify. As a result, if testing is done
thoroughly and effectively, the risk of failure is lowered, and
maintenance costs are reduced as a result.
 Improved Software Testing Process: These goals improve the testing
process for future use or software projects. These goals are known as
post-implementation goals. A project’s testing procedure may not be
completely successful, and there may be room for improvement. As a
result, the bug history and post-implementation results can be evaluated
to identify stumbling blocks in the current testing process that can be
avoided in future projects.

Psychology of Testing in Software Testing:

Software development, including software testing, involves human beings. Therefore, human
psychology has an important effect on software testing.
In software testing, psychology plays an extremely important role. It is one of those factors
that stay behind the scenes but have a great impact on the end result. Categorized into three
sections, the psychology of testing enables smooth testing and makes the process
hassle-free. It depends mainly on the mindset of the developers and testers, as well as
the quality of communication between them. Moreover, the psychology of testing improves
mutual understanding among team members and helps them work towards a common goal.
The three sections of the psychology of testing are:

 The mindset of developers and testers.

 Communication in a constructive manner.

 Test independence.

 Mindset of Developers and Testers: The software development life cycle is a combination
of various activities, performed by different individuals using their expertise and knowledge.
It is no secret that accomplishing the successful development of software requires people
with different skills and mindsets. From coding to software implementation, engineers with
different skill sets are required to perform the tasks in which they are qualified. Since
testing and analyzing software is very different from developing or programming it,
developing software with unique features and superior quality clearly requires developers
and testers with different mindsets. Moreover, the introduction of new development and
testing techniques and methodologies has made this a necessary requirement.

 Communication in a Constructive Manner: Any process, whether technical or non-technical,
requires constant communication between team members to successfully achieve their goals
and aims. Communication, if done in a polite and courteous manner, helps build a strong
and reliable relationship between team members and helps them avoid misunderstandings.
Similarly, during the process of testing, the need for constructive communication is
extremely high. As testers are responsible for finding bugs and reporting them, it becomes
vital for them to communicate in a courteous, polite, and suitable way. Finding and reporting
defects can be an achievement for the tester but is often merely an inconvenience for
programmers, designers, developers, and other stakeholders of the project. Because testing
can be perceived as a destructive activity, it is important for software testers to report
defects and failures as objectively and politely as possible, to avoid hurting or upsetting
other members of the project.

 Test Independence: Independent software testing is comparatively more valuable than
self-testing or group testing, as it helps avoid author bias and is often more effective at
finding and detecting bugs and defects. This type of testing is mainly performed by
individuals who are not directly related to the project, or who are from a different
organization and are hired mainly to test the quality and effectiveness of the developed
product. Test independence therefore helps developers and other stakeholders get more
accurate test results, which helps them build a better software product with innovative
and unique features and functionality.

 Model Based Testing:

 Model-based testing is a testing technique in which the test cases are
derived from a model that describes the expected behavior of the system.
The test cases may be generated from the model either online or offline.
 The model is built by considering the functional behavior of the system.
Unit testing alone is not sufficient to check the full functionality of the
software, which is why model-based testing is used.
 Model-based testing checks whether the test cases perform actions in the
expected sequence. The technique is adopted and integrated with other
testing techniques, and a number of commercial tools have been developed
to support it nowadays.
 In this technique, the software's behavior is checked at run time against
the predictions made by the model. The behavior of a system is essentially
based on its actions, sequences, conditions, and the input-output flow of
its processes. When the technique is implemented in practice, the model
should capture the concepts that are shared across the system, and it
should do so in a very precise manner.
The system's behavior can be modeled in different ways, such as data flow,
control flow, state transition machines, decision tables, and dependency
graphs. Model-based testing is generally classified as online (on-the-fly),
where test suites are generated during execution, or offline (a priori),
where test suites are generated before execution.
Real Case Scenario of a Model:
Consider a user entering a web application through a sign-in page that offers
three options: sign in, forgot password, and reset password. For a single user,
these three fields are the paths into the home page. When modeling multiple
users, permutations and combinations of these paths can be used to test the
product, so state transition diagrams are used to capture the users' possible
journeys through the application.
Multiple states with multiple transitions make it possible to reduce the
complexity of the task through these permutation and combination techniques.
Test cases and state transition diagrams can be validated and generated
automatically, providing better solutions when many users are queued waiting
for access to the modeled system.
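As a concrete illustration of offline test-suite generation from a state-transition model, the sketch below derives abstract test cases (action sequences) from a small, hypothetical sign-in model. The states, action names, and depth limit are all invented for the example, not taken from a real application:

```python
# A minimal sketch of offline model-based test generation.
# The model maps each state to the actions available there: state -> {action: next state}
MODEL = {
    "Start":         {"sign_in": "Home", "forgot_password": "ResetPassword"},
    "ResetPassword": {"submit_new_password": "Start"},
    "Home":          {"log_out": "Start"},
}

def generate_paths(state, depth):
    """Yield every action sequence of length <= depth starting at `state`.
    Each sequence is one abstract test case derived from the model."""
    for action, nxt in MODEL.get(state, {}).items():
        yield (action,)
        if depth > 1:
            for tail in generate_paths(nxt, depth - 1):
                yield (action,) + tail

# Offline ("a priori") generation: the suite exists before any execution.
suite = sorted(generate_paths("Start", 3))
for case in suite:
    print(" -> ".join(case))
```

An online ("on-the-fly") variant would instead choose the next action while the system under test is running, checking each observed response against the model's prediction.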

Difference between Effective Testing and Exhaustive Testing:

Sr. No.  Effective Testing                               Exhaustive Testing

1        Effective testing emphasizes efficient          Exhaustive (complete) testing means that every
         techniques to test the software so that         statement in the program and every possible
         important features are tested within the        path combination, with every possible
         constrained resources.                          combination of data, must be executed.

2        It is a practical method.                       It is not possible to perform complete testing.

3        It is feasible because: a) it checks for        It is not feasible because of: a) deadlines,
         software reliability and absence of bugs in     b) the number of possible outputs, c) timing
         the final product, b) it tests in each phase,   constraints, d) the number of possible test
         c) it uses constrained resources.               environments.

4        It is cost-effective.                           It is not cost-effective.

5        It is less complex and less time-consuming.     It is complex and time-consuming.

6        It is adopted such that critical test cases     It covers all the test cases.
         are addressed first.
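The infeasibility of exhaustive testing can be made concrete with a little arithmetic; the rate of one billion tests per second used below is an optimistic assumption:

```python
# Even a function taking only two 32-bit integer inputs has 2**64 input
# combinations. At an (optimistic) billion tests per second, executing
# them all would take several centuries.

combinations = 2 ** 32 * 2 ** 32        # every pair of 32-bit inputs
tests_per_second = 1_000_000_000        # assumed execution rate
seconds = combinations / tests_per_second
years = seconds / (60 * 60 * 24 * 365)

print(f"{combinations} combinations -> about {years:.0f} years")
```

This is why effective testing selects a small, high-value subset of cases (e.g. boundaries and critical paths) rather than attempting every combination.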

Software Testing Terminologies:


Testing is the process of finding the yet-undiscovered bugs and errors in a program. The tester
exercises the program to find the errors and bugs that could hamper its normal functioning.

Types of Testing Techniques / Terminologies

 Unit Testing – Individual units or components of the software are tested in isolation. The
purpose of unit testing is to verify that each software unit performs as per the expected
requirements and functionality. It may be termed first-level testing.
 Integration Testing - It refers to testing in which the individual software components or
modules are combined together and tested as a group. It is used to see whether there is any
fault in the integrated units. It is normally performed after the unit testing.
 System Testing – The testing is performed on the system as a whole and to see whether the
system is working as per the requirements and functionality specified. It takes the
components of the integration testing as input and performs the testing.
 Regression Testing – This testing is performed to check whether the changes to the software
do not produce undesirable results. It is used to ensure that the previously developed
software components and tested components still perform correctly after the changes. It
also depicts that the changes have not broken any functionality.
 UAT – User Acceptance testing – It is the testing done by the end-user or client to
accept the software system before moving the software to the production
environment. It is used to check the functionality of the software end to end related
to the business flow.
The testers should possess good knowledge of the software or application. No critical bug or
error should be left in the application.

 Black Box Testing – In this type of testing, the emphasis is on the functionality of the
application. The code for the tester is like a black box. He concentrates on the
functionality rather than the internal structures of the application. It is used to check
the application from the user’s perspective and per requirements stated by the
customer.
 White Box Testing - In this type of testing, the emphasis is given on the internal
structure of the program in the application/software, code structure, internal design
to verify the input and output of the process in the software.
 Grey Box Testing – This software testing method is a combination of Black Box
testing and White Box Testing. It includes the testing of internal coding structure so
involves the inputs from both developer and tester to ensure a quality product.
 Ad-hoc Testing – It is also one of the terms used in Software Testing. The testing is
performed without planning and other formal documentation. It is normally
intended to run once. This testing is carried out on a random basis to check the
functionality of the application.
 Exploratory Testing – In this testing, the tester explores the application to discover
what it does, or its purpose. The tester uses his experience and knowledge to check the
application and try to find errors and bugs. It can be described as learning along with
test execution.
 Alpha Testing – This testing is performed before the final product is released to the
end-users or the customer. This testing is basically performed by the QA Testers
internally to the organization or you can say performed at the developer’s site. The
Bugs can be addressed by the developer immediately and can be fixed.
 Beta Testing – This testing is performed by the end-users or at the customer's place
who are not part of the organization. It is normally performed at the client’s place.
The issues collected from the testing will be implemented for future versions of the
product or application. The user inputs are gathered on the product and ensure it is
ready for the customers.
 Automated Testing – In this testing, test cases are executed using an automation tool.
Test data is prepared, test cases are executed using the required data, and test scripts
are created for testing the application.
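As a concrete sketch of the unit testing described above, here is a minimal test class written with Python's built-in unittest framework. The `discounted_price` function and its rules are invented purely for the example:

```python
import unittest

def discounted_price(price, percent):
    """Apply a percentage discount; invented here purely for illustration."""
    if not 0 <= percent <= 100:
        raise ValueError("discount must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class TestDiscountedPrice(unittest.TestCase):
    def test_normal_discount(self):
        self.assertEqual(discounted_price(200.0, 25), 150.0)

    def test_boundary_values(self):
        # exceptional and boundary cases: 0% and 100% discount
        self.assertEqual(discounted_price(80.0, 0), 80.0)
        self.assertEqual(discounted_price(80.0, 100), 0.0)

    def test_invalid_discount_rejected(self):
        with self.assertRaises(ValueError):
            discounted_price(80.0, 120)

if __name__ == "__main__":
    # exit=False keeps the interpreter alive after the run
    unittest.main(argv=["example"], exit=False, verbosity=2)
```

Each test method exercises one unit of behavior in isolation, which is what distinguishes unit testing from the integration and system testing described above.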
 Software Testing Life Cycle (STLC):

 The Software Testing Life Cycle (STLC) is a systematic approach to testing a


software application to ensure that it meets the requirements and is free of
defects. It is a process that follows a series of steps or phases, and each phase
has specific objectives and deliverables. The STLC is used to ensure that the
software is of high quality, reliable, and meets the needs of the end-users.
 The main goal of the STLC is to identify and document any defects or issues in
the software application as early as possible in the development process. This
allows for issues to be addressed and resolved before the software is released
to the public.
 The stages of the STLC include Test Planning, Test Analysis, Test Design, Test
Environment Setup, Test Execution, Test Closure, and Defect Retesting. Each
of these stages includes specific activities and deliverables that help to ensure
that the software is thoroughly tested and meets the requirements of the end
users.
Overall, the STLC is an important process that helps to ensure the quality of
software applications and provides a systematic approach to testing. It allows
organizations to release high-quality software that meets the needs of their
customers, ultimately leading to customer satisfaction and business success.
Characteristics of STLC
 STLC is a fundamental part of the Software Development Life Cycle (SDLC) but
STLC consists of only the testing phases.
 STLC starts as soon as requirements are defined or software requirement
document is shared by stakeholders.
 STLC yields a step-by-step process to ensure quality software.
In the initial stages of STLC, while the software product or the application is being
developed, the testing team analyzes and defines the scope of testing, entry and exit
criteria, and also test cases. It helps to reduce the test cycle time and also enhances
product quality. As soon as the development phase is over, the testing team is ready
with test cases and starts the execution. This helps in finding bugs in the early phase.
Phases of STLC
1. Requirement Analysis: Requirement Analysis is the first step of the Software
Testing Life Cycle (STLC). In this phase, the quality assurance team understands
the requirements, i.e., what is to be tested. If anything is missing or not
understandable, the quality assurance team meets with the stakeholders to gain
detailed knowledge of the requirements.
2. Test Planning: Test Planning is the phase of the software testing life cycle
where all testing plans are defined. In this phase, the manager of the testing
team calculates the estimated effort and cost for the testing work. This phase
starts once the requirement-gathering phase is completed.
3. Test Case Development: The test case development phase starts once test
planning is completed. In this phase, the testing team writes down the detailed
test cases. The testing team also prepares the required test data for the
testing. Once the test cases are prepared, they are reviewed by the quality
assurance team.
4. Test Environment Setup: Test environment setup is a vital part of the STLC.
Basically, the test environment decides the conditions under which the software
is tested. This is an independent activity and can be started along with test
case development. The testing team is not involved in this process; either the
developer or the customer creates the testing environment.
5. Test Execution: After test case development and test environment setup, the
test execution phase starts. In this phase, the testing team executes the test
cases prepared in the earlier step.
6. Test Closure: Test closure is the final stage of the Software Testing Life Cycle (STLC)
where all testing-related activities are completed and documented. The main
objective of the test closure stage is to ensure that all testing-related activities have
been completed and that the software is ready for release.
At the end of the test closure stage, the testing team should have a clear
understanding of the software’s quality and reliability, and any defects or issues that
were identified during testing should have been resolved. The test closure stage also
includes documenting the testing process and any lessons learned so that they can
be used to improve future testing processes.
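The test case development, execution, and closure phases above can be sketched with a minimal, hypothetical test-case structure; the field names and the `add` function under test are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    case_id: str
    description: str
    test_data: dict
    expected: object

# System under test -- a stand-in function invented for this sketch.
def add(a, b):
    return a + b

# Test case development: cases and their test data are prepared up front.
cases = [
    TestCase("TC-01", "adds two positives", {"a": 2, "b": 3}, 5),
    TestCase("TC-02", "adds a negative",    {"a": 2, "b": -7}, -5),
]

# Test execution: run each prepared case and record pass/fail.
results = {c.case_id: add(**c.test_data) == c.expected for c in cases}

# Test closure: summarize the outcome of the cycle.
passed = sum(results.values())
print(f"{passed}/{len(results)} test cases passed")
```

In a real project the same separation holds: the case inventory is reviewed before execution, and the pass/fail summary feeds the closure report.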

Software Testing Techniques:


Software testing techniques are methods used to design and execute tests to
evaluate software applications. The following are common testing techniques:
1. Manual testing – Involves manual inspection and testing of the software
by a human tester.
2. Automated testing – Involves using software tools to automate the
testing process.
3. Functional testing – Tests the functional requirements of the software to
ensure they are met.
4. Non-functional testing – Tests non-functional requirements such as
performance, security, and usability.
5. Unit testing – Tests individual units or components of the software to
ensure they are functioning as intended.
6. Integration testing – Tests the integration of different components of the
software to ensure they work together as a system.
7. System testing – Tests the complete software system to ensure it meets
the specified requirements.
8. Acceptance testing – Tests the software to ensure it meets the customer’s or
end-user’s expectations.
9. Regression testing – Tests the software after changes or modifications have
been made to ensure the changes have not introduced new defects.
10. Performance testing – Tests the software to determine its performance
characteristics such as speed, scalability, and stability.
11. Security testing – Tests the software to identify vulnerabilities and ensure it
meets security requirements.
12. Exploratory testing – A type of testing where the tester actively explores the
software to find defects, without following a specific test plan.
13. Boundary value testing – Tests the software at the boundaries of input
values to identify any defects.
14. Usability testing – Tests the software to evaluate its user-friendliness and
ease of use.
15. User acceptance testing (UAT) – Tests the software to determine if it meets
the end-user’s needs and expectations.
Software testing techniques are the means employed to test the application
under test against the functional or non-functional requirements gathered from
the business. Each testing technique helps find a specific type of defect; for
example, techniques that find structural defects might not be able to find defects
in the end-to-end business flow. Hence, multiple testing techniques are
applied in a testing project to conclude it with acceptable quality.
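Boundary value testing (technique 13 above) can be sketched as follows; the valid age range of 18 to 60 and the validator function are hypothetical, chosen only to show the technique:

```python
# Boundary value analysis: for an input valid over [lo, hi], test the
# values just inside and just outside each boundary, where defects cluster.

def boundary_values(lo, hi):
    """Classic boundary picks: lo-1, lo, lo+1, hi-1, hi, hi+1."""
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

def is_valid_age(age, lo=18, hi=60):
    # Hypothetical validator under test.
    return lo <= age <= hi

for v in boundary_values(18, 60):
    print(v, "->", "accept" if is_valid_age(v) else "reject")
```

Six well-chosen values exercise both boundaries of the range, illustrating how effective testing covers the riskiest inputs without enumerating every possible one.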

VERIFICATION AND VALIDATION:


Verification and Validation Testing
In this section, we will learn about verification and validation testing and their major
differences.
Verification testing
Verification testing includes different activities such as reviews of business requirements,
system requirements, and design, and code walkthroughs while developing a product.

It is also known as static testing, where we ensure that "we are building the product right",
i.e. that the application under development fulfils all the requirements given by the client.

Validation testing
Validation testing is testing where the tester performs functional and non-functional testing.
Here, functional testing includes Unit Testing (UT), Integration Testing (IT), and System
Testing (ST), and non-functional testing includes User Acceptance Testing (UAT).

Validation testing is also known as dynamic testing, where we ensure that "we have
developed the right product", i.e. that the software meets the business needs of the client.
Difference between verification and validation testing

Verification                                             Validation

We check whether we are building the product right       We check whether we have built the right product
(according to the specifications).                       (one that meets the user's needs).

Verification is also known as static testing.            Validation is also known as dynamic testing.

Verification includes methods like inspections,          Validation includes testing like functional testing,
reviews, and walkthroughs.                               system testing, integration testing, and user
                                                         acceptance testing.

It is a process of checking the work-products (not       It is a process of checking the software during or at
the final product) of a development cycle to decide      the end of the development cycle to decide whether
whether the product meets the specified                  the software follows the specified business
requirements.                                            requirements.

Quality assurance comes under verification testing.      Quality control comes under validation testing.

The execution of code does not happen in                 In validation testing, the execution of code happens.
verification testing.

In verification testing, we can find bugs early in       In validation testing, we can find those bugs that
the development phase of the product.                    were not caught in the verification process.

Verification testing is executed by the quality          Validation testing is executed by the testing team to
assurance team to make sure that the product is          test the application.
developed according to customers' requirements.

Verification is done before validation testing.          After verification testing, validation testing takes
                                                         place.

In this type of testing, we verify whether the           In this type of testing, we validate whether the user
outputs follow from the inputs or not.                   accepts the product or not.

Verification Testing:
Verification is the process of evaluating the work-products of a development phase to
determine whether they meet the specified requirements.
Verification ensures that the product is built according to the requirements and design
specifications. It also answers the question: Are we building the product right?

Verification Testing - Workflow:


Verification testing can best be demonstrated using the V-Model. Artefacts such as test
plans, requirement specifications, design, code, and test cases are evaluated.

Verification of Requirements:
In this type of verification, all the requirements gathered from the user's viewpoint are
verified. For this purpose, an acceptance criterion is prepared. An acceptance criterion
defines the goals and requirements of the proposed system and acceptance limits for each
of the goals and requirements. The acceptance criteria matter the most in the case of
real-time systems, where performance is a critical issue in certain events. For example,
if a real-time system has to process and take a decision for a weapon within x seconds,
then any performance below this would lead to rejection of the system. Therefore, it can
be concluded that the acceptance criteria for a requirement must be defined by the
designers of the system and should not be overlooked, as overlooked criteria can create
problems while testing the system.
The tester works in parallel by performing the following two tasks:

1. The tester reviews the acceptance criteria in terms of completeness, clarity, and
testability. Moreover, the tester understands the proposed system well in advance
so that the necessary resources can be planned for the project.
2. The tester prepares the Acceptance Test Plan, which is referred to at the time of
Acceptance Testing.

Verification of Objectives
After gathering requirements, specific objectives are prepared considering every
specification. These objectives are prepared in a document called software requirement
specification (SRS). In this activity also, two parallel activities are performed by the tester:

1. The tester verifies all the objectives mentioned in the SRS. The purpose of this
verification is to ensure that the user's needs are properly understood before
proceeding with the project.
2. The tester also prepares the System Test Plan, which is based on the SRS. This plan
will be referenced at the time of System Testing.

In verifying the requirements and objectives, the tester must consider both functional and
non- functional requirements. Functional requirements may be easy to comprehend,
whereas non-functional requirements pose a challenge to testers in terms of understanding,
quantifying, test planning, and test execution.
How to Verify Requirements and Objectives
Requirement and objective verification has a high potential for detecting bugs, so
requirements must be verified. As stated above, testers use the SRS for verification of
objectives. One characteristic of a good SRS is that it can be verified. An SRS can be
verified if, and only if, every requirement stated in it can be verified; and a requirement
can be verified if, and only if, there is some procedure to check that the software meets
that requirement. It is a good idea to specify requirements in a quantified manner:
ambiguous statements or terms such as 'good quality', 'usually', and 'may happen' should
be avoided, and quantified specifications should be provided instead. An example of a
verifiable statement is 'the system shall respond to a query within 2 seconds', as
opposed to the unverifiable 'the system shall respond quickly'.

High Level Design and Low Level Design:


1. High Level Design :
High Level Design (HLD) is the general system design; it refers to the overall
system design. It describes the overall description/architecture of the
application, including the system architecture, database design, a brief
description of the systems, services, platforms, and the relationships among
modules. It is also known as macro-level/system design. It is created by the
solution architect, who converts the business/client requirements into a
high-level solution. It is created first, before the Low Level Design.
2. Low Level Design :
Low Level Design (LLD) is a detailing of the HLD; it refers to the component-
level design process. It gives a detailed description of each and every module,
including the actual logic for every system component, and it goes deep into
each module's specification. It is also known as micro-level/detailed design.
It is created by designers and developers, who convert the high-level solution
into a detailed solution. It is created second, after the High Level Design.
Difference between High Level Design and Low Level Design :
S.No. | HIGH LEVEL DESIGN | LOW LEVEL DESIGN
01. | High Level Design is the general system design, i.e. the overall system design. | Low Level Design is like detailing HLD, i.e. the component-level design process.
02. | High Level Design is called HLD in short. | Low Level Design is called LLD in short.
03. | It is also known as macro level/system design. | It is also known as micro level/detailed design.
04. | It describes the overall description/architecture of the application. | It describes a detailed description of each and every module.
05. | High Level Design expresses the brief functionality of each module. | Low Level Design expresses the detailed functional logic of the module.
06. | It is created by the solution architect. | It is created by designers and developers.
07. | In HLD the participants are the design team, review team, and client team. | In LLD the participants are the design team, operation teams, and implementers.
08. | It is created first, i.e. before Low Level Design. | It is created second, i.e. after High Level Design.
09. | In HLD the input criteria is the Software Requirement Specification (SRS). | In LLD the input criteria is the reviewed High Level Design (HLD).
10. | High Level Design converts the business/client requirements into a high-level solution. | Low Level Design converts the high-level solution into a detailed solution.
11. | In HLD the output criteria are the database design, functional design, and review record. | In LLD the output criteria are the program specification and unit test plan.

Code Verification:
Code verification is the process used for checking the software code for errors introduced in
the coding phase. The objective of the code verification process is to check the software code in
all aspects. This process includes checking the consistency of user requirements with the
design phase. Note that the code verification process does not concentrate on proving the
correctness of programs. Instead, it verifies whether the software code has been translated
according to the requirements of the user.
The code verification techniques are classified into two categories, namely, dynamic and
static. The dynamic technique is performed by executing some test data. The outputs of the
program are tested to find errors in the software code. This technique follows the
conventional approach for testing the software code. In the static technique, the program is
executed conceptually and without any data. In other words, the static technique does not
use any traditional approach as used in the dynamic technique. Some of the commonly used
static techniques are code reading, static analysis, symbolic execution, and code inspection
and reviews.
Code Reading:

Code reading is a technique that concentrates on how to read and understand
a computer program. It is essential for a software developer to know code reading. The
process of reading a software program in order to understand it is known as code reading or
program reading. In this process, attempts are made to understand the documents,
software specifications, or software designs. The purpose of reading programs is to
determine the correctness and consistency of the code. In addition, code reading is
performed to enhance the software code without entirely changing the program, or with
minimal disruption in the current functionality of the program. Code reading also aims at
inspecting the code and removing (fixing) errors from it.
Code reading is a passive process and needs concentration. An effective code reading activity
primarily focuses on reviewing ‘what is important’. The general conventions that can be
followed while reading the software code are listed below.

 Figure out what is important: While reading the code, emphasis should be on finding
graphical techniques (bold, italics) or positions (beginning or end of the section).
Important comments may be highlighted in the introduction or at the end of the
software code. The level of details should be according to the requirements of the
software code.
 Read what is important: Code reading should be done with the intent to check syntax
and structure such as brackets, nested loops, and functions rather than the non-
essentials such as name of the software developer who has written the software code.

Validation Testing:
The process of evaluating software during the development process or at the end of the
development process to determine whether it satisfies specified business requirements.
Validation Testing ensures that the product actually meets the client's needs. It can also be
defined as demonstrating that the product fulfills its intended use when deployed in an
appropriate environment.
It answers the question: Are we building the right product?

Validation Testing - Workflow:


Validation testing can be best demonstrated using V-Model. The Software/product under test
is evaluated during this type of testing.

UNIT-2
DYNAMIC TESTING-BLACKBOX TESTING TECHNIQUES:
Boundary Value Analysis:
Boundary Value Analysis is based on testing the boundary values of valid and invalid
partitions. The behavior at the edge of the equivalence partition is more likely to be
incorrect than the behavior within the partition, so boundaries are an area where
testing is likely to yield defects.
It checks for the input values near the boundary that have a higher chance of error.
Every partition has its maximum and minimum values and these maximum and
minimum values are the boundary values of a partition.
A boundary value for a valid partition is a valid boundary value.
 A boundary value for an invalid partition is an invalid boundary value.
 For each variable we check-
 Minimum value.
 Just above the minimum.
 Nominal Value.
 Just below Max value.
 Max value.
Example: Consider a system that accepts ages from 18 to 56.
Boundary Value Analysis(Age accepts 18 to 56)

Invalid (min - 1) | Valid (min, min + 1, nominal, max - 1, max) | Invalid (max + 1)
17 | 18, 19, 37, 55, 56 | 57

Valid Test cases: Valid test cases for the above can be any value entered greater
than 17 and less than 57.
 Enter the value- 18.
 Enter the value- 19.
 Enter the value- 37.
 Enter the value- 55.
 Enter the value- 56.
Invalid Test cases: When any value less than 18 or greater than 56 is entered.
 Enter the value- 17.
 Enter the value- 57.
Single Fault Assumption: When more than one variable for the same application is
checked, one can use the single fault assumption: hold all but one variable at their
nominal values and allow the remaining variable to take its extreme values.
For n variables to be checked:
Maximum of 4n + 1 test cases
Problem: Consider a program for determining the Previous Date.
Input: Day, Month, Year with valid ranges as-
1 ≤ Month≤12
1 ≤ Day ≤31
1900 ≤ Year ≤ 2000
Design Boundary Value Test Cases.
Solution: Taking the year as a Single Fault Assumption i.e. year will be having values
varying from 1900 to 2000 and others will have nominal values.
Test Cases Month Day Year Output

1 6 15 1990 14 June 1990

2 6 15 1901 14 June 1901

3 6 15 1960 14 June 1960

4 6 15 1999 14 June 1999

5 6 15 2000 14 June 2000

Taking Day as Single Fault Assumption i.e. Day will be having values varying from 1
to 31 and others will have nominal values.

Test Case Month Day Year Output

6 6 1 1960 31 May 1960

7 6 2 1960 1 June 1960

8 6 30 1960 29 June 1960

9 6 31 1960 Invalid day

Taking Month as Single Fault Assumption i.e. Month will be having values varying
from 1 to 12 and others will have nominal values.

Test Case Month Day Year Output

10 1 15 1960 14 Jan 1960

11 2 15 1960 14 Feb 1960


Test Case Month Day Year Output

12 11 15 1960 14 Nov 1960

13 12 15 1960 14 Dec 1960

For n variables to be checked, a maximum of 4n + 1 test cases will be required.

Therefore, for n = 3, the maximum number of test cases is
4 × 3 + 1 = 13
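The 4n + 1 case generation can be sketched in Python. Note this is an illustrative helper: the nominal is taken as the integer midpoint of each range, so it produces day 16 and year 1950 rather than the hand-picked nominals (15 and 1960) used in the tables above.

```python
def single_fault_test_cases(ranges):
    """ranges: dict of variable name -> (min, max).
    Single fault assumption: hold every variable at its nominal value
    and let one variable at a time take min, min+1, max-1, max.
    Yields 4n + 1 cases, including the single all-nominal case."""
    nominal = {v: (lo + hi) // 2 for v, (lo, hi) in ranges.items()}
    cases = [dict(nominal)]                      # the all-nominal case
    for var, (lo, hi) in ranges.items():
        for extreme in (lo, lo + 1, hi - 1, hi):
            case = dict(nominal)
            case[var] = extreme
            cases.append(case)
    return cases

cases = single_fault_test_cases({"month": (1, 12), "day": (1, 31), "year": (1900, 2000)})
print(len(cases))  # 13 = 4 * 3 + 1
```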
The focus of BVA: BVA focuses on the input variable of the function. Let’s define
two variables X1 and X2, where X1 lies between a and b and X2 lies between c and
d.

Showing legitimate domain

The idea and motivation behind BVA are that errors tend to occur near the
extremes of the variables. The defect on the boundary value can be the result of
countless possibilities.
Typing of Languages: BVA is not suitable for free-form languages such as COBOL
and FORTRAN; these languages are known as weakly typed languages. This can be
useful, but it can also cause bugs.
PASCAL and ADA are strongly typed languages that require all constants and variables
to be defined with an associated data type.
Limitations of Boundary Value Analysis:
 It works well only when the program to be tested is a function of several
independent variables that represent bounded quantities.
 It cannot consider the nature of the functional dependencies of variables.
 BVA is quite rudimentary.
Equivalence Partitioning:
It is a type of black-box testing that can be applied to all levels of software testing.
In this technique, input data are divided into the equivalent partitions that can be
used to derive test cases-
 In this input data are divided into different equivalence data classes.
 It is applied when there is a range of input values.
Example: Below is the example to combine Equivalence Partitioning and Boundary
Value.
Consider a field that accepts a minimum of 6 characters and a maximum of 10
characters. Then the partition of the test cases ranges 0 – 5, 6 – 10, 11 – 14.

Test Scenario Test Description Expected Outcome

1 Enter value 0 to 5 character Not accepted

2 Enter 6 to 10 character Accepted

3 Enter 11 to 14 character Not Accepted
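The three test scenarios above can be checked against a minimal sketch of such a field validator (the function below is an assumed stand-in for the field under test):

```python
def accepts(value):
    """A field that accepts a minimum of 6 and a maximum of 10 characters."""
    return 6 <= len(value) <= 10

# One representative per equivalence partition:
assert not accepts("a" * 3)     # partition 0-5   -> not accepted
assert accepts("a" * 8)         # partition 6-10  -> accepted
assert not accepts("a" * 12)    # partition 11-14 -> not accepted

# Boundary values on top of the partitions:
assert accepts("a" * 6) and accepts("a" * 10)          # valid boundaries
assert not accepts("a" * 5) and not accepts("a" * 11)  # invalid boundaries
```

Three partition representatives plus four boundary values already cover the whole input space of the requirement.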

Why Combine Equivalence Partitioning and Boundary Analysis Testing: Following


are some of the reasons why to combine the two approaches:
 Test cases are reduced into manageable chunks.
 The effectiveness of the testing is not compromised by the reduced number of test cases.
 They work well with a large number of variables.

Equivalence Partitioning Technique:


Equivalence partitioning is a technique of software testing in which input data is divided into
partitions of valid and invalid values, where all values within a partition are expected to
exhibit the same behavior. If a condition is true for one value of a partition, it must be true
for every other value of that partition, and if it is false for one value, it must be false for
the rest. The principle of equivalence partitioning is that test cases should be designed to
cover each partition at least once, since each value of a partition must exhibit the same
behavior as the others.

The equivalence partitions are derived from requirements and specifications of the software.
The advantage of this approach is that it helps to reduce the testing time by cutting the
number of test cases from infinite to finite. It is applicable at all levels of the testing process.
Examples of Equivalence Partitioning technique
Assume that there is a function of a software application that accepts a particular number of
digits, neither greater nor less than that particular number. For example, an OTP number
contains only six digits; fewer or more than six digits will not be accepted, and the application
will redirect the user to the error page.

1. OTP Number = 6 digits

We can perform equivalence partitioning in two ways which are as follows:
Let us see how pressman and general practice approaches are going to use in different
conditions:

Condition1

If the requirement is a range of values, then derive the test case for one valid and two
invalid inputs.

Here, the Range of values implies that whenever we want to identify the range values, we
go for equivalence partitioning to achieve the minimum test coverage. And after that, we go
for error guessing to achieve maximum test coverage.

According to pressman:

For example, the Amount of test field accepts a Range (100-400) of values:

According to General Practice method:

Whenever the requirement is Range + criteria, then divide the Range into the internals and
check for all these values.

For example:

In the below image, the Pressman technique is enough to test an age text field for one
valid and two invalid values. But if we have the condition that insurance of ten years and
above is required, with multiple policies for various age groups in the age text field, then
we need to use the practice method.
Condition2

If the requirement is a set of values, then derive the test case for one valid and two
invalid inputs.

Here, a set of values implies that whenever we have to test a set of values, we go for one
positive and two negative inputs; then we move to error guessing, and we also need to
verify that all the sets of values are as per the requirement.

Example 1

Based on the Pressman Method

If the Amount Transfer is (100000-700000)

Then for, 1 lakh →Accept

And according to the General Practice method

The Range + Percentage given for 1 lakh - 7 lakh

Like: 1 lakh - 3 lakh →5.60%

3 lakh - 6 lakh →3.66%

6 lakh - 7 lakh →Free


If we have things like loans, we should go for the general practice approach and separate the stuff
into the intervals to achieve the minimum test coverage.
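Both approaches can be sketched for a range requirement such as the Amount field (100-400). The helper below is an illustrative derivation of test inputs, not part of any testing library; the interval list for the general-practice variant is an assumed input:

```python
def pressman_range_inputs(lo, hi):
    """Pressman approach for a range requirement: one valid input
    and two invalid inputs, one on each side of the range."""
    return {"valid": (lo + hi) // 2, "invalid": [lo - 1, hi + 1]}

def general_practice_inputs(intervals):
    """General practice approach for Range + criteria: take one
    representative value from the middle of each interval."""
    return [(lo + hi) // 2 for lo, hi in intervals]

print(pressman_range_inputs(100, 400))          # {'valid': 250, 'invalid': [99, 401]}
print(general_practice_inputs([(100, 200), (200, 300), (300, 400)]))
```

The Pressman derivation gives the minimum coverage; the interval-wise derivation adds one case per criterion band.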
State Table Based Testing:

Tables are useful tools for representing and documenting many types of information
relating to test case design. They are beneficial for applications which can be described
using state transition diagrams and state tables. First, we define some basic terms related
to state tables.
1. Finite State Machine
A finite state machine (FSM) is a behavioral model whose outcome depends upon both
previous and current inputs. FSM models can be prepared for software structure or
software behavior and it can be used as a tool for functional testing. Many testers prefer to
use FSM model as a guide to design functional tests.
2. State Transition Diagrams or State Graph
A system or its components may have a number of states depending on its input and time.
For example, a task in an operating system can have the following states:
New State: When a task is newly created
Ready: When the task is waiting in the ready queue for its turn
Running: When instructions of the task are being executed by the CPU
Waiting: When the task is waiting for an I/O event or reception of a signal
Terminated: The task has finished execution
States are represented by nodes. Now with the help of nodes and transition links between
the nodes, a state transition diagram or state graph is prepared. A state graph is the
pictorial representation of an FSM. Its purpose is to depict the states that a system or its
components can assume. It shows the events or circumstances that cause or result from a
change from one state to another.
Whatever is being modelled is subjected to inputs. As a result of these inputs, when one
state is changed to another, it is called a transition. Transitions are represented by links that
join the nodes.
The state graph of task states is shown in Fig.1.
Each arrow link provides two types of information:

1. Transition events such as admitted, dispatch, and interrupt

2. The resulting outputs from states, such as T1, T2, and T3.

T0 = Task is in new state and waiting for admission to ready queue
T1 = A new task admitted to ready queue
T2 = A ready task has started running
T3 = Running task has been interrupted
T4 = Running task is waiting for I/O or event
T5 = Wait period of waiting task is over
T6 = Task has completed execution
3. State Table

State graphs of larger systems may not be easy to understand. Therefore, state graphs are
converted into tabular form for convenience sake, which are known as state tables. State
tables also specify states, inputs, transitions, and outputs. The following conventions are
used for state table:

 Each row of the table corresponds to a state.


 Each column corresponds to an input condition.
 The box at the intersection of a row and a column specifies the next state (transition) and the
output, if any.

The state table for task states is given in below table.

The highlighted cells of the table are valid inputs causing a change of state. Other cells are
invalid inputs, which do not cause any transition in the state of a task.
4. State Table-Based Testing

After reviewing the basics, we can start functional testing with state tables. A state graph
and its companion state table contain information that is converted into test cases.
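The task-state table can be sketched as a lookup structure, where each (state, event) cell gives the next state and empty cells are invalid inputs. The event names below are illustrative labels for the transitions T1-T6 described above:

```python
# State table as a dict: (current state, event) -> next state.
# Cells absent from the dict are the invalid-input cells of the table.
STATE_TABLE = {
    ("New",     "admitted"):  "Ready",       # T1
    ("Ready",   "dispatch"):  "Running",     # T2
    ("Running", "interrupt"): "Ready",       # T3
    ("Running", "io_wait"):   "Waiting",     # T4
    ("Waiting", "wait_over"): "Ready",       # T5
    ("Running", "exit"):      "Terminated",  # T6
}

def next_state(state, event):
    """Return the next state, or None for an invalid (state, event) cell."""
    return STATE_TABLE.get((state, event))

# A test case is a path through the table: verify each transition cell.
assert next_state("New", "admitted") == "Ready"
assert next_state("Ready", "dispatch") == "Running"
assert next_state("Ready", "interrupt") is None  # invalid cell: no transition
```

Each valid cell and each invalid cell of the state table becomes one check, which is exactly how the table is converted into test cases.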

Decision Table Based Testing:


Decision tables are used in various engineering fields to represent complex logical
relationships. This testing is a very effective tool in testing the software and its
requirements management. The output may be dependent on many input
conditions and decision tables give a tabular view of various combinations of input
conditions and these conditions are in the form of True(T) and False(F). Also, it
provides a set of conditions and its corresponding actions required in the testing.
Parts of Decision Tables :
In software testing, the decision table has 4 parts which are divided into portions
and are given below :

1. Condition Stubs : The conditions are listed in this first upper left part of
the decision table that is used to determine a particular action or set of
actions.
2. Action Stubs : All the possible actions are given in the first lower left
portion (i.e., below the condition stub) of the decision table.
3. Condition Entries : In the condition entries, the values are entered in the
upper right portion of the decision table. In this part of the table there
are multiple rows and columns, which are known as rules.
4. Action Entries : In the action entries, every entry has some associated
action or set of actions in the lower right portion of the decision table,
and these values are called outputs.
Types of Decision Tables :
The decision tables are categorized into two types and these are given below:
1. Limited Entry : In the limited entry decision tables, the condition entries
are restricted to binary values.
2. Extended Entry : In the extended entry decision table, the condition
entries have more than two values. Decision tables that use multiple
conditions, where a condition may have many possibilities instead of
only 'true' and 'false', are known as extended entry decision tables.
Applicability of Decision Tables :
 The order of rule evaluation has no effect on the resulting action.
 The decision tables can be applied easily at the unit level only.
 Once a rule is satisfied and the action selected, no other rule needs to
be examined.
 The restrictions do not eliminate many applications.
Example of Decision Table Based testing :
Below is the decision table of the program for determining the largest amongst
three numbers in which its input is a triple of positive integers (x,y, and z) and
values are from the interval [1, 300].
Table 1 : Decision Table of largest amongst three numbers :

Conditions        | R1  | R2  | R3 | R4 | R5 | R6 | R7 | R8 | R9 | R10 | R11 | R12 | R13 | R14
c1: x >= 1?       | F   | T   | T  | T  | T  | T  | T  | T  | T  | T   | T   | T   | T   | T
c2: x <= 300?     | -   | F   | T  | T  | T  | T  | T  | T  | T  | T   | T   | T   | T   | T
c3: y >= 1?       | -   | -   | F  | T  | T  | T  | T  | T  | T  | T   | T   | T   | T   | T
c4: y <= 300?     | -   | -   | -  | F  | T  | T  | T  | T  | T  | T   | T   | T   | T   | T
c5: z >= 1?       | -   | -   | -  | -  | F  | T  | T  | T  | T  | T   | T   | T   | T   | T
c6: z <= 300?     | -   | -   | -  | -  | -  | F  | T  | T  | T  | T   | T   | T   | T   | T
c7: x > y?        | -   | -   | -  | -  | -  | -  | T  | T  | T  | T   | F   | F   | F   | F
c8: y > z?        | -   | -   | -  | -  | -  | -  | T  | T  | F  | F   | T   | T   | F   | F
c9: z > x?        | -   | -   | -  | -  | -  | -  | T  | F  | T  | F   | T   | F   | T   | F
Rule Count        | 256 | 128 | 64 | 32 | 16 | 8  | 1  | 1  | 1  | 1   | 1   | 1   | 1   | 1
a1: Invalid input | X   | X   | X  | X  | X  | X  |    |    |    |     |     |     |     |
a2: x is largest  |     |     |    |    |    |    |    | X  |    | X   |     |     |     |
a3: y is largest  |     |     |    |    |    |    |    |    |    |     | X   | X   |     |
a4: z is largest  |     |     |    |    |    |    |    |    | X  |     |     |     | X   |
a5: Impossible    |     |     |    |    |    |    | X  |    |    |     |     |     |     | X
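The table's logic can be cross-checked with a small sketch. This is an illustrative implementation of the "largest of three" program driven directly by conditions c1-c9; the rule labels in comments refer to the decision table above:

```python
def largest(x, y, z):
    """Decision-table implementation of 'largest of three' for inputs in [1, 300]."""
    if not all(1 <= v <= 300 for v in (x, y, z)):   # conditions c1-c6
        return "Invalid input"                       # action a1
    c7, c8, c9 = x > y, y > z, z > x                 # conditions c7-c9
    rules = {
        (True,  True,  False): "x is largest",   # R8
        (True,  False, False): "x is largest",   # R10
        (False, True,  True):  "y is largest",   # R11
        (False, True,  False): "y is largest",   # R12
        (True,  False, True):  "z is largest",   # R9
        (False, False, True):  "z is largest",   # R13
    }
    return rules.get((c7, c8, c9), "Impossible")     # R7, R14 -> a5

assert largest(301, 50, 50) == "Invalid input"   # a1
assert largest(100, 50, 25) == "x is largest"    # R8
assert largest(50, 100, 25) == "y is largest"    # R12
assert largest(25, 50, 100) == "z is largest"    # R13
assert largest(10, 10, 10) == "Impossible"       # R14: ties make all three false
```

One test case per rule column is exactly the test suite the decision table prescribes.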

Cause and Effect Graph in Black box Testing:


Cause-effect graph comes under the black box testing technique which underlines the
relationship between a given result and all the factors affecting the result. It is used to write
dynamic test cases.

The dynamic test cases are used when code works dynamically based on user input. For
example, while using email account, on entering valid email, the system accepts it but, when
you enter invalid email, it throws an error message. In this technique, the input conditions
are assigned with causes and the result of these input conditions with effects.

Cause-Effect graph technique is based on a collection of requirements and used to determine


minimum possible test cases which can cover a maximum test area of the software.
The main advantage of cause-effect graph testing is, it reduces the time of test execution and
cost.

This technique aims to reduce the number of test cases but still covers all necessary test cases
with maximum coverage to achieve the desired application quality.

Cause-Effect graph technique converts the requirements specification into a logical


relationship between the input and output conditions by using logical operators like AND, OR
and NOT.

Notations used in the Cause-Effect Graph


AND - E1 is an effect and C1 and C2 are the causes. If both C1 and C2 are true, then effect E1
will be true.

OR - If any cause from C1 and C2 is true, then effect E1 will be true.

NOT - If cause C1 is false, then effect E1 will be true.

Mutually Exclusive - When only one cause is true.

Let's try to understand this technique with some examples:

Situation:
The character in column 1 should be either A or B and in the column 2 should be a digit. If
both columns contain appropriate values then update is made. If the input of column 1 is
incorrect, i.e. neither A nor B, then message X will be displayed. If the input in column 2 is
incorrect, i.e. input is not a digit, then message Y will be displayed.

o A file must be updated if the character in the first column is either "A" or "B" and the
character in the second column is a digit.
o If the value in the first column is incorrect (the character is neither A nor B), then message X
will be displayed.
o If the value in the second column is incorrect (the character is not a digit), then message Y
will be displayed.

Now, we are going to make a Cause-Effect graph for the above situation:

Causes are:

o C1 - Character in column 1 is A
o C2 - Character in column 1 is B
o C3 - Character in column 2 is a digit

Effects:

o E1 - Update made (C1 OR C2) AND C3


o E2 - Displays Message X (NOT C1 AND NOT C2)
o E3 - Displays Message Y (NOT C3)

Where AND, OR, NOT are the logical gates.

Effect E1 - Update made - The logic for the existence of effect E1 is "(C1 OR C2) AND C3".
For C1 OR C2, any one of C1 and C2 should be true. For the AND with C3 (character in
column 2 should be a digit), C3 must be true. In other words, for the existence of effect E1
(update made), any one of C1 and C2, as well as C3, must be true. We can see in the graph
that causes C1 and C2 are connected through OR logic, and effect E1 is connected with AND logic.

Effect E2 - Displays Message X - The logic for the existence of effect E2 is "NOT C1 AND NOT
C2", which means both C1 (character in column 1 should be A) and C2 (character in column 1
should be B) should be false. In other words, for the existence of effect E2, the character in
column 1 should be neither A nor B. We can see in the graph that C1 OR C2 is connected
through NOT logic with effect E2.

Effect E3 - Displays Message Y - The logic for the existence of effect E3 is "NOT C3", which
means cause C3 (character in column 2 is a digit) should be false. In other words, for the
existence of effect E3, the character in column 2 should not be a digit. We can see in the
graph that C3 is connected through NOT logic with effect E3.

So, this is the cause-effect graph for the given situation. A tester needs to convert causes and
effects into logical statements and then design the cause-effect graph. If the function gives the
output (effect) according to the input (cause), it is considered defect-free; if not, it is sent to
the development team for correction.
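The causes and effects for this example translate directly into boolean expressions. The sketch below is an illustrative encoding of the graph, assuming each column holds a single character:

```python
def effects(col1, col2):
    """Evaluate the three effects of the file-update cause-effect graph."""
    c1 = col1 == "A"        # cause C1: character in column 1 is A
    c2 = col1 == "B"        # cause C2: character in column 1 is B
    c3 = col2.isdigit()     # cause C3: character in column 2 is a digit
    return {
        "E1_update":    (c1 or c2) and c3,   # (C1 OR C2) AND C3
        "E2_message_X": not c1 and not c2,   # NOT C1 AND NOT C2
        "E3_message_Y": not c3,              # NOT C3
    }

assert effects("A", "5") == {"E1_update": True,  "E2_message_X": False, "E3_message_Y": False}
assert effects("C", "5")["E2_message_X"] is True   # neither A nor B -> message X
assert effects("B", "x")["E3_message_Y"] is True   # not a digit     -> message Y
```

Each distinct combination of effect values corresponds to one test case derived from the graph.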

Error Guessing Technique:


The test case design techniques, methods, or approaches need to be followed by every
test engineer while writing test cases to achieve maximum test coverage. If we follow a
test case design technique, the testing becomes process-oriented rather than person-oriented.
The test case design techniques ensure that all possible values, both positive and negative,
that are required for testing purposes are covered. In software testing, we have three
different test case design techniques, which are as follows:

o Error Guessing
o Equivalence Partitioning
o Boundary Value Analysis[BVA]

In this section, we will understand the first test case design technique that is Error guessing
techniques.

Error guessing is a technique in which there is no specific method for identifying the error. It
is based on the experience of the test analyst, where the tester uses the experience to guess
the problematic areas of the software. It is a type of black box testing technique which does
not have any defined structure to find the error.

In this approach, every test engineer will derive the values or inputs based on their
understanding or assumption of the requirements, and we do not follow any kind of rules to
perform error guessing technique.

The accomplishment of the error guessing technique is dependent on the ability and product
knowledge of the tester because a good test engineer knows where the bugs are most likely
to be, which helps to save lots of time.
How can the error guessing technique be implemented:
The implementation of this technique depends on the experience of the tester or analyst
having prior experience with similar applications. It requires well-experienced testers with
a quick error-guessing ability. This technique is used to find errors that may not be
easily captured by formal black box testing techniques, and that is the reason it is done after
all formal techniques.

The scope of the error guessing technique entirely depends on the tester and type of
experience in the previous testing involvements because it does not follow any method and
guidelines. Test cases are prepared by the analyst to identify conditions. The conditions are
prepared by identifying most error probable areas and then test cases are designed for them.

The main purpose of this technique is to identify common errors at any level of testing by
exercising the following tasks:

o Enter blank space into the text fields.


o Null pointer exception.
o Enter invalid parameters.
o Divide by zero.
o Use maximum limit of files to be uploaded.
o Check buttons without entering values.

The increment of test cases depends upon the ability and experience of the tester.

Purpose of Error guessing:

The main purpose of the error guessing technique is to deal with all possible errors which
cannot be identified by formal testing.

o It must contain all-inclusive sets of test cases, without skipping any problematic areas and
without involving redundant test cases.
o This technique accomplishes the characteristics left incomplete during the formal testing.

Depending on the tester's intuition and experience, all the defects cannot be corrected. There
are some factors that can be used by the examiner while using their experience -

o Tester's intuition
o Historical learning
o Review checklist
o Risk reports of the software
o Application UI
o General testing rules
o Previous test results
o Defects occurred in the past
o Variety of data which is used for testing
o Knowledge of AUT

Advantages
The benefits of error guessing technique are as follows:

o It is a good approach to find the challenging parts of the software.


o It is beneficial when we will use this technique with the grouping of other formal
testing techniques.
o It is used to enhance the formal test design techniques.

Disadvantage
Following are the drawbacks of error guessing technique:

o The error guessing technique is person-oriented rather than process-oriented because it


depends on the person's thinking.
o If we use this technique, we may not achieve the minimum test coverage.
o With the help of this, we may not cover all the input or boundary values.
o With this, we cannot give the surety of the product quality.

WHITE BOX TESTING:


The box testing approach of software testing consists of black box testing and white box testing. We
are discussing here white box testing, which is also known as glass box testing, structural testing,
clear box testing, open box testing, and transparent box testing. It tests the internal coding and
infrastructure of a software, focusing on checking predefined inputs against expected and desired
outputs. It is based on the inner workings of an application and revolves around internal structure
testing. In this type of testing, programming skills are required to design test cases. The primary goal
of white box testing is to focus on the flow of inputs and outputs through the software and on
strengthening the security of the software.
The term 'white box' is used because of the internal perspective of the system. The clear box,
white box, or transparent box names denote the ability to see through the software's outer
shell into its inner workings.

Developers do white box testing. In this, the developer will test every line of the code of the
program. The developers perform the white-box testing and then send the application or the
software to the testing team, where they perform the black box testing, verify the
application against the requirements, identify the bugs, and send them to the developer.

The developer fixes the bugs and does one round of white box testing and sends it to the
testing team. Here, fixing the bugs implies that the bug is deleted, and the particular feature
is working fine on the application.

Logic Coverage:
Logic corresponds to the internal structure of the code, and this testing is adopted for safety-
critical applications such as software used in the aviation industry. This test verifies a subset
of the total number of truth assignments to the expressions.

Logic Coverage Sources:


Logic coverage comes from any of the below mentioned sources:
 Decisions in programs
 Finite State Machines and Statecharts
 Requirements

Basis Path Testing:


Basis Path Testing is a white-box testing technique based on the
control structure of a program or a module. Using this structure, a control flow
graph is prepared and the various possible paths present in the graph are executed
as a part of testing. Therefore, by definition, Basis path testing is a technique of
selecting the paths in the control flow graph, that provide a basis set of execution
paths through the program or module. Since this testing is based on the control
structure of the program, it requires complete knowledge of the program’s
structure. To design test cases using this technique, four steps are followed :
1. Construct the Control Flow Graph
2. Compute the Cyclomatic Complexity of the Graph
3. Identify the Independent Paths
4. Design Test cases from Independent Paths
Let’s understand each step one by one. 1. Control Flow Graph – A control flow
graph (or simply, flow graph) is a directed graph which represents the control
structure of a program or module. A control flow graph (V, E) has V number of
nodes/vertices and E number of edges in it. A control graph can also have :
 Junction Node – a node with more than one arrow entering it.
 Decision Node – a node with more than one arrow leaving it.
 Region – area bounded by edges and nodes (area outside the graph is
also counted as a region.).

Below are the notations used while constructing a flow graph :
 Sequential Statements
 If – Then – Else
 Do – While
 While – Do
 Switch – Case

Graph Matrices:
A graph matrix is a data structure that can assist in developing a tool for
automation of path testing. Properties of graph matrices are fundamental
for developing a test tool, and hence graph matrices are very useful in
understanding software testing concepts and theory.
What is a Graph Matrix ?
A graph matrix is a square matrix whose size represents the number of
nodes in the control flow graph. Each row and column in the matrix identifies a
node, and the entries in the matrix represent the edges or links between
these nodes. Conventionally, nodes are denoted by digits and edges are
denoted by letters.
Let's take an example.

Let’s convert this control flow graph into a graph matrix. Since the graph has 4
nodes, the graph matrix will have a dimension of 4 X 4. The matrix entries will be
filled as follows:
 (1, 1) will be filled with ‘a’ as an edge exists from node 1 to node 1
 (1, 2) will be filled with ‘b’ as an edge exists from node 1 to node 2. It is
important to note that (2, 1) will not be filled as the edge is unidirectional
and not bidirectional
 (1, 3) will be filled with ‘c’ as edge c exists from node 1 to node 3
 (2, 4) will be filled with ‘d’ as edge exists from node 2 to node 4
 (3, 4) will be filled with ‘e’ as an edge exists from node 3 to node 4
The graph matrix formed is shown below :

Connection Matrix :
A connection matrix is a matrix defined with edge weights. In its simple form, when
a connection exists between two nodes of the control flow graph, the edge weight
is 1; otherwise, it is 0. However, 0 is usually not entered in the matrix cells, to
reduce the complexity.
For example, if we represent the above control flow graph as a connection matrix,
then the result would be :

As we can see, the weights of the edges are simply replaced by 1, and the cells which
were empty before are left as they are, i.e., representing 0.
A connection matrix is used to find the cyclomatic complexity of the control flow
graph. Although there are other methods to find the cyclomatic complexity, this
method works well too.
Following are the steps to compute the cyclomatic complexity:
1. Count the number of 1s in each row and write it at the end of the row
2. Subtract 1 from this count for each row (ignore the row if its count is 0)
3. Add the counts of all rows calculated previously
4. Add 1 to this total count
5. The final sum in Step 4 is the cyclomatic complexity of the control flow
graph

Loop Software Testing:


Loop Testing is a type of software testing that is performed to validate loops. It is
one of the types of Control Structure Testing. Loop testing is a white-box
testing technique and is used to test the loops in a program.
Objectives of Loop Testing:
The objective of Loop Testing is:
 To fix the infinite loop repetition problem.
 To know the performance.
 To identify the loop initialization problems.
 To determine the uninitialized variables.
Types of Loop Testing:
Loop testing is classified on the basis of the types of the loops:
1. Simple Loop Testing:
Testing performed on a simple loop is known as Simple loop testing. A
simple loop is basically a normal “for”, “while”, or “do-while” loop in which
a condition is given, and the loop runs and terminates according to the
true or false outcome of that condition. This type of testing is
performed basically to check whether the condition of the loop is
sufficient to terminate the loop after some point of time.
Example:
while(condition)
{
    statement(s);
}

2. Nested Loop Testing:

Testing performed in a nested loop is known as Nested loop testing. A
nested loop is basically one loop inside another loop. In a nested loop
there can be a finite number of loops inside a loop, and thus a nest is made.
It may consist of any of the three loop types, i.e., for, while, or do-while.
Example:
while(condition 1)
{
while(condition 2)
{
statement(s);
}
}

3. Concatenated Loop Testing:

Testing performed on concatenated loops is known as Concatenated loop
testing. It is performed on the concatenated loops. Concatenated loops are
loops that follow one another; it is a series of loops. The difference between
nested and concatenated loops is that in a nested loop one loop is inside
another, whereas here one loop comes after the other.
Example:
while(condition 1)
{
statement(s);
}
while(condition 2)
{
statement(s);
}
4. Unstructured Loop Testing:
Testing performed in an unstructured loop is known as Unstructured loop
testing. An unstructured loop is a combination of nested and concatenated
loops. It is basically a group of loops that are in no particular order.
Example:
while()
{
for()
{}
while()
{}
}
Advantages of Loop Testing:
The advantages of Loop testing are:
 Loop testing limits the number of iterations of loops.
 Loop testing ensures that the program doesn’t go into an infinite loop.
 Loop testing ensures initialization of every variable used inside the loop.
 Loop testing helps in identifying different problems inside the loop.
 Loop testing helps in determining loop capacity.
Disadvantages of Loop Testing:
The disadvantages of Loop testing are:
 Loop testing is mostly effective for bug detection in low-level software.
 Loop testing is not useful for detecting bugs that lie outside of loops.

Data Flow Testing:


Data Flow Testing is a type of structural testing. It is a method that is used to find
the test paths of a program according to the locations of definitions and uses of
variables in the program. It has nothing to do with data flow diagrams.
It is concerned with:
 Statements where variables receive values,
 Statements where these values are used or referenced.
To illustrate the approach of data flow testing, assume that each statement in the
program is assigned a unique statement number. For a statement numbered S:
DEF(S) = {X | statement S contains a definition of X}
USE(S) = {X | statement S contains a use of X}
If statement S is a loop or an if condition, then its DEF set is empty and its USE set
is based on the condition of statement S.
Data Flow Testing uses the control flow graph to find the situations that can
interrupt the flow of the program.
Reference or define anomalies in the flow of the data are detected at the
time of associations between values and variables. These anomalies are:
 A variable is defined but not used or referenced,
 A variable is used but never defined,
 A variable is defined twice before it is used.
Advantages of Data Flow Testing:
Data Flow Testing is used to find the following issues-
 To find a variable that is used but never defined,
 To find a variable that is defined but never used,
 To find a variable that is defined multiple times before it is used,
 Deallocating a variable before it is used.
Disadvantages of Data Flow Testing
 Time consuming and costly process
 Requires knowledge of programming languages
Example:
1. read x, y;
2. if(x > y)
3. a = x + 1
   else
4. a = y - 1
5. print a;
Control flow graph of above example:

Mutation Testing:
Mutation Testing is a type of Software Testing that is performed to design new
software tests and also to evaluate the quality of already existing software tests.
Mutation testing involves modifying a program in small ways. It focuses on
helping the tester develop effective tests or locate weaknesses in the test data
used for the program.
History of Mutation Testing:
Richard Lipton proposed mutation testing in 1971 for the first time. Although its
high cost reduced the use of mutation testing, it is now widely used for
languages such as Java and XML.

Mutation Testing is a White Box Testing technique.


Mutation testing can be applied to design models, specifications, databases, tests,
and XML. It is a structural testing technique, which uses the structure of the code to
guide the testing process. It can be described as the process of rewriting the source
code in small ways in order to remove the redundancies in the source code.
Objective of Mutation Testing:
The objective of mutation testing is:
 To identify pieces of code that are not tested properly.
 To identify hidden defects that can’t be detected using other testing
methods.
 To discover new kinds of errors or bugs.
 To calculate the mutation score.
 To study error propagation and state infection in the program.
 To assess the quality of the test cases.
Types of Mutation Testing:
Mutation testing is basically of 3 types:
1. Value Mutations:
In this type of testing, values are changed to detect errors in the program.
Basically, a small value is changed to a larger value, or a larger value is changed to
a smaller value. In this testing, it is basically the constants that are changed.
Example:
Initial Code:

int mod = 1000000007;


int a = 12345678;
int b = 98765432;
int c = (a + b) % mod;

Changed Code:
int mod = 1007;
int a = 12345678;
int b = 98765432;
int c = (a + b) % mod;
2. Decision Mutations:
In decision mutations, logical or arithmetic operators are changed to detect
errors in the program.
Example:
Initial Code:

if(a < b)
c = 10;
else
c = 20;

Changed Code:

if(a > b)
c = 10;
else
c = 20;
3. Statement Mutations:
In statement mutations, a statement is deleted or replaced by some other
statement.
Example:
Initial Code:

if(a < b)
c = 10;
else
c = 20;

Changed Code (here, for example, the else statement is deleted):

if(a < b)
c = 10;
Advantages of Mutation Testing:
 It brings a good level of error detection to the program.
 It discovers ambiguities in the source code.
 It finds and helps resolve loopholes in the program.
 It helps the testers write or automate better test cases.
 It leads to more reliable and efficient source code.
Disadvantages of Mutation Testing:
 It is highly costly and time-consuming.
 It is not applicable to Black Box Testing.
 Some mutations are complex, and hence it is difficult to implement or
run them against various test cases.
 The team members performing the tests should have good
programming knowledge.
 Selection of the correct automation tool is important to test the programs.

Unit-3
Static Testing:
Static Testing is a software testing method performed to check for defects
in software without actually executing the code of the software
application, whereas in Dynamic Testing the code is executed to detect
defects. Static testing is performed in the early stages of development to avoid
errors, as it is easier to find the sources of failures there and they can be fixed
easily. Errors that cannot be found using Dynamic Testing can easily be found by
Static Testing.
Static Testing Techniques: There are mainly two techniques used in Static
Testing:
1. Review: In static testing, review is a process or technique performed to
find potential defects in the design of the software. It is a process to detect and
remove errors and defects in different supporting documents, like software
requirements specifications. People examine the documents and sort out errors,
redundancies, and ambiguities. Review is of four types:
 Informal: In an informal review, the creator of the documents puts the
contents in front of an audience and everyone gives their opinion; thus
defects are identified at an early stage.
 Walkthrough: It is basically performed by an experienced person or expert
to check for defects so that there will not be problems later in the
development or testing phase.
 Peer review: Peer review means checking one another's documents to
detect and fix defects. It is basically done in a team of colleagues.
 Inspection: Inspection is basically the verification of a document by a higher
authority, like the verification of the software requirements specification
(SRS).
2. Static Analysis: Static Analysis includes the evaluation of the quality of the code
that is written by developers. Different tools are used to analyze the code and
compare it with the standard. It also helps in the identification of
the following defects:
(a) Unused variables
(b) Dead code
(c) Infinite loops
(d) Variable with undefined value
(e) Wrong syntax
Static Analysis is of three types:
 Data Flow: Data flow is related to the stream processing.
 Control Flow: Control flow is basically how the statements or instructions
are executed.
 Cyclomatic Complexity: Cyclomatic complexity defines the number of
independent paths in the control flow graph made from the code or
flowchart so that minimum number of test cases can be designed for
each independent path.
Inspection:
Inspection is the most formal form of review, a strategy adopted during the static testing phase.

Characteristics of Inspection :
 Inspection is usually led by a trained moderator, who is not the author. The
moderator's role is to do a peer examination of the document.
 Inspection is most formal and driven by checklists and rules.
 This review process makes use of entry and exit criteria.
 It is essential to have a pre-meeting preparation.
 Inspection report is prepared and shared with the author for appropriate
actions.
 Post Inspection, a formal follow-up process is used to ensure a timely and a
prompt corrective action.
 The aim of Inspection is NOT only to identify defects but also to bring in
process improvement.
Structured Walkthrough:
A structured walkthrough is a static testing technique performed in an organized manner
among a group of peers to review and discuss the technical aspects of the software
development process. The main objective of a structured walkthrough is to find defects
in order to improve the quality of the product.
Structured walkthroughs are usually NOT used for technical discussions or to discuss
solutions for the issues found. As explained, the aim is to detect errors, not to correct
them. When the walkthrough is finished, the author of the output is responsible for fixing
the issues.

Benefits:
 Saves time and money as defects are found and rectified very early in the
lifecycle.
 This provides value-added comments from reviewers with different technical
backgrounds and experience.
 It notifies the project management team about the progress of the
development process.
 It creates awareness about different development or maintenance
methodologies which can provide a professional growth to participants.
Structured Walkthrough Participants:
 Author - The Author of the document under review.
 Presenter - The presenter usually develops the agenda for the walkthrough and
presents the output being reviewed.
 Moderator - The moderator facilitates the walkthrough session, ensures the
walkthrough agenda is followed, and encourages all the reviewers to
participate.
 Reviewers - The reviewers evaluate the document under test to determine if it
is technically accurate.
 Scribe - The scribe is the recorder of the structured walkthrough outcomes who
records the issues identified and any other technical comments, suggestions,
and unresolved questions.
Technical Review:
A Technical review is a static white-box testing technique which is conducted to spot the
defects early in the life cycle that cannot be detected by black box testing techniques.
Technical Review Characteristics:


 Technical Reviews are documented and use a defect detection process that
has peers and technical specialists as part of the review process.
 The review process doesn't involve management participation.
 It is usually led by a trained moderator who is NOT the author.
 The report is prepared with the list of issues that need to be addressed.

Validation Testing:
The process of evaluating software during the development process or at the end of the
development process to determine whether it satisfies specified business requirements.
Validation Testing ensures that the product actually meets the client's needs. It can also be
defined as demonstrating that the product fulfills its intended use when deployed in an
appropriate environment.
It answers the question: Are we building the right product?

Validation Testing - Workflow:


Validation testing can be best demonstrated using V-Model. The Software/product under test
is evaluated during this type of testing.
Activities:
 Unit Testing
 Integration Testing
 System Testing
 User Acceptance Testing

Unit Testing:
Unit testing involves the testing of each unit or individual component of the software
application. It is the first level of functional testing. The aim behind unit testing is to
validate that each unit component performs as designed.

A unit is a single testable part of a software system and tested during the development phase
of the application software.

The purpose of unit testing is to test the correctness of isolated code. A unit component is
an individual function or piece of code of the application. A white-box testing approach is
used for unit testing, and it is usually done by the developers.

Whenever the application is ready and given to the test engineer, he/she will start checking
every component or module of the application independently, one by one; this process is
known as Unit testing or component testing.

Types of Unit Testing:

There are 2 types of Unit Testing: Manual and Automated.


Workflow of Unit Testing:

Unit Testing Techniques:


There are 3 types of Unit Testing Techniques. They are
1. Black Box Testing: This testing technique is used in covering the unit tests
for input, user interface, and output parts.
2. White Box Testing: This technique is used in testing the functional
behavior of the system by giving the input and checking the functionality
output including the internal design structure and code of the modules.
3. Gray Box Testing: This technique is used in executing the relevant test
cases, test methods, test functions, and analyzing the code performance
for the modules.
INTEGRATION TESTING:
Integration testing is the process of testing the interface between two
software units or modules. It focuses on determining the correctness of the
interface. The purpose of integration testing is to expose faults in the
interaction between integrated units. Once all the modules have been unit
tested, integration testing is performed.
Integration testing is a software testing technique that focuses on verifying the
interactions and data exchange between different components or modules of a
software application. The goal of integration testing is to identify any problems
or bugs that arise when different components are combined and interact with
each other. Integration testing is typically performed after unit testing and before
system testing. It helps to identify and resolve integration issues early in the
development cycle, reducing the risk of more severe and costly problems later
on.
Integration testing can be done module by module. A proper sequence should be
followed; if you do not want to miss out on any integration scenario, then you
have to follow the proper sequence. Exposing defects at the time of interaction
between the integrated units is the major focus of integration testing.
Integration test approaches – There are four types of integration testing
approaches. Those approaches are the following:
1. Big-Bang Integration Testing – It is the simplest integration testing
approach, where all the modules are combined and the functionality is
verified after the completion of individual module testing. In simple words, all
the modules of the system are simply put together and tested. This approach
is practicable only for very small systems. If an error is found during the
integration testing, it is very difficult to localize the error as the error may
potentially belong to any of the modules being integrated. So, errors reported
during big-bang integration testing are very expensive to debug and fix.
Big-Bang integration testing is a software testing approach in which all
components or modules of a software application are combined and tested at
once. This approach is typically used when the software components have a low
degree of interdependence or when there are constraints in the development
environment that prevent testing individual components. The goal of big-bang
integration testing is to verify the overall functionality of the system and to
identify any integration problems that arise when the components are combined.
While big-bang integration testing can be useful in some situations, it can also be
a high-risk approach, as the complexity of the system and the number of
interactions between components can make it difficult to identify and diagnose
problems.
2. Bottom-Up Integration Testing – In bottom-up testing, each module at the lower
levels is tested with higher modules until all modules are tested. The primary
purpose of this integration testing is that each subsystem tests the interfaces
among the various modules making up the subsystem. This integration testing uses test
drivers to drive and pass appropriate data to the lower-level modules.
3. Top-Down Integration Testing – In top-down integration testing, testing takes
place from top to bottom, and stubs are used to simulate the behaviour of the
lower-level modules that are not yet integrated. First, high-level modules are
tested, then low-level modules, and finally the low-level modules are integrated
with the high-level ones to ensure the system is working as intended.
4. Mixed Integration Testing – Mixed integration testing is also called sandwiched
integration testing. It follows a combination of the top-down and bottom-up
testing approaches. In the top-down approach, testing can start only after the
top-level modules have been coded and unit tested. In the bottom-up approach,
testing can start only after the bottom-level modules are ready. This sandwich or
mixed approach overcomes these shortcomings of the top-down and bottom-up
approaches. It is also called hybrid integration testing. Also, stubs and drivers are
used in mixed integration testing.

Functional Testing:
Functional testing is basically defined as a type of testing that verifies that each
function of the software application works in conformance with the requirement
and specification. This testing is not concerned with the source code of the
application. Each functionality of the software application is tested by providing
appropriate test input, expecting the output, and comparing the actual output with
the expected output. This testing focuses on checking the user interface, APIs,
database, security, client or server application, and functionality of the Application
Under Test. Functional testing can be manual or automated.
Process :
Functional testing involves the following steps:
1. Identify test input: This step involves identifying the functionality that
needs to be tested. This can vary from testing the usability functions, and
main functions to error conditions.
2. Compute expected outcomes: Create input data based on the
specifications of the function and determine the output based on these
specifications.
3. Execute test cases: This step involves executing the designed test cases
and recording the output.
4. Compare the actual and expected output: In this step, the actual output
obtained after executing the test cases is compared with the expected
output to determine the amount of deviation in the results. This step
reveals if the system is working as expected or not.

Type of Functional Testing Techniques


1. Unit Testing: Unit testing is the type of functional testing technique
where the individual units or modules of the application are tested. It
ensures that each module is working correctly.
2. Integration Testing: In integration testing, individual units are combined
and tested as a group to expose faults in the interaction between the
integrated units.
3. Smoke Testing: Smoke testing is a type of functional testing technique
where the basic functionality or feature of the application is tested as it
ensures that the most important function works properly.
4. User Acceptance Testing: User acceptance testing is done by the client to
certify that the system meets the requirements and works as intended. It
is the final phase of testing before the product release.
5. Interface Testing: Interface testing is a type of software testing technique
that checks the proper interaction between two different software
systems.
System Testing:
System Testing is a type of software testing that is performed on a complete
integrated system to evaluate the compliance of the system with the corresponding
requirements. In system testing, the components that passed integration testing are taken
as input. The goal of integration testing is to detect any irregularity between the
units that are integrated together. System testing detects defects within both the
integrated units and the whole system. The result of system testing is the observed
behavior of a component or a system when it is tested. System Testing is carried
out on the whole system in the context of either system requirement specifications
or functional requirement specifications or in the context of both. System testing
tests the design and behavior of the system and also the expectations of the
customer. It is performed to test the system beyond the bounds mentioned in
the software requirements specification (SRS). System Testing is basically
performed by a testing team that is independent of the development team, which
helps to test the quality of the system impartially. It involves both functional and
non-functional testing. System Testing is black-box testing, performed after
integration testing and before acceptance testing.
System Testing Process: System Testing is performed in the following steps:
 Test Environment Setup: Create testing environment for the better
quality testing.
 Create Test Case: Generate test case for the testing process.
 Create Test Data: Generate the data that is to be tested.
 Execute Test Case: After the generation of the test case and the test
data, test cases are executed.
 Defect Reporting: Defects in the system are detected.
 Regression Testing: It is carried out to test the side effects of the testing
process.
 Log Defects: Detected defects are logged and fixed in this step.
 Retest: If the test is not successful then again test is performed.
Types of System Testing:
 Performance Testing: Performance Testing is a type of software testing
that is carried out to test the speed, scalability, stability and reliability of
the software product or application.
 Load Testing: Load Testing is a type of software Testing which is carried
out to determine the behavior of a system or software product under
extreme load.
 Stress Testing: Stress Testing is a type of software testing performed to
check the robustness of the system under the varying loads.
 Scalability Testing: Scalability Testing is a type of software testing which
is carried out to check the performance of a software application or
system in terms of its capability to scale up or scale down the number of
user request load.

Acceptance testing:
Acceptance testing is formal testing based on user requirements and function processing. It
determines whether the software conforms to the specified requirements and user
requirements or not. It is conducted as a kind of Black Box testing where the required
number of users are involved in testing the acceptance level of the system. It is the fourth
and last level of software testing.
Types of Acceptance Testing:
1. User Acceptance Testing (UAT): User acceptance testing is used to
determine whether the product is working for the user correctly. Specific
requirements which are quite often used by the customers are primarily
picked for the testing purpose. This is also termed as End-User Testing.
2. Business Acceptance Testing (BAT): BAT is used to determine whether
the product meets the business goals and purposes or not. BAT mainly
focuses on business profits, which are quite challenging due to
changing market conditions and new technologies, so the current
implementation may have to be changed, which results in extra
budget.
3. Operational Acceptance Testing (OAT): OAT is used to determine the
operational readiness of the product and is non-functional testing. It mainly
includes testing of recovery, compatibility, maintainability, reliability, etc.
OAT assures the stability of the product before it is released to production.
4. Alpha Testing: Alpha testing is used to determine the product in the
development testing environment by a specialized testers team usually
called alpha testers.
5. Beta Testing: Beta testing is used to assess the product by exposing it to the
real end-users, usually called beta testers in their environment. Feedback is
collected from the users and the defects are fixed. Also, this helps in
enhancing the product to give a rich user experience.

Regression Testing:
Regression Testing is the process of testing the modified parts of the code and the
parts that might get affected due to the modifications to ensure that no new
errors have been introduced in the software after the modifications have been
made. Regression means the return of something; in the software field, it refers
to the return of a bug.
Process of Regression testing:
Firstly, whenever we make changes to the source code for any reason, such as
adding new functionality or optimization, the program may fail the previously
designed test suite when executed. After a failure, the source code is debugged in
order to identify the bugs in the program. After identification of the bugs in the
source code, appropriate modifications are made. Then appropriate test cases are
selected from the already existing test suite covering all the modified and affected
parts of the source code. New test cases can be added if required. In the end,
regression testing is performed using the selected test cases.

Progressive Testing in Software Testing:


Progressive Testing is also known as Incremental Testing. In software testing,
incremental testing refers to testing modules one after another. When an
application has parent-child modules, the modules related to them need to be
tested first.
Let’s understand Progressive/Incremental Testing in a more elaborate way by
going a little deeper into it. Incremental testing is considered a sub-technique
that comes under Integration Testing. Actually, this testing acts as an
approach/strategy to perform integration testing on a software product rather
than a direct testing activity.
After completion of unit testing on each individual component of the software,
integration testing is performed to ensure proper interfacing and interaction
between the components of the system. Incremental or progressive testing is
treated as a partial phase of integration testing. First, integration testing is
performed on standalone components; thereafter, it goes on integrating
components and performs integration testing over them accordingly. As the
components are integrated in an incremental manner, it is also termed
Incremental Testing.

Regression Testing:
Regression testing is a black box testing technique. It is used to verify that a code change
in the software does not impact the existing functionality of the product. Regression testing
makes sure that the product works fine with new functionality, bug fixes, or any change to
an existing feature.
Regression testing is a type of software testing in which test cases are re-executed to check
that the previous functionality of the application is working fine and that the new changes
have not produced any bugs.

Regression testing can be performed on a new build when there is a significant change in
the original functionality. It ensures that the code still works even when changes are
occurring. Regression means re-testing those parts of the application which are unchanged.

Regression tests are also known as the Verification Method. Test cases are often
automated, as they are required to be executed many times, and running the same test case
again and again manually is time-consuming and tedious.

Types of Regression Testing:

The different types of Regression Testing are as follows:

1. Unit Regression Testing [URT]
2. Regional Regression Testing [RRT]
3. Full or Complete Regression Testing [FRT]

1) Unit Regression Testing [URT]

In this type, we test only the changed unit, not the impact area, even though the change may affect other components of the same module.

2) Regional Regression Testing [RRT]

In this type, we test the modification along with the impact area or regions. The impact area is tested because, if there are dependent modules, the change may affect them as well.

3) Full Regression Testing [FRT]

Suppose that during the second or third release of the product, the client asks for three or four new features to be added and for some defects from the previous release to be fixed. The testing team then performs an impact analysis and may determine that the modifications require re-testing the entire product.
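The workflow above can be sketched in a few lines of Python; the discount() function and its stored cases are hypothetical stand-ins for the application under test and its regression suite:

```python
# Minimal regression-testing sketch: after a code change, the stored test
# cases for *unchanged* behaviour are re-executed to catch regressions.

def discount(price, is_member):
    """Version 2 of a hypothetical function: members now get 10% off."""
    rate = 0.10 if is_member else 0.0
    return round(price * (1 - rate), 2)

# Regression cases capture behaviour that must NOT change across releases.
REGRESSION_CASES = [
    ((100.0, False), 100.0),   # non-member price must stay the same
    ((0.0, True), 0.0),        # zero-price edge case
]

def run_regression():
    """Re-execute every stored case; return the list of failures."""
    failures = []
    for args, expected in REGRESSION_CASES:
        actual = discount(*args)
        if actual != expected:
            failures.append((args, expected, actual))
    return failures
```

If the change to `discount()` had accidentally altered non-member pricing, `run_regression()` would report it before release.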

UNIT-4
Efficient Test Suite management:
A test suite is a container that holds a set of tests and helps testers execute them and report the test execution status. It can be in any of three states, namely Active, In Progress, and Completed.
A test case can be added to multiple test suites and test plans. After creating a test plan, test suites are created, each of which can in turn contain any number of tests.
Test suites are created based on the test cycle or on scope. A suite can contain any type of test, functional or non-functional.
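As a rough sketch (not any specific tool's API), a test suite container with the three states above might look like this in Python:

```python
# Minimal test-suite container sketch; names and structure are illustrative.

class TestSuite:
    STATES = ("Active", "In Progress", "Completed")

    def __init__(self, name):
        self.name = name
        self.state = "Active"       # new suites start in the Active state
        self.test_cases = []        # a test case may live in many suites

    def add(self, test_case):
        """Add a test case (any number of tests per suite is allowed)."""
        self.test_cases.append(test_case)

    def set_state(self, state):
        """Move the suite between Active / In Progress / Completed."""
        if state not in self.STATES:
            raise ValueError(f"unknown state: {state}")
        self.state = state
```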


Test Case Prioritization in Software Testing:


As the name suggests, test case prioritization refers to prioritizing the test
cases in a test suite on the basis of different factors. Factors could be code
coverage, risk/critical modules, functionality, features, etc.
Why should test cases be prioritized?
As the size of the software increases, the test suite also grows bigger and
requires more effort to maintain. In order to detect bugs in the software as
early as possible, it is important to prioritize test cases so that the important
ones can be executed first.
Types of Test Case Prioritization :
 General Prioritization :
In this type of prioritization, test cases that will be useful for
subsequent modified versions of the product are prioritized. It does not
require any information about the modifications made to the product.
 Version-Specific Prioritization :
Test cases can also be prioritized such that they are useful for a specific
version of the product. This type of prioritization requires knowledge of the
changes that have been introduced in the product.
Prioritization Techniques :
1. Coverage – based Test Case Prioritization :
This type of prioritization is based on code coverage i.e. test cases are prioritized on
basis of their code coverage.
 Total Statement Coverage Prioritization –
In this technique, total number of statements covered by test case is
used as factor to prioritize test cases. For example, test case covering 10
statements will be given higher priority than test case covering 5
statements.
 Additional Statement Coverage Prioritization –
This technique involves iteratively selecting test case with maximum
statement coverage, then selecting test case which covers statements
that were left uncovered by previous test case. This process is repeated
till all statements have been covered.
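The "additional statement coverage" loop described above can be sketched as a greedy algorithm; the coverage data here is a hypothetical mapping from each test case to the statement ids it executes:

```python
def prioritize_additional_coverage(coverage):
    """Greedy 'additional statement coverage' prioritization.

    coverage: dict mapping test-case name -> set of statement ids it covers.
    Repeatedly pick the test adding the most statements not yet covered.
    """
    remaining = dict(coverage)
    covered = set()
    order = []
    while remaining:
        # Test case that covers the most still-uncovered statements.
        best = max(remaining, key=lambda t: len(remaining[t] - covered))
        if not (remaining[best] - covered):
            # No test adds new coverage; append the rest in a stable order.
            order.extend(sorted(remaining))
            break
        order.append(best)
        covered |= remaining.pop(best)
    return order
```

For example, a test covering 5 new statements is scheduled before one whose statements are already covered.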
2. Risk – based Prioritization :
This technique uses risk analysis to identify potential problem areas which if failed,
could lead to bad consequences. Therefore, test cases are prioritized keeping in
mind potential problem areas. In risk analysis, following steps are performed :
 List potential problems.
 Assigning probability of occurrence for each problem.
 Calculating severity of impact for each problem.
After performing above steps, risk analysis table is formed to present results. The
table consists of columns like Problem ID, Potential problem identified, Severity of
Impact, Risk exposure, etc.
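A minimal sketch of such a risk analysis table in Python, assuming risk exposure is computed as probability of occurrence times severity of impact (the problems and values are illustrative):

```python
# Risk analysis table sketch: risk exposure = probability x severity.
# Problem descriptions, probabilities, and severities are made up.
problems = [
    {"id": "P1", "problem": "Payment gateway timeout",
     "probability": 0.5, "severity": 9},
    {"id": "P2", "problem": "Minor UI misalignment",
     "probability": 0.75, "severity": 2},
]

for p in problems:
    p["risk_exposure"] = p["probability"] * p["severity"]

# Test cases for the highest-exposure problem areas are executed first.
by_risk = sorted(problems, key=lambda p: p["risk_exposure"], reverse=True)
```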
3. Prioritization using Relevant Slice :
In this type of prioritization, slicing technique is used – when program is modified,
all existing regression test cases are executed in order to make sure that program
yields same result as before, except where it has been modified. For this purpose,
we try to find part of program which has been affected by modification, and then
prioritization of test cases is performed for this affected part. There are 3 parts to
slicing technique :
 Execution slice –
The statements executed under the test case form the execution slice.
 Dynamic slice –
Statements executed under test case that might impact program output.
 Relevant Slice –
Statements that are executed under test case and don’t have any impact on
the program output but may impact output of test case.
4. Requirements – based Prioritization :
Some requirements are more important than others or are more critical in nature,
hence test cases for such requirements should be prioritized first. The following
factors can be considered while prioritizing test cases based on requirements :
 Customer assigned priority –
The customer assigns weight to requirements according to his need or
understanding of requirements of product.
 Developer perceived implementation complexity –
Priority is assigned by developer on basis of efforts or time that would be
required to implement that requirement.
 Requirement volatility –
This factor determines frequency of change of requirement.
 Fault proneness of requirements –
Priority is assigned based on how error-prone requirement has been in
previous versions of software.
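One way to combine the four factors above is a weighted score; the weights and the 1–10 scale here are illustrative assumptions, not a standard formula:

```python
# Hypothetical weights for the four requirement factors (must sum to 1.0).
WEIGHTS = {
    "customer": 0.4,      # customer-assigned priority
    "complexity": 0.2,    # developer-perceived implementation complexity
    "volatility": 0.2,    # frequency of requirement change
    "fault_prone": 0.2,   # fault proneness in previous versions
}

def requirement_priority(scores):
    """Weighted priority; scores map each factor to a 1-10 value."""
    return sum(WEIGHTS[f] * scores[f] for f in WEIGHTS)
```

Test cases for requirements with a higher score would be executed first.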
Software Quality Management:
Software Quality Management ensures that the required level of quality is
achieved by submitting improvements to the product development process.
SQA aims to develop a culture within the team and it is seen as everyone's
responsibility.
Software Quality management should be independent of project management
to ensure independence of cost and schedule adherences. It directly affects the
process quality and indirectly affects the product quality.
Activities of Software Quality Management:
 Quality Assurance - QA aims at developing Organizational
procedures and standards for quality at Organizational level.
 Quality Planning - Select applicable procedures and standards for a
particular project and modify as required to develop a quality plan.
 Quality Control - Ensure that best practices and standards are
followed by the software development team to produce quality
products.
Software Quality Metrics:
Software metrics can be classified into three categories −
 Product metrics − Describes the characteristics of the product such
as size, complexity, design features, performance, and quality level.
 Process metrics − These characteristics can be used to improve the
development and maintenance activities of the software.
 Project metrics − This metrics describe the project characteristics
and execution. Examples include the number of software
developers, the staffing pattern over the life cycle of the software,
cost, schedule, and productivity.
Some metrics belong to multiple categories. For example, the in-process quality
metrics of a project are both process metrics and project metrics.
Software quality metrics are a subset of software metrics that focus on the
quality aspects of the product, process, and project. These are more closely
associated with process and product metrics than with project metrics.
Software quality metrics can be further divided into three categories −
 Product quality metrics
 In-process quality metrics
 Maintenance quality metrics
Product Quality Metrics:
These metrics include the following −
 Mean Time to Failure
 Defect Density
 Customer Problems
 Customer Satisfaction
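For example, defect density is commonly expressed as defects per thousand lines of code (KLOC):

```python
def defect_density(defects, loc):
    """Defect density in defects per KLOC (thousand lines of code)."""
    return defects / (loc / 1000.0)

# e.g. 30 defects found in a 15,000-line component -> 2 defects per KLOC
density = defect_density(30, 15000)
```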
In-process Quality Metrics:
In-process quality metrics deals with the tracking of defect arrival during formal
machine testing for some organizations. This metric includes −
 Defect density during machine testing
 Defect arrival pattern during machine testing
 Phase-based defect removal pattern
 Defect removal effectiveness
Maintenance Quality Metrics
Although not much can be done to alter the quality of the product during this
phase, the following fixes can be carried out to eliminate defects as soon as
possible with excellent fix quality.
 Fix backlog and backlog management index
 Fix response time and fix responsiveness
 Percent delinquent fixes
 Fix quality
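As an example of the first item, the backlog management index (BMI) is typically the ratio of problems closed to problem arrivals during the period, times 100; a BMI above 100 means the fix backlog is shrinking:

```python
def backlog_management_index(closed, arrivals):
    """BMI = (problems closed / problem arrivals) * 100 for the period."""
    return round(closed / arrivals * 100.0, 1)

# 110 fixes closed against 100 new arrivals -> BMI 110, backlog shrinking
bmi = backlog_management_index(110, 100)
```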

UNIT-5
AUTOMATION AND TESTING TOOLS:
Test Automation:
Test Automation is the best way to increase the effectiveness, test
coverage, and execution speed in software testing. Automated software
testing is important due to the following reasons:

 Manual Testing of all workflows, all fields, and all negative scenarios is
time- and money-consuming
 It is difficult to test multilingual sites manually
 Test Automation in software testing does not require human
intervention. You can run automated tests unattended (overnight)
 Test Automation increases the speed of test execution
 Automation helps increase Test Coverage
 Manual Testing can become boring and hence error-prone.

Which Test Cases to Automate?


Test cases to be automated can be selected using the following criteria
to increase the automation ROI:

 High-risk, business-critical test cases
 Test cases that are repeatedly executed
 Test cases that are very tedious or difficult to perform manually
 Test cases which are time-consuming

The following category of test cases are not suitable for automation:

 Test Cases that are newly designed and not executed manually at
least once
 Test Cases for which the requirements are frequently changing
 Test cases which are executed on an ad-hoc basis.

Automated Testing Process:


Following steps are followed in an Automation Process

Test Automation Process


Step 1) Test Tool Selection

Step 2) Define scope of Automation

Step 3) Planning, Design and Development

Step 4) Test Execution

Step 5) Maintenance
Step 1) Test tool selection
Test Tool selection largely depends on the technology the Application
Under Test is built on. For instance, QTP does not support Informatica. So
QTP cannot be used for testing Informatica applications. It’s a good idea
to conduct a Proof of Concept of Tool on AUT.
Step 2) Define the scope of Automation
The scope of automation is the area of your Application Under Test which
will be automated. Following points help determine scope:

 The features that are important for the business
 Scenarios which have a large amount of data
 Common functionalities across applications
 Technical feasibility
 The extent to which business components are reused
 The complexity of test cases
 Ability to use the same test cases for cross-browser testing

Step 3) Planning, Design, and Development

During this phase, you create an Automation strategy & plan, which
contains the following details:

 Automation tools selected
 Framework design and its features
 In-scope and out-of-scope items of automation
 Automation testbed preparation
 Schedule and timeline of scripting and execution
 Deliverables of Automation Testing

Step 4) Test Execution

Automation scripts are executed during this phase. The scripts need input
test data before they are set to run. Once executed, they provide
detailed test reports.

Execution can be performed using the automation tool directly or
through a test management tool, which will invoke the automation
tool.

Example: Quality Center is a test management tool which in turn
invokes QTP for the execution of automation scripts. Scripts can be executed
on a single machine or a group of machines. The execution can be done
during the night, to save time.
Step 5) Test Automation Maintenance Approach
Test Automation Maintenance Approach is an automation testing phase
carried out to test whether the new functionalities added to the software
are working fine or not. Maintenance in automation testing is executed
when new automation scripts are added and need to be reviewed and
maintained in order to improve the effectiveness of automation scripts
with each successive release cycle.
Framework for Automation
A framework is a set of automation guidelines which help in:

 Maintaining consistency of testing
 Improving test structuring
 Minimizing usage of code
 Reducing maintenance of code
 Improving re-usability
 Involving non-technical testers in code
 Reducing the training period for using the tool
 Involving data wherever appropriate

There are four types of frameworks used in automation software testing:
1. Data Driven Automation Framework
2. Keyword Driven Automation Framework
3. Modular Automation Framework
4. Hybrid Automation Framework
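As an illustration of the first framework type, a data-driven sketch in Python keeps the test logic in one place and feeds it rows of external data (the CSV is in-memory here, and the login() function is a hypothetical system under test):

```python
import csv
import io

# Data-driven framework sketch: the test logic is written once and driven
# by rows of external data (normally a file or spreadsheet; in-memory here).

TEST_DATA = """username,password,expected
alice,correct-pw,success
alice,wrong-pw,failure
"""

def login(username, password):
    """Hypothetical system under test."""
    return "success" if password == "correct-pw" else "failure"

def run_data_driven():
    """Run the same test logic once per data row; return pass/fail flags."""
    results = []
    for row in csv.DictReader(io.StringIO(TEST_DATA)):
        actual = login(row["username"], row["password"])
        results.append(actual == row["expected"])
    return results
```

Adding a new scenario means adding a data row, not writing new test code.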

Test Tools
Testing Tools:
Tools from a software testing context can be defined as a product that supports one or more
test activities right from planning, requirements, creating a build, test execution, defect
logging and test analysis.

Classification of Tools
Tools can be classified based on several parameters. They include:
 The purpose of the tool
 The Activities that are supported within the tool
 The Type/level of testing it supports
 The Kind of licensing (open source, freeware, commercial)
 The technology used
Types of Tools:
S.No. | Tool Type                           | Used for                                                      | Used by
1.    | Test Management Tools               | Test managing, scheduling, defect logging, tracking, analysis | Testers
2.    | Configuration Management Tools      | Implementation, execution, tracking changes                   | All team members
3.    | Static Analysis Tools               | Static testing                                                | Developers
4.    | Test Data Preparation Tools         | Analysis and design, test data generation                     | Testers
5.    | Test Execution Tools                | Implementation, execution                                     | Testers
6.    | Test Comparators                    | Comparing expected and actual results                         | All team members
7.    | Coverage Measurement Tools          | Provides structural coverage                                  | Developers
8.    | Performance Testing Tools           | Monitoring the performance, response time                     | Testers
9.    | Project Planning and Tracking Tools | Planning                                                      | Project Managers
10.   | Incident Management Tools           | Managing the tests                                            | Testers

Tools Implementation - process

 Analyse the problem carefully to identify strengths, weaknesses and
opportunities
 Note the constraints such as budgets, time and other requirements
 Evaluate the options and shortlist the ones that meet the requirements
 Develop a Proof of Concept which captures the pros and cons
 Create a pilot project using the selected tool within a specified team
 Roll out the tool phase-wise across the organization
Selection of Testing Tools:
Making the decision to start test automation is easy, but choosing the right test
automation tool is not. There are teams that spend a lot on hiring new
manual testing resources but find it hard to invest in automation. The
reasons could be many. One major cause, which we are about to eliminate for
you, is understanding the criteria for the selection of testing tools.
Sometimes teams spend significant time exploring tools and find the
information out there so overwhelming that they give up on the idea of
automation altogether. Other times they choose a generic tool, start with
automation, but then never get past the first few test cases.
Project Requirements:

There is no point in looking for a solution when you don’t know the problem.
So, before you start exploring the various tools and technologies available in
the market for test automation, you need to list down your project
requirements and the problems you are looking to solve.
The list, in general, should answer the below questions:

 Type of application that needs to be tested: It could be a web, mobile,
API or desktop application.
 Platforms that need to be tested: If yours is a desktop application, list
down the operating systems that should be tested. If yours is a mobile
application, then list down the supported mobile operating systems. If
yours is a web application, then list down the supported browsers.
 Language your application is built in: This can help if you are planning
to use a programming language for automation.

Team Skills / Learning Curve:

When selecting a tool for automation, there are two types of tools:

 A codeless test automation tool
 An automation tool that requires coding

If yours is a team that already has people that are skilled in some
programming language then you can think of using an automation tool in
that programming language. Or, if you plan to hire skilled people for
automation then you don’t need to consider this point.
But, if you are planning to have an automation tool that will not need you to
look for people with the required skillset, going for codeless automation
tools will be a good idea. These tools allow the automation of test
cases without the need for knowing a programming language.
3. Budget:

This is a very important criterion for the selection of a testing tool. You might
easily say that you want a free tool because you don’t want to spend on
automation if you can avoid it.
But, you also need to consider that the amount of time being spent on
automation, the number of people working on the tool and the machines
being used for automation also constitute the amount you spend on
automation. So, consider below points before deciding on the budget:
 Cost of human resources being used for automation: If there is a tool
that does not need you to hire new resources, especially for
automation, consider it a saving.
 Time spent on learning the tool: If there is a tool that has a low
learning curve, that is an indirect saving in the cost you might have
spent in terms of the time the resources spend on learning the tool. Or
hiring resources that are skilled in that particular tool.
 Time being spent on automation: If there is a tool that makes it easy to
create and maintain test cases, thereby saving time, consider it a saving
in cost.

Ease of Test Case Creation and Maintenance:

Not every tool is made to handle all kinds of scenarios. So, to make sure that
your chosen tool meets your needs, try automating a few test cases of your
application to know if the tool suits your needs. That could be done with the
trial version of a tool if your search has narrowed down to premium tools.
Also, to avoid spending more time in test case maintenance as compared to
test case creation, make sure to choose a tool that fits your budget including
the maintenance costs. There are tools that have the ability to self-heal the
test cases in case of minor changes in the application.
Such tools help to reduce the cost of test case maintenance. Also, it helps if
the tool supports pause and resume of test case execution for a better
debugging experience.
Reusability:

To avoid writing the same code multiple times in multiple test cases and to
avoid duplication of efforts, look for tools that allow the reuse of already
created test steps in different test cases and projects.
6. Data-driven Testing:

If yours is an application that needs testing with a variety of data at multiple
interfaces, it is important to choose a tool that supports data-driven testing.
7. Reporting:

Test case creation and test case execution would be useless if the reports
were not useful, so do go through all the features of the reporting supported
by a tool.
8. Support for Collaboration:

If you are doing automation of a project for a client, the client will want to
review the quality of automated test cases.
It will also be beneficial if other non-technical members of the team are able
to automate/review the test cases. So, in such scenarios, look for tools that
make collaboration with the management and clients easy.
9. Support for Tools for Integration:

If there are some process improvement or CI/CD tools that you already use
or plan on using, make sure that you chose a tool that integrates with them.
10. Training and 24×7 Support:

Consider a scenario – you started using a tool for automation, and after
automating about 10 test cases successfully, you got stuck on the 11th test
case; you don’t know how to resolve the problem. You have looked at all
possible forums but you don’t have a solution in sight. If you want to save
time, use a tool that has 24×7 support to resolve any problems you
encounter.
Guidelines on Automate Testing:
1. DO automate tasks as close to the code as possible

Unit tests are so important because they exercise code functionality without
touching any dependency. If a developer makes a change that results in a hole
in logic, that hole will be detected before the change makes it to the tester,
saving everyone valuable time. A popular trend today is TDD, or test-driven
development. This is where the developer writes the unit tests before he or she
writes the code, ensuring that they begin by thinking about all the possible use
cases for the software before solving the technical challenges of writing the
code.

2. DO automate repetitive tasks

Some tests are so important that they need to run repeatedly. A perfect example
of this is the login test: if your users can’t log into the application, you have a
real customer service problem! But no one wants to spend time manually
logging into an application again and again. Automating your login test cases
ensures that authentication is tested with a wide variety of users and accounts,
both valid and invalid.
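A minimal sketch of such an automated login suite in plain Python (the login() function and the accounts are hypothetical stand-ins for the real application):

```python
def login(username, password):
    """Hypothetical authentication check standing in for the real app."""
    VALID = {"alice": "s3cret"}
    return VALID.get(username) == password

# The same automated test runs for many valid and invalid accounts,
# instead of someone logging in manually again and again.
LOGIN_CASES = [
    ("alice", "s3cret", True),     # valid credentials
    ("alice", "wrong", False),     # wrong password
    ("mallory", "s3cret", False),  # unknown user
    ("", "", False),               # empty credentials
]

def run_login_suite():
    """Return one pass/fail flag per credential case."""
    return [login(u, p) == expected for (u, p, expected) in LOGIN_CASES]
```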

3. DO automate things users will do every day

What is the primary function of your software? What is the typical path that a
user will take when using your software? These are the kinds of activities that
should have automated test cases. Rather than running through a manual user
path every morning, you can set your automated test to do it and you’ll be
notified right away if there is a problem.

4. DO automate basic smoke-level tests

I like to think of smoke-level tests as tests of features that we would be really
embarrassed by if they failed in the field. One company where I worked early in
my career had a search feature that was broken for weeks, and no one noticed
because we hadn’t run a test on it. Unfortunately, the bug was pushed out to
production and seen by customers. Automating these tests and running them
with every build can help catch major problems quickly.

5. DO automate things that will save you time

A coworker of mine was testing a feature which needed a completely new
account set up each time the test was run. Rather than set it up manually every
time, he created automation that would set up a new account for him, saving
him valuable time.

6. DO automate things that will allow you to exercise lots of different options

A test that fills out a form by filling in all of the available fields is not completely
testing the form. What if there is one missing field? What if there are two
missing fields? What if one of those fields is required? With automation, you can
exercise many different combinations of form submission in much less time than
it would take to do manually.
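The form example above can be automated by generating every filled/empty combination of the fields; the field names and the validation rule are assumptions for illustration:

```python
from itertools import product

FIELDS = ["name", "email", "phone"]   # hypothetical form fields
REQUIRED = {"email"}                  # assume only email is required

def validate_form(form):
    """The form is valid only if every required field is filled in."""
    return all(form.get(f) for f in REQUIRED)

def exercise_combinations():
    """Automatically try every filled/empty combination of the fields."""
    outcomes = {}
    for filled in product([True, False], repeat=len(FIELDS)):
        form = {f: ("x" if fill else "") for f, fill in zip(FIELDS, filled)}
        outcomes[filled] = validate_form(form)
    return outcomes
```

Eight combinations are exercised here in one run; doing the same by hand would mean filling the form eight times.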
7. DO automate things that will alert you when something is wrong

I have several negative tests in my API test suites that verify that a user can’t do
something when they don’t have permission to do it. Recently some of those
tests failed, alerting me to the fact that someone had changed the permission
structure, and now a user was able to view content they shouldn’t.

8. DON’T automate test cases that you know will be flaky

If you can’t come up with a way to run an automated test on a feature and have
it pass consistently, you may want to run that test manually or find a different
way to assert your results. When I was first getting started with automation, I
wanted to test that a feature sent an email and that the email was received. I
discovered that email clients are tricky to test in the UI, because there’s no way
of knowing how long it will take before the email is delivered. Instead, I verified
that the email provider had sent the email, and occasionally did a manual check
of the inbox to make sure that the email arrived.

9. DON’T automate test cases for features that are in the early stages and are
expected to go through many changes

It’s great to write unit tests for new code, which as mentioned above is usually
done by the developer. And automated services tests can be created before
there is a UI for a new feature. But if you know that your API endpoints or your
UI will be changing quite a bit as the story progresses, you may want to hold off
on services or UI automation until things have settled down a bit. For the
moment, manual testing will be your best strategy.

10. DON’T automate test cases for features that no one cares about

Your application probably runs on a wide variety of browsers, and your
inclination may be to run your tests on all of them. But it could be that only one
percent of users are running your application on a certain browser. If that’s the
case, why go through the stress of trying to run your tests on this
browser? Similarly, if there is a feature in your application that will be
deprecated soon and only one percent of your users are using it, your time
would probably be better spent automating another feature.
OVERVIEW OF TESTING TOOLS: WINRUNNER:
WinRunner is one of the most widely used automated software testing tools.
The main features of WinRunner are:

 Developed by Mercury Interactive
 Functionality testing tool
 Supports client/server and web technologies such as VB, VC++, D2K, Java, HTML,
PowerBuilder, Delphi, and Siebel (ERP)
 To support .NET, XML, SAP, PeopleSoft, Oracle applications, and multimedia,
we can use QTP.
 WinRunner runs on Windows only.
 XRunner runs only on UNIX and Linux.
 The tool was developed in C in a VC++ environment.
 To automate manual tests, WinRunner uses TSL (Test Script Language,
a C-like language)

The main testing process in WinRunner is:

1) Learning
Recognition of the objects and windows in our application by WinRunner is
called learning. WinRunner 7.0 follows auto-learning.
2) Recording
WinRunner records our manual business operations in TSL.
3) Edit Script
Depending on the corresponding manual test, the test engineer inserts checkpoints into
the recorded script.
4) Run Script
During test script execution, WinRunner compares the tester-given expected values with the
application's actual values and returns results.
5) Analyze Results
The tester analyzes the tool-given results to concentrate on defect tracking if required.

LoadRunner:
LoadRunner is a Performance Testing tool which was pioneered by
Mercury in 1999. LoadRunner was later acquired by HPE in 2006. In 2016,
LoadRunner was acquired by MicroFocus.
LoadRunner supports various development tools, technologies and
communication protocols. In fact, this is the only tool in market which
supports such a large number of protocols to conduct Performance
Testing. Performance Test Results produced by LoadRunner software are
used as a benchmark against other tools

Why LoadRunner?
LoadRunner is not only pioneer tool in Performance Testing, but it is still a
market leader in the Performance Testing paradigm. In a recent
assessment, LoadRunner has about 85% market share in Performance
Testing industry.

Broadly, the LoadRunner tool supports RIA (Rich Internet Applications), Web
2.0 (HTTP/HTML, Ajax, Flex and Silverlight etc.), Mobile, SAP, Oracle,
MS SQL Server, Citrix, RTE, Mail and, above all, Windows Sockets. There is
no competitor tool in the market which offers such a wide variety of
protocols vested in a single tool.

JMeter and JUnit:

We will learn to integrate JMeter and JUnit. This JMeter-JUnit integration helps
in load testing customized Java methods – the JUnit tests. Integrating JUnit into
JMeter helps in finding the time taken by individual tests under the applied load
using various JMeter capabilities. In this post, we will create a sample JUnit test
and then configure it to run in JMeter.
Steps to integrate JMeter with JUnit
1. Create a JUnit Test Project
2. Create Jar for the JUnit Project
3. Put Jar in JMeter’s lib/junit directory
4. Run JUnit tests in JMeter
Creating a JUnit Test Project:
Here, we will create a sample Java project having JUnit annotations. It contains a
test class sampleJUnitTest.java having two test methods.

For the demo, we have two JUnit tests in sampleJUnitTest.java –
sampleTestPassing and sampleTestFailing. The test sampleTestPassing passes
when run, and the test sampleTestFailing is explicitly failed using the
Assert.fail() command.

Creating a Jar for the JUnit Project:

Now we will create a Jar of the above JUnit project. In Eclipse, a jar can be created
easily using the Export functionality. Follow the steps below for Jar creation:
 Right-click on the Project and click on the Export button.
 Inside Java, click on the ‘JAR file’.
 Select your Project and check the resources. Also, provide the path for the
generated Jar file.

Putting Jar in JMeter’s lib/junit directory


Next put the generated Jar file in the JMeter’s lib/junit directory and restart
JMeter.

Running JUnit tests in JMeter


 First, add a “JUnit Request” to a Thread group
 Check the “Search for JUnit 4 annotations (instead of JUnit3)” checkbox
 From the “Classname” dropdown select the JUnit Test class created
 From the “Test Method” dropdown, select the JUnit method/Test you
want to load test
 Likewise, multiple JUnit Requests can be added with each request having a
Test method- in this example, two JUnit requests are added for Passing and
failing a test
Add Listeners and run the test.

Selenium Automation Testing:


Selenium is an open-source, automated, and valuable testing tool that all web application
developers should be well aware of. A test performed using Selenium is usually referred to
as Selenium automation testing. However, Selenium is not just a single tool but a collection
of tools, each catering to different Selenium automation testing needs. In this tutorial you
will learn all about Selenium and the various types of Selenium automation testing tools.

But before we explore this Selenium automation testing tutorial, let’s first address the need
for Selenium automation testing and how Selenium came into the picture in the first place.

Manual testing, a vital part of the application development process, unfortunately has
many shortcomings, chief among them being that the process is monotonous and repetitive. To
overcome these obstacles, Jason Huggins, an engineer at Thoughtworks, decided to
automate the testing process. He developed a JavaScript program called
the JavaScriptTestRunner that automated web application testing. This program was
renamed Selenium in 2004.
One disadvantage of Selenium automation testing is that it works only for web
applications, which leaves desktop and mobile apps out in the cold. However, tools
like Appium and HP’s QTP, among others, can be used to test software and mobile
applications.

Selenium Automation Testing Tools

Selenium consists of a set of tools that facilitate the testing process.

 Reusability of test scripts
 Debugging test scripts
 Selenium side runner
 Provision for control flow statements
 Improved locator functionality

TESTING OBJECT ORIENTED SOFTWARE:


Software typically undergoes many levels of testing, from unit testing to system or
acceptance testing. Typically, in unit testing, small “units”, or modules of the
software, are tested separately with focus on testing the code of that module. In
higher-order testing (e.g., acceptance testing), the entire system (or a subsystem) is
tested with the focus on testing the functionality or external behavior of the
system.
As information systems are becoming more complex, the object-oriented paradigm
is gaining popularity because of its benefits in analysis, design, and coding.
Conventional testing methods cannot be applied for testing classes because of
problems involved in testing classes, abstract classes, inheritance, dynamic binding,
message passing, polymorphism, concurrency, etc.
Testing classes is a fundamentally different problem than testing functions. A
function (or a procedure) has a clearly defined input-output behavior, while a class
does not have an input-output behavior specification. We can test a method of a
class using approaches for testing functions, but we cannot test the class using
these approaches.
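The point above can be sketched in code. The following is a minimal illustration, using a hypothetical Stack class (not from the source text): no single method has a standalone input-output specification, so the test must exercise a sequence of calls against the object's state.

```python
# Hypothetical Stack class: its methods only make sense relative to the
# object's internal state, unlike a pure function.
class Stack:
    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()

    def is_empty(self):
        return len(self._items) == 0

def test_push_then_pop():
    s = Stack()
    s.push(1)
    s.push(2)
    # pop()'s result depends entirely on the state set up by earlier calls,
    # so the method sequence, not one call, is the unit under test.
    assert s.pop() == 2
    assert s.pop() == 1
    assert s.is_empty()

test_push_then_pop()
```

Testing pop() in isolation, the way one would test a standalone function, is impossible without first driving the object into a suitable state via push().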
According to Davis the dependencies occurring in conventional systems are:
 Data dependencies between variables
 Calling dependencies between modules
 Functional dependencies between a module and the variable it computes
 Definitional dependencies between a variable and its types.
OOAD:
Once a program code is written, it must be tested to detect and subsequently handle all errors
in it. A number of schemes are used for testing purposes.
Another important aspect is the fitness of purpose of a program that ascertains whether the
program serves the purpose which it aims for. The fitness defines the software quality.

Testing Object-Oriented Systems


Testing is a continuous activity during software development. In object-oriented systems,
testing encompasses three levels, namely, unit testing, subsystem testing, and system testing.

Unit Testing
In unit testing, the individual classes are tested. It is seen whether the class attributes are
implemented as per design and whether the methods and the interfaces are error-free. Unit
testing is the responsibility of the application engineer who implements the structure.
Subsystem Testing
This involves testing a particular module or a subsystem and is the responsibility of the
subsystem lead. It involves testing the associations within the subsystem as well as the
interaction of the subsystem with the outside. Subsystem tests can be used as regression tests
for each newly released version of the subsystem.

System Testing
System testing involves testing the system as a whole and is the responsibility of the quality-
assurance team. The team often uses system tests as regression tests when assembling new
releases.

Object-Oriented Testing Techniques


Grey Box Testing
The different types of test cases that can be designed for testing object-oriented programs
are called grey box test cases. Some of the important types of grey box testing are −
 State model based testing − This encompasses state coverage, state transition
coverage, and state transition path coverage.
 Use case based testing − Each scenario in each use case is tested.
 Class diagram based testing − Each class, derived class, associations, and
aggregations are tested.
 Sequence diagram based testing − The methods in the messages in the
sequence diagrams are tested.
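State model based testing from the list above can be sketched as follows. The two-state Door model here is a hypothetical example, not from the source: each assertion checks one entry of the transition table, so together the calls cover every state transition.

```python
# Hypothetical two-state model: "closed" <-> "open".
class Door:
    def __init__(self):
        self.state = "closed"

    def open(self):
        if self.state == "closed":
            self.state = "open"

    def close(self):
        if self.state == "open":
            self.state = "closed"

def test_transition_coverage():
    d = Door()
    d.open()                      # closed -> open
    assert d.state == "open"
    d.open()                      # open -> open (self-transition)
    assert d.state == "open"
    d.close()                     # open -> closed
    assert d.state == "closed"
    d.close()                     # closed -> closed (self-transition)
    assert d.state == "closed"

test_transition_coverage()
```

State transition path coverage would extend this by testing longer sequences of transitions, not just each transition once.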
Techniques for Subsystem Testing
The two main approaches of subsystem testing are −
 Thread based testing − All classes that are needed to realize a single use case
in a subsystem are integrated and tested.
 Use based testing − The interfaces and services of the modules at each level of
hierarchy are tested. Testing starts from the individual classes to the small
modules comprising classes, gradually to larger modules, and finally all the
major subsystems.
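Thread based testing can be sketched as follows. The Inventory and Cart classes here are hypothetical illustrations: all classes needed to realize a single "add to cart" use case are integrated and tested together as one thread of behavior.

```python
# Hypothetical classes integrated for one use case.
class Inventory:
    def __init__(self, stock):
        self.stock = dict(stock)

    def reserve(self, item):
        # Reserve one unit if available; report success or failure.
        if self.stock.get(item, 0) > 0:
            self.stock[item] -= 1
            return True
        return False

class Cart:
    def __init__(self, inventory):
        self.inventory = inventory
        self.items = []

    def add(self, item):
        # The cart only accepts items the inventory can reserve.
        if self.inventory.reserve(item):
            self.items.append(item)

def test_add_to_cart_thread():
    inv = Inventory({"book": 1})
    cart = Cart(inv)
    cart.add("book")        # succeeds: one copy in stock
    cart.add("book")        # rejected: stock exhausted
    assert cart.items == ["book"]
    assert inv.stock["book"] == 0

test_add_to_cart_thread()
```

Note that the test verifies the interaction between the classes (reservation affecting the cart), not either class in isolation.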
Categories of System Testing
 Alpha testing − This is carried out by the testing team within the organization
that develops software.
 Beta testing − This is carried out by a select group of co-operating customers.
 Acceptance testing − This is carried out by the customer before accepting the
deliverables.

Web Based Testing:


Web testing is a software testing technique to test web applications or websites for
finding errors and bugs. A web application must be tested properly before it goes to
the end-users. Also, testing a web application does not only mean finding common
bugs or errors but also testing the quality-related risks associated with the
application. Software testing should be done with proper tools and resources and
should be done effectively. We should know the architecture and key areas of a web
application to effectively plan and execute the testing.
Testing a web application has much in common with testing any other application, like
testing functionality, configuration, or compatibility, etc. Testing a web application
also includes the analysis of web-specific faults compared to general software faults. Web
applications are required to be tested on different browsers and platforms so that
we can identify the areas that need special focus while testing a web application.
Types of Web Testing:
Basically, there are 4 types of web-based testing that are available and all four of
them are discussed below:
 Static Website Testing: A static website is a type of website in which the
content shown or displayed is exactly the same as it is stored in the server.
This type of website may have a great UI but does not have any dynamic feature
that a user or visitor can use. In static testing, we generally focus on testing
things like the UI, as it is the most important part of a static website. We check
things like font size, color, spacing, etc. Testing also includes checking the
contact us form, verifying URLs or links that are used in the website, etc.
 Dynamic Website Testing: A dynamic website is a type of website that
consists of both the frontend, i.e., the UI, and the backend of the website, like a
database, etc. This type of website gets updated or changed regularly as per
the user’s requirements. In this website, there are a lot of functionalities
involved, like what a button will do when it is pressed, whether error messages
are shown properly at the right time, etc. We check whether the backend is
working properly or not, e.g., whether data entered in the
GUI or frontend gets updated in the database or not.
 E-Commerce Website Testing: An e-commerce website is very difficult to
maintain as it consists of many different pages and functionalities, etc. In this
testing, the tester or developer has to check various things, like whether
the shopping cart is working as per the requirements or not, whether the user
registration and login functionality are working properly or not, etc. The
most important checks in this testing are whether a user can successfully make a
payment and whether the website is secure. And there are a lot of other things
that a tester needs to test apart from the given things.
 Mobile-Based Web Testing: In this testing, the developer or tester basically
checks the website's compatibility on different devices, and generally on
mobile devices, because many users open the website on their
mobile devices. So, keeping that in mind, we must check that the site
is responsive on all devices or platforms.
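One static-website check mentioned above, verifying the links used in a website, can be sketched with Python's standard-library HTML parser. The page snippet below is a hypothetical example; a real test would feed in the site's actual pages and then request each collected URL.

```python
from html.parser import HTMLParser

# Collect every href on a page so each URL can be verified.
class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

# Hypothetical page content with one valid and one empty link.
page = '<a href="https://example.com/contact">Contact us</a><a href="">broken</a>'
collector = LinkCollector()
collector.feed(page)

# Flag empty hrefs, which would render as dead links on the site.
broken = [href for href in collector.links if not href]
print(len(collector.links), len(broken))   # 2 links found, 1 broken
```

For links that survive this check, a follow-up step would issue an HTTP request to each URL and flag non-200 responses.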
Challenges in Testing Web-Based Software:
Web-based software is different as compared to traditional systems. Keeping in view the
environment and behavior of web-based systems, we face many challenges while testing
them. These challenges become issues and guidelines when we perform testing of these
systems. Some of the challenges and quality issues for web-based systems are discussed as –

1. Diversity and complexity – Web applications interact with many components that run on
diverse hardware and software platforms. They are written in diverse languages and they
are based on different programming approaches such as procedural, OO and hybrid
languages such as Java Server Pages (JSPs). The client side includes browsers, HTML,
embedded scripting languages and applets. The server side includes CGI, JSPs, Java servlets
and .NET technologies. They all interact with diverse back-end engines and other
components that are found on the web server or other servers.

2. Dynamic environment – The key aspect of web applications is their dynamic nature. The
dynamic aspects are caused by uncertainty in the program behavior, changes in application
requirements, the rapidly evolving web technology itself and other factors. The dynamic nature
of web software creates challenges for the analysis, testing and maintenance of these
systems. For example, it is difficult to determine statically the application's control flow because
the control flow is highly dependent on user input and sometimes on trends in user
behavior over time or user location. Not knowing which page an application is likely to
display hinders statically modelling the control flow with accuracy and efficiency.

3. Very short development time – Clients of web-based systems impose very short
development time, compared to other software or information systems project (eg: e-
business system, etc.)

4. Continuous evolution – Demand for more functionality and capacity after the system has
been designed and deployed to meet the extended scope and demands i.e. scalability
issues.

5. Compatibility and interoperability – There are many compatibility issues that make
testing a difficult task. Web applications often are affected by factors that may cause
incompatibility and interoperability issues. The problem of incompatibility exists on both the
client as well as the server side. The server components can be distributed to different
operating systems, while various versions of browsers running under a variety of operating
systems can be there on the client side. Graphics and other objects on a website have to be
tested on multiple browsers. The script that executes from the browser also has to be tested.
The different versions of HTML are similar in some ways, but they have different tags which
may produce different results.

Aspects of Software Quality:


Software engineering is a complex field. Good software engineers are capable of
balancing opposing forces and working within constraints to create great
software. Poor software developers (they really aren’t engineers) are ones who
are incapable of perceiving the trade-offs they are making and the implications of
their design decisions (or lack thereof).

Every software engineer absolutely must know the seven aspects of software
quality:

 Reliability
 Understandability
 Modifiability
 Usability
 Testability
 Portability
 Efficiency

These seven aspects can be measured and judged for any software product. They
apply to embedded systems, websites, mobile apps, video games, open-source
APIs, internal services, and any other sort of software product. These aspects are
entirely business domain independent. A software engineer’s ability can be
measured by the quality of the software he creates. Skilled engineers create high-
quality software and source code.

For different projects, the prioritization of the aspects of quality will vary. Some
projects should be focused on reliability, usability, and understandability, while
other projects will place high importance on testability and efficiency. As a
software engineer, you must know which aspects of quality are most important to
your project. You must apply your best efforts towards the most critical aspects,
and not spend excessive time on less important aspects.

Reliability: Software is reliable if it behaves consistently. The functionality of a
program should be predictable and repeatable. Errors should occur rarely or not
at all. Errors that do occur should be handled gracefully and proactively. Users
should never ask themselves whether the software will work correctly.

Understandability: The structure, components, and source code must be
understandable. They must be clear. They must be well-organized. They must
behave the way a developer would expect. Anything in the code that causes
developers confusion reveals that the code is lacking in understandability. High-
quality source code always appears simple and obvious.

Modifiability: It should be easy to add or change the behavior of a system. Flexible
systems require changing very few lines of code to alter a behavior. For the
expected dimensions of change, an application should have plugin points that
allow the application to be used with different contextual elements. Tight
coupling to an element that is expected to change is absolutely unacceptable.

Usability: Software products must be simple and easy to use. The common use
cases should be as obvious and clearly presented as possible. Software should not
require excessive configuration. Users should feel empowered by your software.
They should not need internet searches to discover core application functionality.

Testability: The functionality of software must be verifiable. The process of testing
the software must be easy. Each business use case should be directly testable.
Clear verification metrics must be available. Highly testable software will ship with
a comprehensive automated test suite.

Portability: Portable software is usable in different environments and contexts. It
is highly reusable. Portable software is decoupled from specific operating
systems, types of hardware, and deployment contexts. Extremely portable
software is reusable across projects and problem domains.

Efficiency: Efficient software uses as few physical resources as possible. It is fast. It
is memory-efficient. It consumes few CPU cycles. It uses little battery life. It makes
few external service calls. It minimizes the number of database calls. Efficient
software accomplishes as much as possible with the least amount of resources.

Web engineering:
Web engineering focuses on the methodologies, techniques, and tools that are
the foundation of Web application development and which support their
design, development, evolution, and evaluation. Web application development
has certain characteristics that make it different from traditional software,
information system, or computer application development.
Web engineering is multidisciplinary and encompasses contributions from
diverse areas: systems analysis and design, software engineering,
hypermedia/hypertext engineering, requirements engineering, human-
computer interaction, user interface, data engineering, information science,
information indexing and retrieval, testing, modelling and simulation, project
management, and graphic design and presentation. Web engineering is neither
a clone nor a subset of software engineering, although both involve
programming and software development. While Web Engineering uses
software engineering principles, it encompasses new approaches,
methodologies, tools, techniques, and guidelines to meet the unique
requirements of Web-based applications.
 Web-based Information Systems (WIS) development process is
different and unique.[3]
 Web engineering is multi-disciplinary; no single discipline (such as
software engineering) can provide complete theory basis, body of
knowledge and practices to guide WIS development. [4]
 Issues of evolution and lifecycle management when compared to
more 'traditional' applications.
 Web-based information systems and applications are pervasive and
non-trivial. The prospect of Web as a platform will continue to grow
and it is worth being treated specifically.
However, it has been controversial, especially for people in other traditional
disciplines such as software engineering, to recognize Web engineering as a new field. The issue
is how different and independent Web engineering is, compared with other disciplines.

Mobile Testing Tools:


Mobile application testing is a process by which application software developed
for handheld mobile devices is tested for its functionality, usability, and
consistency. Mobile applications either come pre-installed or can be installed
from mobile software distribution platforms. Testing can be automated or
manual.

What are Mobile testing tools?

Mobile testing tools are software for testing mobile applications. This
category includes cloud-based testing tools, application distribution
tools, crash-reporting tools, performance testing tools, mobile phone
emulators, automated UI testers, mobile optimization tools, A/B testing tools, and
defect logging tools.

Criteria For Selecting Mobile Testing Tool

A few criteria may be considered when choosing mobile application testing
tools to satisfy and achieve the needs and requirements of testing both from the
technical and business perspectives.
1. Training, documentation, tutorials, and guidelines: This concerns
the training materials available for learning and using the tool, which
are vital for tool selection.
2. Targeted platform: Selection of the testing tool should be
made with respect to the platform, along with its different variants and versions,
for which a mobile application is targeted and intended to work.
Nonetheless, it is preferred that, apart from targeting one or more
major platforms, the testing tool should be able to provide testing for
other platforms as well. This ensures cross-platform testing of
mobile applications.
3. Code and build requirements and needs: Source code and
builds are a matter of concern for privacy and security. Consequently,
code or builds should not be shared or sent outside the testing
team, premises, or environment to any unknown or unauthorized entity. The
chosen tool should not compromise the privacy and security
of the source code and builds in any respect.
4. Level of programming skills required: If testers in the
team do not have good programming skills, having to learn programming in
order to use a tool is a genuine concern.
5. Additional features: Besides automating the mobile application tests,
a testing tool should be able to provide additional helpful features. It should
be able to deliver multiple functionalities, for example:
 Logging and reporting defects.
 Sorting logged defects with respect to priority, time, type, and
other relevant parameters.
 Monitoring and tracking bugs.
 Helping the QA or project manager in reviewing the
overall status and summarizing the status of the tests.
6. Continuous testing: The automated testing tool should be able to
deliver continuous testing to evaluate the degree of impact on the software
caused by a change in the code. Changes introduced in the
code should be promptly tested by the tool.
7. Good test reports: Tool-generated test records and logs for status
reports, test execution comparisons, and defect diagnoses.
8. Licensing and support costs: All expenses related to acquiring, maintaining,
and supporting the use of the tools.
9. Third-party bug tracking system: The chosen tool
should be able to support and integrate with other or third-party bug
tracking systems.
10. Team management: The tool, alongside the task of testing
the mobile application, should also provide the benefit of managing
the activities of the testing team, which may include roles and
responsibilities, tasks assigned to each member, the status of each task, feedback,
and reviews.
10 Mobile Testing Tools

1. Kobiton

It is a cloud platform that permits real devices or emulators to run automated
or manual tests on all applications regardless of the OS. It works well with
native applications, Android applications, and iOS applications. It uses the
Appium framework and receives regular updates to further improve its test
efficiency. It also permits script changes for better testing practices.

2. Test Complete

TestComplete permits you to run several repeated UI tests over the application platform.
It is a viable tool that can help you in testing hybrid mobile applications, which
means it supports both Android as well as iOS application testing. Besides, it is an
automated testing tool that you can deploy on real mobile devices or
emulators with ease. Automated test scripts are available in the tool,
but you can also choose from VBScript, JavaScript, Python, and others.
With TestComplete, you can create and run repeatable and robust UI tests across
native or hybrid mobile applications, and there is no
need to jailbreak your phone or tablet.

3. Appium

Appium is considered one of the best mobile application testing
tools, used by most professional testers. Appium is a viable tool for web and
mobile application testing that works well even for hybrid
applications. Additionally, Appium is also meant for automated functional
testing to improve the overall usefulness of applications.
Appium is an open-source, cross-platform mobile testing tool for hybrid and
native iOS applications; it supports Android versions from 2.3 onwards. Appium
works like a server running in the background, like a Selenium server.
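As a rough illustration, an Appium session is typically configured through a set of "desired capabilities" sent to the Appium server before tests run. The values below (device name, app path) are placeholder assumptions, not a working configuration:

```json
{
  "platformName": "Android",
  "automationName": "UiAutomator2",
  "deviceName": "emulator-5554",
  "app": "/path/to/app.apk"
}
```

A test client passes these capabilities when it opens a session, and the server then drives the named device or emulator on the client's behalf.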

4. Test IO

Test IO permits you to test mobile applications with continuous execution to ensure
that the mobile application designated for your business works flawlessly on
practically all OS platforms. These testing tools are, at times, ahead of professional
testers in recognizing internal bugs. Besides, Test IO has flexible testing
measures that accommodate the diverse needs and requirements of clients and
deliver faster results. Using Test IO, you can remove the QA bottleneck
on demand to meet your expectations of the application.

5. Katalon Studio

Katalon Studio is the leading Appium alternative for mobile testing. It also brings
extended capabilities for web, API, and desktop testing. Built on top of Appium and
Selenium, Katalon Studio removes those tools' steep learning curve and thus
brings a codeless testing experience to users of all scales and
expertise. As well as supporting Android and iOS platforms, testing across OSes
(Windows, macOS, and Linux) is also available.

6. Robotium

Robotium is a testing tool designed to handle Android applications only,
under an automated testing framework. Robotium is specifically designed for
black-box testing of Android applications. It uses Java to write the test scripts. Some
additional requirements for the smooth running of this tool are the
Android SDK, Eclipse for the test project, the Android Development Kit, and the JDK.

7. Ranorex Studio

Ranorex Studio is an all-in-one solution for mobile application testing.
Ranorex Studio is simple for novices, with a codeless click-and-go interface
and helpful wizards, yet powerful for automation specialists with a full IDE.
It supports iOS and Android testing, including native mobile applications and
mobile web applications.

8. Selendroid (Selenium for Android)

Selendroid is likewise an open-source framework, simultaneously interacting with
various devices and emulators. It drives the UI of native as well as hybrid
applications and, furthermore, the mobile web; consequently, tests should be
written using the Selenium 2 client API. The test code of Selendroid is based on
Selenium 2 and the WebDriver API.

9. Eggplant

Eggplant is a commercial GUI automation testing product designed and created by
TestPlant, used for Android and iOS application testing; its mobile module is named
eggOn. It permits you to conduct end-to-end testing of mobile applications and
websites.
It is helpful for UI automation as well as functional, image-based, mobile, network,
web, and cross-browser testing.
One script for all devices and platforms and full device support are some additional
features of this tool, and, furthermore, there is no requirement for any change in
the application code to test the application under test.
10. testRigor – Write Complex Automation Tests with Plain English

With testRigor, manual QA can create truly stable and entirely reliable mobile
automated tests – for native and hybrid mobile applications (for both
iOS and Android), as well as mobile web and API. It helps you
directly express tests as executable specifications in plain English.
Users of all technical abilities can build end-to-end tests of any complexity
covering mobile, web, and API steps in a single test. Test steps are expressed
at the end-user level rather than depending on details of implementation like
XPaths or CSS Selectors.
