Software Testing
Software development, including software testing, involves human beings; therefore, human psychology influences the process.
In software testing, psychology plays an extremely important role. It is one of those factors
that stay behind the scenes but have a great impact on the end result. Categorized into three
sections, the psychology of testing enables smooth testing and makes the process
hassle-free. It depends mainly on the mindset of the developers and testers, as well as
the quality of communication between them. Moreover, the psychology of testing improves
mutual understanding among team members and helps them work towards a common goal.
Test Independence.
Software development consists of various activities, which are performed by different
individuals using their expertise. To develop software, people with different skills and
mindsets are required. From coding to software implementation, engineers with different
skill sets are required to perform the tasks in which they are certified. As testing and
analyzing the software is much different from developing or programming it, it is quite clear
that to develop software with unique features and superior quality, developers and testers
of different mindsets are required. Moreover, the introduction of new development and
testing techniques and methodologies has made this a necessary requirement.
Software development requires constant communication between team members to successfully
achieve their goals and aims. Communication, if done in a polite and courteous manner, can
help build a strong and reliable relationship between team members and help them avoid any
misunderstandings. When testers find bugs and report them, it becomes vital to communicate
this in a courteous, polite, and suitable way to avoid hurting or disappointing anyone.
Finding and reporting defects can be an achievement for the tester but is merely an
inconvenience for programmers, designers, developers, and other stakeholders of the project.
As testing can be seen as a destructive activity, it is important for software testers to
report defects and failures as objectively and politely as possible to avoid hurting or
upsetting other members of the project.
Independent testing is rated higher than self-testing or group testing, as it helps avoid
author bias and is often more effective in finding and detecting bugs and defects. This type
of testing is mainly performed by individuals who are not directly related to the project or
who are from a different organization and are hired mainly to test the quality as well as
the effectiveness of the developed product. Test independence, therefore, helps developers
and other stakeholders get more accurate test results, which helps them build a better
software product with superior quality.
S.No. | Effective Testing | Exhaustive Testing
3. It is feasible because: a) it checks for software reliability and ensures no bugs in the final product, b) it tests in each phase, c) it uses constrained resources.
   It is not feasible because of: a) deadlines to achieve, b) the various possible outputs, c) timing constraints, d) the number of possible test environments.
5. It is less complex and less time-consuming.
   It is complex and time-consuming.
6. It is adopted such that critical test cases are covered first.
   It covers all the test cases.
Unit Testing – It refers to individual units or components of the software that are tested. The
purpose of unit testing is to see that each software unit performs as per the expected
requirement and functionality. It may be termed as first-level testing.
Integration Testing - It refers to testing in which the individual software components or
modules are combined together and tested as a group. It is used to see whether there is any
fault in the integrated units. It is normally performed after the unit testing.
System Testing – The testing is performed on the system as a whole and to see whether the
system is working as per the requirements and functionality specified. It takes the
components of the integration testing as input and performs the testing.
Regression Testing – This testing is performed to check whether the changes to the software
do not produce undesirable results. It is used to ensure that the previously developed
software components and tested components still perform correctly after the changes. It
also depicts that the changes have not broken any functionality.
UAT – User Acceptance testing – It is the testing done by the end-user or client to
accept the software system before moving the software to the production
environment. It is used to check the functionality of the software end to end related
to the business flow.
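The first level above can be sketched in code. The following is a minimal, illustrative unit test written with plain assertions; the shopping-cart function is a hypothetical example, not from any specific application.

```python
def add_to_cart(cart, item, quantity):
    """Hypothetical unit under test: add an item to a cart dictionary."""
    if quantity <= 0:
        raise ValueError("quantity must be positive")
    cart[item] = cart.get(item, 0) + quantity
    return cart

# Unit tests: each checks one behaviour of the unit in isolation,
# without involving any other module (that would be integration testing).
def test_adds_new_item():
    assert add_to_cart({}, "pen", 2) == {"pen": 2}

def test_accumulates_quantity():
    assert add_to_cart({"pen": 1}, "pen", 2) == {"pen": 3}

test_adds_new_item()
test_accumulates_quantity()
print("all unit tests passed")
```

Each test exercises the unit through its public interface only, which is what makes it a first-level test.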
The testers should possess good knowledge of the software or application. No critical bug or
error should be left in the application.
Black Box Testing – In this type of testing, the emphasis is on the functionality of the
application. The code is like a black box to the tester, who concentrates on the
functionality rather than the internal structure of the application. It is used to check
the application from the user’s perspective and as per the requirements stated by the
customer.
White Box Testing – In this type of testing, the emphasis is on the internal structure of
the program in the application/software, the code structure, and the internal design, in
order to verify the flow of inputs and outputs through the software.
Grey Box Testing – This software testing method is a combination of black box testing and
white box testing. It includes partial testing of the internal coding structure and therefore
involves inputs from both the developer and the tester to ensure a quality product.
Ad-hoc Testing – It is also one of the terms used in Software Testing. The testing is
performed without planning and other formal documentation. It is normally
intended to run once. This testing is carried out on a random basis to check the
functionality of the application.
Exploratory Testing – In this testing, the tester explores the application to learn what it
intends to do, or its purpose. The tester uses his experience and knowledge to check the
application and tries to find errors/bugs. We can say it is learning along with test
execution.
Alpha Testing – This testing is performed before the final product is released to the
end-users or the customer. It is basically performed by the QA testers internally in the
organization, i.e., at the developer’s site. Bugs can be addressed and fixed by the
developers immediately.
Beta Testing – This testing is performed by end-users who are not part of the organization,
normally at the client’s place. The issues collected from the testing will be addressed in
future versions of the product or application. User inputs on the product are gathered to
ensure it is ready for the customers.
Automated Testing – In this testing, test cases are executed using an automation tool. The
test data is prepared, test cases are executed using the required data, and test scripts are
created for testing the application.
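A minimal sketch of this idea is a data-driven test script: prepared test data is fed to the unit under test by a script instead of being entered manually. The login validator and its rules here are hypothetical, chosen only to illustrate the pattern.

```python
def is_valid_login(username, password):
    """Hypothetical unit under test: both fields must be non-empty
    and the password must be at least 8 characters long."""
    return bool(username) and len(password) >= 8

# Prepared test data: (username, password, expected result)
test_data = [
    ("alice", "s3cretpass", True),
    ("alice", "short", False),
    ("", "s3cretpass", False),
]

failures = 0
for username, password, expected in test_data:
    actual = is_valid_login(username, password)
    if actual != expected:
        failures += 1
        print(f"FAIL: {username!r}/{password!r}: got {actual}, expected {expected}")

print(f"{len(test_data)} cases run, {failures} failures")
```

In practice a dedicated automation tool would manage the data, execution, and reporting, but the structure is the same: data in, expected results compared, failures reported.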
Software Testing Life Cycle (STLC):
Verification testing
Verification testing is also known as static testing, where we ensure that "we are building
the product right", and it checks that the application under development fulfils all the
requirements given by the client.
Validation testing
Validation testing is testing where the tester performs functional and non-functional
testing. Here, functional testing includes Unit Testing (UT), Integration Testing (IT) and
System Testing (ST), and non-functional testing includes User Acceptance Testing (UAT).
Validation testing is also known as dynamic testing, where we ensure that "we have built
the right product", and it checks that the software meets the business needs of the client.
Difference between verification and validation testing
1. Verification: We check whether we are building the product right.
   Validation: We check whether we have built the right product.
2. Verification is also known as static testing.
   Validation is also known as dynamic testing.
3. Verification includes methods such as inspections, reviews, and walkthroughs.
   Validation includes testing such as functional testing, system testing, integration testing, and user acceptance testing.
4. Verification is a process of checking the work-products (not the final product) of a development cycle to decide whether the product meets the specified requirements.
   Validation is a process of checking the software during or at the end of the development cycle to decide whether the software follows the specified business requirements.
5. Quality assurance comes under verification testing.
   Quality control comes under validation testing.
6. The execution of code does not happen in verification testing.
   In validation testing, the execution of code happens.
7. In verification testing, we can find bugs early in the development phase of the product.
   In validation testing, we can find those bugs which are not caught in the verification process.
8. Verification testing is executed by the quality assurance team to make sure that the product is developed according to customers' requirements.
   Validation testing is executed by the testing team to test the application.
9. Verification is done before validation testing.
   After verification testing, validation testing takes place.
10. In verification, we check whether the inputs follow the outputs or not.
    In validation, we check whether the user accepts the product or not.
Verification Testing:
Verification is the process of evaluating work-products of a development phase to determine
whether they meet the specified requirements.
Verification ensures that the product is built according to the requirements and design
specifications. It answers the question, "Are we building the product right?"
Verification of Requirements:
In this type of verification, all the requirements gathered from the user's viewpoint are
verified. For this purpose, an acceptance criterion is prepared. An acceptance criterion
defines the goals and requirements of the proposed system and acceptance limits for each
of the goals and requirements. Acceptance criteria matter most in the case of real-time
systems, where performance is a critical issue in certain events. For example, if a
real-time system has to process and take a decision for a weapon within x seconds, then any
performance below this threshold would lead to rejection of the system. Therefore, it can be
concluded that acceptance criteria for a requirement must be defined by the designers of the
system and should not be overlooked, as overlooked criteria can create problems while
testing the system.
The tester works in parallel by performing the following two tasks:
1. The tester reviews the acceptance criteria in terms of its completeness, clarity, and
testability. Moreover, the tester understands the proposed system well in advance
so that necessary resources can be planned for the project.
2. The tester prepares the Acceptance Test Plan, which is referred to at the time of
Acceptance Testing.
Verification of Objectives
After gathering requirements, specific objectives are prepared considering every
specification. These objectives are prepared in a document called software requirement
specification (SRS). In this activity also, two parallel activities are performed by the tester:
1. The tester verifies all the objectives mentioned in SRS. The purpose of this
verification is to ensure that the user's needs are properly understood before
proceeding with the project.
2. The tester also prepares the System Test Plan which is based on SRS. This plan will be
referenced at the time of System testing.
In verifying the requirements and objectives, the tester must consider both functional and
non- functional requirements. Functional requirements may be easy to comprehend,
whereas non-functional requirements pose a challenge to testers in terms of understanding,
quantifying, test planning, and test execution.
How to Verify Requirements and Objectives
Requirement and objectives verification has a high potential of detecting bugs. Therefore,
requirements must be verified. As stated above, the testers use the SRS for verification of
objectives. One characteristic of a good SRS is that it can be verified. An SRS can be verified,
if and only if, every requirement stated herein can be verified. A requirement can be
verified, if, and only if, there is some procedure to check that the software meets its
requirement. It is a good idea to specify the requirements in a quantified manner, which
means that ambiguous statements or terms such as 'good quality', 'usually', and 'may happen'
should be avoided; quantified specifications should be provided instead. An example of a
verifiable statement is one with a measurable criterion, such as 'the system shall respond
to a query within 2 seconds', as opposed to the unverifiable 'the system shall respond
quickly'.
S.No. | HIGH LEVEL DESIGN | LOW LEVEL DESIGN
01. High Level Design is the general system design; it refers to the overall system design.
    Low Level Design is like detailing HLD; it refers to the component-level design process.
08. It is created first, i.e., before Low Level Design.
    It is created second, i.e., after High Level Design.
10. High Level Design converts the business/client requirement into a high-level solution.
    Low Level Design converts the high-level solution into a detailed solution.
11. In HLD the output criteria are database design, functional design, and review record.
    In LLD the output criteria are program specification and unit test plan.
Code Verification:
Code verification is the process used for checking the software code for errors introduced in
the coding phase. The objective of code verification process is to check the software code in
all aspects. This process includes checking the consistency of user requirements with the
design phase. Note that code verification process does not concentrate on proving the
correctness of programs. Instead, it verifies whether the software code has been translated
according to the requirements of the user.
The code verification techniques are classified into two categories, namely, dynamic and
static. The dynamic technique is performed by executing some test data. The outputs of the
program are tested to find errors in the software code. This technique follows the
conventional approach for testing the software code. In the static technique, the program is
executed conceptually and without any data. In other words, the static technique does not
use any traditional approach as used in the dynamic technique. Some of the commonly used
static techniques are code reading, static analysis, symbolic execution, and code inspection
and reviews.
Code Reading
Figure out what is important: While reading the code, emphasis should be on finding the
important sections, which may be highlighted using graphical techniques (bold, italics) or
by their position (at the beginning or end of a section). Important comments may be
highlighted in the introduction or at the end of the software code. The level of detail
should be according to the requirements of the software code.
Read what is important: Code reading should be done with the intent to check syntax and
structure, such as brackets, nested loops, and functions, rather than non-essentials such as
the name of the software developer who wrote the software code.
Validation Testing:
The process of evaluating software during the development process or at the end of the
development process to determine whether it satisfies specified business requirements.
Validation Testing ensures that the product actually meets the client's needs. It can also be
defined as to demonstrate that the product fulfills its intended use when deployed on
appropriate environment.
It answers the question, "Are we building the right product?"
UNIT-2
DYNAMIC TESTING-BLACKBOX TESTING TECHNIQUES:
Boundary Value Analysis:
Boundary Value Analysis is based on testing the boundary values of valid and invalid
partitions. The behavior at the edge of the equivalence partition is more likely to be
incorrect than the behavior within the partition, so boundaries are an area where
testing is likely to yield defects.
It checks for the input values near the boundary that have a higher chance of error.
Every partition has its maximum and minimum values and these maximum and
minimum values are the boundary values of a partition.
A boundary value for a valid partition is a valid boundary value.
A boundary value for an invalid partition is an invalid boundary value.
For each variable we check-
Minimum value.
Just above the minimum.
Nominal Value.
Just below Max value.
Max value.
Example: Consider a system that accepts ages from 18 to 56.
Boundary Value Analysis(Age accepts 18 to 56)
Valid Test cases: Valid test cases for the above can be any value entered greater
than 17 and less than 57.
Enter the value- 18.
Enter the value- 19.
Enter the value- 37.
Enter the value- 55.
Enter the value- 56.
Invalid Testcases: When any value less than 18 and greater than 56 is
entered. Enter the value- 17.
Enter the value- 57.
Single Fault Assumption: When more than one variable for the same application is checked,
one can use the single fault assumption: hold all but one variable at their nominal values
and allow the remaining variable to take its extreme values.
For n variables to be checked, this yields a maximum of 4n + 1 test cases.
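The 4n + 1 rule can be sketched as a small generator. This is an illustrative sketch: it uses the midpoint of each range as the nominal value, which is one common choice, not the only one.

```python
# Boundary value test-case generation under the single fault assumption:
# one variable at a time takes its boundary values (min, min+1, max-1, max)
# while the others stay at nominal, plus one all-nominal case => 4n + 1 cases.

def bva_cases(ranges):
    """ranges: dict of variable name -> (min, max)."""
    nominal = {v: (lo + hi) // 2 for v, (lo, hi) in ranges.items()}
    cases = [dict(nominal)]                      # the single all-nominal case
    for var, (lo, hi) in ranges.items():
        for value in (lo, lo + 1, hi - 1, hi):   # 4 extra cases per variable
            case = dict(nominal)
            case[var] = value
            cases.append(case)
    return cases

cases = bva_cases({"Month": (1, 12), "Day": (1, 31), "Year": (1900, 2000)})
print(len(cases))  # 4*3 + 1 = 13 test cases
```

Running it on the Day/Month/Year problem below produces the 13 combinations that the three single-fault tables enumerate.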
Problem: Consider a Program for determining the Previous Data.
Input: Day, Month, Year with valid ranges as-
1 ≤ Month≤12
1 ≤ Day ≤31
1900 ≤ Year ≤ 2000
Design Boundary Value Test Cases.
Solution: Taking the year as a Single Fault Assumption i.e. year will be having values
varying from 1900 to 2000 and others will have nominal values.
Test Cases Month Day Year Output
Taking Day as Single Fault Assumption i.e. Day will be having values varying from 1
to 31 and others will have nominal values.
Taking Month as Single Fault Assumption i.e. Month will be having values varying
from 1 to 12 and others will have nominal values.
The idea and motivation behind BVA are that errors tend to occur near the
extremes of the variables. The defect on the boundary value can be the result of
countless possibilities.
Typing of Languages: BVA is not suitable for free-form languages such as COBOL and FORTRAN;
these are known as weakly typed languages. This flexibility can be useful, but it can also
cause bugs.
PASCAL and ADA are strongly typed languages, which require all constants or variables to be
defined with an associated data type.
Limitation of Boundary Value Analysis:
It works well only when the program under test is a function of several independent variables that represent bounded physical quantities.
It cannot consider the nature of the functional dependencies of variables.
BVA is quite rudimentary.
Equivalence Partitioning:
It is a type of black-box testing that can be applied to all levels of software testing.
In this technique, input data are divided into the equivalent partitions that can be
used to derive test cases-
In this input data are divided into different equivalence data classes.
It is applied when there is a range of input values.
Example: Below is the example to combine Equivalence Partitioning and Boundary
Value.
Consider a field that accepts a minimum of 6 characters and a maximum of 10
characters. Then the partitions of the test cases are 0 – 5 (invalid), 6 – 10 (valid), and 11 – 14 (invalid).
The equivalence partitions are derived from requirements and specifications of the software.
The advantage of this approach is, it helps to reduce the time of testing due to a smaller
number of test cases from infinite to finite. It is applicable at all levels of the testing process.
Examples of Equivalence Partitioning technique
Assume that there is a function of a software application that accepts a particular number of
digits, not greater and less than that particular number. For example, an OTP number which
contains only six digits, less or more than six digits will not be accepted, and the application
will redirect the user to the error page.
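The OTP example can be sketched as code: the inputs fall into equivalence classes (exactly six digits is valid; fewer or more than six digits is invalid), and one representative per class stands in for the whole class. The validator itself is an illustrative assumption.

```python
def accept_otp(otp: str) -> bool:
    """Hypothetical OTP check: accepted only if it is exactly six digits."""
    return otp.isdigit() and len(otp) == 6

# One representative value from each equivalence partition:
assert accept_otp("123456") is True    # valid class: exactly 6 digits
assert accept_otp("12345") is False    # invalid class: fewer than 6 digits
assert accept_otp("1234567") is False  # invalid class: more than 6 digits
print("one test per partition is enough")
```

Because every value inside a partition is expected to behave the same way, three test cases stand in for the infinitely many possible inputs.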
We can perform equivalence partitioning in two ways, which are as follows.
Let us see how the Pressman and general-practice approaches are used in different
conditions:
Condition1
If the requirement is a range of values, then derive the test case for one valid and two
invalid inputs.
Here, the Range of values implies that whenever we want to identify the range values, we
go for equivalence partitioning to achieve the minimum test coverage. And after that, we go
for error guessing to achieve maximum test coverage.
According to Pressman:
For example, the Amount text field accepts a range (100-400) of values.
Whenever the requirement is a range plus criteria, divide the range into intervals and
check all these values.
For example, for an age text field, the Pressman technique is enough to test one valid and
two invalid values. But if we have a condition that insurance of ten years and above is
required, with multiple policies for various age groups in the age text field, then we need
to use the general-practice method.
Condition2
If the requirement is a set of values, then derive the test case for one valid and two
invalid inputs.
Here, a set of values implies that whenever we have to test a set of values, we go for one
positive and two negative inputs, then we move on to error guessing, and we also need to
verify that all the sets of values are as per the requirement.
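The "one valid and two invalid inputs" rule for a range requirement can be sketched as a tiny helper. The choice of the midpoint as the valid representative is an assumption; any in-range value would serve.

```python
# Deriving one valid and two invalid test inputs for a range requirement,
# using the Amount field range (100-400) from the example above.

def derive_range_cases(lo, hi):
    """One valid value inside the range, two invalid values just outside it."""
    return {
        "valid": (lo + hi) // 2,   # any in-range value works; midpoint chosen here
        "invalid_low": lo - 1,
        "invalid_high": hi + 1,
    }

cases = derive_range_cases(100, 400)
print(cases)  # {'valid': 250, 'invalid_low': 99, 'invalid_high': 401}
```

The two invalid values sit just outside the boundaries, which is where equivalence partitioning hands over to boundary value analysis.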
Tables are useful tools for representing and documenting many types of information relating
to test case design. They are beneficial for applications which can be described using state
transition diagrams and state tables. First, we define some basic terms related to state
tables.
1. Finite State Machine
A finite state machine (FSM) is a behavioral model whose outcome depends upon both
previous and current inputs. FSM models can be prepared for software structure or
software behavior and it can be used as a tool for functional testing. Many testers prefer to
use FSM model as a guide to design functional tests.
2. State Transition Diagrams or State Graph
A system or its components may have a number of states depending on its input and time.
For example, a task in an operating system can have the following states:
New State: When a task is newly created
Ready: When the task is waiting in the ready queue for its turn
Running: When instructions of the task are being executed by CPU
Waiting: When the task is waiting for an I/O event or reception of a signal
Terminated: When the task has finished execution
States are represented by nodes. Now with the help of nodes and transition links between
the nodes, a state transition diagram or state graph is prepared. A state graph is the
pictorial representation of an FSM. Its purpose is to depict the states that a system or its
components can assume. It shows the events or circumstances that cause or result from a
change from one state to another.
Whatever is being modelled is subjected to inputs. As a result of these inputs, when one
state is changed to another, it is called a transition. Transitions are represented by links that
join the nodes.
The state graph of task states is shown in Fig.1.
Each arrow link provides two types of information:
T0 = Task is in new state and waiting for admission to the ready queue
T1 = A new task is admitted to the ready queue
T2 = A ready task has started running
T3 = Running task has been interrupted
T4 = Running task is waiting for I/O or an event
T5 = Wait period of the waiting task is over
T6 = Task has completed execution
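The task-state FSM above can be sketched as a transition table in code. The event names (`admit`, `dispatch`, and so on) are illustrative labels chosen here; the states and transitions mirror T1–T6.

```python
# (state, event) pairs map to the next state, mirroring transitions T1-T6.
TRANSITIONS = {
    ("New", "admit"): "Ready",            # T1
    ("Ready", "dispatch"): "Running",     # T2
    ("Running", "interrupt"): "Ready",    # T3
    ("Running", "wait_io"): "Waiting",    # T4
    ("Waiting", "io_done"): "Ready",      # T5
    ("Running", "exit"): "Terminated",    # T6
}

def next_state(state, event):
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"invalid transition: {event!r} in state {state!r}")

# Walking a task through one possible life cycle:
state = "New"
for event in ["admit", "dispatch", "wait_io", "io_done", "dispatch", "exit"]:
    state = next_state(state, event)
print(state)  # Terminated
```

A state table-based test would drive the model with both valid sequences like the one above and invalid (state, event) pairs, checking that the latter are rejected.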
3. State Table
State graphs of larger systems may not be easy to understand. Therefore, state graphs are
converted into tabular form for convenience sake, which are known as state tables. State
tables also specify states, inputs, transitions, and outputs. The following conventions are
used for state table:
The highlighted cells of the table are valid inputs causing a change of state. Other cells are
invalid inputs, which do not cause any transition in the state of a task.
4. State Table-Based Testing
After reviewing the basics, we can start functional testing with state tables. A state graph
and its companion state table contain information that is converted into test cases.
1. Condition Stubs: The conditions are listed in the upper left part of the decision table
and are used to determine a particular action or set of actions.
2. Action Stubs: All the possible actions are given in the lower left portion (i.e., below
the condition stub) of the decision table.
3. Condition Entries: In the condition entries, the values are entered in the upper right
portion of the decision table. This part of the table has multiple rows and columns, and
each column is known as a rule.
4. Action Entries: In the action entries, every entry has some associated action or set of
actions in the lower right portion of the decision table, and these values are called
outputs.
Types of Decision Tables :
The decision tables are categorized into two types and these are given below:
1. Limited Entry : In the limited entry decision tables, the condition entries
are restricted to binary values.
2. Extended Entry : In the extended entry decision table, the condition entries have more
than two values. Decision tables that use multiple conditions, where a condition may have
many possibilities instead of only ‘true’ and ‘false’, are known as extended entry decision
tables.
Applicability of Decision Tables :
The order of rule evaluation has no effect on the resulting action.
The decision tables can be applied easily at the unit level only.
Once a rule is satisfied and the action selected, no other rule needs to
be examined.
These restrictions do not eliminate many applications.
Example of Decision Table Based testing :
Below is the decision table of the program for determining the largest amongst
three numbers in which its input is a triple of positive integers (x,y, and z) and
values are from the interval [1, 300].
Table 1 : Decision Table of largest amongst three numbers :
Conditions        | R1 | R2 | R3 | R4 | R5 | R6 | R7 | R8 | R9 | R10 | R11 | R12 | R13 | R14
c1: x >= 1?       | F  | T  | T  | T  | T  | T  | T  | T  | T  | T   | T   | T   | T   | T
c2: x <= 300?     | -  | F  | T  | T  | T  | T  | T  | T  | T  | T   | T   | T   | T   | T
c3: y >= 1?       | -  | -  | F  | T  | T  | T  | T  | T  | T  | T   | T   | T   | T   | T
c4: y <= 300?     | -  | -  | -  | F  | T  | T  | T  | T  | T  | T   | T   | T   | T   | T
c5: z >= 1?       | -  | -  | -  | -  | F  | T  | T  | T  | T  | T   | T   | T   | T   | T
c6: z <= 300?     | -  | -  | -  | -  | -  | F  | T  | T  | T  | T   | T   | T   | T   | T
c7: x > y?        | -  | -  | -  | -  | -  | -  | T  | T  | T  | T   | F   | F   | F   | F
c8: y > z?        | -  | -  | -  | -  | -  | -  | T  | T  | F  | F   | T   | T   | F   | F
c9: z > x?        | -  | -  | -  | -  | -  | -  | T  | F  | T  | F   | T   | F   | T   | F
Rule Count        | 256| 128| 64 | 32 | 16 | 8  | 1  | 1  | 1  | 1   | 1   | 1   | 1   | 1
a1: Invalid input | X  | X  | X  | X  | X  | X  |    |    |    |     |     |     |     |
a2: x is largest  |    |    |    |    |    |    |    | X  |    | X   |     |     |     |
a3: y is largest  |    |    |    |    |    |    |    |    |    |     | X   | X   |     |
a4: z is largest  |    |    |    |    |    |    |    |    | X  |     |     |     | X   |
a5: Impossible    |    |    |    |    |    |    | X  |    |    |     |     |     |     | X
A '-' entry denotes a don't-care condition.
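The decision table above translates directly into code: rules R1–R6 flag invalid input, and rules R7–R14 compare x, y and z via conditions c7 (x > y), c8 (y > z) and c9 (z > x). This is a sketch of one possible implementation.

```python
def largest(x, y, z):
    # Rules R1-R6: any value outside [1, 300] is invalid input (action a1).
    for v in (x, y, z):
        if not (1 <= v <= 300):
            return "invalid input"
    c7, c8, c9 = x > y, y > z, z > x
    if c7 and c8:                     # R8 (R7, with c9 also true, is impossible)
        return "x"
    if c7 and not c8:
        return "z" if c9 else "x"     # R9 / R10
    if not c7 and c8:
        return "y"                    # R11 / R12
    return "z" if c9 else "x"         # R13 / R14 (all-equal inputs land here)

print(largest(101, 99, 98))   # x
print(largest(50, 200, 120))  # y
print(largest(0, 10, 20))     # invalid input
```

Each test case derived from a table column exercises exactly one rule, so the table doubles as a test-completeness checklist.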
The dynamic test cases are used when code works dynamically based on user input. For
example, while using an email account, on entering a valid email the system accepts it, but
when you enter an invalid email, it throws an error message. In this technique, the input
conditions are assigned to causes and the results of these input conditions to effects.
This technique aims to reduce the number of test cases while still covering all necessary
test cases with maximum coverage to achieve the desired application quality.
Situation:
The character in column 1 should be either A or B and in the column 2 should be a digit. If
both columns contain appropriate values then update is made. If the input of column 1 is
incorrect, i.e. neither A nor B, then message X will be displayed. If the input in column 2 is
incorrect, i.e. input is not a digit, then message Y will be displayed.
o A file must be updated if the character in the first column is either "A" or "B" and the
character in the second column is a digit.
o If the value in the first column is incorrect (the character is neither A nor B), then
message X will be displayed.
o If the value in the second column is incorrect (the character is not a digit), then
message Y will be displayed.
Now, we are going to make a Cause-Effect graph for the above situation:
Causes are:
o C1 - Character in column 1 is A
o C2 - Character in column 1 is B
o C3 - Character in column 2 is a digit
Effects:
Effect E1 - Update made - The logic for the existence of effect E1 is "(C1 OR C2) AND C3".
For C1 OR C2, at least one of C1 and C2 should be true. For AND C3 (the character in column
2 should be a digit), C3 must be true. In other words, for the existence of effect E1
(update made), at least one of C1 and C2, together with C3, must be true. We can see in the
graph that causes C1 and C2 are connected through OR logic, and effect E1 is connected with
AND logic.
Effect E2 - Displays message X - The logic for the existence of effect E2 is "NOT C1 AND NOT
C2", which means both C1 (character in column 1 is A) and C2 (character in column 1 is B)
should be false. In other words, for the existence of effect E2, the character in column 1
should be neither A nor B. We can see in the graph that C1 OR C2 is connected through NOT
logic with effect E2.
Effect E3 - Displays message Y - The logic for the existence of effect E3 is "NOT C3", which
means cause C3 (character in column 2 is a digit) should be false. In other words, for the
existence of effect E3, the character in column 2 should not be a digit. We can see in the
graph that C3 is connected through NOT logic with effect E3.
So, this is the cause-effect graph for the given situation. A tester needs to convert causes
and effects into logical statements and then design the cause-effect graph. If the function
gives the output (effect) according to the input (cause), it is considered defect-free; if
not, it is sent to the development team for correction.
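The cause-effect logic above can be turned into an executable sketch, with causes C1–C3 driving effects E1–E3 exactly as in the graph:

```python
def process(col1, col2):
    c1 = col1 == "A"          # cause C1: character in column 1 is A
    c2 = col1 == "B"          # cause C2: character in column 1 is B
    c3 = col2.isdigit()       # cause C3: character in column 2 is a digit
    effects = []
    if (c1 or c2) and c3:     # effect E1: update made
        effects.append("update")
    if not c1 and not c2:     # effect E2: message X
        effects.append("message X")
    if not c3:                # effect E3: message Y
        effects.append("message Y")
    return effects

print(process("A", "5"))  # ['update']
print(process("C", "5"))  # ['message X']
print(process("B", "x"))  # ['message Y']
```

Test cases are then chosen so that every cause-to-effect path in the graph is exercised at least once, including combined failures such as process("C", "x"), which triggers both messages.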
o Error Guessing
o Equivalence Partitioning
o Boundary Value Analysis[BVA]
In this section, we will understand the first test case design technique that is Error guessing
techniques.
Error guessing is a technique in which there is no specific method for identifying the error. It
is based on the experience of the test analyst, where the tester uses the experience to guess
the problematic areas of the software. It is a type of black box testing technique which does
not have any defined structure to find the error.
In this approach, every test engineer will derive the values or inputs based on their
understanding or assumption of the requirements, and we do not follow any kind of rules to
perform error guessing technique.
The accomplishment of the error guessing technique is dependent on the ability and product
knowledge of the tester because a good test engineer knows where the bugs are most likely
to be, which helps to save lots of time.
How can the error guessing technique be implemented?
The implementation of this technique depends on the experience of the tester or analyst
having prior experience with similar applications, as it requires well-experienced testers
who can guess errors quickly. This technique is used to find errors that may not be easily
captured by formal black box testing techniques, and that is the reason it is done after
all the formal techniques.
The scope of the error guessing technique entirely depends on the tester and type of
experience in the previous testing involvements because it does not follow any method and
guidelines. Test cases are prepared by the analyst to identify conditions. The conditions are
prepared by identifying most error probable areas and then test cases are designed for them.
The main purpose of this technique is to identify common errors at any level of testing by
exercising the following tasks:
The increment of test cases depends upon the ability and experience of the tester.
o The main purpose of the error guessing technique is to deal with all possible errors which
cannot be identified in formal testing.
o It must contain the all-inclusive sets of test cases without skipping any problematic areas and
without involving redundant test cases.
o This technique accomplishes the characteristics left incomplete during the formal testing.
Depending only on the tester's intuition and experience, not all defects can be found. There
are some factors that can be used by the examiner while applying their experience -
o Tester's intuition
o Historical learning
o Review checklist
o Risk reports of the software
o Application UI
o General testing rules
o Previous test results
o Defects occurred in the past
o Variety of data which is used for testing
o Knowledge of AUT
Advantages
The benefits of the error guessing technique are as follows:
o It is an effective complement to formal techniques and can uncover defects they miss.
o It requires no extra documentation and can be applied quickly.
Disadvantages
Following are the drawbacks of the error guessing technique:
o Its success depends entirely on the experience and intuition of the tester.
o It provides no measure of coverage, so critical areas may still be missed.
Developers perform white box testing, in which every line of the program's code is tested.
The developers then send the application or the software to the testing team, which
performs black box testing, verifies the application against the requirements, identifies
the bugs, and sends them back to the developers. The developers fix the bugs, do one
more round of white box testing, and send the build back to the testing team. Here, fixing
a bug means that the bug is removed and the particular feature works correctly in the
application.
Logic Coverage:
Logic corresponds to the internal structure of the code, and this form of testing is adopted
for safety-critical applications, such as software used in the aviation industry. This test
verifies a subset of the total number of truth assignments to the expressions. Typical
control structures covered are:
If – Then – Else
Do – While
While – Do
Switch – Case
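As a sketch of the idea, the hypothetical function below contains a compound decision inside an If – Then – Else; the test inputs are chosen so that every branch outcome of the decision is exercised at least once:

```python
# Hypothetical example: decision coverage for a compound condition.
# Logic coverage asks that tests exercise the truth assignments of the
# decision, not just the individual statements.

def classify(speed, limit):
    # Decision under test: (speed > limit) and (speed - limit > 10)
    if speed > limit and speed - limit > 10:
        return "major violation"
    elif speed > limit:
        return "minor violation"
    else:
        return "ok"

# Inputs chosen so each branch outcome of the decision occurs at least once:
assert classify(80, 60) == "major violation"   # both conditions True
assert classify(65, 60) == "minor violation"   # first True, second False
assert classify(50, 60) == "ok"                # first condition False
```

The function and its thresholds are illustrative; the point is the selection of inputs per truth assignment, not the rule itself.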
Graph Matrices:
A graph matrix is a data structure that can assist in developing a tool for
automation of path testing. Properties of graph matrices are fundamental
for developing a test tool and hence graph matrices are very useful in
understanding software testing concepts and theory.
What is a Graph Matrix ?
A graph matrix is a square matrix whose size equals the number of nodes in the control
flow graph. Each row and column in the matrix identifies a node, and the entries in the
matrix represent the edges or links between these nodes. Conventionally, nodes are
denoted by digits and edges are denoted by letters.
Let's take an example: a control flow graph with 4 nodes and five edges, a (1→1),
b (1→2), c (1→3), d (2→4) and e (3→4). Since the graph has 4 nodes, the graph matrix
has a dimension of 4 X 4. The matrix entries will be filled as follows:
(1, 1) will be filled with ‘a’ as an edge exists from node 1 to node 1
(1, 2) will be filled with ‘b’ as an edge exists from node 1 to node 2. It is
important to note that (2, 1) will not be filled as the edge is unidirectional
and not bidirectional
(1, 3) will be filled with ‘c’ as edge c exists from node 1 to node 3
(2, 4) will be filled with ‘d’ as edge exists from node 2 to node 4
(3, 4) will be filled with ‘e’ as an edge exists from node 3 to node 4
The graph matrix formed is shown below:

       1   2   3   4
   1   a   b   c
   2               d
   3               e
   4
Connection Matrix :
A connection matrix is a matrix defined with edge weights. In its simplest form, when a
connection exists between two nodes of the control flow graph, the edge weight is 1;
otherwise, it is 0. However, 0 is usually not entered in the matrix cells, to reduce
complexity.
For example, if we represent the above control flow graph as a connection matrix, then
the result would be:

       1   2   3   4
   1   1   1   1
   2               1
   3               1
   4

As we can see, the weights of the edges are simply replaced by 1, and the cells which
were empty before are left as they are, i.e., representing 0.
A connection matrix is used to find the cyclomatic complexity of the control graph.
Although there are three other methods to find the cyclomatic complexity, this method
works well too.
Following are the steps to compute the cyclomatic complexity :
1. Count the number of 1s in each row and write it in the end of the row
2. Subtract 1 from this count for each row (Ignore the row if its count is 0)
3. Add the count of each row calculated previously
4. Add 1 to this total count
5. The final sum in Step 4 is the cyclomatic complexity of the control flow
graph
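The steps above can be sketched in code. The matrix below is the connection matrix of the example graph (nodes 1 to 4), with 0 standing in for the empty cells:

```python
# Sketch of the steps above: computing cyclomatic complexity from a
# connection matrix. Edges a..e are replaced by 1s; empty cells are 0.

connection_matrix = [
    [1, 1, 1, 0],  # node 1: edges to nodes 1, 2 and 3
    [0, 0, 0, 1],  # node 2: edge to node 4
    [0, 0, 0, 1],  # node 3: edge to node 4
    [0, 0, 0, 0],  # node 4: no outgoing edges
]

def cyclomatic_complexity(matrix):
    total = 0
    for row in matrix:
        ones = sum(row)           # Step 1: count the 1s in the row
        if ones > 0:              # Step 2: ignore rows whose count is 0
            total += ones - 1     # subtract 1 from each row's count
    return total + 1              # Steps 3-4: sum the counts and add 1

print(cyclomatic_complexity(connection_matrix))  # → 3
```

For the example graph, row 1 contributes 3 − 1 = 2, rows 2 and 3 contribute 0, and row 4 is ignored, so the complexity is 2 + 1 = 3.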
Mutation Testing:
Mutation Testing is a type of Software Testing that is performed to design new
software tests and also evaluate the quality of already existing software tests.
Mutation testing involves modifying a program in small ways. It focuses on helping the
tester develop effective tests or locate weaknesses in the test data used for the program.
History of Mutation Testing:
Richard Lipton first proposed mutation testing in 1971. Although its high cost initially
limited its use, it is now widely used for languages such as Java and XML.
1. Value Mutations:
In value mutations, the values of constants or parameters are changed to detect errors
in the program.
Example:
Changed Code (the constant mod has been mutated):
int mod = 1007;
int a = 12345678;
int b = 98765432;
int c = (a + b) % mod;
2. Decision Mutations:
In decision mutations, logical or arithmetic operators are changed to detect errors in the
program.
Example:
Initial Code:
if(a < b)
c = 10;
else
c = 20;
Changed Code:
if(a > b)
c = 10;
else
c = 20;
3. Statement Mutations:
In statement mutations, a statement is deleted or replaced by some other statement.
Example:
Initial Code:
if(a < b)
c = 10;
else
c = 20;
Changed Code (the statement in the else part is deleted):
if(a < b)
c = 10;
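The overall idea can be sketched as follows: the same test data is run against the original program and a mutant (here a decision mutation, with < flipped to >). A good test suite "kills" the mutant, meaning at least one test fails on it. The functions below are hypothetical examples:

```python
# A minimal sketch of the mutation-testing idea.

def original(a, b):
    if a < b:
        return 10
    else:
        return 20

def mutant(a, b):          # decision mutation: a < b changed to a > b
    if a > b:
        return 10
    else:
        return 20

def run_tests(program):
    # Test data: each input pair with its expected outcome.
    cases = [((1, 2), 10), ((5, 3), 20)]
    return all(program(*args) == expected for args, expected in cases)

assert run_tests(original) is True    # the original passes all tests
assert run_tests(mutant) is False     # the mutant is killed by the tests
```

If the mutant had passed all tests, that would reveal a weakness in the test data: no test distinguishes the mutated decision from the original one.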
Advantages of Mutation Testing:
It brings a good level of error detection in the program.
It discovers ambiguities in the source code.
It finds and solves the issues of loopholes in the program.
It helps the testers to write and automate better test cases.
It leads to more reliable and efficient source code.
Disadvantages of Mutation Testing:
It is highly costly and time-consuming.
It is not applicable for Black Box Testing.
Some mutations are complex, and hence it is difficult to implement or run
them against various test cases.
The team members who are performing the tests should have good
programming knowledge.
Selection of the correct automation tool is important to test the programs.
Unit-3
Static Testing:
Static Testing is a software testing method in which defects are checked without actually
executing the code of the software application, whereas in Dynamic Testing the code is
executed to detect defects. Static testing is performed in the early stages of development
to avoid errors, as it is easier then to find the sources of failures, and they can be fixed
easily. Errors that cannot be found using Dynamic Testing can often be easily found by
Static Testing.
Static Testing Techniques: There are mainly two types of techniques used in Static
Testing:
1. Review: In static testing, review is a process or technique that is performed to
find potential defects in the design of the software. It is a process to detect and
remove errors and defects from the different supporting documents, such as software
requirements specifications. People examine the documents and sort out errors,
redundancies and ambiguities. Review is of four types:
Informal: In an informal review, the creator of the documents puts the
contents in front of an audience and everyone gives their opinion; thus
defects are identified at an early stage.
Walkthrough: It is basically performed by an experienced person or expert
to check for defects, so that there might not be problems later in the
development or testing phase.
Peer review: Peer review means checking documents of one-another to
detect and fix the defects. It is basically done in a team of colleagues.
Inspection: Inspection is basically the verification of a document by a higher
authority, for example the verification of the software requirement
specification (SRS).
2. Static Analysis: Static Analysis includes the evaluation of the quality of the code
written by developers. Different tools are used to analyze the code and compare it
with the standard. It also helps in the identification of the following defects:
(a) Unused variables
(b) Dead code
(c) Infinite loops
(d) Variable with undefined value
(e) Wrong syntax
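As a toy illustration of static analysis, the sketch below inspects source code without executing it and flags variables that are assigned but never used, one of the defect classes listed above. The checker is a simplified example, not a real analysis tool:

```python
# A toy static-analysis pass: the source is parsed and inspected
# without being executed. It reports names that are assigned (Store
# context) but never read (Load context).
import ast

def unused_variables(source):
    tree = ast.parse(source)
    assigned, used = set(), set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Name):
            if isinstance(node.ctx, ast.Store):
                assigned.add(node.id)
            else:
                used.add(node.id)
    return assigned - used

code = """
x = 1
y = 2
print(x)
"""
print(unused_variables(code))  # → {'y'}
```

Real static analyzers perform the same kind of source inspection, but with far more defect classes (dead code, infinite loops, undefined values) and scope-aware analysis.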
Static Analysis is of three types:
Data Flow: Data flow analysis tracks how data values are defined and used as
they move through the program.
Control Flow: Control flow is basically how the statements or instructions
are executed.
Cyclomatic Complexity: Cyclomatic complexity defines the number of
independent paths in the control flow graph made from the code or
flowchart so that minimum number of test cases can be designed for
each independent path.
Inspection:
Inspection is the most formal form of reviews, a strategy adopted during static testing phase.
Characteristics of Inspection :
Inspection is usually led by a trained moderator, who is not the author.
The moderator's role is to do a peer examination of a document.
Inspection is most formal and driven by checklists and rules.
This review process makes use of entry and exit criteria.
It is essential to have a pre-meeting preparation.
Inspection report is prepared and shared with the author for appropriate
actions.
Post Inspection, a formal follow-up process is used to ensure a timely and a
prompt corrective action.
The aim of Inspection is NOT only to identify defects but also to drive process
improvement.
Structured Walkthrough:
A structured walkthrough is a static testing technique performed in an organized manner
between a group of peers to review and discuss the technical aspects of the software
development process. The main objective of a structured walkthrough is to find defects
in order to improve the quality of the product.
Structured walkthroughs are usually NOT used for technical discussions or to discuss the
solutions for the issues found. As explained, the aim is to detect errors and not to correct
them. When the walkthrough is finished, the author of the output is responsible for fixing
the issues.
Benefits:
Saves time and money as defects are found and rectified very early in the
lifecycle.
This provides value-added comments from reviewers with different technical
backgrounds and experience.
It notifies the project management team about the progress of the
development process.
It creates awareness about different development or maintenance
methodologies which can provide a professional growth to participants.
Structured Walkthrough Participants:
Author - The Author of the document under review.
Presenter - The presenter usually develops the agenda for the walkthrough and
presents the output being reviewed.
Moderator - The moderator facilitates the walkthrough session, ensures the
walkthrough agenda is followed, and encourages all the reviewers to
participate.
Reviewers - The reviewers evaluate the document under test to determine if it
is technically accurate.
Scribe - The scribe is the recorder of the structured walkthrough outcomes who
records the issues identified and any other technical comments, suggestions,
and unresolved questions.
Technical Review:
A Technical review is a static white-box testing technique which is conducted to spot the
defects early in the life cycle that cannot be detected by black box testing techniques.
Validation Testing:
The process of evaluating software during the development process or at the end of the
development process to determine whether it satisfies specified business requirements.
Validation Testing ensures that the product actually meets the client's needs. It can also be
defined as to demonstrate that the product fulfills its intended use when deployed on
appropriate environment.
It answers the question: Are we building the right product?
Unit Testing:
Unit testing involves the testing of each unit or individual component of the software
application. It is the first level of functional testing. The aim of unit testing is to validate
that each unit component performs as designed.
A unit is a single testable part of a software system and tested during the development phase
of the application software.
The purpose of unit testing is to test the correctness of isolated code. A unit component is
an individual function or piece of code of the application. A white box testing approach is
used for unit testing, and it is usually done by the developers.
Whenever the application is ready and given to the Test engineer, he/she will start checking
every component of the module or module of the application independently or one by one,
and this process is known as Unit testing or components testing.
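As a minimal sketch, the following tests a single hypothetical unit (an add function) in isolation using Python's built-in unittest module:

```python
# A minimal unit test: one function (the "unit") is tested in isolation.
import unittest

def add(a, b):            # the unit under test (hypothetical example)
    return a + b

class TestAdd(unittest.TestCase):
    def test_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative_numbers(self):
        self.assertEqual(add(-2, -3), -5)

if __name__ == "__main__":
    # exit=False so the process continues after the test run
    unittest.main(argv=["example"], exit=False)
```

Because the unit has no dependencies on the rest of the application, a failure here points directly at the unit itself, which is what makes this the first and cheapest level of functional testing.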
Functional Testing:
Functional testing is basically defined as a type of testing that verifies that each
function of the software application works in conformance with the requirement
and specification. This testing is not concerned with the source code of the
application. Each functionality of the software application is tested by providing
appropriate test input, expecting the output, and comparing the actual output with
the expected output. This testing focuses on checking the user interface, APIs,
database, security, client or server application, and functionality of the Application
Under Test. Functional testing can be manual or automated.
Process :
Functional testing involves the following steps:
1. Identify test input: This step involves identifying the functionality that
needs to be tested. This can vary from usability functions and main
functions to error conditions.
2. Compute expected outcomes: Create input data based on the
specifications of the function and determine the output based on these
specifications.
3. Execute test cases: This step involves executing the designed test cases
and recording the output.
4. Compare the actual and expected output: In this step, the actual output
obtained after executing the test cases is compared with the expected
output to determine the amount of deviation in the results. This step
reveals if the system is working as expected or not.
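The four steps above can be sketched for a hypothetical discount function:

```python
# The functional-testing steps, sketched for an assumed function.

def discounted_price(price, percent):   # function under test (hypothetical)
    return price - price * percent / 100

# Step 1: identify test input
test_input = (200.0, 10)
# Step 2: compute the expected outcome from the specification
expected = 180.0
# Step 3: execute the test case and record the output
actual = discounted_price(*test_input)
# Step 4: compare actual and expected output
assert actual == expected, f"deviation: expected {expected}, got {actual}"
```

Note that only inputs and outputs appear in the test; the source code of discounted_price is never inspected, which is the defining property of functional (black box) testing.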
Acceptance testing:
Acceptance testing is formal testing based on user requirements and function processing.
It determines whether the software conforms to the specified requirements and the user
requirements. It is conducted as a kind of Black Box testing, where the required number
of users are involved in testing the acceptance level of the system. It is the fourth and last
level of software testing.
Types of Acceptance Testing:
1. User Acceptance Testing (UAT): User acceptance testing is used to
determine whether the product is working for the user correctly. Specific
requirements which are quite often used by the customers are primarily
picked for the testing purpose. This is also termed as End-User Testing.
2. Business Acceptance Testing (BAT): BAT is used to determine whether
the product meets the business goals and purposes or not. BAT mainly
focuses on business profits, which are quite challenging due to the
changing market conditions and new technologies, so the current
implementation may have to be changed, which results in extra
budget.
3. Operational Acceptance Testing (OAT): OAT is used to determine the
operational readiness of the product and is non-functional testing. It mainly
includes testing of recovery, compatibility, maintainability, reliability, etc.
OAT assures the stability of the product before it is released to production.
4. Alpha Testing: Alpha testing is used to evaluate the product in the
development testing environment by a specialized team of testers, usually
called alpha testers.
5. Beta Testing: Beta testing is used to assess the product by exposing it to the
real end-users, usually called beta testers in their environment. Feedback is
collected from the users and the defects are fixed. Also, this helps in
enhancing the product to give a rich user experience.
Regression Testing:
Regression Testing is the process of testing the modified parts of the code and the
parts that might get affected due to the modifications to ensure that no new
errors have been introduced in the software after the modifications have been
made. Regression means return of something and in the software field, it refers
to the return of a bug.
Process of Regression testing:
Firstly, whenever we make some changes to the source code for any reasons like
adding new functionality, optimization, etc. then our program when executed
fails in the previously designed test suite for obvious reasons. After the failure,
the source code is debugged in order to identify the bugs in the program. After
identification of the bugs in the source code, appropriate modifications are made.
Then appropriate test cases are selected from the already existing test suite
which covers all the modified and affected parts of the source code. We can add
new test cases if required. In the end regression testing is performed using the
selected test cases.
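The selection step can be sketched as follows; the test names and the map of which parts of the code each test covers are hypothetical:

```python
# Sketch of regression test selection: from the existing suite, pick the
# test cases that cover the modified or affected parts of the code.

test_suite = {
    "test_login":    {"login", "session"},
    "test_checkout": {"cart", "payment"},
    "test_profile":  {"profile"},
}

modified_parts = {"payment"}   # parts changed by the latest fix

def select_regression_tests(suite, modified):
    # A test is selected if its coverage overlaps the modified parts.
    return sorted(name for name, covered in suite.items()
                  if covered & modified)

print(select_regression_tests(test_suite, modified_parts))  # → ['test_checkout']
```

In practice the coverage map comes from a coverage tool or traceability matrix, and new test cases are added when the changes are not covered by any existing test.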
Regression Testing:
Regression testing is a black box testing technique. It is used to verify that a code change
in the software does not impact the existing functionality of the product. Regression
testing makes sure that the product works fine with new functionality, bug fixes, or any
change in the existing features.
Regression testing is a type of software testing in which test cases are re-executed to
check that the previous functionality of the application is working fine and the new
changes have not introduced any bugs.
Regression testing can be performed on a new build when there is a significant change in
the original functionality. It ensures that the code still works even when the changes are
occurring. Regression means re-testing those parts of the application which are unchanged.
Regression tests are also known as the Verification Method. Test cases are often
automated, since they are required to be executed many times, and running the same test
case again and again manually is time-consuming and tedious.
UNIT-4
Efficient Test Suite management:
A test suite is a container that holds a set of tests and helps testers in executing and
reporting the test execution status. It can take any of three states, namely Active,
In Progress and Completed.
A Test case can be added to multiple test suites and test plans. After creating a test plan, test
suites are created which in turn can have any number of tests.
Test suites are created based on the cycle or based on the scope. A suite can contain any
type of tests, viz. functional or non-functional.
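As an illustrative sketch (not any particular tool's API), a test suite can be modelled as a container of tests with one of the three states:

```python
# A test suite as described above: a container of tests with one of
# three states. The class is a hypothetical model for illustration.

class TestSuite:
    STATES = ("Active", "In Progress", "Completed")

    def __init__(self, name):
        self.name = name
        self.state = "Active"       # initial state
        self.tests = []

    def add(self, test_case):
        # A test case can be added to multiple suites and test plans.
        self.tests.append(test_case)

    def set_state(self, state):
        if state not in self.STATES:
            raise ValueError(f"unknown state: {state}")
        self.state = state

smoke = TestSuite("Smoke cycle")
smoke.add("test_login")
smoke.add("test_search")
smoke.set_state("In Progress")
print(smoke.state, len(smoke.tests))  # → In Progress 2
```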
UNIT-5
AUTOMATION AND TESTING TOOLS:
Test Automation:
Test Automation is the best way to increase the effectiveness, test
coverage, and execution speed in software testing. Automated software
testing is important due to the following reasons:
The following categories of test cases are not suitable for automation:
Test Cases that are newly designed and not executed manually at
least once
Test Cases for which the requirements are frequently changing
Test cases which are executed on an ad-hoc basis.
Step 1) Test tool selection
Test Tool selection largely depends on the technology the Application
Under Test is built on. For instance, QTP does not support Informatica. So
QTP cannot be used for testing Informatica applications. It’s a good idea
to conduct a Proof of Concept of Tool on AUT.
Step 2) Define the scope of Automation
The scope of automation is the area of your Application Under Test which
will be automated. Following points help determine scope:
Step 4) Test Execution
Example: Quality Center is the Test Management tool, which in turn will
invoke QTP for the execution of automation scripts. Scripts can be executed
on a single machine or a group of machines. The execution can be done
during the night, to save time.
Step 5) Test Automation Maintenance Approach
Test Automation Maintenance Approach is an automation testing phase
carried out to test whether the new functionalities added to the software
are working fine or not. Maintenance in automation testing is executed
when new automation scripts are added and need to be reviewed and
maintained in order to improve the effectiveness of automation scripts
with each successive release cycle.
Framework for Automation
A framework is a set of automation guidelines which help in maintaining
consistency of testing, improving test structure, minimizing code usage,
and reducing the maintenance cost of the code.
Testing Tools:
Tools from a software testing context can be defined as a product that supports one or more
test activities right from planning, requirements, creating a build, test execution, defect
logging and test analysis.
Classification of Tools
Tools can be classified based on several parameters. They include:
The purpose of the tool
The Activities that are supported within the tool
The Type/level of testing it supports
The Kind of licensing (open source, freeware, commercial)
The technology used
There is no point in looking for a solution when you don’t know the problem.
So, before you start exploring the various tools and technologies available in
the market for test automation, you need to list down your project
requirements and the problems you are looking to solve.
The list, in general, should answer the below questions:
If yours is a team that already has people that are skilled in some
programming language then you can think of using an automation tool in
that programming language. Or, if you plan to hire skilled people for
automation then you don’t need to consider this point.
But, if you are planning to have an automation tool that will not need you to
look for people with the required skillset, going for codeless automation
tools will be a good idea. These tools allow the automation of test
cases without the need for knowing a programming language.
Budget:
This is a very important criterion for the selection of a testing tool. You might
easily say that you want a free tool because you don't want to spend on
automation if you can avoid it.
But, you also need to consider that the amount of time being spent on
automation, the number of people working on the tool and the machines
being used for automation also constitute the amount you spend on
automation. So, consider below points before deciding on the budget:
Cost of human resources being used for automation: If there is a tool
that does not need you to hire new resources, especially for
automation, consider it a saving.
Time spent on learning the tool: If there is a tool that has a low
learning curve, that is an indirect saving in the cost you might have
spent in terms of the time the resources spend on learning the tool. Or
hiring resources that are skilled in that particular tool.
Time being spent on automation: If there is a tool that makes it easy to
create and maintain test cases, thereby saving time, consider it a saving
in cost.
Not every tool is made to handle all kinds of scenarios. So, to make sure that
your chosen tool meets your needs, try automating a few test cases of your
application to know if the tool suits your needs. That could be done with the
trial version of a tool if your search has narrowed down to premium tools.
Also, to avoid spending more time in test case maintenance as compared to
test case creation, make sure to choose a tool that fits your budget including
the maintenance costs. There are tools that have the ability to self-heal the
test cases in case of minor changes in the application.
Such tools help to reduce the cost of test case maintenance. Also, it helps if
the tool supports pause and resume of test case execution for a better
debugging experience.
Reusability:
To avoid writing the same code multiple times in multiple test cases and to
avoid duplication of efforts, look for tools that allow the reuse of already
created test steps in different test cases and projects.
Data-driven Testing:
If your test cases need to be run with multiple sets of test data, look for a
tool that supports data-driven testing.
Reporting:
Test case creation and test case execution would be useless if the reports
were not useful, so do go through all the features of the reporting supported
by a tool.
Support for Collaboration:
If you are doing automation of a project for a client, the client will want to
review the quality of automated test cases.
It will also be beneficial if other non-technical members of the team are able
to automate/review the test cases. So, in such scenarios, look for tools that
make collaboration with the management and clients easy.
Support for Tool Integration:
If there are some process improvement or CI/CD tools that you already use
or plan on using, make sure that you choose a tool that integrates with them.
Training and 24×7 Support:
Consider a scenario – you started using a tool for automation, and after
automating about 10 test cases successfully, you got stuck on the 11th test
case; you don’t know how to resolve the problem. You have looked at all
possible forums but you don’t have a solution in sight. If you want to save
the time, use a tool that has 24×7 support to resolve any problems you
encounter.
Guidelines on Automated Testing:
DO automate tasks as close to the code as possible
Unit tests are so important because they exercise code functionality without
touching any dependency. If a developer makes a change that results in a hole
in logic, that hole will be detected before the change makes it to the tester,
saving everyone valuable time. A popular trend today is TDD, or test-driven
development. This is where the developer writes the unit tests before he or she
writes the code, ensuring that they begin by thinking about all the possible use
cases for the software before solving the technical challenges of writing the
code.
Some tests are so important that they need to run repeatedly. A perfect example
of this is the login test: if your users can’t log into the application, you have a
real customer service problem! But no one wants to spend time manually
logging into an application again and again. Automating your login test cases
ensures that authentication is tested with a wide variety of users and accounts,
both valid and invalid.
What is the primary function of your software? What is the typical path that a
user will take when using your software? These are the kinds of activities that
should have automated test cases. Rather than running through a manual user
path every morning, you can set your automated test to do it and you’ll be
notified right away if there is a problem.
6. DO automate things that will allow you to exercise lots of different options
A test that fills out a form by filling in all of the available fields is not completely
testing the form. What if there is one missing field? What if there are two
missing fields? What if one of those fields is required? With automation, you can
exercise many different combinations of form submission in much less time than
it would take to do manually.
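As a sketch, the loop below generates every present/missing combination of three hypothetical form fields and checks a simple assumed validation rule for each, 8 cases in total:

```python
# Generate many form-submission combinations automatically instead of
# filling in one "all fields" case by hand. Field names and the
# validation rule are hypothetical.
from itertools import product

FIELDS = ("name", "email", "phone")   # phone is optional, rest required

def form_is_valid(form):
    return bool(form.get("name")) and bool(form.get("email"))

# Every present/missing combination of the three fields: 2^3 = 8 cases.
cases = 0
for present in product([True, False], repeat=len(FIELDS)):
    form = {f: "x" for f, p in zip(FIELDS, present) if p}
    expected = present[0] and present[1]   # are both required fields present?
    assert form_is_valid(form) == expected
    cases += 1

print(cases)  # → 8
```

Exercising all 8 combinations takes one loop here, whereas filling the form 8 times manually would be slow and error-prone, which is the point the guideline makes.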
7. DO automate things that will alert you when something is wrong
I have several negative tests in my API test suites that verify that a user can’t do
something when they don’t have permission to do it. Recently some of those
tests failed, alerting me to the fact that someone had changed the permission
structure, and now a user was able to view content they shouldn’t.
If you can’t come up with a way to run an automated test on a feature and have
it pass consistently, you may want to run that test manually or find a different
way to assert your results. When I was first getting started with automation, I
wanted to test that a feature sent an email and that the email was received. I
discovered that email clients are tricky to test in the UI, because there’s no way
of knowing how long it will take before the email is delivered. Instead, I verified
that the email provider had sent the email, and occasionally did a manual check
of the inbox to make sure that the email arrived.
9. DON’T automate test cases for features that are in the early stages and are
expected to go through many changes
It’s great to write unit tests for new code, which as mentioned above is usually
done by the developer. And automated services tests can be created before
there is a UI for a new feature. But if you know that your API endpoints or your
UI will be changing quite a bit as the story progresses, you may want to hold off
on services or UI automation until things have settled down a bit. For the
moment, manual testing will be your best strategy.
10. DON’T automate test cases for features that no one cares about
During test script execution, WinRunner compares the tester-given expected values with
the application's actual values.
5) Analyze Results
The tester analyzes the results given by the tool to concentrate on defect tracking if required.
LoadRunner:
LoadRunner is a Performance Testing tool which was pioneered by
Mercury in 1999. LoadRunner was later acquired by HP in 2006. In 2016,
LoadRunner was acquired by Micro Focus.
LoadRunner supports various development tools, technologies and
communication protocols. In fact, few tools on the market support such a
large number of protocols for conducting Performance Testing. Performance
test results produced by the LoadRunner software are often used as a
benchmark against other tools.
Why LoadRunner?
LoadRunner is not only a pioneering tool in Performance Testing, but it is
still a market leader in the Performance Testing paradigm. In one
assessment, LoadRunner held about 85% market share in the Performance
Testing industry.
But before we explore this Selenium automation testing tutorial, let’s first address the need
for Selenium automation testing and how Selenium came into the picture in the first place.
Manual testing, a vital part of the application development process, unfortunately, has
many shortcomings, chief of them being that the process is monotonous and repetitive. To
overcome these obstacles, Jason Huggins, an engineer at Thoughtworks, decided to
automate the testing process. He developed a JavaScript program called
the JavaScriptTestRunner that automated web application testing. This program was
renamed Selenium in 2004.
One disadvantage of Selenium automation testing is that it works only for web
applications, which leaves desktop and mobile apps out in the cold. However, tools
like Appium and HP's QTP, among others, can be used to test desktop and mobile
applications.
Selenium Automation Testing Tools
Unit Testing
In unit testing, the individual classes are tested. It is seen whether the class attributes are
implemented as per design and whether the methods and the interfaces are error-free. Unit
testing is the responsibility of the application engineer who implements the structure.
Subsystem Testing
This involves testing a particular module or a subsystem and is the responsibility of the
subsystem lead. It involves testing the associations within the subsystem as well as the
interaction of the subsystem with the outside. Subsystem tests can be used as regression tests
for each newly released version of the subsystem.
System Testing
System testing involves testing the system as a whole and is the responsibility of the quality-
assurance team. The team often uses system tests as regression tests when assembling new
releases.
1. Diversity and complexity – Web applications interact with many components that run on
diverse hardware and software platforms. They are written in diverse languages and they
are based on different programming approaches such as procedural, OO and hybrid
languages such as Java server pages (JSPs). The client side includes browsers, HTML,
embedded scripting languages and applets. The server side includes CGI, JSPs, Java servlets
and .NET technologies. They all interact with diverse back-end engines and other
components that are found on web server or other servers.
2. Dynamic environment – The key aspect of web applications is its dynamic nature. The
dynamic aspects are caused by uncertainty in the program behavior, changes in application
requirement, rapidly evolving web technology itself and other factors. The dynamic nature
of web software creates challenges for the analysis, testing and maintenance for these
systems. For example: it is difficult to determine statically the application's control flow,
because the control flow is highly dependent on user input and sometimes on trends in
user behavior over time or user location. Not knowing which page an application is likely
to display hinders statically modelling the control flow with accuracy and efficiency.
3. Very short development time – Clients of web-based systems impose very short
development time, compared to other software or information systems project (eg: e-
business system, etc.)
4. Continuous evolution – Demand for more functionality and capacity after the system has
been designed and deployed to meet the extended scope and demands i.e. scalability
issues.
5. Compatibility and interoperability – There are many compatibility issues that make
testing a difficult task. Web applications are often affected by factors that may cause
incompatibility and interoperability issues. The problem of incompatibility exists on both the
client and the server side. The server components can be distributed across different
operating systems, while various versions of browsers running under a variety of operating
systems can be present on the client side. Graphics and other objects on a website have to be
tested on multiple browsers. Scripts that execute from the browser also have to be tested.
The different versions of HTML are similar in some ways, but they have different tags,
which may produce different results across browsers.
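One simple way to reason about such compatibility testing is to compare the tags a page uses against what each target browser supports. The sketch below uses made-up supported-tag sets, not real browser data, purely to illustrate the idea:

```python
# Sketch of a cross-browser tag-compatibility check. The supported-tag
# sets below are illustrative placeholders, not real browser data.

SUPPORTED_TAGS = {
    "browser_a": {"html", "body", "p", "table", "video"},
    "browser_b": {"html", "body", "p", "table"},  # no <video> support
}

def unsupported_tags(page_tags, browser):
    """Return the tags on a page that the given browser cannot render."""
    return sorted(set(page_tags) - SUPPORTED_TAGS[browser])

page = ["html", "body", "p", "video"]
for browser in SUPPORTED_TAGS:
    print(browser, unsupported_tags(page, browser))
```

A real compatibility effort would also cover scripting behavior, rendering differences, and server-side platform variations, but the set-difference framing captures why the same page must be tested against every supported client.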
Every software engineer absolutely must know the seven aspects of software
quality:
Reliability
Understandability
Modifiability
Usability
Testability
Portability
Efficiency
These seven aspects can be measured and judged for any software product. They
apply to embedded systems, websites, mobile apps, video games, open-source
APIs, internal services, and any other sort of software product. These aspects are
entirely business domain independent. A software engineer’s ability can be
measured by the quality of the software they create. Skilled engineers create high-
quality software and source code.
For different projects, the prioritization of the aspects of quality will vary. Some
projects should be focused on reliability, usability, and understandability, while
other projects will place high importance on testability and efficiency. As a
software engineer, you must know which aspects of quality are most important to
your project. You must apply your best efforts towards the most critical aspects,
and not spend excessive time on less important aspects.
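This per-project prioritization can be sketched as simple weights over the seven aspects. The weights below are made-up examples for a hypothetical safety-critical project, not prescribed values:

```python
# Illustrative sketch: per-project quality priorities expressed as
# weights, so effort can target the highest-ranked aspects first.
# The weights are invented examples, not recommended values.

def top_priorities(weights, n=3):
    """Return the n highest-weighted quality aspects for a project."""
    return sorted(weights, key=weights.get, reverse=True)[:n]

# A hypothetical safety-critical project values reliability and
# testability far above portability.
safety_critical = {"reliability": 10, "testability": 9, "efficiency": 7,
                   "usability": 6, "understandability": 5,
                   "modifiability": 4, "portability": 2}

print(top_priorities(safety_critical))
```

A consumer web app might instead rank usability and understandability at the top; the mechanism is the same, only the weights change.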
Usability: Software products must be simple and easy to use. The common use
cases should be as obvious and clearly presented as possible. Software should not
require excessive configuration. Users should feel empowered by your software.
They should not need internet searches to discover core application functionality.
Web engineering:
Web engineering focuses on the methodologies, techniques, and tools that are
the foundation of Web application development and which support their
design, development, evolution, and evaluation. Web application development
has certain characteristics that make it different from traditional software,
information system, or computer application development.
Web engineering is multidisciplinary and encompasses contributions from
diverse areas: systems analysis and design, software engineering,
hypermedia/hypertext engineering, requirements engineering, human-
computer interaction, user interface, data engineering, information science,
information indexing and retrieval, testing, modelling and simulation, project
management, and graphic design and presentation. Web engineering is neither
a clone nor a subset of software engineering, although both involve
programming and software development. While Web Engineering uses
software engineering principles, it encompasses new approaches,
methodologies, tools, techniques, and guidelines to meet the unique
requirements of Web-based applications.
Web-based Information Systems (WIS) development process is
different and unique.[3]
Web engineering is multi-disciplinary; no single discipline (such as
software engineering) can provide complete theory basis, body of
knowledge and practices to guide WIS development. [4]
Web-based systems raise distinct issues of evolution and lifecycle
management when compared to more 'traditional' applications.
Web-based information systems and applications are pervasive and
non-trivial. The prospect of Web as a platform will continue to grow
and it is worth being treated specifically.
However, recognizing Web engineering as a new field has been controversial, especially
among people in more traditional disciplines such as software engineering. The issue
is how different and independent Web engineering is compared with other disciplines.
Mobile testing tools are designed for testing mobile applications. This
category includes cloud-based testing tools, application distribution
tools, crash-reporting tools, performance testing tools, mobile device
emulators, automated UI testers, mobile optimization tools, A/B testing tools, and
defect logging tools.
1. Kobiton
2. Test Complete
TestComplete allows you to run repeated UI tests across the application platform.
It is an effective tool that can help you in testing hybrid mobile applications, which
means it supports both Android and iOS application testing. It is also an
automated testing tool that you can run on real mobile devices or
emulators with ease. Automated test scripts are available within the tool,
but you can also choose from VBScript, JavaScript, Python, and other languages.
With TestComplete, you can create and run repeatable and robust UI tests across
native or hybrid mobile applications. With TestComplete, there is
no need to jailbreak your phone or tablet.
3. Appium
4. Test IO
5. Katalon Studio
Katalon Studio is a leading Appium alternative for mobile testing. It also brings
extended capabilities for web, API, and desktop testing. Built on top of Appium and
Selenium, Katalon Studio removes those tools’ steep learning curve
and thus brings a codeless testing experience to users at all scales and
levels of expertise. In addition to supporting the Android and iOS platforms, testing across
operating systems (Windows, macOS, and Linux) is also available.
6. Robotium
Robotium is a testing tool that is assigned to take care of android applications just
under a mechanized testing system. Robotium is explicitly assigned for black-box
testing on android applications. It utilizes JavaScript to set up the test scripts. A
portion of the extra necessities for the consistent running of this instrument is
Android SDK, Eclipse for the test project, Android improvement Kit, and JDK.
7. Ranorex Studio
8. testRigor
With testRigor, manual QA can create truly stable and completely reliable automated
mobile tests – for native and hybrid mobile applications (for both
iOS and Android), as well as for the mobile web and APIs. It helps you
express tests directly as executable specifications in plain English.
Users of all technical abilities can build end-to-end tests of any complexity
covering mobile, web, and API steps in a single test. Test steps are expressed
at the end-user level instead of relying on implementation details like
XPaths or CSS selectors.
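The idea of tests as executable plain-English specifications can be illustrated with a toy interpreter that matches each step against a small vocabulary of end-user actions. This is a simplified sketch of the concept only, not testRigor's actual engine or syntax:

```python
# Toy illustration of "tests as executable plain-English steps":
# each step is matched against a tiny vocabulary of end-user actions.
# This sketches the concept; it is not testRigor's real implementation.

import re

def run_step(step, log):
    m = re.fullmatch(r'click "(.+)"', step)
    if m:
        log.append(f"clicked {m.group(1)}")
        return True
    m = re.fullmatch(r'enter "(.+)" into "(.+)"', step)
    if m:
        log.append(f"typed {m.group(1)} into {m.group(2)}")
        return True
    return False  # unrecognized step

def run_test(steps):
    """Execute a list of plain-English steps, failing on unknown ones."""
    log = []
    for step in steps:
        if not run_step(step, log):
            raise ValueError(f"cannot interpret step: {step}")
    return log

print(run_test(['enter "alice" into "username"', 'click "Login"']))
```

Because steps name what the end user does rather than how the UI implements it, the test survives changes to element locators such as XPaths or CSS selectors.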