
UNIT -1 FOUNDATIONS OF SOFTWARE TESTING

Software testing:
Software Testing is a method to check whether the actual software product matches the expected
requirements and to ensure that the software product is defect free.

The purpose of software testing is to identify errors, gaps, or missing requirements in contrast to
the actual requirements.

Why do we test software?

Software testing is the process of evaluating and verifying that a software product or application
does what it is supposed to do. The benefits of testing include preventing bugs, reducing
development costs and improving performance.

Black Box testing

Black box testing is a software testing technique which examines the functionality of software without
peering into its internal structure or coding. The primary source for black box testing is the
requirement specification stated by the customer.

In this method, the tester selects a function, gives it input values to exercise its functionality, and
checks whether the function produces the expected output. If the function produces the correct
output, it passes the test; otherwise it fails. The test team reports the result to the
development team and then tests the next function. After all functions have been tested, any
severe problems found are given back to the development team for correction.

Generic steps of Black Box testing:

 The black box test is based on the requirement specification, so the specification is
examined at the beginning.
 In the second step, the tester creates a positive test scenario and a negative test
scenario by selecting valid and invalid input values to check whether the software is
processing them correctly.
 In the third step, the tester develops various test cases using techniques such as the
decision table, all-pairs testing, equivalence partitioning, error guessing, and the
cause-effect graph.
 The fourth phase includes the execution of all test cases.
 In the fifth step, the tester compares the expected output against the actual output.
 In the sixth and final step, if there is any flaw in the software, it is fixed and
tested again.
Techniques Used in Black box testing:

1) Decision Table Technique: The decision table technique is a systematic approach in which various
input combinations and their respective system behavior are captured in tabular form. It is
appropriate for functions that have a logical relationship between two or more
inputs.
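For illustration, here is a minimal sketch in Python (using the pytest framework) of how a decision table could be encoded as parametrized test cases. The login function and its rules are invented for the example; the real rules would come from the requirement specification.

# Illustrative sketch: a login decision table encoded as parametrized tests.
# The login() function and its rules are hypothetical, for demonstration only.
import pytest

def login(username_valid: bool, password_valid: bool) -> str:
    """Toy system under test: grants access only when both inputs are valid."""
    if username_valid and password_valid:
        return "home page"
    return "error message"

# Each row of the decision table becomes one test case:
# (valid username?, valid password?, expected behavior)
DECISION_TABLE = [
    (True,  True,  "home page"),      # Rule 1: both valid
    (True,  False, "error message"),  # Rule 2: bad password
    (False, True,  "error message"),  # Rule 3: bad username
    (False, False, "error message"),  # Rule 4: both invalid
]

@pytest.mark.parametrize("user_ok, pass_ok, expected", DECISION_TABLE)
def test_login_decision_table(user_ok, pass_ok, expected):
    assert login(user_ok, pass_ok) == expected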
2) Boundary Value Technique: The boundary value technique is used to test boundary values;
boundary values are those at the upper and lower limits of a variable. It checks whether the
software produces the correct output when a boundary value is entered.
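As a hedged illustration, the sketch below applies boundary value analysis to a hypothetical input field that accepts ages from 18 to 60 inclusive; the accepts_age function stands in for the real system under test.

# Illustrative sketch: boundary value analysis for a hypothetical age field
# with a valid range of 18 to 60 inclusive.
def accepts_age(age: int) -> bool:
    return 18 <= age <= 60

# Test values sit on and immediately around each boundary.
boundary_cases = {
    17: False,  # just below lower boundary
    18: True,   # lower boundary
    19: True,   # just above lower boundary
    59: True,   # just below upper boundary
    60: True,   # upper boundary
    61: False,  # just above upper boundary
}

for age, expected in boundary_cases.items():
    assert accepts_age(age) == expected, f"age {age} failed"
print("all boundary cases passed")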
3) State Transition technique: State Transition Technique is used to capture the behavior of the
software application when different input values are given to the same function.
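A minimal sketch of state transition testing follows, assuming a hypothetical PIN check that locks the account after three consecutive wrong attempts. Note how the same input produces different results depending on the current state.

# Illustrative sketch: state transition testing for a hypothetical PIN check
# that locks the account after three consecutive wrong attempts.
class PinChecker:
    def __init__(self, pin: str):
        self.pin = pin
        self.failures = 0
        self.state = "awaiting_pin"  # may become "authenticated" or "locked"

    def enter(self, attempt: str) -> str:
        if self.state == "locked":
            return self.state
        if attempt == self.pin:
            self.state = "authenticated"
        else:
            self.failures += 1
            if self.failures >= 3:
                self.state = "locked"
        return self.state

# Exercise a transition path: wrong, wrong, wrong -> locked, then verify that
# even the correct PIN no longer transitions to "authenticated".
checker = PinChecker("4321")
assert checker.enter("0000") == "awaiting_pin"
assert checker.enter("1111") == "awaiting_pin"
assert checker.enter("2222") == "locked"
assert checker.enter("4321") == "locked"  # same input, different state/result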

4) All-Pairs Testing Technique: All-pairs testing is used to test every possible discrete pair of
input values at least once, rather than every full combination. This combinatorial method is used
for testing applications that use checkbox, radio button, list box, and text box inputs.

5) Cause-Effect Technique: The cause-effect technique highlights the relationship between a given
result and all the factors affecting that result. It is based on a collection of requirements.

6) Equivalence Partitioning Technique: Equivalence partitioning is a software testing technique
in which input data is divided into partitions of valid and invalid values, and all values within a
partition are expected to exhibit the same behavior.
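Continuing the hypothetical age-field example from the boundary value sketch, equivalence partitioning picks one representative value from each partition, since all values in a partition should behave alike.

# Illustrative sketch: equivalence partitioning for the hypothetical age field
# (valid range 18-60). One representative value is tested per partition.
def accepts_age(age: int) -> bool:
    return 18 <= age <= 60

partitions = {
    "invalid_below": (5, False),   # representative of ages < 18
    "valid":         (35, True),   # representative of ages 18-60
    "invalid_above": (80, False),  # representative of ages > 60
}

for name, (value, expected) in partitions.items():
    assert accepts_age(value) == expected, f"partition {name} failed"
print("one representative per partition passed")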

7) Error Guessing technique : Error guessing is a technique in which there is no specific method
for identifying the error. It is based on the experience of the test analyst, where the tester uses the
experience to guess the problematic areas of the software.

8) Use-Case Technique: The use-case technique is used to identify test cases covering the system
from beginning to end, as per the usage of the system. Using this technique, the test team creates
a test scenario that can exercise the entire software based on the functionality of each function
from start to end.

WHITE BOX TESTING

White box testing is also known as glass box testing, structural testing, clear
box testing, open box testing, and transparent box testing.

 White box testing tests the internal coding and infrastructure of the software, focusing
on checking predefined inputs against expected and desired outputs. It is based on the
inner workings of an application and revolves around testing the internal structure.
 In this type of testing, programming skills are required to design test cases. The
primary goal of white box testing is to follow the flow of inputs and outputs
through the software and to strengthen the security of the software.

The white box testing contains various tests, which are as follows:
 Path testing
 Loop testing
 Condition testing
 Testing based on the memory perspective
 Test performance of the program

Generic steps for White box testing

o Design all test scenarios and test cases, and prioritize them.
o Study the code at runtime to examine resource utilization, areas of the code that are
not accessed, the time taken by various methods and operations, and so on.
o Test the internal subroutines: check whether internal subroutines such as
non-public methods and interfaces handle all types of data appropriately.
o Test control statements such as loops and conditional statements to
check their efficiency and accuracy for different data inputs.
o In the last step, perform security testing to check all possible security
loopholes by looking at how the code handles security.
Reasons for white box testing:
 It identifies internal security holes.
 To check the flow of inputs through the code.
 Check the functionality of conditional loops.
 To test function, object, and statement at an individual level.
Advantages of white box testing :
 White box testing optimizes code so hidden errors can be identified.
 Test cases of white box testing can be easily automated.
 This testing is more thorough than other testing approaches as it covers all code paths.
 It can be started early in the SDLC, even before the GUI is available.

Disadvantages of white box testing:

 White box testing is very time-consuming when it comes to large-scale
programming applications.
 White box testing is expensive and complex.
 Defects can still reach production, because testing only the code as written cannot
reveal missing or unimplemented requirements.
 White box testing needs professional programmers who have detailed knowledge
and understanding of the programming language and implementation.

Techniques used in white box testing:

1) Data flow testing: Data flow testing is a group of testing strategies that examines
the control flow of programs in order to explore the sequence of variable definitions
and uses according to the sequence of events.
2) Control flow testing: Control flow testing determines the execution order of
statements or instructions of the program through a control structure. The control
structure of a program is used to develop a test case for the program. In this
technique, a particular part of a large program is selected by the tester to set the
testing path. Test cases are represented by the control flow graph of the program.
3) Branch Testing: The branch coverage technique is used to cover all branches of the
control flow graph. It covers all possible outcomes (true and false) of each condition
at a decision point at least once.
4)Statement Testing: Statement coverage technique is used to design white box test
cases. This technique involves execution of all statements of the source code at least
once. It is used to calculate the total number of executed statements in the source
code, out of total statements present in the source code.
5)Decision testing : This technique reports true and false outcomes of Boolean
expressions. Whenever there is a possibility of two or more outcomes from the
statements like do while statement, if statement and case statement (Control flow
statements), it is considered as decision point because there are two outcomes either
true or false.
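As an illustrative sketch (the function and the test inputs are invented), the following shows how statement coverage and branch coverage differ on a tiny piece of code.

# Illustrative sketch: statement coverage vs. branch coverage on a toy function.
def classify(x: int) -> str:
    result = "non-negative"
    if x < 0:
        result = "negative"
    return result

# Statement coverage: classify(-1) alone executes every statement
# (both assignments and the return), giving 100% statement coverage.
assert classify(-1) == "negative"

# Branch coverage additionally requires the false outcome of the condition,
# so a second test with x >= 0 is needed:
assert classify(5) == "non-negative"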

Difference Between BlackBox Testing and White Box Testing:
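The comparison below summarizes the descriptions given in the two sections above:

 Black box testing examines the functionality of the software against the requirement
specification, without looking at the internal code; white box testing examines the internal
coding, structure, and infrastructure of the software.
 Black box testing does not require programming knowledge; white box testing requires
programming skills to design test cases.
 Black box testing uses techniques such as decision tables, boundary value analysis, and
equivalence partitioning; white box testing uses techniques such as data flow, control flow,
branch, statement, and decision testing.
 Black box testing is typically performed by testers; white box testing is typically performed
by developers or by testers with access to the code.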

Software Testing Life Cycle


The procedure of software testing is also known as the STLC (Software Testing Life Cycle), which
includes the phases of the testing process. The testing process is executed in a well-planned and
systematic manner. All activities are done to improve the quality of the software product.

Characteristics of STLC :

1. STLC is the testing-process part of the software development life cycle.
2. The STLC process remains the same irrespective of the SDLC methodology used.
3. In STLC, the tester moves to the next phase of the process only once the previous
phase is completed.
Software testing life cycle contains the following steps:

1. Requirement Analysis
2. Test Plan Creation
3. Environment setup
4. Test case Execution
5. Defect Logging
6. Test Cycle Closure

1. Requirement Analysis: Requirement analysis is the first step of the Software
Testing Life Cycle (STLC). In this phase the quality assurance team understands the
requirements, i.e., what is to be tested. If anything is missing or not understandable, the
quality assurance team meets with the stakeholders to gain detailed knowledge of the
requirements.

The activities that take place during the Requirement Analysis stage include:
 Reviewing the software requirements document (SRD) and other related documents
 Interviewing stakeholders to gather additional information
 Identifying any ambiguities or inconsistencies in the requirements
 Identifying any missing or incomplete requirements
 Identifying any potential risks or issues that may impact the testing process
 Creating a requirement traceability matrix (RTM) to map requirements to test
cases.
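For illustration, a small fragment of a hypothetical RTM might look like this (the requirement and test case IDs are invented):

Requirement ID | Requirement Description      | Test Case IDs
REQ-001        | User can log in              | TC-001, TC-002
REQ-002        | User can reset password      | TC-003
REQ-003        | Account locks after 3 tries  | TC-004, TC-005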
At the end of this stage, the testing team should have a clear understanding of the
software requirements and should have identified any potential issues that may impact
the testing process. This will help to ensure that the testing process is focused on the
most important areas of the software and that the testing team is able to deliver high-
quality results.
2. Test Planning: Test planning is the phase of the software testing life cycle where all
testing plans are defined. In this phase the manager of the testing team calculates the
estimated effort and cost for the testing work. This phase starts once the
requirement-gathering phase is completed.

The activities that take place during the Test Planning stage include:
 Identifying the testing objectives and scope
 Developing a test strategy: selecting the testing methods and techniques that will be
used
 Identifying the testing environment and resources needed
 Identifying the test cases that will be executed and the test data that will be used
 Estimating the time and cost required for testing
 Identifying the test deliverables and milestones
 Assigning roles and responsibilities to the testing team
 Reviewing and approving the test plan
3. Test Case Development: The test case development phase starts once the test
planning phase is completed. In this phase the testing team writes detailed test
cases. The testing team also prepares the required test data for testing. Once the test
cases are prepared, they are reviewed by the quality assurance team.

The activities that take place during the Test Case Development stage include:
 Identifying the test cases that will be developed
 Writing test cases that are clear, concise, and easy to understand
 Creating test data and test scenarios that will be used in the test cases
 Identifying the expected results for each test case
 Reviewing and validating the test cases
 Updating the requirement traceability matrix (RTM) to map requirements to test
cases

4. Test Environment Setup: Test environment setup is a vital part of the STLC.
Basically, the test environment decides the conditions under which the software is tested.
It is an independent activity and can be started along with test case development. The
testing team is not involved in this process; either the developer or the customer creates
the testing environment.
5. Test Execution: After test case development and test environment setup, the test
execution phase starts. In this phase the testing team executes the test cases
prepared in the earlier step.

The activities that take place during the test execution stage of the Software
Testing Life Cycle (STLC) include:
 Test data preparation: Test data is prepared and loaded into the system for test
execution.
 Test environment setup: The necessary hardware, software, and network
configurations are set up for test execution.
 Test execution: The test cases and scripts created in the test design stage are run
against the software application, and the results are collected, to identify any
defects or issues.
 Defect logging: Any defects or issues that are found during test execution are
logged in a defect tracking system, along with details such as the severity, priority,
and description of the issue.
 Test result analysis: The results of the test execution are analyzed to determine the
software’s performance and identify any defects or issues.
 Defect retesting: Any defects that are identified during test execution are retested to
ensure that they have been fixed correctly.
 Test Reporting: Test results are documented and reported to the relevant
stakeholders.
6. Test Cycle Closure: Test closure is the final stage of the Software Testing Life Cycle
(STLC), where all testing-related activities are completed and documented. The main
objective of the test closure stage is to ensure that all testing-related activities have been
completed and that the software is ready for release.

The main activities that take place during the test closure stage include:
 Test summary report: A report is created that summarizes the overall testing
process, including the number of test cases executed, the number of defects found,
and the overall pass/fail rate.
 Defect tracking: All defects that were identified during testing are tracked and
managed until they are resolved.
 Test environment clean-up: The test environment is cleaned up, and all test data
and test artifacts are archived.
 Test closure report: A report is created that documents all the testing-related
activities that took place, including the testing objectives, scope, schedule, and
resources used.
 Knowledge transfer: Knowledge about the software and testing process is shared
with the rest of the team and any stakeholders who may need to maintain or support
the software in the future.
 Feedback and improvements: Feedback from the testing process is collected and
used to improve future testing processes.

V- Model of Software Testing


V Model is a highly disciplined SDLC model which has a testing phase parallel to each
development phase. The V model is an extension of the waterfall model, wherein software
development and testing are executed in a sequential way.

It is known as the Verification and Validation model.


 The left side of the model is Software Development Life Cycle – SDLC
 The right side of the model is Software Test Life Cycle – STLC
 The entire figure looks like a V, hence the name V – model

The V-Model is a linear and sequential model that consists of the following phases:

1. Requirements Gathering and Analysis: The first phase of the V-Model is the requirements
gathering and analysis phase, where the customer’s requirements for the software are
gathered and analyzed to determine the scope of the project.
2. Design: In the design phase, the software architecture and design are developed, including
the high-level design and detailed design.
3. Implementation: In the implementation phase, the software is actually built based on the
design.
4. Testing: In the testing phase, the software is tested to ensure that it meets the customer’s
requirements and is of high quality.
5. Deployment: In the deployment phase, the software is deployed and put into use.
6. Maintenance: In the maintenance phase, the software is maintained to ensure that it
continues to meet the customer’s needs and expectations.
The V-Model is often used in safety-critical systems, such as aerospace and defense
systems, because of its emphasis on thorough testing and its ability to clearly define the
steps involved in the software development process.

Verification: It involves static analysis techniques (reviews) done without executing code. It is
the process of evaluating each product development phase to determine whether the specified
requirements are met.

Validation: It involves dynamic analysis techniques (functional and non-functional); testing is
done by executing the code. Validation is the process of evaluating the software after the
completion of the development phase to determine whether it meets customer expectations and
requirements.

Design Phase:

 Requirement Analysis: This phase involves detailed communication with the customer to
understand their requirements and expectations. This stage is also known as Requirement
Gathering.
 System Design: This phase covers the system design and the complete hardware and
communication setup for developing the product.
 Architectural Design: The system design is broken down further into modules taking up
different functionalities. The data transfer and communication between the internal
modules and with the outside world (other systems) is clearly defined.
 Module Design: In this phase the system is broken down into small modules. The detailed
design of the modules is specified; this is also known as Low-Level Design (LLD).

Testing Phases:

 Unit Testing: Unit test plans are developed during the module design phase. These unit test
plans are executed to eliminate bugs at the code or unit level.
 Integration Testing: After completion of unit testing, integration testing is performed. In
integration testing, the modules are integrated and the system is tested. Integration testing
corresponds to the architectural design phase. This test verifies the communication of
modules among themselves.
 System Testing: System testing tests the complete application, with its functionality,
interdependency, and communication. It tests the functional and non-functional requirements of
the developed application.
 User Acceptance Testing (UAT): UAT is performed in a user environment that resembles
the production environment. UAT verifies that the delivered system meets the user's
requirements and that the system is ready for use in the real world.

Program Correctness and Verification

There are several reasons why the study of program correctness matters to software testing; some of them are as follows:

 The focus of software testing is to run the candidate program on selected input data and check
whether the program behaves correctly with respect to its specification. The behavior of the
program can be analyzed only if we know what correct behavior is; hence the study of
correctness is an integral part of software testing.
 The study of program correctness leads us to analyze candidate programs at arbitrary levels of
granularity; in particular, it leads us to make assumptions about the behavior of the program at
specific stages in its execution and to verify (or disprove) these assumptions; the same
assumptions can be checked at run-time during testing, giving us valuable information as we try
to diagnose the program or establish its correctness. Hence the skills that we develop as we try
to prove program correctness enable us to be better and more effective testers.
 It is common for program testers and program provers to make polite statements about testing
and proving being complementary and then to assiduously ignore each other (each other’s
methods). But there is more to complementarity than meets the eye. Very often, what makes a
testing method or a proving method ineffective is not an intrinsic attribute of the method, but
rather the fact that the method is used against the wrong ...

Reliability versus safety


 Reliability is the probability that a system or component will perform its intended function for a
prescribed time and under stipulated environmental conditions.
 Reliability may thus be determined by the probability of failure per demand, while safety is
also determined by the consequences of these failures.

Software Reliability Measures can be divided into four categories:


1.Product Metrics

Product metrics are those which are used to build the artifacts, i.e.,
requirement specification documents, system design documents, etc. These
metrics help in assessing whether the product is good enough through
records on attributes like usability, reliability, maintainability, and portability.
Some of these measurements are taken from the actual body of the source code.

2.Project Management Metrics


Project metrics define project characteristics and execution. If the project
is managed properly, this helps us to achieve better products. A
relationship exists between the development process and the ability to
complete projects on time and within the desired quality objectives. Costs
increase when developers use inadequate processes. Higher reliability can
be achieved by using a better development process, risk management
process, and configuration management process.

These metrics are:

o Number of software developers


o Staffing pattern over the life-cycle of the software
o Cost and schedule
o Productivity

3.Process Metrics
Process metrics quantify useful attributes of the software development
process and its environment. They tell whether the process is functioning
optimally, as they report on characteristics like cycle time and rework time.
The goal of a process metric is to do the right job right the first time through
the process. The quality of the product is a direct function of the process, so
process metrics can be used to estimate, monitor, and improve the reliability
and quality of software. Process metrics describe the effectiveness and
quality of the processes that produce the software product.
Examples are:

o The effort required in the process


o Time to produce the product
o Effectiveness of defect removal during development
o Number of defects found during testing
o Maturity of the process

4.Fault and Failure Metrics


A fault is a defect in a program which appears when the programmer makes
an error, and it causes a failure when executed under particular conditions.
These metrics are used to assess the failure-free execution of software.

To achieve this objective, the number of faults found during testing and the
failures or other problems reported by the user after delivery are
collected, summarized, and analyzed. Failure metrics are based upon
customer information regarding faults found after release of the software.
The failure data collected is then used to calculate failure density, Mean
Time Between Failures (MTBF), or other parameters to measure or
predict software reliability.
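As a hedged worked example (the numbers are invented for illustration), the two metrics named above can be computed directly from collected data:

# Illustrative sketch: computing two common reliability metrics from
# collected (invented) failure data.

# Failure density: defects found per thousand lines of code (KLOC).
defects_found = 45
lines_of_code = 30_000
failure_density = defects_found / (lines_of_code / 1000)  # 1.5 defects/KLOC

# Mean Time Between Failures: total operating time divided by failure count.
operating_hours = 1_200
failures_observed = 4
mtbf_hours = operating_hours / failures_observed  # 300 hours

print(f"failure density: {failure_density:.2f} defects/KLOC")
print(f"MTBF: {mtbf_hours:.0f} hours")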

Failures, Errors and Faults (Defects)


In software testing, a bug is the informal name for a defect, meaning that the
software or application is not working as per the requirements. When a
coding error causes a program to break down, it is known as a bug.
Test engineers use the term bug.

What is a Defect?
When the application is not working as per the requirements, this is known
as a defect. It is specified as the deviation of the actual result of the
application or software from the expected result.

In other words, we can say that a flaw inside the code, identified by the
programmer, is called a defect.

What is Error?
A problem in the code leads to an error, i.e., a mistake made by the developer,
for example because the developer misunderstood the requirement or the
requirement was not defined correctly.
Developers use the term error.
What is Fault?
A fault may occur in software when the code lacks fault tolerance,
making the application misbehave.

A fault may happen in a program because of the following reasons:

o Lack of resources
o An invalid step
o Inappropriate data definition

What is Failure?
Many defects lead to the software's failure: a failure is a fatal issue in the
software/application, or in one of its modules, which makes the system
unresponsive or broken.

In other words, we can say that if an end-user detects an issue in the
product, then that particular issue is called a failure.

One defect might lead to one failure or to several failures.

For example, in a bank application, suppose the Amount Transfer module is not
working for end-users: when the end-user tries to transfer money, the submit
button does not work. This is a failure.

SOFTWARE TESTING PRINCIPLES


Principle 1. Testing is the process of exercising a software component using a
selected set of test cases, with the intent of (i) revealing defects, and (ii) evaluating
quality.
Testers need to detect the defects before the software becomes operational. This principle
supports testing as an execution-based activity to detect defects. It also supports the
separation of testing from debugging since the intent of the latter is to locate defects and
repair the software. The term "software component" is used in this context to represent
any unit of software ranging in size and complexity from an individual procedure or
method, to an entire software system. The term "defects" represents any deviations in the
software that have a negative impact on its functionality, performance, reliability,
security, etc.

Principle 2. When the test objective is to detect defects, then a good test case is one
that has a high probability of revealing a yet undetected defect(s).
Principle 2 supports careful test design and provides a criterion with which to evaluate
test case design and the effectiveness of the testing effort when the objective is to detect
defects. It requires the tester to consider the goal for each test case, that is, which specific
type of defect is to be detected by the test case. The goal of the test is to prove or disprove
the hypothesis, that is, to determine whether the specific defect is present or absent. Based
on the hypothesis, test inputs are selected, correct outputs are determined, and the test is run.
Results are analyzed to prove or disprove the hypothesis.
Principle 3. Test results should be inspected meticulously.
Testers need to carefully inspect and interpret test results.
A failure may be overlooked, and the test may be granted a "pass" status when in
reality the software has failed the test. Testing may continue based on erroneous test
results. The defect may be revealed at some later stage of testing, but in that case it may
be more costly and difficult to locate and repair.
A failure may be suspected when in reality none exists. In this case the test may
be granted a "fail" status. Much time and effort may be spent on trying to find the defect
that does not exist. A careful reexamination of the test results could finally indicate that
no failure has occurred.
Principle 4. A test case must contain the expected output or result.
It is often obvious to the novice tester that test inputs must be part of a test case.
However, the test case is of no value unless there is an explicit statement of the expected
outputs or results. Expected outputs allow the tester to determine (i) whether a defect has
been revealed, and (ii) pass/fail status for the test.
Principle 5. Test cases should be developed for both valid and invalid input conditions.
A tester must not assume that the software under test will always be provided with valid
inputs. Inputs may be incorrect for several reasons. Software users often make
typographical errors even when complete/correct information is available. Devices may
also provide invalid inputs due to erroneous conditions and malfunctions. Use of test
cases that are based on invalid inputs is very useful for revealing defects. Invalid inputs
also help developers and testers evaluate the robustness of the software, that is, its ability
to recover when unexpected events occur.
Principle 6. The probability of the existence of additional defects in a software
component is proportional to the number of defects already detected in that
component.
What this principle says is that the higher the number of defects already detected in a
component, the more likely it is to have additional defects when it undergoes further
testing. For example, if there are two components A and B, and testers have found 20
defects in A and 3 defects in B, then the probability of the existence of additional defects
in A is higher than B. Defects often occur in clusters and often in code that has a high
degree of complexity and is poorly designed.
Principle 7. Testing should be carried out by a group that is independent
of the development group.
It is difficult for a developer to admit or conceive that software he/she has created and
developed can be faulty. On a practical level, it may be difficult for developers to
conceptualize where defects could be found. Even when tests fail,
developers often have difficulty in locating the defects, since their mental model of the
code may overshadow their view of the code as it exists in actuality. The testing group could
be implemented as a completely separate functional entity in the organization. As a
member of any of these groups, the principal duties and training of the testers should lie
in testing rather than in development. The groups need to cooperate so that software of
the highest quality is released to the customer.
Principle 8. Tests must be repeatable and reusable.
This principle calls for experiments in the testing domain to require recording of the
exact conditions of the test, any special events that occurred, equipment used, and a
careful accounting of the results. This information is invaluable to the developers when
the code is returned for debugging so that they can duplicate test conditions. It is also
useful for tests that need to be repeated after defect repair. Scientists expect experiments
to be repeatable by others, and testers should expect the same!

Principle 9. Testing should be planned.


Test plans should be developed for each level of testing, and objectives for each level
should be described in the associated plan. Plans, with their precisely specified
objectives, are necessary to ensure that adequate time and resources are allocated for
testing tasks, and that testing can be monitored and managed. Test planning must be
coordinated with project planning. The test manager and project manager must work
together to coordinate activities. Testers cannot plan to test a component on a given date
unless the developers have it available on that date. A test plan template must be
available to the test manager to guide development of the plan according to
organizational policies and standards.
Principle 10. Testing activities should be integrated into the software life cycle.
It is no longer feasible to postpone testing activities until after the code has been written.
Test planning activities should be integrated into the software life cycle starting as early
as in the requirements analysis phase, and continue on throughout the software life cycle
in parallel with development activities. These activities can continue on until the software
is delivered to the users.
Principle 11. Testing is a creative and challenging task.

Difficulties and challenges for the tester


 A tester needs to have comprehensive knowledge of the software engineering
discipline.
 A tester needs to have knowledge from both experience and education as to how
software is specified, designed, and developed.
 A tester needs to be able to manage many details.
 A tester needs to have knowledge of fault types and where faults of a certain type
might occur in code constructs.
 A tester needs to reason like a scientist and propose hypotheses that relate to
presence of specific types of defects.
 A tester needs to have a good grasp of the problem domain of the software that
he/she is testing. Familiarity with a domain may come from educational, training,
and work-related experiences.
 A tester needs to create and document test cases. To design the test cases the
tester must select inputs, often from a very wide domain. Those selected should
have the highest probability of revealing a defect (Principle 2). Familiarity with
the domain is essential.
 A tester needs to design and record test procedures for running the tests.
 A tester needs to plan for testing and allocate the proper resources.
 A tester needs to execute the tests and is responsible for recording results.

PROGRAM INSPECTION

Inspections are a formal type of review that involves checking the documents thoroughly before
a meeting and is carried out mostly by moderators. A meeting is then held to review the code and
the design.

Inspection meetings can be held both physically and virtually. The purpose of these meetings is
to review the code and the design with everyone and to report any bugs found.

The stages in the inspections process are:

 Planning, Overview meeting, Preparation, Inspection meeting, Rework and Follow-up.


 The Preparation, Inspection meeting and Rework stages might be iterated.

BENEFITS:

 It is easier to find defects for people who have not done the implementation
themselves and have no preconceptions about its correctness.
 Knowledge sharing about specific software artifacts and designs.
 Knowledge sharing regarding defect detection practices.
 Flaws are identified at early stages.
 It reduces the rework and testing effort.

Who is involved?
 Moderator: Inspector who is responsible for organizing and reporting on inspection.
 Author: Owner of the report.
 Reader: A person who guides the examination of the product.
 Recorder: An inspector who notes down all the defects on the defect list.
 Inspector: Member of the inspection team.

Planning

The planning phase starts when the entry criteria for the inspection stage are met. A moderator
verifies that the product entry criteria are met.
Overview

In the overview phase, a presentation is given to the inspector with some background
information needed to review the software product properly.

Preparation

This is considered an individual activity. In this part of the process, the inspector collects all the
materials needed for inspection, reviews that material, and notes any defects.

Meeting

The moderator conducts the meeting. In the meeting, the defects are collected and reviewed.

Rework

The author performs this part of the process in response to defect disposition determined at the
meeting.

Follow-up

In follow-up, the moderator verifies that the corrections have been made and then compiles the
inspection management and defect summary reports.

STAGES OF TESTING:

Unit Testing:

In a testing level hierarchy, unit testing is the first level of testing done before integration and
other remaining levels of the testing.

It tests individual modules, which reduces the dependency on waiting for other components to be
complete. Unit testing frameworks, stubs, drivers, and mock objects are used for assistance in
unit testing.

Some crucial reasons for performing unit testing are listed below:


o Unit testing helps testers and developers understand the code base, enabling them
to change defect-causing code quickly.
o Unit testing helps in documentation.
o Unit testing fixes defects very early in the development phase, so fewer defects are
likely to appear in the subsequent testing levels.
o It helps with code reusability, by making it easier to migrate code and test cases.

Unit Testing Techniques:


Unit testing uses all white box testing techniques, as it works with the code of the software application:
 Data flow Testing
 Control Flow Testing
 Branch Coverage Testing
 Statement Coverage Testing
 Decision Coverage Testing
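As a hedged sketch of unit testing with a mock object in place of a dependency (the PaymentService class and its gateway collaborator are hypothetical), the standard-library unittest framework might be used like this:

# Illustrative sketch: unit testing a hypothetical PaymentService in
# isolation, using a mock object in place of its gateway dependency.
import unittest
from unittest.mock import Mock

class PaymentService:
    """Toy unit under test: charges via an injected gateway."""
    def __init__(self, gateway):
        self.gateway = gateway

    def pay(self, amount: float) -> bool:
        if amount <= 0:
            raise ValueError("amount must be positive")
        return self.gateway.charge(amount)

class TestPaymentService(unittest.TestCase):
    def test_pay_delegates_to_gateway(self):
        gateway = Mock()
        gateway.charge.return_value = True  # stubbed response
        service = PaymentService(gateway)
        self.assertTrue(service.pay(100.0))
        gateway.charge.assert_called_once_with(100.0)

    def test_pay_rejects_non_positive_amount(self):
        service = PaymentService(Mock())
        with self.assertRaises(ValueError):
            service.pay(0)

if __name__ == "__main__":
    unittest.main()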

2. Integration Testing

Integration testing is the second level of the software testing process and comes after unit testing.
In this testing, units or individual components of the software are tested in a group. The focus of
the integration testing level is to expose defects at the time of interaction between integrated
components or units.

Unit testing uses modules for testing purposes, and these modules are combined and tested in
integration testing. The software is developed from a number of software modules that are coded
by different programmers. The goal of integration testing is to check the correctness of
communication among all the modules.

Guidelines for Integration testing:

o We go for the integration testing only after the functional testing is completed on each module of
the application.
o We always do integration testing by picking module by module so that a proper sequence is
followed, and also we don't miss out on any integration scenarios.
o First, determine the test case strategy through which executable test cases can be prepared
according to test data.
o Examine the structure and architecture of the application and identify the crucial modules to test
them first and also identify all possible scenarios.
o Design test cases to verify each interface in detail.
o Choose input data for test case execution. Input data plays a significant role in testing.
o If we find any bugs, then we communicate the bug reports to the developers, who fix the defects, and we retest.
o Perform positive and negative integration testing.
Here, positive testing implies that if the total balance is Rs 15,000 and we transfer Rs 1,500, we
check whether the amount transfer works fine. If it does, then the test passes.

And negative testing means that if the total balance is Rs 15,000 and we try to transfer Rs 20,000,
we check whether the transfer occurs or not. If it does not occur, the test passes. If it happens,
then there is a bug in the code, and we will send it to the development team for fixing that bug.
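As a hedged sketch of this positive/negative pair, the Account class and transfer function below are hypothetical stand-ins for the real modules being integrated:

# Illustrative sketch: positive and negative integration tests for a
# hypothetical amount-transfer flow between two account modules.
class Account:
    def __init__(self, balance: float):
        self.balance = balance

def transfer(source: Account, target: Account, amount: float) -> bool:
    """Toy integration point between two account modules."""
    if amount <= 0 or amount > source.balance:
        return False  # transfer refused
    source.balance -= amount
    target.balance += amount
    return True

# Positive test: Rs 1,500 out of Rs 15,000 should succeed.
src, dst = Account(15_000), Account(0)
assert transfer(src, dst, 1_500) is True
assert src.balance == 13_500 and dst.balance == 1_500

# Negative test: Rs 20,000 out of Rs 15,000 should be refused,
# and neither balance should change.
src, dst = Account(15_000), Account(0)
assert transfer(src, dst, 20_000) is False
assert src.balance == 15_000 and dst.balance == 0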

Once all the components or modules are working independently, checking the data flow
between the dependent modules is known as integration testing.

SYSTEM TESTING

System testing is a type of software testing that evaluates the overall functionality and
performance of a complete and fully integrated software solution. It tests if the system
meets the specified requirements and if it is suitable for delivery to the end-users.

This type of testing is performed after the integration testing and before the acceptance
testing.

System Testing Process: System Testing is performed in the following steps:

 Test Environment Setup: Create a testing environment for better quality testing.
 Create Test Cases: Generate test cases for the testing process.
 Create Test Data: Generate the data that is to be tested.
 Execute Test Cases: After the generation of the test cases and the test data, the test
cases are executed.
 Defect Reporting: Defects in the system are detected and reported.
 Regression Testing: It is carried out to test for side effects of the fixes made during
the testing process.
 Log Defects: The detected defects are logged and fixed in this step.
 Retest: If a test is not successful, the test is performed again.

Types of System Testing:

 Performance Testing: Performance testing is a type of software testing that is
carried out to test the speed, scalability, stability, and reliability of the software
product or application.
 Load Testing: Load Testing is a type of software Testing which is carried out to
determine the behavior of a system or software product under extreme load.
 Stress Testing: Stress Testing is a type of software testing performed to check the
robustness of the system under the varying loads.
 Scalability Testing: Scalability testing is a type of software testing which is carried
out to check the performance of a software application or system in terms of its
capability to scale the user request load up or down.
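As a minimal, hedged sketch of the load-testing idea using only the Python standard library (the URL and the user/request counts are invented; real load tests would use dedicated tools such as JMeter or LoadRunner, listed below):

# Illustrative sketch: a tiny load test that fires concurrent requests at a
# hypothetical endpoint and reports response times.
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

URL = "http://localhost:8000/health"  # hypothetical endpoint
CONCURRENT_USERS = 20
REQUESTS_PER_USER = 5

def one_user(_) -> list[float]:
    timings = []
    for _ in range(REQUESTS_PER_USER):
        start = time.perf_counter()
        with urlopen(URL, timeout=5) as resp:
            resp.read()
        timings.append(time.perf_counter() - start)
    return timings

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
        all_timings = [t for user in pool.map(one_user, range(CONCURRENT_USERS))
                       for t in user]
    all_timings.sort()
    print(f"requests: {len(all_timings)}")
    print(f"median latency: {all_timings[len(all_timings) // 2] * 1000:.1f} ms")
    print(f"max latency:    {all_timings[-1] * 1000:.1f} ms")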
Here are a few common tools used for System Testing:
1. HP Quality Center/ALM
2. IBM Rational Quality Manager
3. Microsoft Test Manager
4. Selenium
5. Appium
6. LoadRunner
7. Gatling
8. JMeter
9. Apache JServ
10. SoapUI
Advantages of System Testing
 Verifies the overall functionality of the system.
 Detects and identifies system-level problems early in the development cycle.
 Helps to validate the requirements and ensure the system meets the user needs.
 Improves system reliability and quality.
 Facilitates collaboration and communication between development and testing
teams.
 Enhances the overall performance of the system.
 Increases user confidence and reduces risks.
 Facilitates early detection and resolution of bugs and defects.

Disadvantages of System Testing:

 This testing is a more time-consuming process than other testing techniques, since it
checks the entire product or software.
 The cost of the testing will be high, since it covers the testing of the entire software.
 It needs a good debugging tool; otherwise hidden errors will not be found.
