CS8494/SE TESTING AND IMPLEMENTATION – UNIT IV
Test Characteristics:
A good test has a high probability of finding an error
A good test is not redundant.
A good test should be “best of breed” (it should cover a broader class of errors)
A good test should be neither too simple nor too complex
Principles of Testing:
There are seven principles of testing. They are as follows:
Testing shows presence of defects:
- Testing can show the defects are present, but cannot prove that there
are no defects.
- Even after testing the application or product thoroughly we cannot say
that the product is 100% defect free.
- Testing always reduces the number of undiscovered defects remaining
in the software but even if no defects are found, it is not a proof of
correctness.
Exhaustive testing is impossible:
- Testing everything including all combinations of inputs and
preconditions is not possible.
- So, instead of doing the exhaustive testing we can use risks and
priorities to focus testing efforts.
Early testing: In the software development life cycle, testing activities should
start as early as possible and should be focused on defined objectives.
Defect clustering: A small number of modules contains most of the defects
discovered during pre-release testing or shows the most operational failures.
Pesticide paradox:
- If the same kinds of tests are repeated again and again, eventually the
same set of test cases will no longer be able to find any new bugs.
- To overcome this pesticide paradox, the test cases must be reviewed
regularly, and new and different tests need to be written to exercise
different parts of the software or system to potentially find more
defects.
Testing is context dependent: Testing is basically context dependent; different
kinds of software are tested differently. For example, safety-critical software is
tested differently from an e-commerce site.
Absence–of–errors fallacy:
- If the system built is unusable and does not fulfill the user’s needs and
expectations then finding and fixing defects does not help.
Verification vs. Validation:

Verification: Are we building the system right?
Validation: Are we building the right system?

Verification is the process of evaluating work products during the development
phase to find out whether they meet the specified requirements.
Validation is the process of evaluating software at the end of the development
phase to determine whether the software meets customer expectations and
requirements.

The objective of Verification is to make sure that the product being developed is
as per the requirements and design specifications.
The objective of Validation is to make sure that the product actually meets the
user’s requirements, and to check whether the specifications were correct in the
first place.

Activities involved in Verification: reviews, meetings and inspections.
Activities involved in Validation: testing such as black box testing, white box
testing, gray box testing, etc.

Verification is carried out by the QA team to check whether the software is
implemented as per the specification document or not.
Validation is carried out by the testing team.

Execution of code does not come under Verification.
Execution of code comes under Validation.

The Verification process explains whether the outputs are according to the
inputs or not.
The Validation process describes whether the software is accepted by the user
or not.

Verification is carried out before Validation.
Validation is carried out just after Verification.

Items evaluated during Verification: plans, requirement specifications, design
specifications, code, test cases, etc.
Item evaluated during Validation: the actual product or software under test.

The cost of errors caught in Verification is less than that of errors found in
Validation.
The cost of errors caught in Validation is more than that of errors found in
Verification.

Verification is basically manual checking of documents and files such as
requirement specifications.
Validation is basically checking of the developed program based on the
requirement specification documents and files.
Solution:
STEP 1: Draw a control graph (flow graph) to determine the different program
paths.
[Fig: Flow graph for the above program]
i) Simple Loop
The following set of tests can be applied to simple loops, where n is the
maximum number of allowable passes through the loop (a short sketch follows
the list):
1. Skip the loop entirely.
2. Only one pass through the loop.
3. Two passes through the loop.
4. m passes through the loop where m < n.
5. n - 1, n, n + 1 passes through the loop.
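For illustration, here is a minimal Python sketch (not from the text) that drives a hypothetical simple loop with each of the recommended pass counts; n = 10 and m = 5 are assumptions:

def run_loop(passes):
    total = 0
    for i in range(passes):   # the simple loop under test
        total += i
    return total

n = 10   # assumed maximum number of allowable passes
m = 5    # some m with m < n

for p in [0, 1, 2, m, n - 1, n, n + 1]:
    print(f"{p:2d} passes -> result {run_loop(p)}")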
ii) Nested Loop (a loop inside another loop)
The number of possible tests would grow geometrically as the level of nesting
increases. To help reduce the number of tests, follow these steps (a sketch
follows the list):
1. Start at the innermost loop. Set all other loops to minimum values.
2. Conduct simple loop tests for the innermost loop while holding the outer
loops at their minimum iteration parameter (e.g., loop counter) values.
3. Work outward, conducting tests for the next loop, but keeping all other
outer loops at minimum values and other nested loops to “typical” values.
4. Continue until all loops have been tested.
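A hedged sketch of this procedure for a hypothetical two-level nested loop (all values are illustrative assumptions):

def nested(outer_passes, inner_passes):
    count = 0
    for _ in range(outer_passes):
        for _ in range(inner_passes):
            count += 1
    return count

n = 10                                    # assumed maximum passes per loop
simple_loop_values = [0, 1, 2, 5, n - 1, n, n + 1]

# Steps 1-2: vary the innermost loop while the outer loop is at its minimum.
for inner in simple_loop_values:
    assert nested(1, inner) == inner

# Step 3: vary the next loop outward, holding the inner loop at a "typical" value.
typical_inner = 5
for outer in simple_loop_values:
    assert nested(outer, typical_inner) == outer * typical_inner

print("all nested-loop tests passed")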
B) Equivalence partitioning
Dividing the test input data into a range of values and selecting one input
value from each range is called Equivalence Partitioning.
This method is typically used to reduce the total number of test cases to a
finite set of testable test cases.
Simply, it is the process of taking all possible test cases and placing
them into classes. One test value is picked from each class while
testing.
Typically, an input condition is either a specific numeric value, a range of
values, a set of related values, or a Boolean condition.
Test-case design for equivalence partitioning is based on an evaluation of
equivalence classes for an input condition.
Example: Consider an input field that accepts values from 18 to 56 (say, an age field).
Valid Input: 18 – 56
Invalid Input: less than or equal to 17 (<=17), greater than or equal to 57 (>=57)
Valid Class: 18 – 56 = Pick any one input test data from 18 – 56
Invalid Class 1: <=17 = Pick any one input test data less than or equal to 17
Invalid Class 2: >=57 = Pick any one input test data greater than or equal to 57
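A minimal sketch, assuming a hypothetical validator is_valid_age for the 18–56 field above, that picks one representative value from each class:

def is_valid_age(age):
    # Hypothetical validator for the 18-56 example above.
    return 18 <= age <= 56

# One representative per equivalence class.
representatives = {
    "valid (18-56)":   30,
    "invalid (<= 17)": 10,
    "invalid (>= 57)": 70,
}

for label, value in representatives.items():
    print(f"{label}: input {value} -> valid={is_valid_age(value)}")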
C) Boundary value Analysis
Boundary value analysis is another black box test design technique, used to
find errors at the boundaries of the input domain rather than in the center
of the input.
Boundary testing is the process of testing between extreme ends or
boundaries between partitions of the input values.
A greater number of errors occurs at the boundaries of the input domain
rather than in the “center.”
So these extreme ends like Start- End, Lower- Upper, Maximum-Minimum,
Just Inside-Just Outside values are called boundary values and the testing is
called "boundary testing".
Rather than focusing solely on input conditions, BVA derives test cases from
the output domain as well.
The basic idea in boundary value testing is to select input variable values at
their:
Minimum
Just above the minimum
A nominal value
Just below the maximum
Maximum
Example:
Minimum boundary value is: 18
Maximum boundary value is: 56
Valid Inputs: 18,19,55,56
Invalid Inputs: 17 and 57
Test case   Input                       Output
1           Enter the value 17 (18-1)   Invalid
2           Enter the value 18          Valid
3           Enter the value 19 (18+1)   Valid
4           Enter the value 55 (56-1)   Valid
5           Enter the value 56          Valid
6           Enter the value 57          Invalid
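These six test cases can be generated mechanically. A sketch, reusing the hypothetical is_valid_age validator from the equivalence-partitioning example:

def is_valid_age(age):
    return 18 <= age <= 56   # hypothetical validator for the 18-56 field

minimum, maximum = 18, 56
boundary_inputs = [minimum - 1, minimum, minimum + 1,
                   maximum - 1, maximum, maximum + 1]

for value in boundary_inputs:
    expected = minimum <= value <= maximum
    assert is_valid_age(value) == expected
    print(f"Input {value}: {'Valid' if expected else 'Invalid'}")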
Note: Equivalence Partitioning and Boundary value analysis are linked to each
other and can be used together at all levels of testing.
Performance Testing:
- Performance testing is a non-functional testing technique performed to
determine system parameters in terms of responsiveness and stability
under various workloads.
- Performance testing measures the quality attributes of the system,
such as scalability, reliability and resource usage.
Performance Testing Techniques:
Load testing - It is the simplest form of testing, conducted to understand
the behavior of the system under a specific load.
Load testing measures important business-critical transactions, and the
load on the database, application server, etc. is also monitored.
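As a rough illustration only (not a substitute for a real load-testing tool), the sketch below fires a fixed number of concurrent requests at an assumed endpoint and reports the average response time; the URL and user count are placeholders:

import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

URL = "http://localhost:8080/"   # placeholder endpoint (an assumption)
CONCURRENT_USERS = 20            # simulated concurrent load (an assumption)

def one_request(_):
    start = time.perf_counter()
    with urlopen(URL) as response:   # one business transaction
        response.read()
    return time.perf_counter() - start

with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
    latencies = list(pool.map(one_request, range(CONCURRENT_USERS)))

print(f"average response time: {sum(latencies) / len(latencies):.3f}s")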
Soak testing - During soak tests, parameters such as memory utilization
are monitored to detect memory leaks or other performance issues. The
main aim is to discover the system's performance under sustained use.
Spike testing - Spike testing is performed by increasing the number of
users suddenly by a very large amount and measuring the performance of
the system. The main aim is to determine whether the system will be able
to sustain the workload.
Stress testing - It is performed to find the upper limit capacity of the
system and also to determine how the system performs if the current load
goes well above the expected maximum.
Attributes of Performance Testing:
Speed
Scalability
Stability
Reliability
Security Testing:
- It ensures that software systems and applications are free from
vulnerabilities, threats and risks that may cause a big loss.
- The goal of security testing is to identify the threats in the system and
measure its potential vulnerabilities.
- It also helps in detecting all possible security risks in the system and
helps developers fix these problems through coding.
- It is necessary to involve security testing in the earlier phases of the
SDLC.
4.3.5 Debugging
Debugging is the process of fixing a bug in the software. In other words, it
refers to identifying, analyzing and removing errors.
This activity begins after the software fails to execute properly and concludes by
solving the problem and successfully testing the software.
It is considered to be an extremely complex and tedious task because errors
need to be resolved at all stages of debugging.
Debugging occurs as a consequence of successful testing. That is, when a test
case uncovers an error, debugging is the process that results in the removal of
the error.
Although debugging can and should be an orderly process, it is still very much
an art. A software engineer, evaluating the results of a test, is often confronted
with a "symptomatic" indication of a software problem.
The Debugging Process
Steps involved in debugging are:
Problem identification and report preparation.
Assigning the report to a software engineer to verify that the defect is
genuine.
Defect Analysis using modeling, documentation, finding and testing
candidate flaws, etc.
Defect Resolution by making required changes to the system.
Validation of corrections.
o Debugging is not testing but always occurs as a consequence of testing. The
debugging process begins with the execution of a test case.
o Results are assessed and a lack of correspondence between expected and actual
performance is encountered.
o The debugging process will always have one of two outcomes:
The cause will be found and corrected.
The cause will not be found.
Characteristics of bugs:
The symptom may disappear (temporarily) when another error is corrected.
The symptom may actually be caused by non-errors (e.g., round-off
inaccuracies).
The symptom may be caused by human error that is not easily traced.
The symptom may be a result of timing problems, rather than processing
problems.
It may be difficult to accurately reproduce input conditions.
Debugging approaches:
Brute force
Backtracking
Cause elimination.
(i) Brute force:
o The brute force category of debugging is probably the most common and least
efficient method for isolating the cause of a software error.
o We apply brute force debugging methods when all else fails.
o Using a "let the computer find the error" philosophy, memory dumps are
taken, run-time traces are invoked, and the program is loaded with WRITE
statements.
o Although the mass of information produced may ultimately lead to success, it
more frequently leads to wasted effort and time. Thought must be expended
first!
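The "loaded with WRITE statements" idea corresponds, in modern terms, to sprinkling print or log statements through suspect code. A minimal sketch with a deliberately buggy, invented function:

import logging

logging.basicConfig(level=logging.DEBUG)

def average(values):
    total = 0
    for v in values:
        total += v
        logging.debug("running total=%s after value=%s", total, v)
    result = total / 10          # deliberate bug: divisor should be len(values)
    logging.debug("returning %s for %d values", result, len(values))
    return result

average([1, 2, 3])   # the trace makes the wrong divisor visible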
(ii) Backtracking:
o Backtracking is a fairly common debugging approach that can be used
successfully in small programs.
o Beginning at the site where a symptom has been uncovered, the source code is
traced backward (manually) until the site of the cause is found.
o Unfortunately, as the number of source lines increases, the number of potential
backward paths may become unmanageably large.
(iii) Cause elimination:
o The third approach to debugging, cause elimination, is manifested by induction
or deduction and introduces the concept of binary partitioning.
o Data related to the error occurrence are organized to isolate potential causes.
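Binary partitioning can be sketched as bisection over candidate causes, for example over a list of cumulative changes; the is_broken check and the data below are invented for illustration (the idea is analogous to git bisect):

def find_first_bad(changes, is_broken):
    """Bisect a list of cumulative changes to isolate the one that
    introduced a failure."""
    lo, hi = 0, len(changes) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if is_broken(changes[mid]):
            hi = mid           # cause is at mid or earlier
        else:
            lo = mid + 1       # cause is after mid
    return changes[lo]

# Hypothetical example: the failure appears from change 6 onward.
changes = list(range(10))
print(find_first_bad(changes, lambda c: c >= 6))   # -> 6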
Some of the widely used debuggers are:
Radare2
WinDbg
Valgrind
Business Process Reengineering (BPR):
Each business system (also called a business function) is composed of one or
more business processes, and each business process is defined by a set of
subprocesses.
Business process reengineering (BPR) is an approach to change management in
which the related tasks required to obtain a specific business outcome are
radically redesigned.
4.5.1 A BPR Model:
Business process reengineering is an iterative process.
Business goals and the processes that achieve them must be adapted to a
changing business environment.
For this reason, there is no start and end to BPR—it is an evolutionary
process. A model for business process reengineering is depicted in Figure
below.
Fig: A BPR Model
The model defines six activities:
Business definition. Business goals are identified within the context of four
key drivers: cost reduction, time reduction, quality improvement, and
personnel development and empowerment. Goals may be defined at the
business level or for a specific component of the business.
Process identification. Processes that are critical to achieving the goals
defined in the business definition are identified. They may then be ranked by
importance, by need for change, or in any other way that is appropriate for the
reengineering activity.
Process evaluation. The existing process is thoroughly analyzed and
measured. Process tasks are identified; the costs and time consumed by
process tasks are noted; and quality/performance problems are isolated.
Process specification and design. Based on information obtained during the
first three BPR activities, use cases (Chapters 5 and 6) are prepared for each
process that is to be redesigned. Within the context of BPR, use cases identify a
scenario that delivers some outcome to a customer. With the use case as the
specification of the process, a new set of tasks is designed for the process.
Prototyping. A redesigned business process must be prototyped before it is
fully integrated into the business. This activity “tests” the process so that
refinements can be made.
Refinement and instantiation. Based on feedback from the prototype, the
business process is refined and then instantiated within a business system.
4.5.2 Reengineering Process Model:
These activities sometimes occur in a linear sequence, but this is not always
the case; the reengineering paradigm shown below is a cyclical model.
Fig: A software reengineering process model
Inventory analysis:
Every software organization should have an inventory of all applications.
The inventory can be nothing more than a spreadsheet model containing
information that provides a detailed description (e.g., size, age, business
criticality) of every active application.
It is important to note that the inventory should be revisited on a regular cycle.
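Such an inventory could be as simple as one record per application. A sketch of the kind of information involved (field names and data are illustrative assumptions):

from dataclasses import dataclass

@dataclass
class ApplicationRecord:
    name: str
    size_kloc: int             # size in thousands of lines of code
    age_years: int
    business_criticality: str  # e.g., "high", "medium", "low"

inventory = [
    ApplicationRecord("payroll", 120, 25, "high"),
    ApplicationRecord("intranet-wiki", 15, 5, "low"),
]

# Candidates for reengineering: old, large, business-critical systems.
candidates = [a for a in inventory
              if a.age_years > 15 and a.business_criticality == "high"]
print([a.name for a in candidates])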
Document restructuring:
The documentation situation of an existing system usually falls into one of
the following cases:
- Creating documentation is far too time consuming.
- Documentation must be updated, but the organization has limited resources.
- The system is business critical and must be fully redocumented.
Any software organization must choose the approach that is most appropriate
for each case.
Reverse Engineering:
In reverse engineering, information is collected from a given or existing
application.
It takes less time than forward engineering to develop an application.
In reverse engineering, the application is broken down to extract knowledge
about its architecture.
Reverse engineering for software is the process of analyzing a program in
an effort to create a representation of the program at a higher level of
abstraction than source code.
The term reverse engineering has its origins in the hardware world. A
company disassembles a competitive hardware product in an effort to
understand its competitor’s design and manufacturing “secrets.”
Reverse engineering for software is quite similar. In most cases, however,
the program to be reverse engineered is not a competitor’s. Rather, it is the
company’s own work (often done many years earlier).
Reverse engineering is a process of design recovery.
Reverse engineering tools extract data, architectural, and procedural design
information from an existing program.
Code Restructuring:
The most common type of reengineering is code restructuring.
Here, the source code is analyzed using a restructuring tool. Violations of
structured programming constructs are noted, and the code is then restructured
(this can be done automatically) or even rewritten in a more modern
programming language.
The resultant restructured code is reviewed and tested to ensure that no
anomalies have been introduced. Internal code documentation is updated.
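As a tiny illustration of what a restructuring pass does, consider flag-controlled search code rewritten into a structured equivalent; both versions are invented examples, not the output of any specific tool:

# Before: unstructured flag-and-while logic, as often found in legacy code.
def find_negative_unstructured(values):
    found = False
    i = 0
    result = None
    while not found and i < len(values):
        if values[i] < 0:
            found = True
            result = i
        i = i + 1
    return result

# After restructuring: the same behavior expressed with a structured loop.
def find_negative_restructured(values):
    for i, v in enumerate(values):
        if v < 0:
            return i
    return None

data = [3, -1, 4]
assert find_negative_unstructured(data) == find_negative_restructured(data) == 1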
Data Restructuring:
Data restructuring is a full-scale reengineering activity.
In most cases, data restructuring begins with a reverse engineering activity.
Current data architecture is dissected, and necessary data models are
defined.
Data objects and attributes are identified, and existing data structures are
reviewed for quality.
Forward engineering:
Forward Engineering is a method of creating or making an application with
the help of the given requirements.
Forward engineering is also known as Renovation and Reclamation.
Forward engineering requires highly proficient skills.
It takes more time to construct or develop an application than reverse
engineering does.
4.5.3 Reverse Engineering
Reverse engineering can extract design information from source code, but the
abstraction level, the completeness of the documentation, the degree to which
tools and a human analyst work together, and the directionality of the process
are highly variable. The core of reverse engineering is an activity called extract
abstractions.
The abstraction level:
The abstraction level of a reverse engineering process and the tools used to
effect it refers to the sophistication of the design information that can be
extracted from source code.
The abstraction level should be as high as possible. That is, the reverse
engineering process should be capable of deriving procedural design
representations (a low-level abstraction), program and data structure
information (a somewhat higher level of abstraction), object models, data
and/or control flow models (a relatively high level of abstraction), and
entity relationship models (a high level of abstraction).
The completeness level:
The completeness of a reverse engineering process refers to the level of detail
that is provided at an abstraction level. In most cases, the completeness
decreases as the abstraction level increases.
For example, given a source code listing, it is relatively easy to develop a
complete procedural design representation.
Reverse Engineering to Understand Data:
o Reverse engineering of data occurs at different levels of abstraction and is often
the first reengineering task.
o At the program level, internal program data structures must often be reverse
engineered as part of an overall reengineering effort.
o At the system level, global data structures (e.g., files, databases) are often
reengineered to accommodate new database management paradigms (e.g., the
move from flat file to relational or object-oriented database systems).
Internal data structures:
o Reverse engineering techniques for internal program data focus on the definition
of classes of objects. This is accomplished by examining the program code with
the intent of grouping related program variables. In many cases, the data
organization within the code identifies abstract data types.
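For example, related parallel variables scattered through a program often imply an abstract data type. A sketch of the grouping idea (all names are invented):

# Before: related program variables kept separately, as often found in
# legacy code ...
customer_names = ["Ada", "Grace"]
customer_limits = [500, 900]
customer_balances = [120, 350]

# ... reverse engineering groups them into a class (an abstract data type).
from dataclasses import dataclass

@dataclass
class Customer:
    name: str
    credit_limit: int
    balance: int

customers = [Customer(n, l, b) for n, l, b in
             zip(customer_names, customer_limits, customer_balances)]
print(customers[0])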
Database structure:
o Regardless of its logical organization and physical structure, a database allows
the definition of data objects and supports some method for establishing
relationships among the objects.
o Reengineering one database schema into another requires an understanding of
existing objects and their relationships. The following steps may be used to
define the existing data model as a precursor to reengineering a new database
model:
(1) build an initial object model,
(2) determine candidate keys (the attributes are examined to determine
whether they are used to point to another record or table; those that serve
as pointers become candidate keys),
(3) refine the tentative classes,
(4) define generalizations, and
(5) discover associations using techniques that are analogous to the CRC
approach.
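Step 2 (determining candidate keys) can be sketched as scanning attribute values to see which ones consistently point into another table; the toy schema and data below are invented for illustration:

# Toy schema: each record is a dict; attributes whose values all appear as
# primary keys of another table are treated as candidate (foreign) keys.
orders = [
    {"order_id": 1, "cust_ref": "C01", "amount": 250},
    {"order_id": 2, "cust_ref": "C02", "amount": 100},
]
customer_pks = {"C01", "C02", "C03"}

candidate_keys = [
    attr for attr in orders[0]
    if all(rec[attr] in customer_pks for rec in orders)
]
print(candidate_keys)   # -> ['cust_ref']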
Reverse Engineering to Understand Processing:
o Reverse engineering to understand processing begins with an attempt to
understand and then extract procedural abstractions represented by the
source code. To understand procedural abstractions, the code is analyzed at
varying levels of abstraction: system, program, component, pattern, and
statement.
o The overall functionality of the entire application system must be
understood before more detailed reverse engineering work occurs. This
establishes a context for further analysis and provides insight into
interoperability issues among applications within the system.
o A block diagram, representing the interaction between these functional
abstractions, is created. Each component performs some subfunction and
represents a defined procedural abstraction.
o A processing narrative for each component is developed. In some situations,
system, program, and component specifications already exist. When this is
the case, the specifications are reviewed for conformance to existing code.
Reverse Engineering to Understand User Interface
o The redevelopment of user interfaces has become one of the most common
types of reengineering activity. But before a user interface can be rebuilt,
reverse engineering should occur. To fully understand an existing user
interface, the structure and behavior of the interface must be specified.
o Merlo and his colleagues suggest three basic questions that must be
answered as reverse engineering of the UI commences:
What are the basic actions (e.g., keystrokes and mouse clicks) that the
interface must process?
What is a compact description of the behavioral response of the system to
these actions?
What is meant by a “replacement,” or more precisely, what concept of
equivalence of interfaces is relevant here?
It is important to note that a replacement GUI may not mirror the old interface
exactly (in fact, it may be radically different). It is often worthwhile to develop a
new interaction metaphor. For example, an old UI requests that a user provide a
scale factor (ranging from 1 to 10) to shrink or magnify a graphical image. A
reengineered GUI might use a slide-bar and mouse to accomplish the same
function.
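A minimal sketch of such a reengineered control, using Python's standard tkinter Scale widget (purely illustrative; the callback is a placeholder):

import tkinter as tk

def on_scale(value):
    # In a real GUI this callback would shrink or magnify the image.
    print(f"scale factor: {value}")

root = tk.Tk()
root.title("Scale factor")
slider = tk.Scale(root, from_=1, to=10, orient="horizontal",
                  label="Zoom (1-10)", command=on_scale)
slider.pack(padx=20, pady=10)
root.mainloop()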
Forward Engineering:
o The forward engineering process applies software engineering principles,
concepts, and methods to re-create an existing application.
o In most cases, forward engineering does not simply create a modern
equivalent of an older program.
New user and technology requirements are integrated into the reengineering
effort. The redeveloped program extends the capabilities of the older application.
Forward Engineering for Client-Server Architectures:
Many mainframe applications have been reengineered to accommodate client-
server architectures (including web apps). In essence, centralized computing
resources (including software) are distributed among many client platforms.
Although a variety of different distributed environments can be designed, the
typical mainframe application that is reengineered into a client-server
architecture has the following features:
Application functionality migrates to each client computer.
New GUI interfaces are implemented at the client sites.
Database functions are allocated to the server.
Specialized functionality (e.g., compute-intensive analysis) may remain at
the server site.
New communications, security, archiving, and control requirements must
be established at both the client and server sites.
Three layers of abstraction can be identified.
Database foundation layer:
o The functions of the existing database management system and the data
architecture of the existing database must be reverse engineered as a
precursor to the redesign of the database foundation layer.
o The client-server database is reengineered to ensure that transactions are
executed in a consistent manner, that all updates are performed only by
authorized users, that core business rules are enforced, that queries can be
accommodated efficiently, and that full archiving capability has been
established.
The business rules layer:
The business rules layer represents software resident at both the client
and the server. This software performs control and coordination tasks to
ensure that transactions and queries between the client application and
the database conform to the established business process.
The client applications layer:
The client applications layer implements business functions that are
required by specific groups of end users.
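A compressed sketch of the three layers as plain Python functions; all names and rules are invented, and a real system would distribute these across client and server machines:

# Database foundation layer: storage and retrieval only.
_accounts = {"A1": 100.0}

def db_get_balance(account_id):
    return _accounts[account_id]

def db_set_balance(account_id, amount):
    _accounts[account_id] = amount

# Business rules layer: enforces policy on every transaction.
def withdraw(account_id, amount):
    balance = db_get_balance(account_id)
    if amount <= 0 or amount > balance:      # core business rule
        raise ValueError("withdrawal not permitted")
    db_set_balance(account_id, balance - amount)

# Client applications layer: end-user-facing function.
def client_withdraw(account_id, amount):
    try:
        withdraw(account_id, amount)
        print("withdrawal complete")
    except ValueError as err:
        print(f"rejected: {err}")

client_withdraw("A1", 30)    # withdrawal complete
client_withdraw("A1", 500)   # rejected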
Forward Engineering for Object-Oriented Architectures:
o Object-oriented software engineering has become the development
paradigm of choice for many software organizations.
o The existing software is reverse engineered so that appropriate data,
functional, and behavioral models can be created. If the reengineered
system extends the functionality or behavior of the original application,
use cases are created. The data models created during reverse engineering
are then used in conjunction with CRC modeling to establish the basis for
the definition of classes.
o Class hierarchies, object-relationship models, object-behavior models, and
subsystems are defined, and object-oriented design commences.
*************