August 2010 Bachelor of Science in Information Technology (BScIT) - Semester 6 BT0056 - Software Testing and Quality Assurance - 2 Credits
Software testing is a critical element of software quality assurance and represents the
ultimate process for ensuring the correctness of the product. A quality product
enhances customer confidence in using the product and thereby improves the
business economics. In other words, a good quality product means zero defects, which
is derived from a better quality process in testing.
The definition of testing is not well understood. People often use a totally incorrect
definition of the word testing, and this is the primary cause of poor program testing.
Examples of these definitions are statements such as “Testing is the process of
demonstrating that errors are not present”, “The purpose of testing is to show that a
program performs its intended functions correctly”, and “Testing is the process of
establishing confidence that a program does what it is supposed to do”.
Testing the product means adding value to it, which means raising the quality or
reliability of the program. Raising the reliability of the product means finding and
removing errors. Hence one should not test a product to show that it works; rather,
one should start with the assumption that the program contains errors and then test
the program to find as many errors as possible. Thus a more appropriate definition is:
Testing is the process of executing a program with the intent of finding errors.
Purpose of Testing
• To show the software works: this is known as demonstration-oriented testing
• To show the software doesn’t work: this is known as destruction-oriented testing
• To minimize the risk of the software not working up to an acceptable level: this is
known as evaluation-oriented testing
Defect Distribution
In a typical project life cycle, testing is a late activity. When the product is tested,
the defects found may be due to many reasons: they may be programming errors,
defects in design, or defects introduced at any other stage in the life cycle. The
overall defect distribution is shown in fig 1.1.
1. Cyclomatic complexity
2. Cohesion
3. Number of function points
4. Lines of code
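As an illustration of the first metric above, here is a minimal sketch (the function and its name are hypothetical, not from the text) of cyclomatic complexity computed as V(G) = P + 1, where P is the number of predicate (decision) nodes:

```python
# Hypothetical example: cyclomatic complexity as decision points + 1.

def classify(score):
    # two predicate nodes (the if and the elif) -> V(G) = 2 + 1 = 3
    if score >= 90:
        return "A"
    elif score >= 60:
        return "B"
    else:
        return "C"

def cyclomatic_complexity(predicate_nodes):
    # V(G) = P + 1, where P is the number of decision points
    return predicate_nodes + 1

assert cyclomatic_complexity(2) == 3
```

A basis path test suite for classify would need at least V(G) = 3 independent paths, one per possible branch outcome.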
• Quality of design
• Quality of conformance
QUALITY OF DESIGN
Quality of design refers to the characteristics that designers specify for an item.
The grade of materials, tolerances, and performance specifications all contribute to
quality of design. As higher-grade materials are used and tighter tolerances and
greater levels of performance are specified, the design quality of a product
increases, if the product is manufactured according to specifications.
QUALITY OF CONFORMANCE
Quality of conformance is the degree to which the design specifications are
followed during manufacturing. Again, the greater the degree of conformance, the
higher the level of quality of conformance. In software development, quality of
design encompasses requirements, specifications and design of the system.
Quality of conformance is an issue focused primarily on implementation. If the
implementation follows the design and the resulting system meets its
requirements and performance goals, conformance quality is high.
Q3. Explain unit test method with the help of your own example.
Unit testing focuses verification efforts on the smallest unit of software design: the
module. Using the procedural design description as a guide, important control paths
are tested to uncover errors within the boundary of the module. The relative
complexity of tests and the errors they uncover are limited by the constrained scope
established for unit testing. The unit test is normally white-box oriented, and the
step can be conducted in parallel for multiple modules.
The local data structure for a module is a common source of errors. Test
cases should be designed to uncover errors in the following categories:
1. Improper or inconsistent typing
2. Erroneous initialization or default values
3. Incorrect variable names
4. Inconsistent data types
5. Underflow, overflow, and addressing exceptions
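Since the question asks for an example, here is a hypothetical unit test sketch for a single module function, aimed at the error categories above (the function and its cases are illustrative, not from the text):

```python
# Hypothetical module under test: a local running total and count.

def average(values):
    total = 0.0          # explicit initialization (category 2)
    count = 0
    for v in values:
        total += float(v)   # consistent typing (categories 1 and 4)
        count += 1
    if count == 0:
        return 0.0          # guard: avoid a division-by-zero exception
    return total / count

# unit tests exercising the module in isolation
assert average([]) == 0.0                         # default-value path
assert average([2, 4]) == 3.0                     # int input, float result
assert abs(average([0.1, 0.2]) - 0.15) < 1e-9     # floating-point tolerance
```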
In addition to local data structures, the impact of global data on a module should
be ascertained during unit testing. Selective testing of execution paths is an
essential task during the unit test. Test cases should be designed to uncover errors
due to erroneous computations, incorrect comparisons, and improper control flow.
Basis path and loop testing are effective techniques for uncovering a broad array
of path errors.
Boundary testing is the last task of the unit test step. Software often fails at its
boundaries. That is, errors often occur when the nth element of an n-dimensional
array is processed, when the ith repetition of a loop with i passes is invoked, or
when the maximum or minimum allowable value is encountered. Test cases that
exercise data structures, control flow, and data values just below, at, and just above
maxima and minima are very likely to uncover errors.
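A minimal boundary-value sketch (the range check is hypothetical): test cases sit just below, at, and just above each boundary:

```python
def in_range(x, lo, hi):
    # True when lo <= x <= hi
    return lo <= x <= hi

# just below, at, and just above the minimum and maximum
assert in_range(0, 1, 10) is False   # just below minimum
assert in_range(1, 1, 10) is True    # at minimum
assert in_range(10, 1, 10) is True   # at maximum
assert in_range(11, 1, 10) is False  # just above maximum
```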
Incremental integration is the antithesis of the big bang approach. The program
is constructed and tested in small segments, where errors are easier to isolate and
correct; interfaces are more likely to be tested completely; and a systematic test
approach may be applied. Some incremental methods are discussed here:
1. Delay many tests until stubs are replaced with actual modules.
2. Develop stubs that perform limited functions that simulate the actual module.
3. Integrate the software from the bottom of the hierarchy upward.
The first approach causes us to lose some control over the correspondence between
specific tests and the incorporation of specific modules. This can lead to difficulty
in determining the cause of errors and tends to violate the highly constrained
nature of the top-down approach. The second approach is workable but can lead to
significant overhead, as stubs become increasingly complex. The third approach is
discussed in the next section.
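The second approach, stubs that simulate the actual module, can be sketched as follows (the tax-lookup names are hypothetical):

```python
# Stub simulating a lower-level module with canned values, so the
# top-level module can be tested before the real module exists.

def tax_rate_stub(region):
    return {"EU": 0.20, "US": 0.08}.get(region, 0.0)

def gross_price(net, region, rate_lookup=tax_rate_stub):
    # top-level module under test; the lower-level dependency is injected
    return net * (1 + rate_lookup(region))

assert abs(gross_price(100, "EU") - 120.0) < 1e-9
assert abs(gross_price(100, "XX") - 100.0) < 1e-9
```

When the real tax-lookup module is ready, it replaces tax_rate_stub without changing gross_price, which is where the stub overhead mentioned above accumulates in larger systems.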
1. Low-level modules are combined into clusters that perform a specific software
subfunction.
2. A driver is written to coordinate test case input and output.
3. The cluster is tested.
4. Drivers are removed and clusters are combined moving upward in the program
structure.
As integration moves upward, the need for separate test drivers lessens. In fact, if
the top two levels of program structure are integrated top-down, the number of
drivers can be reduced substantially and integration of clusters is greatly
simplified.
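Steps 2 and 3 above can be sketched with a hypothetical driver that feeds test input to a low-level cluster and checks its output:

```python
def parse_record(line):          # low-level module 1
    return line.strip().split(",")

def validate_record(fields):     # low-level module 2
    return len(fields) == 2 and fields[1].isdigit()

def cluster_driver(lines):
    # the driver coordinates test case input and output for the cluster
    return [validate_record(parse_record(ln)) for ln in lines]

assert cluster_driver(["a,1", "b,x", "c"]) == [True, False, False]
```

Once the higher-level modules exist, cluster_driver is removed and the cluster is combined upward in the program structure.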
Regression Testing
Each time a new module is added as a part of integration testing, the software
changes.
New data flow paths are established, new I/O may occur, and new control logic is
invoked. These changes may cause problems with functions that previously
worked flawlessly. In the context of an integration test strategy, regression testing
is the re-execution of a subset of tests that have already been conducted to ensure
that changes have not propagated unintended side effects. Regression testing is
the activity that helps to ensure that changes do not introduce unintended
behavior or additional errors.
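A minimal sketch of re-executing a subset of already-passing tests after a change (the discount function and its suite are hypothetical):

```python
def discount(price, rate):
    return round(price * (1 - rate), 2)

# regression suite: cases that passed before the latest change
regression_suite = [
    (100.0, 0.10, 90.0),
    (50.0, 0.00, 50.0),
    (80.0, 0.25, 60.0),
]

def run_regression(fn, suite):
    # returns failing cases; an empty list means no unintended side effects
    return [(p, r) for p, r, expected in suite if fn(p, r) != expected]

assert run_regression(discount, regression_suite) == []
```

If a later change to discount breaks any of these previously passing cases, run_regression surfaces it immediately.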
Note:
It is impractical and inefficient to re-execute every test for every program function
once a change has occurred. Selection of an integration strategy depends upon
software characteristics and sometimes the project schedule. In general, a combined
approach that uses a top-down strategy for the upper levels of the program structure,
coupled with a bottom-up strategy for subordinate levels, may be the best compromise.
The following criteria and corresponding tests are applied for all test phases:
• Interface integrity. Internal and external interfaces are tested as each module is
incorporated into the structure.
• Functional validity. Tests designed to uncover functional errors are conducted.
• Information content. Tests designed to uncover errors associated with local or
global data structures are conducted.
• Performance. Tests designed to verify performance bounds established during
software design are conducted.
A schedule for integration, overhead software, and related topics are also
discussed as part of the “Test Plan” section. Start and end dates for each phase
are established, and availability windows for unit-tested modules are defined. A
brief description of overhead software (stubs and drivers) concentrates on
characteristics that might require special effort. Finally, test environments and
resources are described.
• quality model
• external metrics
• internal metrics
• quality in use metrics.
Quality Model
The quality model established in the first part of the standard, ISO/IEC 9126-1,
classifies software quality in a structured set of characteristics and sub-
characteristics as follows:
• Functionality - A set of attributes that bear on the existence of a set of
functions and their specified properties. The functions are those that satisfy
stated or implied needs.
• Suitability
• Accuracy
• Interoperability
• Security
• Functionality Compliance
• Reliability - A set of attributes that bear on the capability of software to
maintain its level of performance under stated conditions for a stated period
of time.
• Maturity
• Fault Tolerance
• Recoverability
• Reliability Compliance
• Usability - A set of attributes that bear on the effort needed for use, and on
the individual assessment of such use, by a stated or implied set of users.
• Understandability
• Learnability
• Operability
• Attractiveness
• Usability Compliance
• Efficiency - A set of attributes that bear on the relationship between the
level of performance of the software and the amount of resources used,
under stated conditions.
• Time Behaviour
• Resource Utilisation
• Efficiency Compliance
• Maintainability - A set of attributes that bear on the effort needed to make
specified modifications.
• Analyzability
• Changeability
• Stability
• Testability
• Maintainability Compliance
• Portability - A set of attributes that bear on the ability of software to be
transferred from one environment to another.
• Adaptability
• Installability
• Co-Existence
• Replaceability
• Portability Compliance
Each quality sub-characteristic (e.g. adaptability) is further divided into attributes.
An attribute is an entity which can be verified or measured in the software
product. Attributes are not defined in the standard, as they vary between different
software products.
The software product is defined in a broad sense: it encompasses executables, source
code, architecture descriptions, and so on. As a result, the notion of user extends
to operators as well as to programmers, who are users of components such as
software libraries.
The standard provides a framework for organizations to define a quality model for
a software product. In doing so, however, it leaves up to each organization the
task of specifying precisely its own model. This may be done, for example, by
specifying target values for quality metrics which evaluate the degree of
presence of quality attributes.
Internal Metrics
Internal metrics are those which do not rely on software execution (static
measures)
External Metrics
External metrics are applicable to running software.
Quality in Use Metrics
Quality in use metrics are only available when the final product is used in real
conditions.
Ideally, the internal quality determines the external quality and external quality
determines quality in use.
This standard stems from the model established in 1977 by McCall and his
colleagues, who proposed a model to specify software quality. The McCall quality
model is organized around three types of Quality Characteristics:
• Factors (To specify): They describe the external view of the software, as
viewed by the users.
• Criteria (To build): They describe the internal view of the software, as seen
by the developer.
• Metrics (To control): They are defined and used to provide a scale and
method for measurement.
ISO/IEC 9126 distinguishes between a defect and a nonconformity, a defect being
the nonfulfillment of intended usage requirements, whereas a nonconformity is
the nonfulfillment of specified requirements. A similar distinction is made between
validation and verification, known as V&V in the testing trade.
In this scenario, the software developer must ensure that the customer’s support
requirements are identified and must design and engineer the business and
technical infrastructure from which the product will be supported. This applies
equally to businesses producing software packages and to in-house
information systems departments. Support for software can be complex and may
include:
• User Documentation
• Packaging and distribution arrangements
• Implementation and customization services and consulting
• Product training
• Help Desk Assistance
• Error reporting and correction
• Enhancement
SQA Activities
SQA Plan is interpreted as shown in Fig 2.2
SQA comprises a variety of tasks associated with two different constituencies:
1. The software engineers who do technical work, by:
• Performing quality assurance by applying technical methods
• Conducting formal technical reviews
• Performing well-planned software testing
2. An SQA group that has responsibility for:
o Quality assurance planning oversight
o Record keeping
o Analysis and reporting
The QA activities performed by the SE team and the SQA group are governed by the
following plan:
o Evaluations to be performed
o Audits and reviews to be performed
o Standards that are applicable to the project
o Procedures for error reporting and tracking
o Documents to be produced by the SQA group
o Amount of feedback provided to the software project team
This testing technique takes into account the internal structure of the system or
component. The entire source code of the system must be available. This
technique is known as white box testing because the complete internal structure
and working of the code is available. White box testing helps to derive test cases
to ensure:
• Logic errors and incorrect assumptions are most likely to be made when coding
for “special cases”; we need to ensure these execution paths are tested.
• We may find that assumptions about execution paths were incorrect, leading to
design errors. White box testing can find these errors.
• Typographical errors are random, and are just as likely to be on an obscure
logical path as on a mainstream path.
• “Bugs lurk in corners and congregate at boundaries”
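As a sketch, white-box test cases for a small function can be chosen from its structure so that every branch outcome is exercised (the function is hypothetical):

```python
def absmax(a, b):
    # two decisions; branch coverage needs both outcomes of each
    if a < 0:
        a = -a
    if b < 0:
        b = -b
    return a if a > b else b

# tests derived from the code's structure, not only its specification
assert absmax(-3, 2) == 3    # first branch taken, second not
assert absmax(1, -5) == 5    # first not taken, second taken
assert absmax(2, 2) == 2     # equality boundary takes the else arm
```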
SCM concerns itself with answering the question "Somebody did something, how
can one reproduce it?" Often the problem involves not reproducing "it" identically,
but with controlled, incremental changes. Answering the question thus becomes a
matter of comparing different results and of analysing their differences. Traditional
configuration management typically focused on the controlled creation of relatively
simple products. Now, implementers of SCM face the challenge of dealing with
relatively minor increments under their own control, in the context of the complex
system being developed.