UNIT - IV TESTING AND MAINTENANCE
Software Testing Fundamentals
This section is dedicated to the basics of software testing. However, you need to first master the basics of the basics before you begin. We strongly recommend that you go through the following fundamental topics if you are just starting your journey into the world of software testing.
Software Quality: Learn how software quality is defined and what it means.
Software quality is the degree of conformance to explicit or implicit
requirements and expectations.
Dimensions of Quality: Learn the dimensions of quality.
Software quality has dimensions such as Accessibility, Compatibility, Concurrency, Efficiency …
Software Quality Assurance: Learn what it means and what its relationship is
with Software Quality Control.
Software Quality Assurance is a set of activities for ensuring quality in
software engineering processes.
Software Quality Control: Learn what it means and what its relationship is with
Software Quality Assurance.
Software Quality Control is a set of activities for ensuring quality in
software products.
SQA and SQC Differences: Learn the differences between Software Quality
Assurance and Software Quality Control.
SQA is process-focused and prevention-oriented, while SQC is product-focused and detection-oriented.
Software Development Life Cycle: Learn what SDLC means and what activities a typical SDLC model comprises.
Inferences are said to possess internal validity if a causal relation between two variables is
properly demonstrated. A causal inference may be based on a relation when three criteria are satisfied: the cause precedes the effect in time (temporal precedence), the cause and the effect covary, and there is no plausible alternative explanation for the observed covariation.
In many cases, however, the magnitude of effects found in the dependent variable may not depend solely on variations in the independent variable; other, confounding factors can also contribute.
In order to allow for inferences with a high degree of internal validity, precautions
may be taken during the design of the scientific study. As a rule of thumb,
conclusions based on correlations or associations may only allow for lesser degrees
of internal validity than conclusions drawn on the basis of direct manipulation of
the independent variable. And, when viewed only from the perspective of Internal
Validity, highly controlled true experimental designs (i.e. with random selection,
random assignment to either the control or experimental groups, reliable
instruments, reliable manipulation processes, and safeguards against confounding
factors) may be the "gold standard" of scientific research. By contrast, however,
the very strategies employed to control these factors may also limit the
generalizability or External Validity of the findings.
Branch Testing
Condition Testing
Data Flow Testing
Loop Testing
1. Branch Testing
Definition: "For a compound condition C, the true and false branches of C and every simple condition in C need to be executed at least once."
4. Loop Testing
There are four different classes of loops: simple, concatenated, nested, and unstructured.
Simple Loops
Create a set of tests that force the following situations (where n is the maximum number of allowable passes through the loop): skip the loop entirely; only one pass through the loop; two passes through the loop; m passes through the loop (m < n); and n-1, n, and n+1 passes through the loop. A sketch of such tests appears after this list.
Concatenated Loops
o If the loops are independent, use simple loop testing.
o If the loops are dependent, treat them as nested loops.
Unstructured Loops
Whenever possible, redesign this class of loops to reflect the use of structured programming constructs.
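As an illustrative sketch of the simple-loop situations listed above (the function sum_first_n and the test data are assumptions made for the example, not taken from the text):

```python
import unittest

def sum_first_n(values, n):
    """Sum the first n elements of values using a simple loop."""
    total = 0
    for i in range(n):          # simple loop with n passes
        total += values[i]
    return total

class SimpleLoopTests(unittest.TestCase):
    """Force the classic simple-loop situations: 0, 1, 2, m, n-1, n, n+1 passes."""

    def setUp(self):
        self.data = list(range(1, 11))   # maximum allowable passes n = 10

    def test_skip_loop_entirely(self):
        self.assertEqual(sum_first_n(self.data, 0), 0)

    def test_one_pass(self):
        self.assertEqual(sum_first_n(self.data, 1), 1)

    def test_two_passes(self):
        self.assertEqual(sum_first_n(self.data, 2), 3)

    def test_m_passes(self):                  # m < n
        self.assertEqual(sum_first_n(self.data, 5), 15)

    def test_n_minus_1_and_n_passes(self):
        self.assertEqual(sum_first_n(self.data, 9), 45)
        self.assertEqual(sum_first_n(self.data, 10), 55)

    def test_n_plus_1_passes(self):
        # One pass too many should surface a boundary failure.
        with self.assertRaises(IndexError):
            sum_first_n(self.data, 11)

if __name__ == "__main__":
    unittest.main()
```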
TEST PROCEDURES
Test cases
Test cases are built around specifications and requirements, i.e., what the
application is supposed to do. Test cases are generally derived from external
descriptions of the software, including specifications, requirements and design
parameters. Although the tests used are primarily functional in nature, non-
functional tests may also be used. The test designer selects both valid and invalid
inputs and determines the correct output without any knowledge of the test object's
internal structure.
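As a hedged illustration of deriving test cases from an external description (the validate_age function and its rules are hypothetical, not from the text), the designer picks valid and invalid inputs, including boundary values, without inspecting the implementation:

```python
# Hypothetical specification: validate_age(age) returns True for integer
# ages in the range 18..65 inclusive and False otherwise.

def validate_age(age):
    return isinstance(age, int) and 18 <= age <= 65

# Black-box test cases chosen purely from the specification.
test_cases = [
    (18, True),    # lower boundary (valid)
    (65, True),    # upper boundary (valid)
    (40, True),    # typical valid value
    (17, False),   # just below the lower boundary
    (66, False),   # just above the upper boundary
    (-1, False),   # clearly invalid input
]

for value, expected in test_cases:
    assert validate_age(value) == expected, f"failed for {value}"
print("all black-box test cases passed")
```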
Regression testing is a type of software testing that seeks to uncover new software bugs, or regressions, in existing functional and non-functional areas of a system after changes, such as enhancements, patches or configuration changes, have been made to them.
The intent of regression testing is to ensure that changes such as those mentioned above have not introduced new faults. One of the main reasons for regression testing is to determine whether a change in one part of the software affects other parts of the software.
Unit Testing
Ideally, each test case is independent from the others. Substitutes such as method
stubs, mock objects, fakes, and test harnesses can be used to assist testing a module
in isolation. Unit tests are typically written and run by software developers to
ensure that code meets its design and behaves as intended.
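A minimal sketch of testing a module in isolation with a test double (the report_temperature function and the sensor interface are assumptions made for illustration), using Python's standard unittest.mock:

```python
import unittest
from unittest.mock import Mock

def report_temperature(sensor):
    """Format a reading taken from an external sensor."""
    reading = sensor.read_celsius()
    return f"{reading:.1f} C"

class ReportTemperatureTest(unittest.TestCase):
    def test_formats_reading_without_real_hardware(self):
        # The real sensor is replaced by a mock object, so the unit
        # under test is exercised in isolation.
        fake_sensor = Mock()
        fake_sensor.read_celsius.return_value = 21.456

        self.assertEqual(report_temperature(fake_sensor), "21.5 C")
        fake_sensor.read_celsius.assert_called_once()

if __name__ == "__main__":
    unittest.main()
```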
Another challenge related to writing the unit tests is the difficulty of setting up
realistic and useful tests. It is necessary to create relevant initial conditions so the
part of the application being tested behaves like part of the complete system. If
these initial conditions are not set correctly, the test will not be exercising the code
in a realistic context, which diminishes the value and accuracy of unit test results.
To obtain the intended benefits from unit testing, rigorous discipline is needed
throughout the software development process. It is essential to keep careful records
not only of the tests that have been performed, but also of all changes that have
been made to the source code of this or any other unit in the software. Use of
a version control system is essential. If a later version of the unit fails a particular
test that it had previously passed, the version-control software can provide a list of
the source code changes (if any) that have been applied to the unit since that time.
It is also essential to implement a sustainable process for ensuring that test case
failures are reviewed daily and addressed immediately. If such a process is not
implemented and ingrained into the team's workflow, the application will evolve
out of sync with the unit test suite, increasing false positives and reducing the
effectiveness of the test suite.
Unit testing embedded system software presents a unique challenge: Since the
software is being developed on a different platform than the one it will eventually
run on, you cannot readily run a test program in the actual deployment
environment, as is possible with desktop programs.
Integration Testing
Big Bang
In this approach, all or most of the developed modules are coupled together to form
a complete software system or major part of the system and then used for
integration testing. The Big Bang method is very effective for saving time in the
integration testing process. However, if the test cases and their results are not
recorded properly, the entire integration process will be more complicated and may
prevent the testing team from achieving the goal of integration testing.
A type of Big Bang Integration testing is called Usage Model testing. Usage
Model Testing can be used in both software and hardware integration testing. The
basis behind this type of integration testing is to run user-like workloads in
integrated user-like environments. In doing the testing in this manner, the
environment is proofed, while the individual components are proofed indirectly
through their use. Usage Model testing takes an optimistic approach to testing,
because it expects to have few problems with the individual components. The
strategy relies heavily on the component developers to do the isolated unit testing
for their product. The goal of the strategy is to avoid redoing the testing done by
the developers, and instead flesh-out problems caused by the interaction of the
components in the environment. For integration testing, Usage Model testing can
be more efficient and provides better test coverage than traditional focused
functional integration testing. To be more efficient and accurate, care must be used
in defining the user-like workloads for creating realistic scenarios in exercising the
environment. This gives confidence that the integrated environment will work as
expected for the target customers.
Bottom Up Testing
All the bottom or low-level modules, procedures or functions are integrated and
then tested. After the integration testing of lower level integrated modules, the next
level of modules will be formed and can be used for integration testing. This
approach is helpful only when all or most of the modules of the same development
level are ready. This method also helps to determine the levels of software
developed and makes it easier to report testing progress in the form of a
percentage.
Top Down Testing is an approach to integrated testing where the top integrated
modules are tested and the branch of the module is tested step by step until the end
of the related module.
The main advantage of the Bottom-Up approach is that bugs are more easily found. With Top-Down, it is easier to find a missing branch link.
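A small sketch of the two approaches (the module names and values are hypothetical): in top-down integration, a stub stands in for a lower-level module that is not yet integrated, while in bottom-up integration a driver exercises the real lower-level module directly:

```python
# --- Top-down: the high-level module is real, the lower level is stubbed ---
def fetch_exchange_rate_stub(currency):
    # Stub for a lower-level module that is not integrated yet.
    return 1.0

def convert_price(amount, currency, fetch_rate=fetch_exchange_rate_stub):
    # High-level module under test; it calls downward through the stub.
    return amount * fetch_rate(currency)

assert convert_price(10.0, "EUR") == 10.0   # exercised via the stub

# --- Bottom-up: the low-level module is real, a driver calls it directly ---
def fetch_exchange_rate(currency):
    # Real low-level module, integrated and tested first.
    rates = {"EUR": 1.1, "GBP": 1.3}
    return rates.get(currency, 1.0)

def driver():
    # Test driver standing in for the not-yet-integrated higher level.
    assert fetch_exchange_rate("EUR") == 1.1
    assert fetch_exchange_rate("XXX") == 1.0

driver()
print("integration sketches passed")
```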
LIMITATIONS
Any conditions not stated in specified integration tests, outside of the confirmation
of the execution of design items, will generally not be tested.
In software project management, software testing, and software
engineering, verification and validation (V&V) is the process of checking that a
software system meets specifications and that it fulfills its intended purpose. It may
also be referred to as software quality control. It is normally the responsibility
of software testers as part of the software development lifecycle.
Validation checks that the product design satisfies or fits the intended use (high-
level checking), i.e., the software meets the user requirements. This is done
through dynamic testing and other forms of review.
Verification and validation are not the same thing, although they are often confused. Boehm succinctly expressed the difference between them:
Validation: Are we building the right product? (This is a dynamic process for checking and testing the actual product. Software validation always involves executing the code.)
Verification: Are we building the product right? (This is a static method of verifying the design and code. Software verification is human-based checking of documents and files.)
RELATED CONCEPTS
Both verification and validation are related to the concepts of quality and
of software quality assurance. By themselves, verification and validation do not
guarantee software quality; planning, traceability, configuration management and
other aspects of software engineering are required.
The definition of M&S validation focuses on the accuracy with which the M&S
represents the real-world intended use(s). Determining the degree of M&S
accuracy is required because all M&S are approximations of reality, and it is
usually critical to determine if the degree of approximation is acceptable for the
intended use(s). This stands in contrast to software validation.
CLASSIFICATION OF METHODS
Test cases
A test case is a tool used in the verification and validation process. Test cases may be prepared for software
verification and software validation to determine if the product was built according
to the requirements of the user. Other methods, such as reviews, may be used early
in the life cycle to provide for software validation.
System Testing And Debugging
Numerous books have been written about debugging, as it involves numerous aspects, including interactive debugging, control flow, integration testing, log files, monitoring (application, system), memory dumps, profiling, Statistical Process Control, and special design tactics to improve detection while simplifying changes.
Normally the first step in debugging is to attempt to reproduce the problem. This
can be a non-trivial task, for example as with parallel processes or some unusual
software bugs. Also, specific user environment and usage history can make it
difficult to reproduce the problem.
After the bug is reproduced, the input of the program may need to be simplified to make it easier to debug. For example, a bug in a compiler can make it crash when parsing some large source file. However, after simplification of the test case, only a few lines from the original source file can be sufficient to reproduce the same crash. Such simplification can be made manually, using a divide-and-conquer approach. The programmer will try to remove some parts of the original test case and check if the problem still exists. When debugging a problem in a GUI, the programmer can try to skip some user interaction from the original problem description and check if the remaining actions are sufficient for the bug to appear.
After the test case is sufficiently simplified, a programmer can use a debugger tool
to examine program states (values of variables, plus the call stack) and track down
the origin of the problem(s). Alternatively, tracing can be used. In simple cases,
tracing is just a few print statements, which output the values of variables at certain
points of program execution.
TECHNIQUES
Print debugging (or tracing) is the act of watching (live or recorded) trace
statements, or print statements, that indicate the flow of execution of a
process. This is sometimes called printf debugging, due to the use of
the printf function in C. This kind of debugging was turned on by
the command TRON in the original versions of the novice-
oriented BASIC programming language. TRON stood for "Trace On."
TRON caused the line numbers of each BASIC command line to print as the
program ran.
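A minimal print-debugging sketch (the buggy-prone average function is hypothetical): trace statements expose intermediate values as the program runs:

```python
def average(values):
    total = 0
    for v in values:
        total += v
        # Trace statement: watch the running total as the loop executes.
        print(f"TRACE: added {v}, total is now {total}")
    result = total / len(values)
    print(f"TRACE: returning {result}")
    return result

average([2, 4, 6])   # the trace output reveals each step of the computation
```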
Remote debugging is the process of debugging a program running on a
system different from the debugger. To start remote debugging, a debugger
connects to a remote system over a network. The debugger can then control
the execution of the program on the remote system and retrieve information
about its state.
Post-mortem debugging is debugging of the program after it has already crashed. Related techniques often include various tracing techniques and/or analysis of the memory dump (or core dump) of the crashed process. The dump of the process could be obtained automatically by the system (for example, when the process has terminated due to an unhandled exception), by a programmer-inserted instruction, or manually by the interactive user.
"Wolf fence" algorithm: Edward Gauss described this simple but very useful
and now famous algorithm in a 1982 article for communications of the ACM
as follows: "There's one wolf in Alaska; how do you find it? First build a
fence down the middle of the state, wait for the wolf to howl, determine
which side of the fence it is on. Repeat process on that side only, until you
get to the point where you can see the wolf. This is implemented e.g. in
the Git version control system as the command git bisect, which uses the
above algorithm to determine which commit introduced a particular bug.
Delta Debugging – a technique of automating test case simplification.
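A sketch of the wolf-fence idea applied to a revision history (the is_buggy predicate and revision list are assumptions): the search space is repeatedly halved until the first bad revision is isolated, which is essentially what git bisect automates:

```python
def find_first_bad(revisions, is_buggy):
    """Binary search ("wolf fence") for the first revision where is_buggy is True.

    Assumes revisions are ordered and that once the bug appears it persists."""
    low, high = 0, len(revisions) - 1
    while low < high:
        mid = (low + high) // 2
        if is_buggy(revisions[mid]):
            high = mid          # the wolf is on this side of the fence
        else:
            low = mid + 1       # the wolf is on the other side
    return revisions[low]

# Hypothetical usage: the bug was introduced at revision 7.
revisions = list(range(1, 13))
print(find_first_bad(revisions, lambda r: r >= 7))   # -> 7
```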
Software Implementation Techniques: Coding practices
Best coding practices are a set of informal rules that the software
development community has learned over time which can help improve the quality
of software.
Many computer programs remain in use for far longer than the original authors
ever envisaged (sometimes 40 years or more) so any rules need to facilitate both
initial development and subsequent maintenance and enhancement by people other
than the original authors.
The size of a project or program has a significant effect on error rates, programmer
productivity, and the amount of management needed.
Maintainability.
Dependability.
Efficiency.
Usability.
Refactoring
Refactoring is usually motivated by noticing a code smell. For example, the method
at hand may be very long, or it may be a near duplicate of another nearby method.
Once recognized, such problems can be addressed by refactoring the source code,
or transforming it into a new form that behaves the same as before but that no
longer "smells". For a long routine, one or more smaller subroutines can be
extracted; or for duplicate routines, the duplication can be removed and replaced
with one shared function. Failure to perform refactoring can result in
accumulating technical debt.
o Encapsulate Field – force code to access the field with getter and setter
methods
o Generalize Type – create more general types to allow for more code
sharing
o Replace type-checking code with State/Strategy
o Replace conditional with polymorphism
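A hedged sketch of the last refactoring listed above, Replace Conditional with Polymorphism (the shape classes are hypothetical): a type-checking conditional is replaced by a method that each subclass overrides, without changing the observable behaviour:

```python
import math

# Before: a type-checking conditional that "smells".
def area_before(shape_kind, size):
    if shape_kind == "circle":
        return math.pi * size ** 2
    elif shape_kind == "square":
        return size ** 2

# After: the conditional is replaced with polymorphism.
class Shape:
    def area(self):
        raise NotImplementedError

class Circle(Shape):
    def __init__(self, radius):
        self.radius = radius
    def area(self):
        return math.pi * self.radius ** 2

class Square(Shape):
    def __init__(self, side):
        self.side = side
    def area(self):
        return self.side ** 2

# Behaviour is unchanged; only the structure improved.
assert area_before("square", 3) == Square(3).area() == 9
```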
UNIT-V
PROJECT MANAGEMENT
There is a risk of "inflation" of the created lines of code, and thus a reduction in the value of the measurement system, if developers are incentivized to be more productive. FP advocates refer to this as measuring the size of the solution instead of the size of the problem.
Lines of Code (LOC) measures reward low-level languages because more lines of code are needed to deliver a similar amount of functionality compared to a higher-level language. C. Jones offers a method of correcting this in his work.
LOC measures are not useful during early project phases where estimating
the number of lines of code that will be delivered is challenging. However,
Function Points can be derived from requirements and therefore are useful in
methods such as estimation by proxy.
LOC:
Source lines of code (SLOC), also known as lines of code (LOC), is a software
metric used to measure the size of a computer program by counting the number of
lines in the text of the program's source code. SLOC is typically used to predict the
amount of effort that will be required to develop a program, as well as to
estimate programming productivity or maintainability once the software is
produced.
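A naive illustration of a physical SLOC count (the counting rules here are simplified assumptions; real SLOC counters apply more elaborate rules): blank lines and pure comment lines are ignored:

```python
def count_sloc(source_text, comment_prefix="#"):
    """Count non-blank, non-comment physical lines of source code."""
    sloc = 0
    for line in source_text.splitlines():
        stripped = line.strip()
        if stripped and not stripped.startswith(comment_prefix):
            sloc += 1
    return sloc

sample = """
# a comment line
x = 1

y = x + 2     # trailing comments still count as code
"""
print(count_sloc(sample))   # -> 2
```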
SLOC measures are somewhat controversial, particularly in the way that they are
sometimes misused. Experiments have repeatedly confirmed that effort is highly
correlated with SLOC, that is, programs with larger SLOC values take more time
to develop. Thus, SLOC can be very effective in estimating effort. However,
functionality is less well correlated with SLOC: skilled developers may be able to
develop the same functionality with far less code, so one program with fewer
SLOC may exhibit more functionality than another similar program. In particular,
SLOC is a poor productivity measure of individuals, since a developer can develop
only a few lines and yet be far more productive in terms of functionality than a
developer who ends up creating more lines (and generally spending more effort).
Good developers may merge multiple code modules into a single module,
improving the system yet appearing to have negative productivity because they
remove code. Also, especially skilled developers tend to be assigned the most
difficult tasks, and thus may sometimes appear less "productive" than other
developers on a task by this measure. Furthermore, inexperienced developers often
resort to code duplication, which is highly discouraged as it is more bug-prone and
costly to maintain, but it results in higher SLOC.
Make/buy decision:
The act of choosing between manufacturing a product in-house or purchasing it from an external
supplier.
In a make-or-buy decision, the two most important factors to consider are cost and
availability of production capacity.
An enterprise may decide to purchase the product rather than producing it if it is cheaper to buy than to make, or if it does not have sufficient production capacity to produce it in-house. With the phenomenal surge in global outsourcing over the past decades, the make-or-buy decision is one that managers have to grapple with very frequently.
Investopedia explains the 'make-or-buy decision':
Factors that may influence a firm's decision to buy a part rather than produce it
internally include lack of in-house expertise, small volume requirements, desire for
multiple sourcing, and the fact that the item may not be critical to its strategy.
Similarly, factors that may tilt a firm towards making an item in-house include
existing idle production capacity, better quality control or proprietary technology
that needs to be protected.
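A small sketch of the cost side of a make-or-buy comparison (all figures are assumed for illustration): the total cost of producing in-house is compared against the total cost of purchasing at the expected volume:

```python
def make_cost(fixed_setup_cost, variable_cost_per_unit, volume):
    return fixed_setup_cost + variable_cost_per_unit * volume

def buy_cost(price_per_unit, volume):
    return price_per_unit * volume

volume = 5000   # assumed annual requirement
make = make_cost(fixed_setup_cost=40000, variable_cost_per_unit=12, volume=volume)
buy = buy_cost(price_per_unit=22, volume=volume)

print(f"make: {make}, buy: {buy}")
print("decision (cost only):", "make" if make < buy else "buy")
# Capacity, quality control, expertise and strategic factors would also
# feed into the real decision, as described above.
```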
COCOMO II
The original COCOMO® model was first published by Dr. Barry Boehm in 1981,
and reflected the software development practices of the day. In the ensuing decade
and a half, software development techniques changed dramatically. These changes
included a move away from mainframe overnight batch processing to desktop-
based real-time turnaround; a greatly increased emphasis on reusing existing
software and building new systems using off-the-shelf software components; and
spending as much effort to design and manage the software development process
as was once spent creating the software product.
These changes and others began to make applying the original COCOMO® model
problematic. The solution to the problem was to reinvent the model for the 1990s.
After several years and the combined efforts of USC-CSSE, ISR at UC Irvine, and
the COCOMO® II Project Affiliate Organizations , the result is COCOMO® II, a
revised cost estimation model reflecting the changes in professional software
development practice that have come about since the 1970s. This new, improved
COCOMO® is now ready to assist professional software cost estimators for many
years to come.
Intermediate COCOMO computes software development effort as a function of program size and a set of "cost drivers" that include subjective assessment of product, hardware, personnel and project attributes. This extension considers a set of four "cost drivers", each with a number of subsidiary attributes:
Product attributes
o Required software reliability
o Size of application database
o Complexity of the product
Hardware attributes
o Run-time performance constraints
o Memory constraints
o Volatility of the virtual machine environment
o Required turnabout time
Personnel attributes
o Analyst capability
o Software engineering capability
o Applications experience
o Virtual machine experience
o Programming language experience
Project attributes
o Use of software tools
o Application of software engineering methods
o Required development schedule
E = ai * (KLoC)^bi * EAF
where E is the effort applied in person-months, KLoC is the estimated number of
thousands of delivered lines of code for the project, and EAF is the factor
calculated above. The coefficient ai and the exponent bi are given in the next table.
The Development time D calculation uses E in the same way as in the Basic
COCOMO.
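A hedged sketch of the Intermediate COCOMO effort calculation (the coefficients shown are the commonly published values for a semi-detached project, and the EAF and size are assumed; they are not taken from the table referenced above):

```python
def intermediate_cocomo(kloc, a_i, b_i, eaf):
    """Effort E = a_i * (KLoC)**b_i * EAF, in person-months."""
    return a_i * (kloc ** b_i) * eaf

# Assumed example: a 32 KLoC semi-detached project
# (commonly cited coefficients a_i = 3.0, b_i = 1.12) with EAF = 1.10.
effort = intermediate_cocomo(kloc=32, a_i=3.0, b_i=1.12, eaf=1.10)
print(f"estimated effort: {effort:.1f} person-months")
```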
DETAILED COCOMO
The detailed model uses different effort multipliers for each cost driver attribute. These phase-sensitive effort multipliers are each used to determine the amount of effort required to complete each phase. In Detailed COCOMO, the whole software is divided into different modules; COCOMO is then applied to the different modules to estimate effort, and the module efforts are summed.
In Detailed COCOMO, the effort is calculated as a function of program size and a set of cost drivers given according to each phase of the software life cycle.
The Project Planning Phase is the second phase in the project life cycle. It involves creating a set of plans to help guide your team through the execution and closure phases of the project.
The plans created during this phase will help you to manage time, cost, quality,
change, risk and issues. They will also help you manage staff and external
suppliers, to ensure that you deliver the project on time and within budget.
There are 10 Project Planning steps you need to take to complete the Project Planning Phase efficiently. These steps, and the templates needed to perform them, include the following plans:
Project Plan
Resource Plan
Financial Plan
Quality Plan
Risk Plan
Acceptance Plan
Communications Plan
Procurement Plan
The Project Planning Phase is often the most challenging phase for a Project
Manager, as you need to make an educated guess of the staff, resources and
equipment needed to complete your project. You may also need to plan your
communications and procurement activities, as well as contract any 3rd party
suppliers.
In short, you need to create a comprehensive suite of project plans which set out a
clear project roadmap ahead.
Identification, Projection, RMMM
Projection
The project planner, along with other managers and technical staff, performs four risk projection activities:
(1) establish a measure (scale) that reflects the perceived likelihood of a risk,
(2) delineate the consequences of the risk,
(3) estimate the impact of the risk on the project and the product, and
(4) note the overall accuracy of the risk projection so that there will be no misunderstandings.
The risk table provides a project manager with a simple technique for risk projection.
Steps in Setting up the Risk Table
(1) The project team begins by listing all risks in the first column of the table. This can be accomplished with the help of risk item checklists.
(2) Each risk is categorized in the second column (e.g., as a project size, business, customer, process, technology, or staff risk).
(3) The probability of occurrence of each risk is entered in the next column of the table. The probability value for each risk can be estimated by team members individually.
(4) Individual team members are polled in round-robin fashion until their assessment of risk probability begins to converge.
(1) Each risk component is assessed using the Risk Characterization Table (Figure 1) and an impact category is determined.
(2) Categories for each of the four risk components—performance, support, cost, and schedule—are averaged to determine an overall impact value.
(3) Once the first four columns of the risk table have been completed, the table is sorted by probability and by impact.
· High-probability, high-impact risks percolate to the top of the table,
and low-probability risks drop to the bottom.
(4) The project manager studies the resultant sorted table and defines a cutoff line.
· The cutoff line (drawn horizontally at some point in the table) implies that only risks that lie above the line will be given further attention.
· Risks below the line are re-evaluated to accomplish second-order
prioritization.
· Risk impact and probability have a distinct influence on management
concern.
· All risks that lie above the cutoff line must be managed.
· The column labeled RMMM contains a pointer into a Risk Mitigation,
Monitoring and Management Plan
1. Determine the average probability of occurrence value for each risk component.
2. Using Figure 1, determine the impact for each component based on the criteria shown.
3. Complete the risk table and analyze the results as described in the preceding
sections.
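A small sketch of the risk-table mechanics described above (the risks, probabilities, and impact values are assumed, using the conventional impact scale 1 = catastrophic, 2 = critical, 3 = marginal, 4 = negligible): risks are sorted so that high-probability, high-impact risks rise to the top, and a cutoff separates risks that will be actively managed:

```python
# Assumed impact scale: 1 = catastrophic, 2 = critical, 3 = marginal, 4 = negligible.
risks = [
    {"risk": "High staff turnover",         "probability": 0.70, "impact": 2},
    {"risk": "Size estimate far too low",   "probability": 0.60, "impact": 2},
    {"risk": "End users resist the system", "probability": 0.40, "impact": 3},
    {"risk": "Funding is lost",             "probability": 0.30, "impact": 1},
    {"risk": "Staff inexperienced",         "probability": 0.20, "impact": 4},
]

# Sort so high-probability, high-impact risks percolate to the top.
risks.sort(key=lambda r: (-r["probability"], r["impact"]))

cutoff = 0.35   # assumed management cutoff on probability
for r in risks:
    status = "manage (above cutoff)" if r["probability"] >= cutoff else "re-evaluate"
    print(f'{r["risk"]:<28} P={r["probability"]:.2f} impact={r["impact"]}  {status}')
```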
RE = P x C, where RE is the risk exposure, P is the probability of occurrence for a risk, and C is the cost to the project should the risk occur.
Example
Assume the software team defines a project risk in the following manner:
Risk identification.
Risk impact.
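As a purely hypothetical illustration of the RE = P x C calculation (the risk, probability and cost figures below are assumed, not taken from this example):

```python
# Assume a risk: "only 70% of the planned reusable components will
# actually be reused", with probability P = 0.80 and an estimated cost C
# of $25,000 to custom-develop the remaining components.
P = 0.80          # probability the risk occurs (assumed)
C = 25_000        # cost to the project if it occurs, in dollars (assumed)

RE = P * C        # risk exposure
print(f"risk exposure RE = {RE:.0f}")   # -> 20000
```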
Risk Assessment
· During risk projection, we have established a set of triplets of the form:
[ri, li, xi]
where ri is the risk, li is the likelihood (probability) of the risk, and xi is the impact of the risk.
· During risk assessment, we:
o further examine the accuracy of the estimates that were made during risk projection
o begin thinking about ways to control and/or avert risks that are likely to occur.
1. Define the risk referent levels for the project.
2. Attempt to develop a relationship between each (ri, li, xi) and each of the referent levels.
3. Predict the set of referent points that define a region of termination, bounded by a curve or areas of uncertainty.
4. Try to predict how compound combinations of risks will affect a referent level.
Risk Refinement
risk avoidance
risk monitoring
risk management and contingency planning
· Proactive approach to risk - avoidance strategy.
· Develop risk mitigation plan.
· e.g. assume high staff turnover is noted as a project risk, r1.
· Based on past history
o the likelihood, l1, of high turnover is estimated to be 0.70
o the impact, x1, is projected at level 2
o So… high turnover will have a critical impact on project cost and schedule.
· Develop a strategy to mitigate this risk by reducing turnover.
· Possible steps to be taken:
o Meet with current staff to determine causes for turnover (e.g., poor working
conditions, low pay, competitive job market).
o Mitigate those causes that are under our control before the project starts.
o Once the project commences, assume turnover will occur and develop
techniques to ensure continuity when people leave.
o Conduct peer reviews of all work (so that more than one person is "up to speed").
o Assign a backup staff member for every critical technologist.
Problems that occur during a project can be traced to more than one risk.
---> a detailed schedule is redefined for each entry in the macroscopic schedule.
- Compartmentalization
- Interdependency
- Time allocation
- Effort allocation
- Effort validation
- Defined responsibilities
- Defined outcomes
- Defined milestones
Task sets are designed to accommodate different types of projects and different degrees of rigor.
- Reengineering Projects
Degree of Rigor:
- Casual
- Structured
- Strict
- Quick Reaction
The appropriate degree of rigor is obtained by considering adaptation criteria such as:
- Mission criticality
- Application longevity
- Performance constraints
- Embedded/non-embedded characteristics
- Project staffing
- Reengineering factors
Critical path:
-- The tasks on a critical path must be completed on schedule to keep the whole project on schedule.
Scheduling of a software project does not differ greatly from scheduling of any multitask engineering effort.
- Estimates of effort
- The selection of project type and task set
Both methods allow a planner to do:
- time estimation
1. When the software size is small, a single person can handle the entire project, performing steps like requirements analysis, design, code generation, and testing.
2. If the project is large, additional people are required to complete it in the stipulated time; it becomes easier to complete the project by distributing work among people and getting it done as early as possible.
3. The communication paths also increase as new people are added, and day by day the project becomes more complicated; newcomers become more confused as the days go by.
6. The curve indicates a minimum time value, t0, which represents the least-cost time for delivery. As we move from right to left (compressing delivery time), it is observed that the curve rises non-linearly.
7. Although it is possible to deliver faster, the curve rises very sharply to the left of td. The PNR curve indicates that the project delivery time should not be compressed much beyond td.
8. The number of delivered lines of code (source statements) is denoted L. The relationship of L to effort and development time is described by the software equation:
L = P * E^(1/3) * t^(4/3)
Rearranging for effort:
E = L^3 / (P^3 * t^4)
where E is the effort expended over the entire life cycle for software development and maintenance, t is the development time, and P is a productivity parameter.
This shows that, for example, by extending the project deadline by six months, we can reduce the number of people from eight to four. The benefit is gained by using a smaller number of people over a longer time to achieve the same objective. This is, in essence, the relationship between people and effort in software engineering.
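A hedged sketch of the software equation above (the productivity parameter P, the size L, and the delivery times compared are assumed values for illustration): it shows how the effort E falls sharply as the development time t is extended:

```python
def putnam_effort(loc, p, t_years):
    """E = L**3 / (P**3 * t**4): effort over the life cycle, in person-years."""
    return loc ** 3 / (p ** 3 * t_years ** 4)

L = 33_200      # assumed delivered source lines of code
P = 12_000      # assumed productivity parameter
for t in (1.3, 1.8):          # compare two candidate delivery times (years)
    print(f"t = {t} years -> effort E = {putnam_effort(L, P, t):.1f} person-years")
# Extending the delivery time noticeably reduces the total effort required.
```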
Description of People and process management spectrum
The people:
Recruiting
Selection
Performance
Training
Team / culture development
Career development
1) Stakeholders: Those people who are involved in the software process and in every software project. Stakeholders can be senior managers, project managers, practitioners, customers and end users.
2) Team leaders: The MOI model of leadership states the characteristics that define an effective project manager: motivation, organization, and ideas (innovation). Successful project leaders use a problem-solving management style.
3) Software team: The people directly involved in a software project fall within the software project manager's scope. Seven project factors should be considered when planning the structure of a software engineering team; these are as follows:
The Process:
The project manager must decide which process model is appropriate for (1) the customers who have requested the product, (2) the characteristics of the product itself, and (3) the project environment.
Once the preliminary plan is established, process decomposition begins: a complete plan, reflecting the work tasks required to populate the framework activities, must be created.
Process Decomposition:
• Balanced scorecard
• Code coverage
• Cohesion
• Comment density
• Halstead Complexity
• Instruction path length
• Maintainability index
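As a hedged illustration of one metric from this list, comment density, a minimal sketch (the simple counting rules are assumptions; real tools apply language-aware rules):

```python
def comment_density(source_text, comment_prefix="#"):
    """Fraction of non-blank lines that are comment lines."""
    lines = [l.strip() for l in source_text.splitlines() if l.strip()]
    if not lines:
        return 0.0
    comment_lines = sum(1 for l in lines if l.startswith(comment_prefix))
    return comment_lines / len(lines)

sample = """
# initialize the counter
count = 0
# increment it
count += 1
"""
print(f"comment density: {comment_density(sample):.2f}")   # -> 0.50
```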
LIMITATIONS
As software development is a complex process, with high variance in both methodologies and objectives, it is difficult to define or measure software qualities and quantities and to determine a valid and concurrent measurement metric, especially when making such a prediction prior to the detailed design. Another source of difficulty and debate is in determining which metrics matter, and what they mean. The practical utility of software measurements has therefore been limited to the following domains:
• Scheduling
• Software sizing
• Programming complexity
• Software quality
A specific measurement may target one or more of the above aspects, or the
balance between them, for example as an indicator of team motivation or project
performance.
Software metrics are being widely used by government agencies, the US military, NASA, IT consultants, academic institutions, and commercial and academic development estimation software.