SPM Assignment


IME671A: SOFTWARE PROJECT MANAGEMENT

Subhas C. Misra
PhD (Carleton U, Canada), PDF (Harvard University, USA)
FIET (U.K.), C.ENG (India), FIEI (India), FIETE (India), FRSPH (U.K.), FRSA (U.K.), FBCS (U.K.)
Department of Industrial and Management Engineering
Indian Institute of Technology

GROUP 1
Tanmay Manu Kumar
Gaurav Kumar Bharti
Chapter 3
Questions 1 to 18
1. Write five major responsibilities of a software project manager
• Works with the senior managers in the process of appointing team members
• Builds the project team and assigns tasks to various team members
• Responsible for effective project planning and scheduling, project
monitoring and control activities in order to achieve the project
objectives
• Acts as a communicator between the senior management and the
other persons involved in the project like the development team and
internal and external stakeholders
• Effectively resolves issues (if any) that arise between the team
members by changing their roles and responsibilities
• Modifies the project plan (if required) to deal with the situation.
2. What characteristics of software make software projects much more difficult to manage, compared to many other types of projects such as a project to lay out a 100 km concrete road on an existing non-concrete road?
• Invisibility: Software remains invisible, until its development is complete and it is
operational.
• Changeability: Because the software part of any system is easier to change as
compared to the hardware part, the software part is the one that gets most
frequently changed.
• Complexity: Even a moderate sized software has millions of parts (functions) that
interact with each other in many ways—data coupling, serial and concurrent
runs, state transitions, control dependency, file sharing, etc.
• Uniqueness: Every software project is usually associated with many unique
features or situations
• Exactness of the solution: Mechanical components such as nuts and bolts
typically work satisfactorily as long as they are within a tolerance of 1 percent or
so of their specified sizes.
• Team-oriented and intellect-intensive work: Software development projects are
akin to research projects in the sense that they both involve team-oriented,
intellect-intensive work.
3. At which stage of the software development life cycle (SDLC) do the project management activities start? When do these end? Identify the important project management activities.
• Planning: The most important parts of software development, requirement gathering
or requirement analysis are usually done by the most skilled and experienced
software engineers in the organization. After the requirements are gathered from
the client, a scope document is created in which the scope of the project is
determined and documented.
• Implementation: The software engineers start writing the code according to the
client's requirements.
• Testing: This is the process of finding defects or bugs in the created software.
• Documentation: Every step in the project is documented for future reference and for
the improvement of the software in the development process. The design
documentation may include writing the application programming interface (API).
• Deployment and maintenance: The software is deployed after it has been approved
for release.
• Maintenance: Software is maintained to fix defects and to accommodate change requests. Software improvement and new requirements (change requests) can take longer than the initial development of the software.
4. What is meant by the ‘size’ of a software project? Why does a project manager need to estimate the size of the project? How is the size estimated?
• The size of a project is obviously not the number of bytes that the source code occupies, neither is it the size of the executable code. The project size is a measure of the problem.
• Estimation of the size of software is an essential part of software project management. It helps the project manager to further predict the effort and time which will be needed to build the project. Various measures are used in project size estimation. Some of these are:
1. Lines of Code (LOC): count the total number of lines of source code in a project.
2. Function points (FP): in this method, the number and type of functions supported by the software are utilized to find the FPC (function point count).
5. What is sliding window planning? Identify the types of projects for which this form of planning is especially suitable. Is sliding window planning appropriate for small projects? What are its advantages over conventional planning?
• It is usually very difficult to make accurate plans for large projects at project
initiation.
• A part of the difficulty arises from the fact that large projects may take
several years to complete.
• As a result, during the span of the project, the project parameters, scope of
the project, project staff, etc., often change drastically resulting in the initial
plans going haywire.
• In order to overcome this problem, sometimes project managers undertake
project planning over several stages.
• That is, after the initial project plans have been made, these are revised at
frequent intervals.
• Planning a project over a number of stages protects managers from making
big commitments at the start of the project.
• This technique of staggered planning is known as sliding window planning.
• In the sliding window planning technique, starting with an initial plan,
the project is planned more accurately over a number of stages.
• At the start of a project, the project manager has incomplete
knowledge about the nitty-gritty of the project.
• His information base gradually improves as the project progresses
through different development phases.
• The complexities of different project activities become clear, some of
the anticipated risks get resolved, and new risks appear.
• The project parameters are re-estimated periodically as understanding grows, and also periodically as the project parameters change.
• By taking these developments into account, the project manager can
plan the subsequent activities more accurately and with increasing
levels of confidence.
6. What is meant by product visibility in the context of software development? Why is it important to improve product visibility during software development? How can product visibility be improved?
• Product visibility refers to a clear picture of how a project is performing,
including resource allocation and potential risks.
• Increased visibility ensures everyone involved in the project understands what
the objective of the project is and their role in meeting this goal.
Common Causes of Poor Project Visibility
• No Project Management Tool
• No Document Management
• No Communication Plan
Ways to Improve Project Visibility
1. Introduce Project Management Software
2. Create a Project Communication Plan
3. Kick-off Meeting
4. Hold Weekly Team Meetings
7. What are the different categories of software development projects according to the COCOMO estimation model? Give an example of software product development projects belonging to each of these categories.
• Short for Constructive Cost Model, a method for evaluating and/or estimating the cost of software
development. There are three levels in the COCOMO hierarchy:
• Basic COCOMO: computes software development effort and cost as a function of program size expressed in estimated DSIs; a C sketch of the Basic COCOMO equations appears after this list. There are three modes within Basic COCOMO:
• Organic Mode: Development projects typically are uncomplicated and involve small
experienced teams. The planned software is not considered innovative and requires a
relatively small amount of DSIs (typically under 50,000).
• Semidetached Mode: Development projects typically are more complicated than in Organic
Mode and involve teams of people with mixed levels of experience. The software requires no
more than 300,000 DSIs. The project has characteristics of both projects for Organic Mode
and projects for Embedded Mode.
• Embedded Mode: Development projects must fit into a rigid set of requirements because the
software is to be embedded in a strongly joined complex of hardware, software, regulations
and operating procedures.
• Intermediate COCOMO: an extension of the Basic model that computes software development
effort by adding a set of "cost drivers," that will determine the effort and duration of the project,
such as assessments of personnel and hardware.
• Detailed COCOMO: an extension of the Intermediate model that adds effort multipliers for each phase of the project to determine the cost drivers' impact on each step.
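As a rough illustration, and not part of the original answer, the Basic COCOMO equations can be sketched in C using Boehm's published coefficients for the three modes (effort E = a * KLOC^b in person-months, development time T = c * E^d in months):

#include <stdio.h>
#include <math.h>

/* Basic COCOMO (Boehm, 1981): effort E = a * KLOC^b (person-months),
   development time T = c * E^d (months). */
struct mode { const char *name; double a, b, c, d; };

int main(void) {
    struct mode modes[3] = {
        { "organic",      2.4, 1.05, 2.5, 0.38 },
        { "semidetached", 3.0, 1.12, 2.5, 0.35 },
        { "embedded",     3.6, 1.20, 2.5, 0.32 }
    };
    double kloc = 100.0;  /* estimated size in thousands of DSIs */
    for (int i = 0; i < 3; i++) {
        double effort = modes[i].a * pow(kloc, modes[i].b);
        double time = modes[i].c * pow(effort, modes[i].d);
        printf("%-12s effort = %7.1f PM, time = %5.1f months\n",
               modes[i].name, effort, time);
    }
    return 0;
}

Running the same computation for 200 KLOC shows the effort roughly doubling while the schedule grows far more slowly, which anticipates the sublinearity question answered later in this chapter.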
8. Briefly explain the
main differences between
the original COCOMO
estimation model and the
COCOMO 2 estimation
model.
BASIS FOR COMPARISON: COCOMO 1 vs COCOMO 2
• Basic formula: COCOMO 1 is founded on the linear reuse formula; COCOMO 2 is based on the non-linear reuse formula.
• Size of software stated in terms of: lines of code (COCOMO 1); object points, function points and lines of code (COCOMO 2).
• Number of submodels: 3 (COCOMO 1); 4 (COCOMO 2).
• Cost drivers: 15 (COCOMO 1); 17 (COCOMO 2).
• Model framework: in COCOMO 1, development begins with the requirements assigned to the software; COCOMO 2 follows a spiral type of development.
• Data points: 63 projects referred (COCOMO 1); 161 projects referred (COCOMO 2).
• Estimation precision: COCOMO 1 offers estimates of effort and schedule; COCOMO 2 supplies estimates that represent one standard deviation around the most likely estimate.
9. What do you mean by project size? What are the popular metrics to measure project size? How can the size of a project be estimated during the project planning stage?
• The project size is a measure of the problem complexity in terms of the effort and
time required to develop the product.
• Currently, two metrics are popularly being used to measure size—lines of code
(LOC) and function point (FP).
• The size of a project can be estimated during the project planning stage by function point analysis.
• In this method, the number and type of functions supported by the software are utilized to find the FPC (function point count). The steps in function point analysis are:
1. Count the number of functions of each proposed type.
2. Compute the Unadjusted Function Points (UFP).
3. Find the Total Degree of Influence (TDI).
4. Compute the Value Adjustment Factor (VAF).
5. Find the Function Point Count (FPC).
These steps are illustrated in the sketch below.
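A minimal sketch of these steps in C, assuming the standard average-complexity weights (EI = 4, EO = 5, EQ = 4, ILF = 10, EIF = 7); the function counts and the TDI value are purely hypothetical:

#include <stdio.h>

/* Function point count from hypothetical counted functions, using the
   standard average weights for the five function types. */
int main(void) {
    int ei = 24, eo = 46, eq = 8, ilf = 4, eif = 2;   /* counted functions */
    int ufp = ei * 4 + eo * 5 + eq * 4 + ilf * 10 + eif * 7;

    /* TDI is the sum of the 14 general system characteristics,
       each rated 0..5; assume a total of 42 here. */
    int tdi = 42;
    double vaf = 0.65 + 0.01 * tdi;   /* value adjustment factor */
    double fpc = ufp * vaf;           /* function point count */

    printf("UFP = %d, VAF = %.2f, FPC = %.1f\n", ufp, vaf, fpc);
    return 0;
}

Since each of the 14 degree-of-influence ratings lies between 0 and 5, the value adjustment factor ranges from 0.65 to 1.35 around the unadjusted count.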
10. What is meant by expert judgement techniques? Compare the advantages and disadvantages of the following two project size estimation techniques: expert judgement and the Delphi technique.
Expert Judgement
Advantages of this method are:
• The experts can factor in differences between past project experience and the requirements of the proposed project.
• The experts can factor in project impacts caused by new technologies, architectures, applications and languages involved in the future project, and can also factor in exceptional personnel characteristics and interactions, etc.
Disadvantages include:
• This method cannot be quantified.
• It is hard to document the factors used by the experts or experts-group.
• Experts may be biased, optimistic or pessimistic, even though these effects have been decreased by group consensus.
• The expert judgement method always complements the other cost estimating methods, such as the algorithmic method.

Delphi Cost Estimation
Advantages of the Delphi technique:
• A useful technique in the absence of experts within the organisation, as Delphi allows hiring experts from external sources with the required domain knowledge for the project.
• It is a quick technique to arrive at an estimate.
• The Delphi technique is popular for its simplicity.
Disadvantages of the Delphi technique:
• Sometimes it becomes difficult to find and analyse the right expert.
• Difficulty in deciding the number of experts required.
• The estimates derived are not auditable.
• This technique can only estimate the size and effort of the project, but not the time.
11. Why is it difficult to accurately estimate the effort required for completing a project? Briefly explain the different effort estimation methods that are available.
• Effort: How much effort would be necessary to develop the product? The effectiveness of all later planning activities, such as scheduling and staffing, depends on the accuracy with which the three estimations (effort, duration and cost) have been made.
• In software development, effort estimation is the process of
predicting the most realistic amount of effort (expressed in terms of
person-hours or money) required to develop or maintain software
based on incomplete, uncertain and noisy input. Effort estimates may
be used as input to project plans, iteration plans, budgets, investment
analyses, pricing processes and bidding rounds.
• WBS-based (bottom up) estimation
• Size-based estimation model
• Judgmental combination
12. Briefly explain the COCOMO 2 model. In what aspects is it an improvement over the original COCOMO model?
• COCOMO 1 model is absolutely based on the waterfall model, but due
to acquiring the object-oriented approach in the software
development process, the COCOMO 1 does not produce accurate
results.
• So, to overcome the limitations of COCOMO 1, COCOMO 2, was
developed.
• The primary aim of the COCOMO 2 model is to generate support capabilities for amending the model constantly, and to provide a quantitative analytic framework, techniques and tools.
• It is also capable of examining the effects of software technology improvements on the costs of the software development life cycle.
13. What do you understand by a milestone in software development? Why is it considered helpful to have milestones in software development?
• Once project activities have been decomposed into a set of tasks
using WBS, the time frame when each activity is to be performed is to
be determined.
• The end of each important activity is called a milestone.
• The project manager tracks the progress of a project by monitoring
the timely completion of the milestones.
• If he observes that some milestones start getting delayed, he carefully
monitors and controls the progress of the tasks, so that the overall
deadline can still be met.
14. In which order are the following estimations carried out in the COCOMO estimation technique: cost, effort, duration, size? Represent the precedence ordering among these activities using a task network diagram.
Chapter 3
Questions 19 to 36
Explain why, according to the COCOMO model, when the size of a software product is increased by two times, the time to develop the product usually increases by less than two times.

• The development time is a sublinear function of the size of the product. That is, when the size of the product increases by two times, the time to develop the product does not double but rises moderately. For example, to develop a product twice as large as a product of size 100 KLOC, the increase in duration may only be 20 per cent. It may appear surprising that the duration curve does not increase superlinearly; one would normally expect the curve to behave similarly to those in the effort-size plots. This apparent anomaly can be explained by the fact that COCOMO assumes that a project development is carried out not by a single person but by a team of developers.
Explain why the development time of a software product of given size remains
almost the same, regardless of whether it is organic, semidetached, or embedded
type.

• We can observe that for a project of any given size, the development time is roughly the same for all three categories of products. For example, a 60 KLOC program can be developed in approximately 18 months, regardless of whether it is of organic, semidetached, or embedded type.
Explain how Putnam’s model can be used to compute the change in project cost
with project duration. What are the main disadvantages of using the Putnam’s
model to compute the additional costs incurred due to schedule compression?
How can you overcome them?
• Putnam studied the problem of staffing of software projects and found that the
staffing pattern for software development projects has characteristics very similar
to any other R&D projects. Only a small number of developers are needed at the
beginning of a project to carry out the planning and specification tasks. As the
project progresses and more detailed work is performed, the number of
developers increases and reaches a peak during product testing. After
implementation and unit testing, the number of project staff falls.
• Putnam found that the Rayleigh-Norden curve can be adapted to relate the number of delivered lines of code to the effort and the time required to develop the product. By analysing a large number of defence projects, Putnam derived the following expression:

L = Ck * K^(1/3) * td^(4/3)

where L is the product size in KLOC, K is the total effort expended (in person-years), td is the development time (in years), and Ck is the state-of-technology constant reflecting the development environment. A sketch of the implied effort-schedule tradeoff follows.
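Rearranging this expression for a fixed product size gives K = L^3 / (Ck^3 * td^4): effort, and hence cost, varies inversely as the fourth power of the development time. A minimal C sketch of that tradeoff, using illustrative schedule values only:

#include <stdio.h>
#include <math.h>

/* For a fixed size, Putnam's expression implies K = L^3 / (Ck^3 * td^4),
   so compressing the schedule from td_nominal to td multiplies the
   effort (and cost) by (td_nominal / td)^4. */
int main(void) {
    double td_nominal = 2.5;  /* illustrative nominal schedule, in years */
    for (double td = 2.5; td >= 1.5; td -= 0.25)
        printf("td = %.2f years -> effort multiplier = %.2f\n",
               td, pow(td_nominal / td, 4.0));
    return 0;
}

This steep fourth-power penalty is the commonly cited disadvantage of using Putnam's model for schedule compression: it predicts extreme cost growth for even modest compression, so in practice such estimates are usually cross-checked against other models.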
What does Halstead’s volume metric represent conceptually? How according to
Halstead is the effort dependent on program volume?

• The length of a program (i.e., the total number of operators and operands used in the code)
depends on the choice of the operators and operands used. In other words, for the same
programming problem, the length would depend on the programming style. This type of
dependency would produce different measures of length for essentially the same problem
when different programming languages are used. Thus, while expressing program size, the
programming language used must be taken into consideration:
V = N log2 h

Let us try to understand the important idea behind this expression. Intuitively, the program
volume V is the minimum number of bits needed to encode the program. In fact, to
represent h different identifiers uniquely, we need at least log2 h bits (where h is the
program vocabulary). In this scheme, we need N log2 h bits to store a program of length N.
Therefore, the volume V represents the size of the program by approximately compensating
for the effect of the programming language used.
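Halstead further defines the difficulty D = (h1/2)(N2/h2) and the effort E = D * V, so the effort grows in direct proportion to the program volume. A small hedged sketch with hypothetical token counts:

#include <stdio.h>
#include <math.h>

/* Halstead metrics from token counts: h1/h2 are distinct operators and
   operands, N1/N2 are their total occurrences (hypothetical values). */
int main(void) {
    int h1 = 12, h2 = 9;   /* distinct operators, distinct operands */
    int N1 = 40, N2 = 27;  /* total operators, total operands */

    int h = h1 + h2;       /* program vocabulary */
    int N = N1 + N2;       /* program length */

    double V = N * log2((double)h);             /* volume, in bits */
    double D = (h1 / 2.0) * ((double)N2 / h2);  /* difficulty */
    double E = D * V;                           /* effort */

    printf("vocabulary=%d length=%d volume=%.1f effort=%.1f\n", h, N, V, E);
    return 0;
}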
What are the relative advantages of using either the LOC
or the function point metric to measure the size of a
software product for software project planning?

• LOC is possibly the simplest among all metrics available to measure project size.
Consequently, this metric is extremely popular. This metric measures the size of a
project by counting the number of source instructions in the developed program.
Obviously, while counting the number of source instructions, comment lines, and
header lines are ignored. Determining the LOC count at the end of a project is
very simple. However, accurate estimation of LOC count at the beginning of a
project is a very difficult task. One can possibly estimate the LOC count at the start of a project only by using some form of systematic guesswork.
List the important shortcomings of LOC for use as a
software size
metric for carrying out project estimations.

• a. It is difficult to measure LOC in the early stages of a new product.
b. Source instructions vary with coding languages, design methods and with programmers' ability.
c. There is no industry standard for measuring LOC.
d. LOC cannot be used for normalizing if platforms and languages are different.
e. The only way to predict the LOC for a new application to be developed is through analogy with similar software applications.
f. Programmers may be rewarded for writing more LOC because of a misconception among higher management that more LOC means higher programmer productivity.
Explain why adding more man power to an already late
project makes
it later.

• Brooks' law refers to a well-known software development principle coined by Fred Brooks in The Mythical Man-Month. The law, "Adding manpower to a late software project makes it later," states that when a person is added to an already late project team, the project takes longer, rather than shorter.
• Brooks' law applies for two key reasons:
• "Ramp-up" time: new project members need time to become productive because of the complex nature of software projects. This takes existing resources (personnel) away from active development and places them in training roles.
• An increase in staff drives communication overhead, including the number and
variety of communication channels.
What do you understand by work breakdown in project management? Why is work breakdown important to effective project management? How is work breakdown achieved? What problems might occur if tasks are broken down into either too fine or too coarse a granularity?

• A work breakdown structure is a scope management process that is entirely deliverable-oriented. It is based on an order of tasks that must be completed to eventually arrive at the final product. The work breakdown structure aims to keep all project members on task and clearly focused on the purpose of the project.
• The Project Management Body of Knowledge, an internationally recognized
collection of processes and knowledge areas accepted as best practice for the
project management profession, defines the work breakdown structure as a
"hierarchical decomposition of the total scope of work to be carried out by the
project team to accomplish the project objectives and create the required
deliverables."
Consider a software project with 5 tasks T1–T5. Duration of the 5 tasks in weeks are 3, 2, 3, 5, 2 respectively. T2 and T4 can start when T1 is complete. T3 can start when T2 is complete. T5 can start when both T3 and T4 are complete. Draw the PERT chart representation of the project. When is the latest start date of the task T3? What is the slack time of the task T4? Which tasks are on the critical path? (A worked computation is sketched below.)
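The original slides give no worked answer here; as a hedged sketch, the forward and backward passes over this five-task network can be computed as below, with the task data hard-coded from the question:

#include <stdio.h>

/* Forward/backward pass (CPM) for the network in the question:
   T2 and T4 follow T1, T3 follows T2, T5 follows T3 and T4.
   Durations in weeks: 3, 2, 3, 5, 2. */
int main(void) {
    int dur[5] = {3, 2, 3, 5, 2};
    int es[5], ef[5], ls[5], lf[5];

    /* forward pass: earliest start/finish */
    es[0] = 0;                              ef[0] = es[0] + dur[0]; /* T1 */
    es[1] = ef[0];                          ef[1] = es[1] + dur[1]; /* T2 */
    es[3] = ef[0];                          ef[3] = es[3] + dur[3]; /* T4 */
    es[2] = ef[1];                          ef[2] = es[2] + dur[2]; /* T3 */
    es[4] = ef[2] > ef[3] ? ef[2] : ef[3];  ef[4] = es[4] + dur[4]; /* T5 */

    /* backward pass: latest finish/start */
    lf[4] = ef[4];                          ls[4] = lf[4] - dur[4];
    lf[2] = ls[4];                          ls[2] = lf[2] - dur[2];
    lf[3] = ls[4];                          ls[3] = lf[3] - dur[3];
    lf[1] = ls[2];                          ls[1] = lf[1] - dur[1];
    lf[0] = ls[1] < ls[3] ? ls[1] : ls[3];  ls[0] = lf[0] - dur[0];

    for (int i = 0; i < 5; i++)
        printf("T%d: ES=%d EF=%d LS=%d LF=%d slack=%d\n",
               i + 1, es[i], ef[i], ls[i], lf[i], ls[i] - es[i]);
    return 0;
}

The run gives a latest start of week 5 for T3 and zero slack for T4; in fact every task here has zero slack, so both paths T1-T2-T3-T5 and T1-T4-T5 are critical.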
Explain when should you use PERT charts and when you should use
Gantt charts while you are performing the duties of a project manager.

PERT stands for Program Evaluation and Review Technique. A PERT chart illustrates a project
as a network diagram. The U.S. Navy created this tool in the 1950s as they developed the
Polaris missile (and time was of the essence—this was during the Cold War, after all).
PERT charts are best utilized by project managers at the beginning of a project to ensure that it is accurately scoped. This tool gives users a bird's-eye view of the entire project before it's started, to avoid potential bottlenecks. While PERT charts can be used during the project's implementation to track progress, they lack the flexibility to adapt to small changes when confronted with roadblocks.

Created by Henry Gantt during WWI, Gantt charts are used to visualize a project’s schedule
from start to finish. Similar to a PERT chart, Gantt charts display tasks over time to ensure
the project is completed on time. 
Project managers use Gantt charts to identify task dependencies, increase efficiencies, and
improve time management. Gantt charts make it simple to break down projects into
manageable steps that can adjust to the project as needed. 
How is Gantt chart useful in software project management? What
problems might be encountered, if project monitoring and control is
carried out using a Gantt chart?

A Gantt chart is a timeline that is used as a project management tool to oversee every aspect of the project while keeping track of its progress. You will easily know who is responsible for what, how long each task will take, and what problems a team encounters during the progress of the project.
What is a baseline in the context of software configuration
management? Explain how a baseline can be updated to form a new
baseline?

In configuration management, a "baseline" is an agreed description of the attributes of a product at a point in time, which serves as a basis for defining change.
Software Configuration Management is defined as a process to systematically manage, organize, and control the changes in the documents, codes, and other entities during the Software Development Life Cycle. It is abbreviated as the SCM process in software engineering. The primary goal is to increase productivity with minimal mistakes.
Chapter 3
Questions 46 to 53
Q46.) In what units can you measure the productivity of a software development
team? List three important factors that affect the productivity of a software development
team.

Parameters to measure the productivity of a software development team:
1) Technical knowledge in the area of the project
2) Quality of the product developed by the programmers
3) Good programming/coding abilities which are applicable across fields
Three important factors that affect the productivity of a software development team are:
1) Motivation level of the software developers
2) Ability to work in a team
3) Communication skills, comprising oral, written and interpersonal skills
Q47.) List three common types of risks that a typical software project might
suffer from. Explain how you can identify the risks that your project is susceptible to. Suppose you
are the project manager of a large software development
project, point out the main steps you would follow to manage risks in
your software project.

A project can be susceptible to a large variety of risks. There are three main categories of risks which can affect software projects. They are as follows:
1) Project risks: Project risks concern various forms of budgetary, schedule, personnel, resource, and customer-related problems. An important project risk is schedule slippage. Since software is intangible, it is very difficult to monitor and control a software project.
2) Technical risks: Technical risks concern potential design, implementation, interfacing, testing, and maintenance problems. Technical risks also include ambiguous specification, incomplete specification, changing specification, technical uncertainty, and technical obsolescence. Most technical risks occur due to the development team's insufficient knowledge about the product.
3) Business risks: This type of risk includes the risk of building an excellent product that no one wants, losing budgetary commitments, etc.
Identification and assessment of risks: Risk assessment is done to rank the risks in terms of their damage-causing potential. For risk assessment, each risk should first be rated in two ways:
1) The likelihood of a risk becoming real (r).
2) The consequence of the problems associated with that risk (s).
Based on these two factors, the priority of each risk can be computed as follows:
p = r * s
where p is the priority with which the risk must be handled, r is the probability of the risk becoming real, and s is the severity of damage caused due to the risk becoming real.
Q48.) Schedule slippage is a very common form of risk that almost every
project manager has to encounter. Explain in 3 to 4 sentences how you
would manage the risk of schedule slippage as the project manager of a
medium-sized project.
Risks relating to schedule slippage arise primarily due to the intangible nature of software. For a
project such as building a house, the progress can easily be seen and assessed by the project
manager. If he finds that the project is lagging behind, then corrective actions can be initiated.
Considering that software development per se is invisible, the first step in managing the risks of
schedule slippage, is to increase the visibility of the software product. Visibility of a software
product can be increased by producing relevant documents during the development process and
getting these documents reviewed by an appropriate team.
Milestones should be placed at regular intervals to provide a manager with regular indication of
progress. Completion of a phase of the development process being followed need not be the only
milestones. Every phase can be broken down to reasonable-sized tasks and milestones can be
associated with these tasks. A milestone is reached once the documentation produced as part of a software engineering task gets successfully reviewed. Milestones need not be placed for every activity. An approximate rule of thumb is to set a milestone every 10 to 15 days. If milestones are placed too close to each other, then the overheads in managing the milestones become excessive.
Q49.) Explain how you can choose the best risk reduction technique when there
are many ways of reducing a risk.
• Risk reduction involves planning ways to contain the damage due to a risk. For example, if there is a risk that some key personnel might leave, new recruitment may be planned. The most important risk reduction technique for technical risks is to build a prototype that tries out the technology you are planning to use. For example, if you are using a compiler for recognizing user commands, you could first construct a compiler for a small and very primitive command language.
There can be several strategies to cope with a risk. To choose the most appropriate strategy for handling a risk, the project manager must consider the cost of handling the risk and the corresponding reduction of risk. For this we may compute the risk leverage of the different risks. Risk leverage is the difference in risk exposure divided by the cost of reducing the risk. More formally:

risk leverage = (risk exposure before reduction - risk exposure after reduction) / cost of risk reduction
Even though we identified three broad ways to handle any risk, effective risk handling cannot
be achieved by mechanically following a set procedure, but requires a lot of ingenuity on the
part of the project manager.
Q50.) What are the important types of risks that a project might suffer from? How would you identify the risks that a project is susceptible to during the project planning stage?
The different types of risks that a project might suffer from are as follows;
1) Process-related risk: These risks arise due to aggressive work schedule, budget, and
resource utilisation.
2) Product-related risks: These risks arise due to commitment to challenging product
features (e.g. response time of one second, etc.), quality, reliability, etc.
3) Technology-related risks: These risks arise due to commitment to use certain technology
(e.g., satellite communication).

In order to be able to successfully foresee and identify different risks that might affect a
software project, it is a good idea to have a company disaster list. This list would contain all
the bad events that have happened to software projects of the company over the years, including events that can be laid at the customer's door. This list can be read by the project managers in order to be aware of some of the risks that a project might be
susceptible to. Such a disaster list has been found to help in performing better risk analysis.
Q51.) As a project manager, identify the characteristics that you would look for in a
software developer while trying to select personnel for your team.
Characteristics to look for in software developer for selecting software
development team are;
• Exposure to systematic techniques, i.e. familiarity with software engineering principles.
• Good technical knowledge of the project areas (Domain knowledge)
• Good programming abilities.
• Good communication skills. These comprise oral, written, and interpersonal skills.
• High motivation.
• Sound knowledge of fundamentals of computer science
• Intelligence.
• Ability to work in a team.
• Discipline, etc.
Q52.) What is egoless programming? How can it be realized?

Ordinarily, human psychology makes an individual take pride in everything he creates using original thinking. Software development requires original thinking too, although of a different type. Human psychology makes one emotionally involved with his creation and hinders objective examination of it. Just like temperamental artists, programmers find it extremely difficult to locate bugs in their own programs or flaws in their own designs. Therefore, the best way to find problems in a design or code is to have someone else review it. Often, having to explain one's program to someone else gives a person enough objectivity to find out what might have gone wrong.

An application of this is to encourage the members of a democratic team to consider the design, code, and other deliverables to belong to the entire group. This is called egoless programming because it tries to avoid having programmers invest much ego in the development activity they do in a democratic set-up.
Q53.) Is it true that a software product can always be developed faster by having a
larger development team (you can assume that all developers are equally proficient
and have exactly similar experience)? Justify your answer.
Small Teams Are Dramatically More Efficient than Large Teams
• A study done by QSM consultancy in 2005 seems to indicate that smaller teams are more efficient
than larger teams. Not just a little more efficient, but dramatically more efficient. QSM maintains a
database of 4000+ projects. For this study they looked at 564 information systems projects done
since 2002. (The author of the study claims their data for real-time embedded systems projects
showed similar results.) They divided the data into “small” teams (less than 5 people) and “large”
teams (greater than 20 people).
• To complete projects of 100,000 equivalent source lines of code (a measure of the size of the
project) they found the large teams took 8.92 months, and the small teams took 9.12 months. In
other words, the large teams just barely (by a week or so) beat the small teams in finishing the
project!
• Given that the large teams averaged 32 people and the small teams averaged 4 people, the cost of completing the project a week sooner with the large team is extraordinary: at $10,000 per person-month (fully loaded employee cost), the large teams would have spent $1.8M while the small teams only spent $245k. So it is not true that a product can always be developed faster simply by enlarging the development team; beyond a point, the extra communication overhead swamps the gain.
Q54.) Suppose you have been appointed as the project manager of a large project,
identify the activities you would undertake to plan your project.
Explain the sequence in which you would undertake these activities by using a task
network notation. What are some of the factors which make it hard to accurately
estimate the cost of software projects?
CHAPTER 3
Question 55-65
57. What do you understand by software
configuration management?

• In software engineering, software configuration management is the task of tracking and controlling changes in the software, part of the larger cross-disciplinary field of configuration management. SCM practices include revision control and the establishment of baselines. If something goes wrong, SCM can determine what was changed and who changed it. If a configuration is working well, SCM can determine how to replicate it across many hosts.

• The acronym "SCM" is also expanded as source configuration management process and software change and configuration management. However, "configuration" is generally understood to cover changes typically made by a system administrator.
58. What is the difference between a revision and
a version of a software product? What do you
understand by the terms change control and
version control? Why are these necessary? Explain
how change and version control are achieved using
a configuration management tool
• A version is an iteration, something that is different than before. When programmers develop software, a version is typically a minor software update, something that addresses issues in the original release but does not contain enough to warrant a major release of the software.
• A revision is a controlled version. Webster's dictionary describes a "revision" as the act of revising, which is to make a new, amended, improved, or up-to-date version. Back to the software analogy, a revision is seen as a major release of the software, something that introduces new features and functionality as well as fixing bugs. In the engineering world we use revisions to document the changes so that anyone can understand what was changed. Versions are usually temporary; revisions are permanent.
59. Discuss how SCCS or RCS can be used to efficiently manage
the configuration of source code

• Using the Revision Control System (RCS) or the Source Code Control System
(SCCS) lets you keep your source files in a common library and maintain control
over them. Both systems provide easy-to-use, command-line interfaces. Knowing
the basic commands lets you check in the source file to be modified into
a version control file that contains all of the revisions of that source file. When
you want to check out a version control file for editing, the system retrieves the
revision or revisions you specify from the library and creates a working file for
you to use.
Using more advanced interface commands lets you do the following:
• Identify the current status of any file, including the name of the person editing it.
• Reconstruct earlier versions of your files. For each version, the system stores the
changes made to produce that version, the name of the person making the
changes, and the reasons for the changes.
• Prevent the problems that can occur when two people change a file at the same
time without each other's knowledge.
• Maintain multiple branch versions of your files. Branched versions can be merged
back into the original sequence.
• Protect files from unauthorized modification.
60. Consider a software project with 5 tasks T1-T5.
Duration of the 5 tasks (in days) are 15, 10, 12, 25 and 10,
respectively. T2 and T4 can start when T1 is complete. T3
can start when T2 is complete. T5 can start when both T3
and T4 are complete. When is the latest start date of the
task T3? What is the slack time of the task T4?

• Given durations (in days): T1 = 15, T2 = 10, T3 = 12, T4 = 25, T5 = 10.
• Forward pass: T1 runs 0–15, T2 runs 15–25, T3 runs 25–37, T4 runs 15–40; T5 starts at max(37, 40) = 40 and finishes on day 50.
• Backward pass: T5 must start by day 40, so the latest finish of T3 is day 40 and its latest start date is day 40 - 12 = day 28.
• T4 has latest start 40 - 25 = 15 and earliest start 15, so its slack time is 0 days; T4 lies on the critical path T1–T4–T5.
61. Why is it necessary for a project manager to
decompose the tasks of a project using work breakdown
structure (WBS)?
• A work breakdown structure (WBS) is a key project deliverable that organizes the team's work into manageable
sections. The Project Management Body of Knowledge (PMBOK) defines the work breakdown structure as a
"deliverable oriented hierarchical decomposition of the work to be executed by the project team." The work
breakdown structure visually defines the scope into manageable chunks that a project team can understand, as
each level of the work breakdown structure provides further definition and detail.

• The work breakdown structure has a number of benefits in addition to defining and organizing
the project work. A project budget can be allocated to the top levels of the work breakdown
structure, and department budgets can be quickly calculated based on the each project's work
breakdown structure. By allocating time and cost estimates to specific sections of the work
breakdown structure, a project schedule and budget can be quickly developed. As the project
executes, specific sections of the work breakdown structure can be tracked to identify project
cost performance and identify issues and problem areas in the project organization.
62. If you are asked to make a choice between
democratic and chief programmer team
organisations, which one would you adopt for your
team? Explain the reasoning behind your answer.
• Chief Programmer Team In this team organization, a senior engineer provides the technical leadership and is
designated as the chief programmer. The chief programmer partitions the task into small activities and
assigns them to the team members. He also verifies and integrates the products developed by different team
members. The chief programmer provides an authority, and this structure is arguably more efficient than the
democratic team for well-understood problems. However, the chief programmer team leads to lower team
morale, since team-members work under the constant supervision of the chief programmer. This also
inhibits their original thinking. The chief programmer team is subject to single point failure since too much
responsibility and authority is assigned to the chief programmer.
• The chief programmer team is probably the most efficient way of completing simple and small projects since
the chief programmer can work out a satisfactory design and ask the programmers to code different
modules of his design solution.
• Democratic Team
The democratic team structure, as the name implies, does not enforce any formal team hierarchy. Typically, a manager provides the administrative leadership, and at different times different members of the group provide technical leadership.
The democratic organization leads to higher morale and job satisfaction. Consequently, it suffers from less manpower turnover. Also, the democratic team structure is appropriate for less understood problems, since a group of engineers can invent better solutions than a single individual, as in a chief programmer team.
63. What do you understand by project risk? How can
risks be effectively identified by a project manager?
How can the risks be managed?
• Project risk is an uncertain event or condition that, if it occurs, has an effect on at least
one project objective. Risk management focuses on identifying and assessing the risks to
the project and managing those risks to minimize the impact on the project

Here are seven risk identification techniques:

• Interviews with key stakeholders
• Brainstorming
• Checklists
• Assumption analysis
• Cause and effect diagrams
• Nominal group technique (NGT)
• Affinity diagrams
64. Suppose you are appointed as the project manager of a
project to develop a commercial word processing software
product providing features comparable to MS-WORD
software, develop the work breakdown structure (WBS).
Explain your answer.
To build a Work Breakdown Structure for your project using Microsoft Word,
follow these 4 steps:
• Start with the key project deliverables.
• Decompose the key deliverables into their detailed components.
• Assign unique WBS codes to each deliverable.
• Create a WBS dictionary which defines each deliverable.
65. . What are the different project parameters that determine the cost
of a project? What are the important factors which make it hard to
accurately estimate the cost of software projects? If you are a project
manager bidding for a product development to a customer, would you
quote the cost estimated using COCOMO as the price in your bid?
Explain your answer. What are the important factors which make it
hard to accurately estimate the cost of software projects?
• Expert judgment uses the experience and knowledge of experts to estimate the
cost of the project. This technique can take into account unique factors specific to
the project. However, it can also be biased.
• Analogous estimating uses historical data from similar projects as a basis for the
cost estimate. The estimate can be adjusted for known differences between the
projects. This type of estimate is usually used in the early phases of a project and
is less accurate than other methods.
• Parametric estimating uses statistical modeling to develop a cost estimate. It
uses historical data of key cost drivers to calculate an estimate for different
parameters such as cost and duration. For example, square footage is used in
some construction projects.
• Bottom-up estimating uses the estimates of individual work packages which are
then summarized or "rolled up" to determine an overall cost estimate for the
project. This type of estimate is generally more accurate than other methods
since it is looking at costs from a more granular perspective.
• Three-point estimates originated with the Program Evaluation and Review Technique (PERT). This method uses three estimates to define an approximate range for an activity's cost: Most Likely (Cm), Optimistic (Co), and Pessimistic (Cp). The cost estimate is calculated using a weighted average: Cost Estimate = (Co + 4Cm + Cp)/6
• Reserve analysis is used to determine how much contingency reserve, if any,
should be allocated to the project. This funding is used to account for cost
uncertainty.
• Cost of Quality (COQ) includes money spent during the project to avoid failures
and money spent during and after the project due to failures. During cost
estimation, assumptions about the COQ can be included in the project cost
estimate.
• Project management estimating software includes cost estimating software
applications, spreadsheets, simulation applications, and statistical software tools.
This type of software is especially useful for looking at cost estimation
alternatives.
• Vendor analysis can be used to estimate what the project should cost by
comparing the bids submitted by multiple vendors.
• Some of the factors that contribute to this uncertainty include the following:
• Experience with Similar Projects: The less experience you have with similar
projects, the greater the uncertainty. If you've managed similar projects, you will
be able to better estimate the costs of the project.
• Planning Horizon: The longer the planning horizon, the greater the uncertainty.
The planning horizon you are considering may be the whole project or just a
certain phase. Either way, you will be able to better estimate costs for the time
periods that are closer to the present.
• Project Duration: The longer the project, the greater the uncertainty. This is
similar to planning horizon in the sense that if a project is of a shorter duration
you are more likely to account for most of the costs.
• People: The quantity of people and their skill will be a huge factor in estimating
their costs. Early in the project, you may not even know the specific people that
will be on the project. That will increase the uncertainty of your cost estimates.
- If you are a project manager bidding for a
product development to a customer, would you
quote the cost estimated using COCOMO as the
price in your bid? Explain your answer.
• Estimation by expert judgement is a common way of estimating the effort required for a project. Unfortunately, this method of estimation does not emphasize re-estimation during the project life cycle, which is an important part of project tracking because it allows the estimates to be improved as the project proceeds. The quality of a cost estimation model is not so much attributed to the initial estimate, but rather the speed at which the estimates converge to the actual cost of the project.
• COCOMO is a popular algorithmic model for cost estimation whose cost factors can be
tailored to the individual development environment, which is important for the
accuracy of the cost estimates. More than one method of cost estimation should be
done so that there is some comparison available for the estimates. This is especially
important for unique projects. Cost estimation must be done more diligently
throughout the project life cycle so that in the future there are fewer surprises and
unforeseen delays in the release of a product.
CHAPTER 10
Question 1-6
3. Distinguish between an error and a failure in the
context of program testing.

• ERROR: An error is a mistake, misconception, or misunderstanding on the part of a software developer. In the category of developer we include software engineers, programmers, analysts, and testers. For example, a developer may misunderstand a design notation, or a programmer might type a variable name incorrectly; this leads to an error. An error is one which is generated because of wrong logic, a faulty loop, or a syntax mistake. An error normally arises in software and leads to a change in the functionality of the program.
• FAILURE: A failure is the inability of a software system or component to perform
its required functions within specified performance requirements. When a defect
reaches the end customer it is called a Failure. During development Failures are
usually observed by testers.
• Testing is the process of identifying defects, where a defect is any variance
between actual and expected results. “A mistake in coding is called Error, error
found by tester is called Defect, defect accepted by development team then it is
called Bug, build does not meet the requirements then it Is Failure.”
• DEFECT: It can be simply defined as a variance between expected and actual.
Defect is an error found AFTER the application goes into production. It commonly
refers to several troubles with the software products, with its external behavior
or with its internal features. In other words Defect is the difference between
expected and actual result in the context of testing. It is the deviation of the
customer requirement.
5. What are driver and stub modules in the context
of integration and unit testing of a software?

• In software testing life cycle, there are numerous components that play a
prominent part in making the process of testing accurate and hassle free. Every
element related to testing strives to improve its quality and helps deliver accurate
and expected results and services that are in compliance with the defined
specifications. Stubs and drivers are two such elements used in software testing
process, which act as a temporary replacement for a module. These are an
integral part of software testing process as well as general software development.
Therefore, to help you understand the significance of stubs and drivers in
software testing, here is elaborated discussion on the same.
• In the field of software testing, the term stubs and drivers refers to the replica of
the modules, which acts as a substitute to the undeveloped or missing module.
The stubs and drives are specifically developed to meet the necessary
requirements of the unavailable modules and are immensely useful in getting
expected results.
• Stubs and drivers are two types of test harness, which is a collection of software and test data configured together in order to test a unit of a program by simulating a variety of conditions while constantly monitoring its outputs and behaviour. Stubs and drivers are used in top-down integration and bottom-up integration testing respectively, and are created mainly for testing purposes. An illustrative sketch follows.
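As a hedged illustration with hypothetical module names, a stub and a driver for unit-testing a compute_interest() module might look like this: the stub stands in for a not-yet-written get_rate() module, and the driver is a throwaway main() that exercises the module under test.

#include <stdio.h>

/* Stub: a canned replacement for the missing get_rate() module,
   returning a fixed, predictable value instead of doing a real lookup. */
double get_rate(int account_type) {
    (void)account_type;
    return 0.05;
}

/* Module under test (hypothetical). */
double compute_interest(double principal, int account_type) {
    return principal * get_rate(account_type);
}

/* Driver: throwaway code that invokes the module under test and
   checks its output against the expected result. */
int main(void) {
    double interest = compute_interest(1000.0, 1);
    if (interest > 49.99 && interest < 50.01)
        printf("compute_interest unit test passed\n");
    else
        printf("compute_interest unit test FAILED (got %f)\n", interest);
    return 0;
}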
Chapter 10
Questions 7 to 14
What is the difference between black-box testing and white-box testing? Give an example of a bug that is detected by the black-box test
suite, but is not detected by the white-box test suite, and vice versa.

BLACK BOX TESTING:
• It is a way of software testing in which the internal structure of the program or the code is hidden and nothing is known about it.
• It is mostly done by software testers.
• No knowledge of implementation is needed.
• It can be referred to as outer or external software testing.
• It is a functional test of the software.
WHITE BOX TESTING:
• It is a way of testing the software in which the tester has knowledge about the internal structure of the code or the program of the software.
• It is mostly done by software developers.
• Knowledge of implementation is required.
• It is the inner or internal software testing.
• It is a structural test of the software.
What is the difference between internal and external documentation? What are the different ways of
providing internal documentation?
Internal documentation:
• Written in a program as comments.
• Comments and remarks made by the programmer in the form of line comments.
• Created within the programming department; shows the design and implementation of the project.
External documentation:
• Written in a place where people who need to use the software can read about how to use the software.
• Things like flow charts, UML diagrams, requirements documents, design documents, etc.
• Created by the user and programmer/system analyst.

• Comments embedded in the source code.
• Use of meaningful variable names.
• Module and function headers.
• Code indentation.
• Code structuring (i.e., code decomposed into modules and functions).
• Use of enumerated types.
• Use of constant identifiers.
• Use of user-defined data types.
What is meant by structural complexity of a program? Define a metric for measuring the structural
complexity of a program. How is structural complexity of a program different from its computational
complexity? How is structural complexity useful in program development?

• McCabe's cyclomatic complexity is a measure of the structural complexity of a program, because it is computed from the code structure (the number of decision and iteration constructs used). Intuitively, McCabe's complexity metric correlates with the difficulty of understanding a program, since one understands a program by understanding the computations carried out along all of its independent paths.
• This is in contrast to the computational complexity, which is based on the execution of the program statements. An illustrative example follows.
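As an illustrative example, not from the original slides, the function below has three decision points, so its McCabe cyclomatic complexity is 3 + 1 = 4, meaning four linearly independent paths to understand and test:

#include <stdio.h>

/* Cyclomatic complexity = number of decision points + 1.
   This function has three decision points (one while and two ifs),
   so its cyclomatic complexity is 4. */
int count_positive_even(const int *a, int n) {
    int count = 0;
    int i = 0;
    while (i < n) {                 /* decision 1 */
        if (a[i] > 0) {             /* decision 2 */
            if (a[i] % 2 == 0)      /* decision 3 */
                count++;
        }
        i++;
    }
    return count;
}

int main(void) {
    int a[] = {2, -4, 3, 8, 7, 10};
    printf("%d\n", count_positive_even(a, 6));   /* prints 3 */
    return 0;
}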
Write a C function for searching an integer value from a large sorted sequence of integer values stored in an array of size 100, using the binary search method.

// An iterative binary search function. It returns the location of x in
// the given array arr[l..r] if present, otherwise -1.
int binarySearch(int arr[], int l, int r, int x) {
    while (l <= r) {
        int m = l + (r - l) / 2;
        // Check if x is present at mid
        if (arr[m] == x)
            return m;
        // If x greater, ignore left half
        if (arr[m] < x)
            l = m + 1;
        // If x is smaller, ignore right half
        else
            r = m - 1;
    }
    return -1;
}
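A short usage sketch for the sorted array of size 100 mentioned in the question; the even-number fill is arbitrary:

#include <stdio.h>

int binarySearch(int arr[], int l, int r, int x);  /* defined above */

int main(void) {
    int arr[100];
    for (int i = 0; i < 100; i++)
        arr[i] = 2 * i;                 /* sorted sequence 0, 2, ..., 198 */
    printf("%d\n", binarySearch(arr, 0, 99, 54));  /* prints 27 */
    printf("%d\n", binarySearch(arr, 0, 99, 55));  /* prints -1 */
    return 0;
}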
What do you understand by positive and negative test cases? Give one example of each.
• A test case is said to be a positive test case if it is designed to test whether the software correctly performs a required functionality. A test case is said to be a negative test case if it is designed to check that the software does not do something that is not required of the system. As one example of each, consider a program to manage user login. A positive test case can be designed to check if the login system validates a user with the correct user name and password. A negative test case can be one that checks that the login functionality does not validate and admit a user with a wrong or bogus login user name or password. A concrete sketch follows.
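The same login example can be written as a pair of concrete test cases; validate_login() below is a hypothetical stand-in, not code from the original assignment:

#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical login checker used only to illustrate test-case polarity. */
int validate_login(const char *user, const char *pass) {
    return strcmp(user, "alice") == 0 && strcmp(pass, "s3cret") == 0;
}

int main(void) {
    /* Positive test case: correct credentials must be accepted. */
    assert(validate_login("alice", "s3cret") == 1);
    /* Negative test case: bogus credentials must be rejected. */
    assert(validate_login("alice", "wrong") == 0);
    printf("both test cases passed\n");
    return 0;
}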
Given a software and its requirements specification document, explain how would you design the system
test suite for the software.
• A system test suite is the set of all tests that have been designed by a tester to test a given program. The set of test cases using which a program is to be tested is designed possibly using several test case design techniques.
• The system test suite is designed based on the SRS document. The two major types of system testing are functionality testing and performance testing. The functionality test cases are designed based on the functional requirements, and the performance test cases are designed to test the compliance of the system with the non-functional requirements documented in the SRS document.
What is a coding standard? Identify the problems that might occur if the engineers of an
organization do not adhere to any coding standard?
• Good software development organizations require their programmers to adhere to some well-
defined and standard style of coding which is called their coding standard.
• A coding standard gives a uniform appearance to the codes written by different engineers.
• It facilitates code understanding and code reuse.
• It promotes good programming practices.
Chapter 10
Questions 15 to 24
What is the difference between a coding standard and a coding guideline?
It is mandatory for the programmers to follow the coding standards. Compliance of their code to coding
standards is verified during code inspection. Any code that does not conform to the coding standards is
rejected during code review and the code is reworked by the concerned programmer. In contrast, coding
guidelines provide some general suggestions regarding the coding style to be followed but leave the actual
implementation of these guidelines to the discretion of the individual developers.

Why are formulation and use of suitable coding standards and guidelines considered important to a
software development organisation?
• A coding standard gives a uniform appearance to the codes written by different engineers.
• It facilitates code understanding and code reuse.
• It promotes good programming practices.

Write down five important coding standards and coding guidelines that you would recommend.
• Standard headers for different modules
• Conventions regarding error return values and exception handling mechanisms
• Representative coding guidelines
• Do not use a coding style that is too clever or too difficult to understand
• Avoid obscure side effects
What do you understand by coding standard? When during the development
process is the compliance with coding standards is checked?
The coding standard is a group of rules to unify the code.
Compliance of their code to coding standards is verified during code inspection
What do you understand by testability of a program?
Testability of a requirement denotes the extent to which it is possible
to determine whether an implementation of the requirement conforms
to it in both functionality and performance.
Between the programs written by two different programmers to essentially the
same programming problem, how can you determine which one is more
testable?
A program is more testable if it can be adequately tested with a smaller number of test cases. Obviously, a less complex program is more testable. The complexity of a program can be measured using several types of metrics, such as the number of decision statements used in the program.
Discuss different types of code reviews. Explain when and how code review
meetings are conducted. Why is code review considered to be a more efficient
way to remove errors from code compared to testing?
After a module has been coded, a code review is usually carried out to ensure that the coding
standards have been followed and to detect errors early. The two main types of code review are code
walkthrough and code inspection, conducted in a meeting of a small team after the module is coded.
Code review is an efficient way of removing errors as compared to testing, because code review
identifies the errors themselves, whereas testing only exposes failures; the additional debugging
effort needed to locate the error from an observed failure is therefore avoided.
Distinguish between software verification and software validation. Can one be
used in place of the other? Justify your answer. In which phase(s) of the iterative
waterfall SDLC are the verification and validation activities performed?
Verification does not require execution of the software, whereas validation requires execution of the
software. Verification is carried out during almost every phase of the development life cycle, whereas
validation is carried out towards the end, on the fully developed software.
It is possible to develop highly reliable software using validation techniques alone; however, this
would cause the development cost to increase drastically. Verification techniques help achieve phase
containment of errors and provide a means to cost-effectively remove bugs, so one cannot fully be
used in place of the other.
What are the activities carried out during testing a software?
Schematically represent these activities. Which one of these activities
takes the maximum effort?
• Test suite design
• Running test cases and checking the results to detect failures
• Locating the errors (debugging)
• Error correction
Debugging (locating the error behind a failure) often turns out to be the most time-consuming of
these activities.
Which one of the following is the strongest structural testing technique —
statement coverage-based testing, branch coverage-based testing, or multiple
condition coverage-based testing? Justify your answer.
In multiple condition (MC) coverage-based testing, test cases are designed to make each component of
a composite conditional expression assume both true and false values. Multiple condition coverage is
therefore the strongest of the three: it subsumes branch coverage (each decision as a whole evaluates
to both true and false), which in turn subsumes statement coverage.
Prove that branch coverage-based testing technique is a stronger
testing technique compared to a statement coverage-based testing
technique.
A test suite that achieves branch coverage executes every branch of every decision, and in doing so
necessarily executes every statement; so branch coverage subsumes statement coverage. The converse
does not hold: for the code "if (x < 0) x = -x;", the single test case x = -5 achieves statement
coverage, but the false outcome of the decision is never exercised, so branch coverage is not achieved.
Which is a stronger testing—data flow testing or path testing? Give the
reasoning behind your answer.
Path testing is the stronger of the two. Among the data flow criteria, the all-uses criterion is stronger
than the all-definitions criterion. An even stronger criterion is the all definition-use-paths criterion,
which requires the coverage of all possible definition-use paths that either are cycle-free or have only
simple cycles (a simple cycle is a path in which only the end node and the start node are the same).
Covering all paths of a program covers every definition-use path as well, so path testing subsumes
the data flow criteria.
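The following toy fragment sketches the definition-use pairs that the data flow criteria target (the
code is illustrative, not from the text):

def compute(x):
    y = 0            # definition d1 of y
    if x > 0:
        y = x * 2    # definition d2 of y
    return y + 1     # use of y

# The all-uses criterion needs test cases covering both definition-use
# paths, e.g., x = -1 (d1 reaches the use) and x = 5 (d2 reaches the use).
# A full path-coverage suite necessarily covers these paths and more,
# which is why path testing subsumes the data flow criteria.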
Briefly highlight the difference between code inspection and code walkthrough.
Compare the relative merits of code inspection and code walkthrough.
The main objective of code walkthrough is to discover the algorithmic and logical errors in the code;
it is an informal technique in which team members hand-simulate the code over selected test cases.
The principal aim of code inspection is to check for the presence of some common types of errors that
usually creep into code due to programmer mistakes and oversights, and to check whether coding
standards have been adhered to; it is a more formal technique, driven by a checklist of classical errors.
Walkthroughs are cheaper to organise and good at catching design-level slips, while inspections are
more systematic at catching the common coding errors.
Chapter 10
Questions 25 to 42
• What is meant by a code walkthrough? What are some of the
important types of errors checked during code walkthrough?
Give one example of each of these types of errors
• Code walkthrough is one of the code review techniques. It is an informal code analysis technique in
which each member of the review team selects some test cases and simulates execution of the code
by hand (i.e., traces the execution through different statements and functions of the code).
• The main objective of code walkthrough is to discover the algorithmic and logical errors in the code.
For example, an algorithmic error could be a loop that terminates one iteration too early, and a
logical error could be the use of an incorrect relational operator in a condition.
• As a guideline, the team performing a code walkthrough should be neither too big nor too small;
ideally, it should consist of three to seven members.
Suppose two programmers are assigned the same programming problem and they develop their
solutions independently. Explain how you can compare their programs with respect to:
(a) path testing effort, (b) understanding difficulty, (c) number of latent bugs, (d) reliability.
• Estimation of testing effort: Cyclomatic complexity is a measure of the maximum number of basis
paths, so it indicates the minimum number of test cases required to achieve path coverage.
Therefore, the testing effort and the time required to test a piece of code satisfactorily are
proportional to the cyclomatic complexity of the code. To keep testing effort manageable, it is
necessary to restrict the cyclomatic complexity of every function to seven.
• Estimation of program reliability and latent bugs: Experimental studies indicate there exists a clear
relationship between McCabe's metric and the number of errors latent in the code after testing. This
relationship exists possibly due to the correlation of cyclomatic complexity with the structural
complexity of code. Usually, the larger the structural complexity, the more difficult it is to test and
debug the code.
• Estimation of structural complexity (understanding difficulty): McCabe's cyclomatic complexity is a
measure of the structural complexity of a program, since it is computed from the code structure
(the number of decision and iteration constructs used). The two programs can thus be compared on
all four counts by comparing their cyclomatic complexities.
Usually large software products are tested at three different testing levels, i.e., unit
testing, integration testing, and system testing. What would be the disadvantage of
performing a thorough testing only after the system has been completely developed, e.g.,
detecting all the defects of the product during system testing?
• A software product is normally tested in three levels or stages: unit testing, integration testing, and
system testing.
• Unit testing is referred to as testing in the small, whereas integration and system testing are
referred to as testing in the large.
• After testing all the units individually, the units are slowly integrated and tested after each step of
integration (integration testing). Finally, the fully integrated system is tested (system testing).
• If thorough testing were deferred until the complete system is available, two problems would arise.
First, while testing a module, other modules with which this module needs to interface may not be
ready. Moreover, it is always a good idea to first test a module in isolation before integration,
because it makes debugging easier: if a failure is detected only when an integrated set of modules is
being tested, it is difficult to determine which module exactly has the error.
What do you understand by system testing? What are the different
kinds of system testing that are usually performed on large
software products?
• System testing is the testing of the fully integrated system to validate it against the requirements
specified in the SRS document.
• The aim of program testing is to help identify all the defects in a program. However, in practice,
even after satisfactory completion of the testing phase, it is not possible to guarantee that a program
is error free.
• Integration and system testing are referred to as testing in the large: after testing all the units
individually, the units are slowly integrated and tested after each step of integration (integration
testing), and finally the fully integrated system is tested (system testing).
• The kinds of system testing usually performed on large products are functionality testing and
performance testing (including stress, volume, configuration, compatibility, regression, recovery,
maintenance, documentation, and usability testing).
Is system testing of object-oriented programs any different from
that for the procedural programs? Explain your answer.
• Since system test cases are designed solely from the SRS document using black-box techniques,
system testing of an object-oriented program is essentially similar to the system testing of a
procedural program.
• It is at the unit and integration levels that satisfactory testing of object-oriented programs becomes
much more difficult and costly compared to similar procedural programs, because various
object-oriented features introduce additional complications and scope for new types of bugs that are
not present in procedural programs.
Is integration testing of object-oriented programs any different
from that for the procedural programs? Explain your answer.
• Yes. Satisfactory integration testing of object-oriented programs is much more difficult and requires
much more cost and effort as compared to testing similar procedural programs. The main reason is
that various object-oriented features (such as inheritance, polymorphism, and dynamic binding)
introduce additional complications and scope for new types of bugs that are not present in
procedural programs.
Using suitable examples, explain how test cases can be designed for
an object-oriented program from its class diagram.
Class diagram-based testing:
Testing derived classes: all derived classes of the base class have to be instantiated and tested. In
addition to testing the new methods defined in the derived class, the inherited methods must be
retested. For example (the class names are illustrative), if LibraryMember derives from Member and
overrides issueBook(), then both issueBook() and the methods inherited from Member must be tested
on a LibraryMember instance.
Using suitable examples, explain how test cases can be designed for
an object-oriented program from its sequence diagrams.
Sequence diagram-based testing:
Method coverage: all methods depicted in the sequence diagrams are covered.
Message path coverage: all message paths that can be constructed from the sequence diagrams are
covered. For example, if a sequence diagram for an issue-book use case shows alternative message
paths for a book that is available and one that is reserved, a test case is designed for each path.
Distinguish between alpha, beta, and acceptance testing. How are the test
cases designed for these tests? Are the test cases for the three types of
tests necessarily identical? Explain your answer.
• System tests are designed to validate a fully developed system to assure that it meets
its requirements. The test cases are therefore designed solely based on the SRS document.
• Alpha Testing: Alpha testing refers to the system testing carried out by the test team within the
developing organisation.
• Beta Testing: Beta testing is the system testing performed by a select group of friendly customers.
• Acceptance Testing: Acceptance testing is the system testing performed by the customer to
determine whether to accept the delivery of the system.
• Since all three are system tests derived from the same SRS document, the test cases can largely
overlap, but they need not be identical: beta and acceptance tests are designed by the customers
from their own usage perspective.
Suppose a developed software has successfully passed all the
three levels of testing, i.e., unit testing, integration testing, and
system testing. Can we claim that the software is defect free?
Justify your answer.
• No. The aim of program testing is to help identify all the defects in a program. However, in practice,
even after satisfactory completion of the testing phase, it is not possible to guarantee that a program
is error free: testing can reveal the presence of defects, but never prove their absence.
Distinguish among a test case, a test suite, a test scenario, and a test script
• A test case is a triplet [I, S, R], where I is the data input to the program under test, S is the state of
the program at which the data is to be input, and R is the result expected to be produced by the
program.
• A test suite is the set of all test cases that have been designed by a tester to test a given program.
• A test scenario is an abstract test case in the sense that it only identifies the aspects of the program
that are to be tested without identifying the input, state, or output.
• A test script is an encoding of a test case as a short program; test scripts are developed for
automated execution of the test cases.
• A test case is said to be a positive test case if it is designed to test whether the software correctly
performs a required functionality. A test case is said to be a negative test case if it is designed to
check that the software does not carry out something that is not required of the system.
Usability of a software product is tested during which type of testing:
unit, integration, or system testing? How is usability tested?
Usability is tested during system testing. Usability testing concerns checking the user interface to see
if it meets all user requirements concerning the user interface. During usability testing, the display
screens, messages, report formats, and other aspects relating to the user interface requirements are
tested; a GUI being just functionally correct is not enough.
Distinguish between the static and dynamic analysis of a program. Explain at least one metric
that a static analysis tool reports and at least one metric that a dynamic analysis tool reports.
How are these metrics useful?
• A program analysis tool is an automated tool that takes either the source code or the executable
code of a program as input and produces reports regarding several important characteristics of the
program, such as its size, complexity, adequacy of commenting, adherence to programming
standards, adequacy of testing, etc.
• Static program analysis tools assess and compute various characteristics of a program without
executing it. Typically, static analysis tools analyse the source code to compute certain metrics
characterising the source code (such as size and cyclomatic complexity) and also report certain
analytical conclusions. These metrics help judge how difficult the code will be to test and maintain.
• Dynamic program analysis tools evaluate several program characteristics based on an analysis of
the run-time behaviour of a program. These tools usually record and analyse the actual behaviour of
a program while it is being executed; a dynamic program analysis tool (also called a dynamic
analyser) usually collects execution trace information by instrumenting the code and typically
reports the structural coverage achieved, which indicates the adequacy of testing.
• A major practical limitation of the static analysis tools lies in their inability to analyse run-time
information such as dynamic memory references using pointer variables and pointer arithmetic.
What are the important results that are usually reported by a static analysis tool and dynamic
analysis tool when applied to a program under development? How are these results useful?
• Static program analysis tools assess and compute various characteristics of a program without
executing it. Typically, static analysis tools analyse the source code to compute certain metrics
characterising the source code (such as size and cyclomatic complexity) and also report certain
analytical conclusions.
• Dynamic program analysis tools evaluate several program characteristics based on an analysis of
the run-time behaviour of a program, usually by instrumenting the code and collecting execution
trace information while the program runs.
• Static analysis tools often summarise the results of analysis of every function in a polar chart known
as a Kiviat chart. A Kiviat chart typically shows the analysed values for cyclomatic complexity,
number of source lines, percentage of comment lines, Halstead's metrics, etc. These values help
identify functions that are overly complex or poorly documented.
• The dynamic analysis results are reported in the form of a histogram or pie chart describing the
structural coverage achieved for different modules of the program, which indicates how adequately
each module has been tested.
What do you understand by automatic program analysis? Give a broad
classification of the different types of program analysis tools used during program
development. What are the different types of information produced by each type of
tool?
• Automatic program analysis refers to the use of an automated tool that takes either the source code
or the executable code of a program as input and produces reports regarding several important
characteristics of the program, such as its size, complexity, adequacy of commenting, adherence to
programming standards, adequacy of testing, etc.
• Program analysis tools are broadly classified into static and dynamic analysis tools.
• Static program analysis tools assess and compute various characteristics of a program without
executing it. Typically, they analyse the source code to compute certain metrics characterising it
(such as size and cyclomatic complexity) and also report certain analytical conclusions.
• Dynamic program analysis tools evaluate several program characteristics based on an analysis of
the run-time behaviour of the program. They usually record and analyse the actual behaviour of a
program while it is being executed; a dynamic analyser usually collects execution trace information
by instrumenting the code and reports the structural coverage achieved.
Design the black-box test suite for a function that checks whether a
character string (of up to twenty-five characters in length) is a
palindrome.
The equivalence classes are the leaf-level classes shown in the figure: palindromes, non-palindromes,
and invalid inputs (strings longer than twenty-five characters). Selecting one representative value
from each equivalence class, we have the required test suite: {"aba", "abc", a string of twenty-six
characters}.
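A minimal sketch of the function under test and the three representative test cases (the function
name and the error behaviour for invalid inputs are assumed):

def is_palindrome(s):
    # Inputs longer than twenty-five characters are invalid.
    if len(s) > 25:
        raise ValueError("string exceeds 25 characters")
    return s == s[::-1]

assert is_palindrome("aba") is True     # palindrome class
assert is_palindrome("abc") is False    # non-palindrome class
try:
    is_palindrome("a" * 26)             # invalid-input class
except ValueError:
    pass                                # expected: input rejected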
Design the black-box test suite for a function that takes the name of a
book as input and searches a file containing the names of the books
available in the Library and displays the details of the book if the book is
available in the library otherwise displays the message “book not
available”.
A book can be searched in the library catalog by inputting its name. If the book is available in the
library, then the details of the book are displayed; if the book is not listed in the catalog, the message
"book not available" is displayed. The equivalence classes therefore are: (i) a valid book name that is
present in the catalog file, (ii) a valid book name that is not present in the catalog file, and (iii)
invalid input (e.g., an empty string). Selecting one representative input from each class gives the
required black-box test suite.
Chapter 10
Question 43 - 59
• 43 Why is it important to properly document a software?
What are the different ways of documenting a software
product?
• For a programmer, reliable documentation is always a must. The presence of documentation helps
keep track of all aspects of an application and improves the quality of the software product. Its main
focuses are development, maintenance, and knowledge transfer to other developers. Successful
documentation will make information easily accessible, provide a limited number of user entry
points, help new users learn quickly, simplify the product, and help cut support costs.
• Documentation is usually focused on the following components that make up an application: server
environments, business rules, databases/files, troubleshooting, application installation, and code
deployment.
• A software product can be documented in two broad ways: internal documentation (comments and
self-documenting code within the source files) and external documentation (user manuals, the SRS
document, design documents, and test documents).
• 44 What do you understand by the clean room strategy?
• The clean room technique is a process in which a new
product is developed by reverse engineering an existing
product, and then the new product is designed in such a
way that patent or copyright infringement is avoided. The
clean room technique is also known as clean room design.
(Sometimes the words "clean room" are merged into the
single word, "cleanroom.") Sometimes this process is called
the Chinese wall method, because the intent is to place a
demonstrable intellectual barrier between the reverse
engineering process and the development of the new
product.
• 45 What is Cyclomatic Complexity?
• Cyclomatic complexity is a source code complexity measurement that correlates with the number of
coding errors. It is calculated by developing a control flow graph of the code and measures the
number of linearly independent paths through a program module. The lower a program's cyclomatic
complexity, the lower the risk in modifying it and the easier it is to understand. It can be computed
using the formula below:
• Cyclomatic complexity M = E - N + 2*P
• where,
• E = number of edges in the flow graph,
• N = number of nodes in the flow graph,
• P = number of connected components (P = 1 for a single program or function).
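A small sketch of the computation on a control flow graph given as edge and node counts (the
encoding is illustrative):

def cyclomatic_complexity(num_edges, num_nodes, num_components=1):
    # M = E - N + 2*P
    return num_edges - num_nodes + 2 * num_components

# CFG of a function with one if-else: nodes = {decision, then, else, join},
# edges = {decision->then, decision->else, then->join, else->join}.
print(cyclomatic_complexity(num_edges=4, num_nodes=4))   # 4 - 4 + 2 = 2

This agrees with the decision-counting method given later in the text: one decision statement gives
a complexity of 1 + 1 = 2.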
46. What are the limitations of the error seeding method?
These are some disadvantages:
First, the selectivity could be affected; a random process may fail to throw up a particular kind of
error, so the process could have 'blind spots' and the seeding may not be comprehensive. Second,
random seeding may change the original program in a way that does not constitute an error, even
though this situation has a very low probability of happening. Finally, unrepresentative results may
be obtained if many errors of one particular kind are generated.
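Despite these limitations, the estimate itself is simple arithmetic. If S errors are seeded and testing
detects s of the seeded errors and n genuine (unseeded) errors, the total number of genuine errors is
estimated as n * S / s, assuming seeded and genuine errors are detected at the same rate. A toy
calculation:

seeded = 100        # errors deliberately introduced
seeded_found = 80   # seeded errors detected by testing
real_found = 40     # genuine errors detected by the same testing

# Same detection rate assumed for seeded and genuine errors.
estimated_total = real_found * seeded / seeded_found   # 50.0
remaining = estimated_total - real_found               # about 10 latent errors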
• 47 What is Stress Testing?
• Stress Testing is defined as a type of software testing that verifies the stability and reliability of the
system. This test mainly determines the robustness and error handling of the system under
extremely heavy load conditions.
• It even tests beyond the normal operating point and
evaluates how the system works under those extreme
conditions. Stress Testing is done to make sure that the
system would not crash under crunch situations.
48. What do you understand by unit testing?
UNIT TESTING is a level of software testing where individual units/components of a software are
tested. The purpose is to validate that each unit of the software performs as designed. A unit is the
smallest testable part of any software; it usually has one or a few inputs and usually a single output.
• 49 What do you understand by the term integration
testing? Which types of defects are uncovered during
integration testing? What are the different types of
integration testing methods that can be used to carry
out integration testing of a large software product?
Compare the merits and demerits of these different
integration testing strategies.
• INTEGRATION TESTING is a level of software testing
where individual units are combined and tested as a
group. The purpose of this level of testing is to expose
faults in the interaction between integrated units. Test
drivers and test stubs are used to assist in Integration
Testing.
• Definition by ISTQB
• integration testing: Testing performed to expose defects in the
interfaces and in the
interactions between integrated components or systems. See
also component integration
testing, system integration testing.
• component integration testing: Testing performed to expose
defects in the interfaces and
interaction between integrated components.
• system integration testing: Testing the integration of systems
and packages; testing
interfaces to external organizations (e.g. Electronic Data
Interchange, Internet).
• Analogy
• During the process of manufacturing a ballpoint pen, the
cap, the body, the tail and clip, the ink cartridge and the
ballpoint are produced separately and unit tested
separately. When two or more units are ready, they are
assembled and Integration Testing is performed. For
example, whether the cap fits into the body or not.
• Method
• Any of Black Box Testing, White Box Testing and Gray Box
Testing methods can be used. Normally, the method
depends on your definition of ‘unit’.
• Tasks
• Integration Test Plan
• Prepare
• Review
• Rework
• Baseline
• Integration Test Cases/Scripts
• Prepare
• Review
• Rework
• Baseline
• Integration Test
• Perform
• Approaches
• Big Bang is an approach to Integration Testing where all or most of the
units are combined together and tested at one go. This approach is
taken when the testing team receives the entire software in a bundle.
So what is the difference between Big Bang Integration Testing and
System Testing? Well, the former tests only the interactions between
the units while the latter tests the entire system.
• Top Down is an approach to Integration Testing where top-level units are
tested first and lower level units are tested step by step after that. This
approach is taken when top-down development approach is followed.
Test Stubs are needed to simulate lower level units which may not be
available during the initial phases.
• Bottom Up is an approach to Integration Testing where bottom level
units are tested first and upper-level units step by step after that. This
approach is taken when bottom-up development approach is followed.
Test Drivers are needed to simulate higher level units which may not be
available during the initial phases.
• Sandwich/Hybrid is an approach to Integration Testing which is a
combination of Top Down and Bottom Up approaches.
• 50 Discuss how you would perform system testing of a
software that implements a bounded queue of positive
integral elements. Assume that the queue supports only
the functions insert an element, delete an element, and
find an element.
• SYSTEM TESTING is a level of software testing where a
complete and integrated software is tested. The purpose of
this test is to evaluate the system’s compliance with the
specified requirements.
• Definition by ISTQB
• system testing: The process of testing an integrated system to
verify that it meets specified requirements.
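For the bounded queue itself, a minimal black-box system test sketch could exercise each operation
at its boundaries; the BoundedQueue class below is a hypothetical stand-in for the system under test:

class BoundedQueue:
    # Minimal reference behaviour assumed for illustration only.
    def __init__(self, capacity):
        self.capacity, self.items = capacity, []
    def insert(self, x):
        if not isinstance(x, int) or x <= 0 or len(self.items) >= self.capacity:
            return False        # only positive integers, up to capacity
        self.items.append(x)
        return True
    def delete(self):
        return self.items.pop(0) if self.items else None
    def find(self, x):
        return x in self.items

q = BoundedQueue(capacity=3)
assert q.insert(5) and q.insert(7) and q.insert(9)
assert q.insert(11) is False     # inserting into a full queue must fail
assert q.insert(-2) is False     # non-positive elements must be rejected
assert q.find(7) and not q.find(4)
assert q.delete() == 5           # FIFO order preserved
q.delete(); q.delete()
assert q.delete() is None        # deleting from an empty queue must fail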
• Design a data structure that supports the following operations in
Θ(1) time.
• insert(x): Inserts an item x to the data structure if not already
present.
• remove(x): Removes an item x from the data structure if present.
• search(x): Searches an item x in the data structure.
• getRandom(): Returns a random element from current set of elements
• insert(x)
1) Check if x is already present by doing a hash map lookup.
2) If not present, then insert it at the end of the array.
3) Also add x to the hash map, with x as the key and the last array index as the value.
• remove(x)
1) Check if x is present by doing a hash map lookup.
2) If present, find its index and remove it from the hash map.
3) Swap the last element with this element in the array and remove the last element
(swapping is done because the last element can be removed in O(1) time).
4) Update the index of the moved element in the hash map.
• getRandom()
1) Generate a random number from 0 to last index.
2) Return the array element at the randomly generated
index.
• search(x)
Do a lookup for x in hash map.
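A compact sketch of the structure described above (the class name is illustrative):

import random

class RandomizedSet:
    def __init__(self):
        self.arr = []      # stores the elements
        self.index = {}    # maps element -> its position in arr

    def insert(self, x):
        if x not in self.index:           # O(1) hash lookup
            self.index[x] = len(self.arr)
            self.arr.append(x)            # O(1) append at the end

    def remove(self, x):
        if x in self.index:
            i = self.index.pop(x)
            last = self.arr.pop()         # O(1) removal from the end
            if i < len(self.arr):         # swap the last element into the hole
                self.arr[i] = last
                self.index[last] = i      # update its index in the hash map

    def search(self, x):
        return x in self.index            # O(1)

    def get_random(self):
        return random.choice(self.arr)    # O(1) random index into the array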
51. What do you understand by side effects of a function call? Discuss with examples.
Functional programming is based on the simple premise that your functions should not have side
effects; they are considered evil in this paradigm. If a function has side effects we call it a procedure,
so functions do not have side effects. We consider that a function has a side effect if it modifies a
mutable data structure or variable, uses IO, throws an exception, or halts with an error; all of these
things are considered side effects. The reason why side effects are bad is that, with them, a function
can be unpredictable depending on the state of the system; when a function has no side effects we
can execute it anytime, and it will always return the same result, given the same input.
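A minimal illustration of the distinction:

total = 0

def add_with_side_effect(x):
    global total
    total += x       # mutates external state: a side effect
    return total     # result depends on earlier calls, not just on x

def add_pure(a, b):
    return a + b     # no side effects: same inputs, same output, always

print(add_with_side_effect(5))   # 5
print(add_with_side_effect(5))   # 10: same input, different result
print(add_pure(5, 5))            # always 10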
52. What do you mean by regression testing? When is
regression testing carried out? How are regression test cases
designed?
• Regression Testing is defined as a type of software testing carried out to confirm that a recent
program or code change has not adversely affected existing features.
• Need for Regression Testing
• Regression testing is required when there is a change in requirements and the code is modified
according to the requirement, when a new feature is added, or when a defect is fixed.
• Selecting test cases for regression testing
• It was found from industry data that a good number of the defects reported by customers were due
to last-minute bug fixes creating side effects.
Effective Regression Tests can be done by selecting the following test cases -
• Test cases which have frequent defects
• Functionalities which are more visible to the users
• Test cases which verify core features of the product
• Test cases of Functionalities which has undergone more and recent changes
• 53 Do you agree with the following statement
—“System testing can be considered a pure black-
box test.” Justify your answer.
• BLACK BOX TESTING, also known as Behavioral
Testing, is a software testing method in which the
internal structure/design/implementation of the
item being tested is not known to the tester. These
tests can be functional or non-functional, though
usually functional.
• This method is named so because the software program, in
the eyes of the tester, is like a black box; inside which one
cannot see. This method attempts to find errors in the
following categories:
• Incorrect or missing functions
• Interface errors
• Errors in data structures or external database access
• Behavior or performance errors
• Initialization and termination errors
• Definition by ISTQB
• black box testing: Testing, either functional or non-
functional, without reference to the internal structure of
the component or system.
• black box test design technique: Procedure to derive
and/or select test cases based on an analysis of the
specification, either functional or non-functional, of a
component or system without reference to its internal
structure.
• Yes, the statement is largely justified: system test cases are designed solely from the SRS document,
without reference to the internal structure of the code, so system testing can be considered a pure
black-box test. (The usability and performance tests carried out during system testing are likewise
black-box in nature.)
• 54 What do you understand by big-bang integration
testing? How is big-bang integration testing performed?
What are the advantages and disadvantages of the big-
bang integration testing strategy? Describe at least one
situation where big-bang integration testing is desirable.
Big-bang integration testing is an integration testing strategy wherein all units are linked at once,
resulting in a complete system. In this type of integration testing, all the components and modules of
the software are integrated simultaneously, after which everything is tested as a whole. During
big-bang integration testing, most of the developed modules are coupled together to form a complete
software system or a major part of the system, which is then used for integration testing. This
approach can save testers time and effort, but only for small systems; it is desirable, for example,
when the product is small and its modules are tightly coupled, so that integrating and testing them
step by step is impractical.
• Benefits:
• Big bang integration testing is used to test the complete system.
• The amount of planning required for this type of testing is
almost negligible.
• All the modules are completed before the inception of
integration testing.
• It does not require assistance from scaffolding components such as stubs and drivers, on which
other integration strategies depend.
• Big-bang testing is cost effective for small systems.
• There is no need for intermediate builds and the effort they require.
• Drawbacks:
• In Big bang integration testing, it is difficult to trace the cause of
failures as the modules are integrated late.
• This approach is quite challenging and risky, as all the modules and
components are integrated together in a single step.
• If any bug is found, it becomes difficult to detach all the modules in order to find out its root cause.
• Defects present at the interface of components are identified at a
later stage, as all the components are integrated in one shot.
• Since all the modules are tested together chances of failure
increases.
• There is a high probability of missing some crucial defects, errors
and issues, which might pop up in the production environment.
• It is difficult and tough to cover all the cases for integration testing
without missing even a single scenario.
• Isolating any defect or bug during the testing process is difficult.
• If the test cases and their results are not recorded properly, it can
complicate the integration testing and prevent developers and
testers from achieving their desired goals.
• 55 What is the relationship between cyclomatic complexity
and program comprehensibility? Can you justify why such
an apparent relationship exists?
• Cyclomatic complexity and program comprehensibility are closely related. Cyclomatic complexity is
a software metric that measures the number of linearly independent paths through a program
module; presented by Thomas McCabe in 1976, it gauges the amount of branching in the code.
• The more independent paths a module has, the more distinct cases a reader must keep in mind
while reading it, so the higher the cyclomatic complexity, the harder the program is to comprehend
(and also to test and debug).
• The metric also helps engineers plan the unit tests they have to write: developers using a cyclomatic
complexity tool can ensure that every one of the independent paths has been tested at least once,
which is a great comfort for the developers and their respective managers.
56. Describe the following
white-box testing strategies
• Statement coverage is a white box testing technique, which involves the execution of all
the statements at least once in the source code. It is a metric, which is used to calculate
and measure the number of statements in the source code which have been executed.
• Branch coverage is a testing method which aims to ensure that each one of the possible branches
from each decision point is executed at least once, thereby ensuring that all reachable code is
executed; that is, every branch is taken each way, true and false.
• Condition coverage. With Condition coverage the possible outcomes of (“true” or “false”)
for each condition are tested at least once. This means that each individual condition is
one time true and false. In other words we cover all conditions, hence condition
coverage.
• Path coverage refers to designing test cases such that all linearly independent paths in
the program are executed at least once. A linearly independent path can be defined in
terms of what's called a control flow graph of an application.
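A small example showing why branch coverage subsumes statement coverage but not the reverse:

def absolute(x):
    if x < 0:
        x = -x
    return x

# Statement coverage: the single test x = -3 executes every statement.
# Branch coverage additionally requires x = 3, so that the decision
# x < 0 also evaluates to false (the branch that skips x = -x).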
57. What is the selection sort function? Also draw the flowchart for the same.
Selection Sort
The selection sort algorithm sorts an array by repeatedly finding the minimum element (considering
ascending order) from the unsorted part and putting it at the beginning. The algorithm maintains two
subarrays in a given array: the subarray that is already sorted, and the remaining subarray that is
unsorted. (The flowchart is an outer loop over array positions containing an inner loop that finds the
minimum of the unsorted part, followed by a swap; a code sketch is given below.)
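A sketch of selection sort in code (in place, ascending order):

def selection_sort(a):
    n = len(a)
    for i in range(n - 1):
        # Find the index of the minimum element in the unsorted part a[i:].
        min_idx = i
        for j in range(i + 1, n):
            if a[j] < a[min_idx]:
                min_idx = j
        # Move it to the boundary between the sorted and unsorted parts.
        a[i], a[min_idx] = a[min_idx], a[i]
    return a

print(selection_sort([64, 25, 12, 22, 11]))   # [11, 12, 22, 25, 64]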
58. Discuss cyclomatic
complexity for a program.
• Cyclomatic Complexity
• Cyclomatic complexity of a code section is the quantitative measure of the number of linearly
independent paths in it. It is a software metric used to indicate the complexity of a program. It is
computed using the control flow graph of the program: the nodes in the graph represent the smallest
groups of commands of a program, and a directed edge connects two nodes if the second command
might immediately follow the first.
• cyclomatic complexity M would be defined as,
• M = E – N + 2P
• where,
E = the number of edges in the control flow graph
N = the number of nodes in the control flow graph
P = the number of connected components
How will you determine the minimum number of test cases needed for path coverage?
• The cyclomatic complexity M is an upper bound for the number of test cases required to achieve
complete branch coverage, and a lower bound for the number of paths through the control flow
graph (CFG). Assuming every test case takes only one path, the number of cases needed to achieve
complete path coverage equals the number of paths in the graph that can actually be taken. Some
paths might be infeasible, so even though the number of paths through the CFG is an upper bound on
the number of test cases needed for path coverage, the number of feasible paths may sometimes be
less than that bound.
It may be concluded that:
Number of test cases required to achieve branch coverage <= cyclomatic complexity <= number of
test cases required to achieve path coverage.
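A small example of these bounds: a function with two sequential decisions has cyclomatic complexity
3, needs only two test cases for branch coverage, but has four distinct paths:

def f(x, y):
    if x > 0:        # decision 1
        x = x - 1
    if y > 0:        # decision 2
        y = y - 1
    return x + y

# Cyclomatic complexity = 2 decisions + 1 = 3.
# Branch coverage: two tests suffice, e.g. (1, 1) and (0, 0).
# Path coverage: all four decision combinations are needed:
# (1, 1), (1, 0), (0, 1), (0, 0).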
CHAPTER 10
Questions 60 to 76
Q.60.What does the Fog index signify? How is the Fog
index useful in producing good software documentation?
• Gunning’s fog index
Gunning’s fog index (developed by Robert Gunning in 1952) is a metric that has
been designed to measure the readability of a document. The computed metric
value (fog index) of a document indicates the number of years of formal education
that a person should have, in order to be able to comfortably understand that
document. The Gunning's fog index of a document D can be computed as:
Fog(D) = 0.4 x ( (total number of words / total number of sentences) + percentage of words with
three or more syllables )
Observe that the fog index is computed as the sum of two different factors. The first factor computes
the average number of words per sentence (total number of words in the document divided by the
total number of sentences); this factor accounts for the common observation that long sentences are
difficult to understand. The second factor measures the percentage of complex words (words with
three or more syllables) in the document.
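A worked example of the computation: a document containing 100 words in 5 sentences, 10 of the
words having three or more syllables, has fog index 0.4 x (100/5 + 100 x 10/100) = 0.4 x (20 + 10)
= 12, i.e., roughly twelve years of formal education are needed to read it comfortably.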
61. Identify the types of defects that you would be able to
detect during the following: (a) Code inspection (b) Code
walkthrough
Following is a list of some classical programming errors which can be checked during code inspection:
• Use of un-initialized variables.
• Jumps into loops.
• Non-terminating loops.
• Incompatible assignments.
• Array indices out of bounds.
• Improper storage allocation and de-allocation.
• Mismatch between actual and formal parameter in procedure calls.
• Use of incorrect logical operators or incorrect precedence among operators.
• Improper modification of loop variables.
• Comparison of equality of floating point values.
• Dangling reference caused when the referenced memory has not been allocated.
During code walkthrough, on the other hand, the types of defects detected are mainly the algorithmic
and logical errors found by hand-simulating the code over selected test cases.
62. Design the black-box test suite for a function named quadratic-
solver. The quadratic-solver function accepts three floating point
numbers (a, b, c) representing a quadratic equation of the form ax^2 +
bx + c = 0. It computes and displays the solution.
Equivalence classes and representative inputs:
• Invalid input (non-numeric): {"a,b,c"}
• a = 0 (not a quadratic equation): {0,2,1}
• Valid input, case 1: two distinct real roots: {-1,2,1}
• Case 2: two equal real roots: {1,2,1}
• Case 3: imaginary (complex) roots: {1,1,1}
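A sketch of the function under test with one test input per class (the function name and the error
behaviour are assumed):

import cmath

def quadratic_solver(a, b, c):
    if a == 0:
        raise ValueError("not a quadratic equation")
    d = b * b - 4 * a * c                    # discriminant decides the class
    r1 = (-b + cmath.sqrt(d)) / (2 * a)
    r2 = (-b - cmath.sqrt(d)) / (2 * a)
    return r1, r2

quadratic_solver(-1, 2, 1)   # d = 8 > 0: two distinct real roots
quadratic_solver(1, 2, 1)    # d = 0: two equal real roots
quadratic_solver(1, 1, 1)    # d = -3 < 0: complex (imaginary) roots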
63. Design the black-box test suite for a function that accepts four pairs
of floating point numbers representing four co-ordinate points. These
four co-ordinate points represent the centres of two circles and a point
on the circumference of each of the two circles. The function prints
whether the two circles are intersecting, one is contained within the
other, or are disjoint.
Input format: {centre 1, peripheral point 1, centre 2, peripheral point 2}
Invalid input: {("a,b"), ("a,b"), ("a,b"), ("a,b")}
Valid inputs:
• One circle contained within the other (concentric): {(0,0),(0,3),(0,0),(0,5)}
• Disjoint circles: {(0,0),(2,0),(5,0),(3,0)}
• Intersecting circles: {(0,0),(3,0),(5,0),(2,0)}
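A sketch of the decision logic that these classes exercise (the function name is assumed):

import math

def classify_circles(c1, p1, c2, p2):
    r1 = math.dist(c1, p1)        # radius = distance from centre to peripheral point
    r2 = math.dist(c2, p2)
    d = math.dist(c1, c2)         # distance between the two centres
    if d > r1 + r2:
        return "disjoint"
    if d < abs(r1 - r2):
        return "one contained within the other"
    return "intersecting"

print(classify_circles((0, 0), (0, 3), (0, 0), (0, 5)))   # contained
print(classify_circles((0, 0), (2, 0), (5, 0), (3, 0)))   # disjoint
print(classify_circles((0, 0), (3, 0), (5, 0), (2, 0)))   # intersecting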
• 64. Design black-box test suites for a function called find-intersection.
The function find- intersection takes four real numbers m1, c1, m2, c2
as its arguments representing two straight lines y = m1x + c1 and y =
m2x + c2. It determines the points of intersection of the two lines.
Depending on the input values to the function, it displays any one of
the following messages:
• single point of intersection :- {(1,2,3,4)}
• overlapping lines—infinite points of intersection:-{(1,2,1,2)}
• parallel lines—no points of intersection :- {(1,2,1,4)}
• invalid input values :- {“a,b,c,d”}
65. Design black-box test suite for the following program. The program
accepts two pairs of co-ordinates (x1,y1),(x2,y2), (x3,y3), (x4,y4). The
first two points (x1,y1) and (x2,y2) represent the lower left and the
upper right points of the first rectangle. The second two points (x3,y3)
and (x4,y4) represent the lower left and the upper right points of the
second rectangle. It is assumed that the length and width of the
rectangle are parallel to either the x-axis or y-axis. The program
computes the points of intersection of the two rectangles and prints
their points of intersection.
• Invalid input (lower-left point not below and to the left of the upper-right point): {(2,3),(1,2),(1,3),(2,4)}
• Two rectangles intersecting: {(0,0),(2,4),(2,0),(4,4)}
• Non-intersecting, one rectangle fully inside the other: {(0,0),(4,4),(1,1),(2,2)}
• Non-intersecting, one rectangle wholly outside the other: {(0,0),(2,4),(3,0),(5,4)}
• 66. Design black-box test suite for a program that accepts up to 3
simultaneous linear equations in up to 3 independent variables and
displays the solution.
• Invalid input: {any string}
• No solution, the planes being parallel: {(1,2,3),(2,2,5),(3,3,5)}
• Unique solution, the planes intersecting at a point: {(1,2,3),(1,3,4),(1,4,5)}
• 67. Design the black-box test suite for the following Library
Automation Software. The Library Automation Software accepts a
string representing the name of a book. It checks the library catalog,
and displays whether the book is listed in the catalog or not. If the
book is listed in the catalog, it displays the number of copies that are
currently available in the racks and the copies issued out.
• Invalid input: {any numeric value}
• Valid input, book in the catalogue with copies available in the racks: {"abc" ∈ catalogue, copies in racks}
• Valid input, book in the catalogue but all copies issued out: {"abc" ∈ catalogue, no copies in racks}
• Valid input, book not listed in the catalogue: {"agv" ∉ catalogue}
68. Design the black-box test suite for a program that accepts two
strings and checks if the first string is a substring of the second string
and displays the number of times the first string occurs in the second
string. Assume that each of the two strings has size less than twenty
characters.
• Invalid input: {any string of length twenty characters or more}
• Valid inputs:
• First string occurs exactly once in the second: {"ABC","ABCDE"}
• First string does not occur in the second: {"ABC","DEF"}
• First string occurs more than once in the second: {"ABC","ABCABC"}
69. Design black-box test suite for a program that accepts a pair of
points defining a straight line and another point and a float number
defining the center of a circle and its radius. The program is intended to
compute their points of intersection and prints them.
Input format: (centre of circle, radius, first end point of the line, second end point of the line)
• Invalid input: {"ABC"}
• Line cuts the circle at two points: {(0,0),(5),(10,10),(-10,-10)}
• Line does not touch the circle: {(0,0),(5),(10,10),(10,0)}
• Both end points lie inside the circle, no intersection: {(0,0),(5),(1,1),(-1,-1)}
• Line is tangent to the circle, touching it at one point: {(0,0),(5),(5,0),(5,2)}
Q.70. What do you understand by an executable specification language? How is it different from a
traditional procedural programming language? Name an executable specification language.
• When the specification of a system is expressed formally or is described by using a programming
language, then it becomes possible to directly execute the specification without having to design and
write code for implementation.
• However, executable specifications are usually slow and inefficient. 4GLs (4th Generation
Languages) are examples of executable specification languages.
• 4GLs are successful because there is a lot of large-granularity commonality across data processing
applications, which has been identified and mapped to program code.
• 4GLs get their power from software reuse, where the common abstractions have been identified
and parameterised. Unlike a traditional procedural (3GL) language, the programmer states what is
required rather than coding how it is to be computed.
• Careful experiments have shown that rewriting 4GL programs in 3GLs results in up to 50 per cent
lower memory usage, and the program execution time can reduce up to ten-fold.
71. Among the different development phases of life cycle, testing
typically requires the largest manpower. Identify the main reasons
behind the large manpower requirement for the testing phase.
• Software testing is a process of evaluating the functionality of a software application to find any
software bugs.
• It checks whether the developed software meets the specified requirements and identifies any
defects in the software in order to produce a quality product.
• It is also stated as the process of verifying and validating a software product.
• The main reason for the large manpower requirement is that testing is consolidated in its six phases
(requirement analysis, test planning, test case development, test environment set-up, test execution,
and test cycle closure), shown in the figure, each of which must be repeated over a very large number
of test cases.
Q.72. What do you understand by performance testing? What are the different types of performance
testing that should be performed for each of the problems?
• Performance testing is an important type of system testing.
• Performance testing is carried out to check whether the system meets the non-functional
requirements identified in the SRS document.
• There are several types of performance testing corresponding to the various types of non-functional
requirements. For a specific system, the types of performance testing to be carried out depend on the
different non-functional requirements of the system documented in its SRS document. All
performance tests can be considered as black-box tests.
• Stress testing, volume testing, configuration testing, compatibility testing, regression testing,
recovery testing, maintenance testing, documentation testing, and usability testing are common
types of performance testing.
Q.73. Identify the types of information that should be presented in the test summary report.
• A piece of documentation that is produced towards the end of testing is the test summary report.
• This report normally covers each subsystem and represents a summary of the tests which have been
applied to the subsystem and their outcome.
• It normally specifies the following:
• The total number of tests that were applied to a subsystem.
• Out of the total number of tests, how many tests were successful.
• How many were unsuccessful, and the degree to which they were unsuccessful, e.g., whether a test
was an outright failure or whether some of the expected results of the test were actually observed.
• Other items such as project information, the test objectives, and a defect summary are some of the
key things a test summary report should contain.
74. What is the difference between top-down and bottom-up integration testing approaches?
What are their advantages and disadvantages? Explain your answer using an example. Why is the
mixed integration testing approach preferred by many testers?
• Bottom-up: Large software products are often made up of several subsystems. A subsystem might
consist of many modules which communicate among each other through well-defined interfaces. In
bottom-up integration testing, first the modules for each subsystem are integrated; thus, the
subsystems can be integrated separately and independently. The primary purpose of carrying out the
integration testing of a subsystem is to test whether the interfaces among the various modules
making up the subsystem work satisfactorily. In pure bottom-up testing no stubs are required; only
test-drivers are required. Its disadvantage is that the upper-level modules are tested late.
• Top-down: Top-down integration testing starts with the root module in the structure chart and one
or two subordinate modules of the root module. After the top-level 'skeleton' has been tested, the
modules that are at the immediately lower layer of the 'skeleton' are combined with it and tested.
Top-down integration testing requires the use of program stubs to simulate the effect of lower-level
routines that are called by the routines under test. A pure top-down integration does not require any
driver routines. An advantage of top-down integration testing is that it requires writing only stubs,
which are simpler to write compared to drivers; a disadvantage is that testing must wait until the
top-level modules have been coded.
• In both pure approaches, testing can be held up waiting for modules at one end of the hierarchy.
The mixed (sandwich) approach overcomes this shortcoming: testing can start as and when modules
become available after unit testing. Therefore, this is one of the most commonly used integration
testing approaches. In this approach, both stubs and drivers are required to be designed.
75. What do you understand by "code review effectiveness"? How can review
effectiveness for an organization be measured quantitatively?
• Code review effectiveness denotes how good the review process is at catching defects before they
escape to later phases. Reviews can be considered a static defect-detection method, since they target
errors by analysing the source code without executing it.
• Quantitatively, review effectiveness can be measured as the percentage of the total number of
defects (those found in review plus those found later in testing and in the field) that were detected
during code review:
review effectiveness = (defects found by review / total defects eventually found) x 100
• An organization can compute this metric over many projects from its defect-logging records and
use it to monitor and improve its review process.
76. What do you understand by cyclomatic complexity of a program? How can it be
measured? What are its applications in program development?
• McCabe’s cyclomatic complexity defines an upper bound on the number of independent
paths in a program. We discuss three different ways to compute the cyclomatic complexity
• Method 1: Given a control flow graph G of a program, the cyclomatic complexity V(G) can
be computed as: V(G) = E – N + 2 where, N is the number of nodes of the control flow graph
and E is the number of edges in the control flow graph. For the CFG of example shown in
Figure 10.7, E = 7 and N = 6. Therefore, the value of the Cyclomatic complexity = 7 – 6 + 2 =
3.
• Method 2: An alternate way of computing the cyclomatic complexity of a program, based on a visual
inspection of the control flow graph, is as follows: the cyclomatic complexity V(G) for a graph G is
given by the expression V(G) = total number of non-overlapping bounded areas + 1.
• Method 3: The cyclomatic complexity of a program can also be easily computed by
computing the number of decision and loop statements of the program. If N is the number
of decision and loop statements of a program, then the McCabe’s metric is equal to N + 1.
• Applications in program development: estimation of the structural complexity of code, estimation of
the testing effort (the number of basis-path test cases required), and estimation of program
reliability (the number of latent bugs).
Chapter 10
Questions 77 to 94
77
• SCM (software configuration management) practices include revision control and the establishment
of baselines. If something goes wrong, SCM can determine what was changed and who changed it. If a
configuration is working well, SCM can determine how to replicate it across many hosts.
• The acronym "SCM" is also expanded as source configuration management process and software
change and configuration management. However, "configuration" is generally understood to cover
changes typically made by a system administrator.
78
• Code walkthrough is one of the code review techniques. It is an informal code analysis technique in
which each member of the review team selects some test cases and simulates execution of the code
by hand (i.e., traces the execution through different statements and functions of the code).
• The main objective of code walkthrough is to discover the algorithmic and logical errors in the code.
• As a guideline, the team performing a code walkthrough should be neither too big nor too small;
ideally, it should consist of three to seven members.
79
• A version is an iteration, something that is different from before. When programmers develop
software, a version is typically a minor software update, something that addresses issues in the
original release but does not contain enough to warrant a major release of the software.
• A revision is a controlled version. Webster's dictionary describes a "revision" as the act of revising,
which is to make a new, amended, improved, or up-to-date version. Back to the software analogy, a
revision is seen as a major release of the software, something that introduces new features and
functionality as well as fixing bugs. In the engineering world we use revisions to document the
changes so that anyone can understand what was changed. Versions are usually temporary;
revisions are permanent.
80
• Estimation of program reliability: Experimental studies indicate there exists a clear relationship
between McCabe's metric and the number of errors latent in the code after testing. This relationship
exists possibly due to the correlation of cyclomatic complexity with the structural complexity of
code. Usually, the larger the structural complexity, the more difficult it is to test and debug the code.
• Estimation of structural complexity of code: McCabe's cyclomatic complexity is a measure of the
structural complexity of a program, since it is computed from the code structure (the number of
decision and iteration constructs used).
81
• Unit testing is referred to as testing in the small, whereas integration and system testing are
referred to as testing in the large.
• After testing all the units individually, the units are slowly integrated and tested after each step of
integration (integration testing). Finally, the fully integrated system is tested (system testing).
Integration and system testing are known as testing in the large.
• First, while testing a module, other modules with which this module needs to interface may not be
ready. Moreover, it is always a good idea to first test the module in isolation before integration,
because it makes debugging easier: if a failure is detected when an integrated set of modules is
being tested, it would be difficult to determine which module exactly has the error.
82
• The aim of program testing is to help identify all the defects in a program. However, in practice,
even after satisfactory completion of the testing phase, it is not possible to guarantee that a program
is error free.
• Integration and system testing are referred to as testing in the large.
• After testing all the units individually, the units are slowly integrated and tested after each step of
integration (integration testing). Finally, the fully integrated system is tested (system testing).
Integration and system testing are known as testing in the large.
83
• Satisfactory testing of object-oriented programs is much more difficult and requires much more
cost and effort as compared to testing similar procedural programs. The main reason is that various
object-oriented features introduce additional complications and scope for new types of bugs that are
not present in procedural programs.
84
• Satisfactory testing of object-oriented programs is much more difficult and requires much more
cost and effort as compared to testing similar procedural programs. The main reason is that various
object-oriented features introduce additional complications and scope for new types of bugs that are
not present in procedural programs.
85
Class diagram-based testing:
Testing derived classes: all derived classes of the base class have to be instantiated and tested. In
addition to testing the new methods defined in the derived class, the inherited methods must be
retested.
86
Sequence diagram-based testing
Method coverage: All methods depicted in the sequence diagrams are
covered. Message path coverage: All message paths that can be
constructed from the sequence diagrams are covered.
87
• Alpha Testing: Alpha testing refers to the system testing carried out by the test team within the
developing organisation.
• Beta Testing: Beta testing is the system testing performed by a select
group of friendly customers.
• Acceptance Testing: Acceptance testing is the system testing performed
by the customer to determine whether to accept the delivery of the
system.
88
• The aim of program testing is to help identify all the defects in a program. However, in practice,
even after satisfactory completion of the testing phase, it is not possible to guarantee that a program
is error free.
89
• A test case is a triplet [I , S, R], where I is the data input to the program
under test, S is the state of the program at which the data is to be input, and
R is the result expected to be produced by the program.
• A test scenario is an abstract test case in the sense that it only identifies the
aspects of the program that are to be tested without identifying the input,
state, or output.
• A test script is an encoding of a test case as a short program; test scripts are developed for
automated execution of the test cases. A test case is said to be a positive test case if it is designed
to test whether the software correctly performs a required functionality. A test case is said to be a
negative test case if it is designed to check that the software does not carry out something that is
not required of the system.
90
Usability testing concerns checking the user interface to see if it meets
all user requirements concerning the user interface. During usability
testing, the display screens, messages, report formats, and other
aspects relating to the user interface requirements are tested. A GUI
being just functionally correct is not enough.
91
• A program analysis tool is an automated tool that takes either the source code or the executable
code of a program as input and produces reports regarding several important characteristics of the
program, such as its size, complexity, adequacy of commenting, adherence to programming
standards, adequacy of testing, etc.
• Static program analysis tools assess and compute various characteristics of a program without
executing it. Typically, static analysis tools analyse the source code to compute certain metrics
characterising the source code (such as size and cyclomatic complexity) and also report certain
analytical conclusions.
• Dynamic program analysis tools evaluate several program characteristics based on an analysis of
the run-time behaviour of a program. These tools usually record and analyse the actual behaviour of
a program while it is being executed. A dynamic program analysis tool (also called a dynamic
analyser) usually collects execution trace information by instrumenting the code.
• A major practical limitation of the static analysis tools lies in their inability to analyse run-time
information such as dynamic memory references using pointer variables and pointer arithmetic.
92
• Static program analysis tools assess and compute various characteristics of a program without executing it. Typically, static analysis tools analyse the source code to compute certain metrics characterising the source code (such as size, cyclomatic complexity, etc.) and also report certain analytical conclusions.
• Dynamic program analysis tools can be used to evaluate several program characteristics based on an analysis of the run time behaviour of a program. These tools usually record and analyse the actual behaviour of a program while it is being executed. A dynamic program analysis tool (also called a dynamic analyser) usually collects execution trace information by instrumenting the code.
• Static analysis tools often summarise the results of analysis of every function in a polar chart known as a Kiviat chart. A Kiviat chart typically shows the analysed values for cyclomatic complexity, number of source lines, percentage of comment lines, Halstead's metrics, etc.
• The dynamic analysis results are reported in the form of a histogram or pie chart to describe the structural coverage achieved for different modules of the program.
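To make the idea of a statically computed metric concrete, the following sketch approximates McCabe's cyclomatic complexity of a piece of Python source by parsing it and counting decision points, without ever executing it (a deliberate simplification of what real static analysis tools do):

```python
import ast

# Node types that add a decision (branch) to the control flow.
DECISION_NODES = (ast.If, ast.IfExp, ast.For, ast.While,
                  ast.ExceptHandler, ast.And, ast.Or)

def cyclomatic_complexity(source: str) -> int:
    """Approximate V(G) as 1 + number of decision points; the code is
    only parsed and inspected, never run (static analysis)."""
    tree = ast.parse(source)
    decisions = sum(isinstance(node, DECISION_NODES)
                    for node in ast.walk(tree))
    return decisions + 1

sample = """
def classify(x):
    if x < 0:
        return "negative"
    elif x == 0:
        return "zero"
    return "positive"
"""
print(cyclomatic_complexity(sample))  # 3: two decisions + 1
```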
93
• A program analysis tool is an automated tool that takes either the source code or the executable code of a program as input and produces reports regarding several important characteristics of the program, such as its size, complexity, adequacy of commenting, adherence to programming standards, adequacy of testing, etc.
• Static program analysis tools assess and compute various characteristics of a program without executing it. Typically, static analysis tools analyse the source code to compute certain metrics characterising the source code (such as size, cyclomatic complexity, etc.) and also report certain analytical conclusions.
• Dynamic program analysis tools can be used to evaluate several program characteristics based on an analysis of the run time behaviour of a program. These tools usually record and analyse the actual behaviour of a program while it is being executed. A dynamic program analysis tool (also called a dynamic analyser) usually collects execution trace information by instrumenting the code.
94
A book can be searched in the library catalog by inputting its name. If the book is
available in the library, then the details of the book are displayed. If the book is not
listed in the catalog, then an error message is generated. While developing the DFD
model for this simple problem, many beginners commit the mistake of drawing an
arrow (as shown in Figure 6.6) to indicate that the error function is invoked after
the search book function. But this is control information and should not be shown
on the DFD.
Chapter 10
Questions 95 to 97
95. What is the difference between black-box and white-box testing? During unit testing, can
black-box testing be skipped, if one is planning to perform a thorough white-box testing? Justify
your answer.
Ans. Black-box test cases are designed solely based on the input-output behaviour of a program.
In contrast, white-box test cases are based on an analysis of the code. These two approaches to
test case design are complementary. That is, a program has to be tested using the test cases
designed by both the approaches, and testing using one approach does not substitute for testing
using the other. Therefore, black-box testing cannot be skipped even when thorough white-box
testing is planned.
96. Distinguish between the static and dynamic analysis of a program. Explain at least one metric
that a static analysis tool reports and at least one metric that a dynamic analysis tool reports.
How are these metrics useful?
Ans. Static program analysis tools assess and compute various characteristics of a program
without executing it, whereas dynamic program analysis tools evaluate several program
characteristics based on an analysis of the run time behaviour of the program: they usually
record and analyse the actual behaviour of a program while it is being executed. Static analysis
tools analyse the source code to compute certain metrics characterising it (such as size and
cyclomatic complexity), while a dynamic program analysis tool (also called a dynamic analyser)
usually collects execution trace information by instrumenting the code, from which metrics such
as the structural coverage achieved during testing are reported. Cyclomatic complexity indicates
how error-prone and hard to test a module is likely to be, while coverage figures indicate how
thoroughly the code has been exercised by the tests.
97.Suppose the cyclomatic complexities of code segments A and B (shown in Figure 10.8) are m
and n respectively. What would be the cyclomatic complexity of the code segment C which has
been obtained by juxtaposing the code segments A and B?
Ans. Given a control flow graph G of a program, the cyclomatic complexity V(G) can be
computed as: V(G) = E – N + 2,
where N is the number of nodes of the control flow graph and E is the number of edges in
the control flow graph.
Let segment A have E1 edges and N1 nodes, so m = E1 – N1 + 2, and let segment B have E2
edges and N2 nodes, so n = E2 – N2 + 2. Juxtaposing A and B merges the exit node of A with
the entry node of B, so C has E = E1 + E2 edges and N = N1 + N2 – 1 nodes. Therefore
V(C) = (E1 + E2) – (N1 + N2 – 1) + 2 = (E1 – N1 + 2) + (E2 – N2 + 2) – 1 = m + n – 1.
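The result can be sanity-checked numerically; in the sketch below, the edge and node counts for A and B are assumed example values (an if-else and a while loop):

```python
def v(edges: int, nodes: int) -> int:
    """Cyclomatic complexity V(G) = E - N + 2."""
    return edges - nodes + 2

# Assumed counts: A is an if-else (E=4, N=4), B is a while loop (E=3, N=3).
m = v(4, 4)   # 2
n = v(3, 3)   # 2
# Juxtaposing merges A's exit node with B's entry node.
combined = v(4 + 3, 4 + 3 - 1)
print(m, n, combined)          # 2 2 3
assert combined == m + n - 1   # holds in general
```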
Chapter 11
Questions 1 to 14
1. Choose the correct option:
(a) Which of the following is a practical use of reliability growth modelling?
(i) Determine the operational life of an application software
(ii) Determine when to stop testing
(iii) Incorporate reliability information while designing
(iv) Incorporate reliability growth information in the code. Ans (ii)
(b) What is the availability of a software with the following reliability figures?
Mean Time Between Failure (MTBF) = 25 days, Mean Time To Repair (MTTR) = 6 hours:
(i) 1 per cent
(ii) 24 per cent
(iii) 99 per cent
(iv) 99.009 per cent. Ans. (iv)
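Worked out with the standard formula Availability = MTBF / (MTBF + MTTR): MTBF = 25 days = 600 hours and MTTR = 6 hours, so availability = 600 / (600 + 6) = 600/606 ≈ 0.99009, i.e., 99.009 per cent, which is option (iv).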
(c) A software organisation has been assessed at SEI CMM Level 4. Which of the
following is a prerequisite to achieve Level 5:
(i) Defect Detection
(ii) Defect Prevention
(iii) Defect Isolation
(iv) Defect Propagation. Ans. (ii)
(d) Which one of the following is the focus of modern quality paradigms:
(i) Process assurance
(ii) Product assurance
(iii) Thorough testing
(iv) Thorough testing and rejection of bad products
Ans. (i)
(e) Which of the following is indicated by the SEI CMM repeatable
software development:
(i) Success in development of a software can be repeated
(ii) Success in development of a software can be repeated in related
software development projects.
(iii) Success in development of a software can be repeated in all
software development projects that the organisation might undertake.
(iv) When the same development team is chosen to develop another
software, they can repeat their success.
Ans. (ii)
(f) Which one of the following is the main objective of statistical testing:
(i) Use statistical techniques to design test cases
(ii) Apply statistical techniques to the results of testing to determine if
the software has been adequately tested
(iii) Estimate software reliability
(iv) Apply statistical techniques to the results of testing to determine
how long testing needs to be carried out
Ans. (iii)
2. Define the terms software reliability and software quality. How can these be measured?
Ans: Software Reliability is the probability of failure-free software operation for a specified
period of time in a specified environment.
It is necessary that the level of reliability required for a software product should be specified in
the software requirements specification (SRS) document. In order to be able to do this, we need
some metrics to quantitatively express the reliability of a software product. A good reliability
measure should be observer-independent, so that different people can agree on the degree
of reliability a system has. However, in practice, it is very difficult to formulate a metric using
which precise reliability measurement would be possible. In the absence of such measures, we
discuss six metrics that correlate with reliability as follows:
Rate of occurrence of failure (ROCOF)
Mean time to failure (MTTF)
Mean time to repair (MTTR)
Mean time between failure (MTBF)
Probability of failure on demand (POFOD)
Availability
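As an illustrative sketch, several of these metrics can be computed from an observed failure log; the log format and all numbers below are assumptions chosen for the example:

```python
# Observed operational log (assumed data): times (in hours) at which
# failures occurred, and the repair time (hours) that followed each.
failure_times = [120.0, 310.0, 560.0, 790.0]
repair_times = [4.0, 6.0, 5.0, 5.0]
total_observation_hours = 1000.0

# ROCOF: failures observed per unit of operational time.
rocof = len(failure_times) / total_observation_hours

# MTBF: average operational time between successive failures.
gaps = [t2 - t1 for t1, t2 in zip(failure_times, failure_times[1:])]
mtbf = sum(gaps) / len(gaps)

# MTTR: average time taken to repair after a failure.
mttr = sum(repair_times) / len(repair_times)

# Availability: fraction of time the system is up and running.
availability = mtbf / (mtbf + mttr)

print(f"ROCOF={rocof:.3f}/hr MTBF={mtbf:.1f}h MTTR={mttr:.1f}h "
      f"availability={availability:.4f}")
```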
Software Quality is the totality of functionality and features of a software product that bear on
its ability to satisfy stated or implied needs.
Measures: Product metrics help measure the characteristics of a product being developed.
Examples of product metrics are LOC and function point to measure size,
PM (person- month) to measure the effort required to develop it, months to
measure the time required to develop the product, time complexity of the
algorithms, etc.
Process Metrics help measure how a process is performing. Examples of process metrics are
review effectiveness, average number of defects found per hour of inspection, average defect
correction time, productivity, average number of failures detected during testing per LOC,
number of latent defects per line of code in the developed product.
3. Identify the factors which make the measurement of software reliability a much harder
problem than the measurement of hardware reliability.
Ans: The main reasons that make software reliability more difficult to measure than hardware
reliability:
•The reliability improvement due to fixing a single bug depends on
where the bug is located in the code.
•The perceived reliability of a software product is observer-dependent.
•The reliability of a product keeps changing as errors are detected and
fixed.
4. Through a simple plot explain how the reliability of a software product changes over its
lifetime. Draw the reliability change for a hardware product over its life time and explain why
the two plots look so different.
Ans:
A comparison of the changes in failure rate over the product lifetime for a typical hardware
product and a software product is sketched in Figure 11.1. Observe that the plot of change of
reliability with time for a hardware component (Figure 11.1(a)) appears like a "bath tub". For a
hardware component, the failure rate is initially high, but decreases as the faulty components
identified are either repaired or replaced. The system then enters its useful life, where the rate
of failure is almost constant. After some time (called the product lifetime) the major components
wear out, and the failure rate increases. The initial failures are usually covered through the
manufacturer's warranty.
A corollary of this observation (though a digression from our topic of discussion) is that it may
be unwise to buy a product (even at a good discount to its face value) towards the end of its
lifetime. That is, one need not feel happy to buy a ten year old car at one tenth of the price of a
new car, since it would be near the rising edge of the bath tub curve, and one would have to
spend unduly large time, effort, and money on repairs and end up as the loser.
In contrast to hardware products, a software product shows the highest failure rate just after
purchase and installation (see the initial portion of the plot in Figure 11.1(b)). As the system is
used, more and more errors are identified and removed, resulting in a reduced failure rate. This
error removal continues at a slower pace during the useful life of the product. As the software
becomes obsolete, no more error correction occurs and the failure rate remains unchanged.
5. What do you understand by a reliability growth model? How is reliability growth modelling
useful?
Ans: A reliability growth model is a mathematical model of how software reliability improves as
errors are detected and repaired.
A reliability growth model can be used to predict when (or if at all) a particular level of reliability
is likely to be attained. Thus, reliability growth modelling can be used to determine when to stop
testing to attain a given reliability level.
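For instance, one widely used reliability growth model, the Goel-Okumoto model, expresses the expected cumulative number of failures as mu(t) = a(1 - e^(-b*t)). The sketch below shows how, once the parameters have been fitted to observed failure data (the values here are assumed, not fitted), the model predicts when the failure intensity drops below a target, i.e., when testing can stop:

```python
import math

# Goel-Okumoto NHPP model: mu(t) = a * (1 - exp(-b * t)), where
# a = expected total number of failures, b = failure detection rate.
# Parameter values below are assumed for illustration only.
a, b = 120.0, 0.015

def failure_intensity(t: float) -> float:
    """lambda(t) = d(mu)/dt = a * b * exp(-b * t), failures per test-hour."""
    return a * b * math.exp(-b * t)

target = 0.05  # stop testing once fewer than 0.05 failures/hour are expected
# Solve a * b * exp(-b * t) = target for t.
t_stop = math.log(a * b / target) / b
print(f"predicted stopping time: {t_stop:.0f} test-hours, "
      f"intensity there: {failure_intensity(t_stop):.3f}/hour")
```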
6. Explain using one simple sentence each what you understand by the following reliability
measures:
• A POFOD of 0.001
Ans: A POFOD of 0.001 would mean that 1 out of every 1000 service
requests would result in a failure.
• A ROCOF of 0.002
Ans: ROCOF of 0.002 means 2 failures are likely in each 1000 operational time units e.g. 2
failures per 1000 hours of operation.
• MTBF of 200 units
Ans: MTBF of 200 units indicates that once a failure occurs, the next failure is expected to occur
only after 200 time units, i.e., the average operational time between two successive failures is
200 units.
• Availability of 0.998
Ans: Availability of 0.998 means that the system is up and running for 99.8% of the time.
7. What is statistical testing? In what way is it useful during software development? Explain
the different steps of statistical testing.
Ans: Statistical testing is a testing process whose objective is to determine the reliability of the
product rather than to discover errors. The test cases designed for statistical testing have an
entirely different objective from those of conventional testing. To carry out statistical testing, we
need to first define the operation profile of the product.
Statistical testing allows one to concentrate on testing those parts of the system that are most
likely to be used. Therefore, it results in a system that the users perceive to be more reliable
(than it actually is!). Also, the reliability estimate arrived at by using statistical testing is more
accurate compared to those of the other methods discussed.
Steps:
The first step is to determine the operation profile of the software. The next step is to generate a
set of test data corresponding to the determined operation profile. The third step is to apply the
test cases to the software and record the time between each failure. After a statistically
significant number of failures have been observed, the reliability can be computed.
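A minimal sketch of these four steps, where the operation profile, the simulated system under test, and all probabilities are assumptions invented for the example:

```python
import random

# Step 1: operation profile -- relative frequency with which users
# invoke each operation (assumed numbers for illustration).
operation_profile = {"search": 0.70, "issue": 0.20, "report": 0.10}

def run_operation(op: str) -> bool:
    """Simulated system under test; returns True when the run fails.
    Per-operation failure probabilities are assumptions."""
    return random.random() < {"search": 0.001, "issue": 0.004, "report": 0.01}[op]

# Step 2: generate test data matching the operation profile.
ops, weights = zip(*operation_profile.items())
random.seed(1)

# Step 3: apply the test cases and record the time between failures.
inter_failure_times, since_last_failure = [], 0
for _ in range(50_000):
    since_last_failure += 1
    if run_operation(random.choices(ops, weights)[0]):
        inter_failure_times.append(since_last_failure)
        since_last_failure = 0

# Step 4: after a statistically significant number of failures,
# estimate reliability (here, MTTF in number of operations).
mttf = sum(inter_failure_times) / len(inter_failure_times)
print(f"{len(inter_failure_times)} failures observed, MTTF = {mttf:.0f} operations")
```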
8. Define three metrics to measure software reliability. Do you consider these metrics entirely
satisfactory to provide a measure of the reliability of a system? Justify your answer.
Ans: Metrics-
1. Probability of failure on demand
POFOD measures the likelihood of the system failing when a service request is made.
2. Rate of occurrence of failures (ROCOF)
ROCOF measures the frequency of occurrence of failures. ROCOF measure of a software
product can be obtained by observing the behavior of a software product in operation over
a specified time interval and then calculating the ROCOF value as the ratio of the total
number of failures observed and the duration of observation.
3. Availability
Availability of a system is a measure of how likely the system would be available for use over
a given period of time. This metric not only considers the number of failures occurring during
a time interval, but also takes into account the repair time (down time) of a system when a
failure occurs. This metric is important for systems such as telecommunication systems,
operating systems, and embedded controllers, which are supposed to be never down and
where repair and restart times are significant and loss of service during that time cannot be
overlooked.
All the above reliability metrics suffer from several shortcomings as far as their use in
software reliability measurement is concerned. One of the reasons is that these metrics are
centered around the probability of occurrence of system failures but take no account of the
consequences of failures. That is, these reliability models do not distinguish the relative
severity of different failures. Failures which are transient and whose consequences are not
serious are in practice of little concern in the operational use of a software product.
These types of failures can at best be minor irritants. On the other hand, more severe types of
failures may render the system totally unusable. In order to estimate the reliability of a software
product more accurately, it is necessary to classify various types of failures.
9 . How can you determine the number of latent defects in a software product during the
testing phase?
Ans: The number of latent defects can be estimated during the testing phase using the error
seeding technique. A known number of artificial defects are seeded into the code, and during
testing the numbers of seeded and unseeded defects detected are counted. If S defects were
seeded and s of them are detected along with n unseeded (real) defects, the total number of
real defects can be estimated as N = S × n / s, so the number of latent defects remaining after
testing is approximately N − n. The estimate is dependable only if the seeded defects resemble
the real ones in type and distribution, which is difficult to achieve in practice.
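The arithmetic of this estimate in a small sketch (the defect counts are assumed example values):

```python
def estimate_latent_defects(seeded: int, seeded_found: int, real_found: int) -> float:
    """Error seeding estimate: total real defects N = seeded * real_found / seeded_found;
    latent defects remaining = N - real_found."""
    total_real = seeded * real_found / seeded_found
    return total_real - real_found

# Assumed counts: 50 defects seeded, 40 of them found, 120 real defects found.
print(estimate_latent_defects(50, 40, 120))  # 30.0 latent defects estimated
```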
10. State TRUE or FALSE of the following. Support your answer with proper reasoning:
(a) The reliability of a software product increases almost linearly, each time a defect gets
detected and fixed. F
(b) As testing continues, the rate of growth of reliability slows down representing a diminishing
return of reliability growth with testing effort. T
(c) Modern quality assurance paradigms are centered around carrying out thorough product
testing. F (Modern quality assurance is centered around assuring the quality of the development
process, since a good process is expected to result in a good product; testing alone cannot
guarantee quality.)
(d) An important use of receiving an ISO 9001 certification by a software organisation is that it can
improve its sales efforts by advertising its products as conforming to ISO 9001 certification. F (ISO
9001 certifies an organisation's quality system and process, not its products, so the products
themselves cannot be advertised as conforming to ISO 9001.)
(e) A highly reliable software can be termed as a good quality software. F (Reliability is only one
quality factor; a highly reliable product with, say, an almost unusable user interface cannot be
called a good quality product.)
(f) If an organisation assessed at SEI CMM level 1 has developed one software product
successfully, then it is expected to repeat its success on similar products. F
11. What does the quality parameter “fitness of purpose” mean in the context of software
products? Why is this not a satisfactory criterion for determining the quality of software
products?
Ans. The quality parameter "fitness of purpose" means that a product is of good quality if it
satisfies its stated purpose; for software, this is usually interpreted as satisfaction of the
requirements laid down in the SRS document. However, this is not a wholly satisfactory
definition of quality for software products. To give an example of why this is so, consider a
software product that is functionally correct, that is, it correctly performs all the functions
specified in its SRS document. Even though it may be functionally correct, we cannot consider
it to be a quality product if it has an almost unusable user interface.
12. Can reliability of a software product be determined by estimating the number of latent
defects in the software? If your answer is “yes”, explain how reliability can be determined
from an estimation of the number of latent defects in a software product. If your answer is
“no”, explain why can’t reliability of a software product be determined from an estimate of
the number of latent defects?
Ans. Unfortunately, it is very difficult to characterise the observed reliability of a system in terms
of the number of latent defects in the system using a simple mathematical expression. Consider
the following: removing errors from those parts of a software product that are very infrequently
executed makes little difference to the perceived reliability of the product. It has been
experimentally observed, by analysing the behaviour of a large number of programs, that about
90 per cent of the execution time of a typical program is spent in executing only about 10 per
cent of its instructions. Based on this discussion, we can say that the reliability of a product
depends not only on the number of latent errors but also on the exact location of the errors.
Apart from this, reliability also depends upon how the product is used, that is, on its execution
profile.
13. Why is it important for a software development organisation to obtain ISO9001
certification?
Ans. 1. Confidence of customers in an organisation increases when the organisation qualifies for
ISO 9001 certification. This is especially true in the international market.
2. ISO 9001 makes the development process focused, efficient, and cost effective.
3. ISO 9001 sets the basic framework for the development of an optimal process and TQM.
14. Discuss relative merits of ISO 9001 certification and the SEI CMM based quality assessment.
Ans. We identified ISO 9000 and SEI CMM as two sets of guidelines for setting up a quality
system. ISO 9000 series is a standard applicable to a broad spectrum of industries, whereas SEI
CMM model is a set of guidelines for setting up a quality system specifically addressing the
needs of the software development organisations. Therefore, SEI CMM model addresses various
issues pertaining to software industry in a more focussed manner. For example, SEI CMM model
suggests a 5-tier structure. On the other hand, ISO 9000 has been formulated by a standards
body and therefore the certificate can be used as a contract between externally independent
parties, whereas SEI CMM addresses step by step improvements of an organisation’s quality
practices.
CHAPTER 11
Questions 15 to 31
15. List five salient requirements that a software
development organisation
must comply with before it can be awarded the ISO 9001
certificate.
• Five salient requirements are:
• Document control
• Planning
• Review
• Testing
• Organisational aspects
Shortcomings of the ISO certification process:
• ISO 9000 requires a software production process to be adhered to, but does not guarantee the process to be of high quality. It also does not give any guideline for defining an appropriate process.
• The ISO 9000 certification process is not fool-proof and no international accreditation agency exists. Therefore it is likely that variations in the norms of awarding certificates can exist among the different accreditation agencies and also among the registrars.
• Organisations getting ISO 9000 certification often tend to downplay domain expertise and the ingenuity of the developers. These organisations start to believe that since a good process is in place, the development results are truly person-independent. That is, any developer is as effective as any other developer in performing any particular software development activity. In manufacturing industry there is a clear link between process quality and product quality: once a process is calibrated, it can be run again and again producing quality goods. Many areas of software development, however, are so specialised that special expertise and experience in these areas (domain expertise) is required. Also, unlike in the case of general product manufacturing, the ingenuity and effectiveness of personal practices play an important part in determining the results produced by a developer. In other words, software development is a creative process, and individual skills and experience are important.
• ISO 9000 does not automatically lead to continuous process improvement. In other words, it does not automatically lead to TQM.
16. With the help of suitable examples discuss the types of
software organizations to which ISO 9001, 9002, and 9003
standards respectively are applicable?
• ISO is the word that represents the International Organization for Standardization. It is the
worldwide federation of national standards bodies for approximately 130 countries.
• These sets of standards form a quality management system and are applicable to any
organization regardless of product, service, organizational size, or whether it’s a public or
private company.
• ISO 9001: This standard applies to the organizations engaged in design, development,
production, and servicing of goods. This is the standard that applies to most software
development organizations.
• ISO 9002: This standard applies to those organizations which do not design products but are
only involved in production. Examples of such industries include steel and car manufacturing
industries that buy the product and plant designs from external sources and are engaged only
in manufacturing those products. Therefore, ISO 9002 does not apply to software development
organizations.
• ISO 9003: This standard applies to organizations that are involved only in the installation and
testing of products. For example, gas companies.
17. During software testing process, why is the reliability
growth initially
high, but slows down later on?
Reliability growth during software testing is high initially because the errors that occur most
frequently during execution are detected and fixed first, and each such fix yields a large
improvement in reliability. As testing continues, the remaining errors lie in parts of the code
that are executed only rarely, so detecting them takes longer and fixing them improves the
perceived reliability only marginally. Consequently, the reliability growth slows down,
representing a diminishing return of reliability growth with testing effort.
18. If an organization does not document its quality
system, what
problems would it face?
If an organisation does not document its quality system, its quality practices remain dependent
on particular individuals and cannot be applied consistently: the prescribed process cannot be
repeated reliably across projects, new recruits have no reference from which to learn the
established practices, process knowledge is lost when experienced staff leave, and it becomes
impossible to demonstrate compliance to an external body, so certifications such as ISO 9001
(which require a documented quality system) cannot be obtained.
19. What according to you is a quality software product?
• Quality software is reasonably bug or defect free, delivered on time and within budget,
meets requirements and/or expectations, and is maintainable.
• ISO 8402-1986 standard defines quality as "the totality of features and characteristics of a
product or service that bear on its ability to satisfy stated or implied needs."
• Key aspects of a quality software product include:
• Good design – looks and style
• Good functionality – it does the job well
• Reliable – acceptable level of breakdowns or failure
• Consistency
• Durable – lasts as long as it should
• Good after sales service
• Value for money
20. Discuss the stages through which the quality system
paradigm and the
quality assurance methods have evolved over the years.
Whenever an organizational task can be effectively automated, it eventually will be. Classical
statistical process control (SPC) is an example where human intervention has been historically
required because diagnosis and corrective action could not be effectively automated. The
need for classical, human-centered SPC will diminish with advances in automation, feedback
control, and automated diagnosis.
TQM's focus on the customer is only a half-truth; for the most part, organizations focus on
segments or cliques of customers, not individual customers. The growth of "one to one"
marketing, increasing flexibility in production and logistics, product postponement, and
e-commerce all support the goal of mass customization: being able to serve the needs of
individual customers. Quality systems will need to increasingly focus on the management of
individual customer requirements.
The constant improvement of quality in a particular market segment makes it increasingly
difficult for a firm to create new value with its products. As firms get better at understanding
what customers want and delivering it, this skill will no longer be a differentiator; it will simply
be required to remain in business. In order to enhance their competitive stance, companies will
focus on getting better at understanding the unarticulated needs of their customers, and
develop solutions aimed at "total value creation".
21. Which standard is applicable to software industry, ISO
9001, ISO 9002,
or ISO 9003?
• The ISO 9000 standard which applies to software industry is ISO 9001, since it applies
to "quality assurance in design, development, production, installation and servicing".
This standard is written for manufacturing industry, and this poses some problems
when applying it to development and maintenance of software.
22. In a software development organization, identify the persons responsible for carrying out
quality assurance activities. Explain the principal tasks they perform to meet this responsibility.
• The Head of QA role is a senior position within an organization and is normally the next level
up from a QA manager role.
• Depending on the role and the organization, the Head of the QA role can either be hands-on from a technical
point of view or hands-off with a focus on strategy and processes, or it could be a mixture of both.
• Responsible for Defining QA strategy, approach and execution in development projects.
• Responsible for Leading and directing the QA leadership team.
• Provide leadership and technical expertise within Test Automation and Quality Assurance.
• Be accountable for the test automation projects, mentor, and provide leadership to the QA automation
developers and managers.
• Provide technical leadership and expertise within the field of Quality Assurance and Testing.
• Ensuring that the development teams adhere to the principles, guidelines and best practices of the QA
strategy as defined.
• Focus on continuous QA improvements including usage of appropriate testing tools, test techniques, test
automation.
• Building and maintenance of quality standards as well as enforcing technical and testing standards.
• Monitoring of all the QA activities, test results, leaked defects, root cause analysis and identifying areas of
improvement. Implement the steps required to improve the processes.
• Ensure the proper usage of available tools to gain the maximum benefit of the QA effort. This includes testing
tools for functional, performance, automation, etc.
23. Suppose an organisation mentions in its job
advertisement that it has
been assessed at level 3 of SEI CMM, what can you infer
about the
current quality practices at the organisation? What does
this organisation
have to do to reach SEI CMM level 4?
• At this level, the processes for both management and development activities are
defined and documented. There is a common organisation-wide understanding of
activities, roles, and responsibilities.
• Though the processes are defined, the process and product qualities are not measured. At
this level, the organisation builds up the capabilities of its employees through periodic
training programs. Also, review techniques are emphasized and documented to achieve
phase containment of errors.
• To reach level 4, both process and product metrics must be collected. Quantitative quality
goals have to be set for the products, and at the time of completion of development it must
be checked whether the quantitative quality goals for the product have been met. Various
tools like Pareto charts, fishbone diagrams, etc. are used to measure the product and
process quality.
24. Suppose as the president of a company, you have the
choice to either
go for ISO 9000 based quality model or SEI CMM based
model, which
one would you prefer? Give the reasoning behind your
choice.
CMM focuses strictly on software, while ISO 9001 covers hardware, software, processed
materials, and services.
Every Level 2 KPA is strongly related to ISO 9001, and every KPA is at least weakly related to
ISO 9001. A CMM Level-1 organization can be ISO 9001 certified; such an organization would
have significant Level-2 process strengths and noticeable Level-3 strengths.
Given a reasonable implementation of the software process, an ISO 9001 certified
organization should be at least close to CMM Level-2.
Since CMM addresses the needs of software development organizations in a more focused,
step-by-step manner, CMM should be chosen.
25. What do you understand by total quality management
(TQM)? What
are the advantages of TQM? Does ISO 9000 standard aim
for TQM?
• Total quality management (TQM) advocates that the process followed by an
organisation must continuously be improved through process measurements. TQM goes
a step further than quality assurance and aims at continuous process improvement.
TQM goes beyond documenting processes to optimizing them through redesign.
Advantages of TQM:
• Cost reduction
• Customer satisfaction
• Defect reduction
• Improved morale
ISO 9000 does not by itself aim for TQM: it sets the basic framework for the development of an
optimal process and TQM, but does not automatically lead to continuous process improvement.
26. What are the principal activities of a modern quality
system?
• Principle 1: customer focus
Customer focus is the first principle, right where it should be. It covers both customer needs and customer service. This principle stresses that a business should understand its customers, what they need and when, while trying to meet, and preferably exceed, customers' expectations.
• Principle 2: leadership
Without clear and strong leadership, a business flounders. Principle 2 is concerned with the direction of the organization. The business should have clear goals and objectives, and ensure its employees are actively involved in achieving those targets.
• Principle 3: people involvement
People at all levels are the essence of an organization. Their full involvement enables their abilities to be used for the organization's benefit.
• Principle 4: a process approach
The process approach is all about efficiency and effectiveness. It's also about consistency and understanding that good processes speed up activities. Well-managed processes reduce costs, improve consistency, eliminate waste and promote continuous improvement.
• Principle 5: a systematic approach to management
"Identifying, understanding and managing interrelated processes as a system contributes to the organization's effectiveness and efficiency in achieving its objectives." A business focuses its efforts on the key processes as well as aligning complementary processes to get better efficiency. This means that multiple processes are managed together as a system, which should lead to greater efficiency.
• Principle 6: continual improvement
This principle is very straightforward: continual improvement should be an active business objective.
• Principle 7: factual approach to decision making
A logical approach, based on data and analysis, is good business sense. Unfortunately, in a fast-paced workplace, decisions can often be made rashly, without proper thought.
• Principle 8: mutually beneficial supplier relations
This principle deals with supply chains. It promotes the relationship between the company and its suppliers, recognizing that it is interdependent. A strong relationship enhances productivity and encourages seamless working practices.
27. In a software development organisation whose
responsibility is it to
ensure that the products are of high quality? Explain the
principal tasks
they perform to meet this responsibility.
• Top management is responsible for the high quality of the software product.
• Principal tasks they perform to maintain quality are:
• Establishing and updating the organization's software quality policy.
• Assigning one of the executives, such as a Vice President for SQA, to be in charge of
software quality issues.
• Conducting regular management reviews of performance with respect to software
quality issues
28. What do you understand by repeatable software
development?
Organizations assessed at which level SEI CMM maturity
achieves repeatable software development?
• Repeatable processes reduce variability through measurement and constant process correction.
The term originated in manufacturing, where results were well defined and repeatability meant
that if a process had consistent inputs, then defined outputs would be produced. Repeatable
means that the conversion of inputs to outputs can be replicated with little variation. It implies
that no new information can be generated during the process, because all the information must
be known in advance to predict the output results accurately.
• Organisations assessed at SEI CMM Level 2 (the "Repeatable" level) achieve repeatable
software development: success achieved on one project can be repeated on similar projects.
• CMM was developed by the Software Engineering Institute (SEI) at Carnegie Mellon University in
1987.
• It is not a software process model. It is a framework which is used to analyze the approach and
techniques followed by any organization to develop a software product.
• It also provides guidelines to further enhance the maturity of those software products.
• It is based on profound feedback and development practices adopted by the most successful
organizations worldwide.
• This model describes a strategy that should be followed by moving through 5 different levels.
• Each level of maturity shows a process capability level. All the levels except level-1 are further
described by Key Process Areas (KPA’s).
29. What do you understand by key process area (KPA), in
the context of
SEI CMM? Would there be any problem if an organisation
tries to
implement higher level SEI CMM KPAs before achieving
lower level KPAs?
Justify your answer using suitable examples.
• Key process area identifies a cluster of related activities that, when performed
collectively, achieve a set of goals considered important for enhancing process
capability.
• Key process areas are building blocks that indicate the areas an organization should
focus on to improve its software process. As KPAs are building blocks, implementing a
higher level SEI CMM KPA before the lower level KPAs would be problematic. For example,
setting quantitative quality goals and collecting process metrics (Level 4 KPAs) is of little
use unless the processes have first been defined and documented organisation-wide (a
Level 3 KPA), since there would be no stable, consistent process whose measurements
could be meaningfully compared.
30. What is the Six Sigma quality initiative? To which category of industries is it applicable?
Explain the Six Sigma technique adopted by software organizations with respect to the goal,
the procedure, and the outcome.
• Six Sigma strategies seek to improve the quality of the output of a process by identifying and removing
the causes of defects and minimizing variability in manufacturing and business processes. It uses a set of
quality management methods, mainly empirical, statistical methods, and creates a special infrastructure
of people within the organization who are experts in these methods. Each Six Sigma project carried out
within an organization follows a defined sequence of steps and has specific value targets, for example:
reduce process cycle time, reduce pollution, reduce costs, increase customer satisfaction, and increase
profits.
• Continuous efforts to achieve stable and predictable process results (e.g. by reducing process variation)
are of vital importance to business success.
• Manufacturing and business processes have characteristics that can be defined, measured, analysed,
improved, and controlled.
• Achieving sustained quality improvement requires commitment from the entire organization, particularly
from top-level management.
• Features that set Six Sigma apart from previous quality-improvement initiatives include:
• A clear focus on achieving measurable and quantifiable financial returns from any Six Sigma project.
• An increased emphasis on strong and passionate management leadership and support.
• A clear commitment to making decisions on the basis of verifiable data and statistical methods, rather
than assumptions and guesswork.
31. What is the difference between process metrics and
product metrics?
Give four examples of each.
• Process metrics pertain to process quality. They are used to measure the efficiency and
effectiveness of various processes. Examples: review effectiveness, average number of defects
found per hour of inspection, average defect correction time, and productivity.
• Product metrics pertain to product quality. They are used to measure cost, quality, and the
product's time-to-market. Examples: LOC and function point (to measure size), person-months
of effort, and the time complexity of the algorithms.
Chapter 13
Questions 1 to 19
Choose the correct option:
(a) Which of the following is not a cause for software maintenance for a typical
product?
(i) It is not possible to guarantee that a software is defect-free even after thorough
testing.
(ii) The deployment platform may change over time.
(iii) The user’s needs may change over time.
(iv) Software undergoes wear and tear after long usage. Ans. (iv)
(b) A legacy software product refers to a software that is:
(i) Developed at least 50 years ago.
(ii) Obsolete software product.
(iii) Software product that has poor design structure and code.
(iv) Software product that could not be tested properly before product
delivery
(c) Which of the following assertions is true?
(i) Legacy products automatically imply very old products.
(ii) The total effort spent in maintaining an average product typically
exceeds the effort in developing it.
(iii) Reverse engineering encompasses re-engineering.
(iv) Re-engineering encompasses reverse engineering.
(d) Which of the following types of maintenance consumes the
maximum effort for a typical software?
(i) Adaptive
(ii) Corrective
(iii) Preventive
(iv) Perfective. Ans. (iv)
Chapter 13: Q2
Q) What are the different types of maintenance that a software product might need? Why are
these types of maintenance required?
Answer:
Types of Software Maintenance
There are three types of software maintenance, which are described as
follows:
Corrective: Corrective maintenance of a software product is necessary to rectify the bugs
observed while the system is in use.
Adaptive: A software product might need maintenance when the customers
need the product to run on new platforms, on new operating systems, or
when they need the product to interface with new hardware or software.
Perfective: A software product needs maintenance to support the new
features that users want it to support, to change different functionalities of
the system according to customer demands, or to enhance the performance
of the system.
Chapter 13: Q3
Q) Explain why every software system must undergo maintenance or
progressively become less useful.
Answer:
Every software product must undergo maintenance, or it progressively becomes less useful.
Software maintenance is an important activity for a large number of organisations. This is no
surprise, given the rate of hardware obsolescence, the immortality of a software product per se,
and the demand of the user community to see existing software products run on newer
platforms, run in newer environments, and/or with enhanced features. When the hardware
platform changes, and a software product performs some low-level functions, maintenance is
necessary. Also, whenever the support environment of a software product changes, the
software product requires rework to cope with the newer interface. For instance, a software
product may need to be maintained when the operating system changes. Thus, every software
product continues to evolve after its development through maintenance efforts.
Chapter 13: Q4
Q) Discuss the process models for software maintenance and indicate how you would select an
appropriate maintenance model for a maintenance project at hand.
• First model
The first model is preferred for projects involving small reworks where the code is changed directly and
the changes are reflected in the relevant documents later. This maintenance process is graphically
presented in Figure 13.3. In this approach, the project starts by gathering the requirements for changes.
The requirements are next analysed to formulate the strategies to be adopted for code change. At this
stage, the association of at least a few members of the original development team goes a long way in
reducing the cycle time, especially for projects involving unstructured and inadequately documented
code. The availability of a working old system to the maintenance engineers at the maintenance site
greatly facilitates the task of the maintenance team as they get a good insight into the working of the
old system and also can compare the working of their modified system with the old system. Also,
debugging of the reengineered system becomes easier as the program traces of both the systems can
be compared to localise the bugs.
Second model The second model is preferred for projects where the amount of rework required
is significant. This approach can be represented by a reverse engineering cycle followed by a
forward engineering cycle. Such an approach is also known as software re-engineering. The
reverse engineering cycle is required for legacy products. During the reverse engineering, the old
code is analysed (abstracted) to extract the module specifications. The module specifications are
then analysed to produce the design. The design is analysed (abstracted) to produce the original
requirements specification. The change requests are then applied to this requirements
specification to arrive at the new requirements specification. At this point a forward engineering
is carried out to produce the new code. At the design, module specification, and coding a
substantial reuse is made from the reverse engineered products. An important advantage of this
approach is that it produces a more structured design compared to what the original product
had, produces good documentation, and very often results in increased efficiency. The efficiency
improvements are brought about by a more efficient design. However, this approach is more
costly than the first approach. An empirical study indicates that process 1 is preferable when the
amount of rework is no more than 15 per cent (see Figure 13.5).
Chapter 13: Q5
Q) State whether the following statements are TR U E or FALSE. Give
reasons for your answer.
(a) Legacy software products are those products which have been developed a long time
back. - False. A legacy product is best defined as any software product that is hard to maintain;
even a recently developed system with poor design and documentation can be a legacy system.
(b) Corrective maintenance is the type of maintenance that is most frequently carried out on a
typical software product. - False. Perfective maintenance, carried out to support new features
and enhance performance, accounts for the largest share of maintenance effort.
Chapter 13: Q6
Q) What do you mean by the term software reverse engineering? Why is it
required? Explain the different activities undertaken during reverse engineering.
Answer:
Software reverse engineering is the process of recovering the design and the
requirements specification of a product from an analysis of its code. The purpose of
reverse engineering is to facilitate maintenance work by improving the
understandability of a system and to produce the necessary documents for a legacy
system. Reverse engineering is becoming important, since legacy software products
lack proper documentation and are highly unstructured. Even well-designed
products become legacy software as their structure degrades through a series of
maintenance efforts.
Chapter 13: Q7
Q) What do you mean by the term software re-engineering? Why is it required?
Explain the different activities undertaken during re-engineering.
Answer:
Software re-engineering is the reworking of an existing software product through a reverse
engineering cycle followed by a forward engineering cycle. It is required for legacy products
that need a significant amount of rework, since it produces a more structured design, good
documentation, and often increased efficiency. During the reverse engineering cycle, the old
code is analysed (abstracted) to extract the module specifications, the module specifications
are analysed to produce the design, and the design is analysed (abstracted) to produce the
original requirements specification. The change requests are then applied to this requirements
specification to arrive at the new requirements specification, after which forward engineering
is carried out to produce the new code, with substantial reuse of the reverse engineered
products at the design, module specification, and coding stages.
Chapter 13: Q8
Q) If a software product costed Rs. 10,000,000 for development, compute the annual
maintenance cost given that every year approximately 5 per cent of the code needs
modification. Identify the factors which render the maintenance cost estimation inaccurate?
Answer:
Assuming 5 per cent of the code is added and 5 per cent is deleted each year, the annual
change traffic (ACT) = 5% + 5% = 10%.
Annual maintenance cost = ACT × development cost = 10% × Rs. 10,000,000 = Rs. 1,000,000
per annum.
• Most maintenance cost estimation models, however, give only approximate results because
they do not take into account several factors such as the experience level of the engineers,
familiarity of the engineers with the product, hardware requirements, software complexity, etc.
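The same estimate in a small sketch using Boehm's annual change traffic formula; the 100 KLOC product size is an assumed figure for illustration:

```python
def annual_maintenance_cost(dev_cost: float, kloc_added: float,
                            kloc_deleted: float, kloc_total: float) -> float:
    """Boehm's estimate: maintenance cost = ACT * development cost,
    where ACT = (KLOC added + KLOC deleted) / total KLOC."""
    act = (kloc_added + kloc_deleted) / kloc_total
    return act * dev_cost

# 5% of an (assumed) 100 KLOC product added and 5% deleted each year.
print(annual_maintenance_cost(10_000_000, 5.0, 5.0, 100.0))  # 1000000.0
```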
Chapter 13: Q9
Q) What is a legacy software product? Explain the problems one would encounter while maintaining a legacy product.
Answer :
Software maintenance work is currently much more expensive than it should be, and takes more
time than required. The reasons for this situation are the following. Software maintenance work
in organisations is mostly carried out using ad hoc techniques, the primary reason being that
software maintenance is one of the most neglected areas of software engineering. Even though
software maintenance is fast becoming an important area of work for many companies as the
software products of yesteryears age, software maintenance is still mostly carried out as
fire-fighting operations, rather than through systematic and planned activities.
Software maintenance has a very poor image in industry. Therefore, an organisation often
cannot employ bright engineers to carry out maintenance work. Even though maintenance
suffers from a poor image, the work involved is often more challenging than development work:
during maintenance it is necessary to thoroughly understand someone else's work, and then
carry out the required modifications and extensions.
Another problem associated with maintenance work is that the majority of software products
needing maintenance are legacy products. Though the word legacy implies "aged" software,
there is no agreement on what exactly a legacy system is. It is prudent to define a legacy system
as any software system that is hard to maintain. The typical problems associated with legacy
systems are poor documentation, unstructured code (spaghetti code with ugly control
structure), and lack of personnel knowledgeable in the product. Many legacy systems were
developed a long time back, but it is possible that a recently developed system having poor
design and documentation can also be considered a legacy system.