SPM Assignment
• The development time is a sublinear function of the size of the product. That is,
when the size of the product increases by two times, the time to develop the
product does not double but rises only moderately. For example, to develop a product
twice as large as a product of size 100 KLOC, the increase in duration may be only
20 per cent. It may appear surprising that the duration curve does not increase
superlinearly—one would normally expect the curves to behave similarly to those
in the effort–size plots. This apparent anomaly can be explained by the fact that
COCOMO assumes that project development is carried out not by a single
person but by a team of developers.
Explain why the development time of a software product of given size remains
almost the same, regardless of whether it is organic, semidetached, or embedded
type.
• The length of a program (i.e., the total number of operators and operands used in the code)
depends on the choice of the operators and operands used. In other words, for the same
programming problem, the length would depend on the programming style. This type of
dependency would produce different measures of length for essentially the same problem
when different programming languages are used. Thus, while expressing program size, the
programming language used must be taken into consideration:
V = N log2 h
Let us try to understand the important idea behind this expression. Intuitively, the program
volume V is the minimum number of bits needed to encode the program. In fact, to
represent h different identifiers uniquely, we need at least log2 h bits (where h is the
program vocabulary). In this scheme, we need N log2 h bits to store a program of length N.
Therefore, the volume V represents the size of the program by approximately compensating
for the effect of the programming language used.
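As a minimal illustrative sketch (the function name is ours, and the counts N and h are assumed to have already been obtained from the code), the volume computation follows directly from the formula:

#include <math.h>

/* Halstead program volume V = N * log2(h), where N is the program length
   (total number of operator and operand occurrences) and h is the program
   vocabulary (number of distinct operators and operands). */
double halstead_volume(int N, int h)
{
    return N * log2((double)h);
}

For example, with N = 100 and h = 20, V = 100 * log2(20), which is approximately 432 bits.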
What are the relative advantages of using either the LOC
or the function point metric to measure the size of a
software product for software project planning?
• LOC is possibly the simplest among all metrics available to measure project size.
Consequently, this metric is extremely popular. This metric measures the size of a
project by counting the number of source instructions in the developed program.
Obviously, while counting the number of source instructions, comment lines and
header lines are ignored. Determining the LOC count at the end of a project is
very simple. However, accurate estimation of the LOC count at the beginning of a
project is a very difficult task. One can possibly estimate the LOC count at the
start of a project only by using some form of systematic guesswork.
List the important shortcomings of LOC for use as a
software size
metric for carrying out project estimations.
PERT stands for Program Evaluation and Review Technique. A PERT chart illustrates a project
as a network diagram. The U.S. Navy created this tool in the 1950s as it developed the
Polaris missile (and time was of the essence—this was during the Cold War, after all).
PERT charts are best utilized by project managers at the beginning of a project to ensure
that it is accurately scoped. This tool gives users a bird's-eye view of the entire project before
it is started, helping avoid potential bottlenecks. While PERT charts can be used during the project's
implementation to track progress, they lack the flexibility to adapt to small changes when
confronted with roadblocks.
Created by Henry Gantt during WWI, Gantt charts are used to visualize a project’s schedule
from start to finish. Similar to a PERT chart, Gantt charts display tasks over time to ensure
the project is completed on time.
Project managers use Gantt charts to identify task dependencies, increase efficiencies, and
improve time management. Gantt charts make it simple to break down projects into
manageable steps that can adjust to the project as needed.
How is a Gantt chart useful in software project management? What
problems might be encountered if project monitoring and control are
carried out using a Gantt chart?
A project can be susceptible to a large variety of risks. There are three main categories of
risks which can affect software projects. They are as follows:
1) Project risks: Project risks concern various forms of budgetary, schedule, personnel, resource,
and customer-related problems. An important project risk is schedule slippage. Since software
is intangible, it is very difficult to monitor and control a software project.
2) Technical risks: Technical risks concern potential design, implementation, interfacing, testing,
and maintenance problems. Technical risks also include ambiguous specification, incomplete
specification, changing specification, technical uncertainty, and technical obsolescence. Most
technical risks occur due to the development team's insufficient knowledge about the product.
3) Business risks: This type of risk includes the risk of building an excellent product that no one
wants, losing budgetary commitments, etc.
Risk assessment – Risk assessment is done to rank the risks in terms of their
damage-causing potential. For risk assessment, each risk should first be rated in
two ways:
1) The likelihood of a risk becoming real (r).
2) The consequence of the problems associated with that risk (s).
Based on these two factors, the priority of each risk can be computed as follows:
p = r * s
where, p is the priority with which the risk must be handled, r is the probability of
the risk becoming real, and s is the severity of damage caused due to the risk
becoming real.
Q48.) Schedule slippage is a very common form of risk that almost every
project manager has to encounter. Explain in 3 to 4 sentences how you
would manage the risk of schedule slippage as the project manager of a
medium-sized project.
Risks relating to schedule slippage arise primarily due to the intangible nature of software. For a
project such as building a house, the progress can easily be seen and assessed by the project
manager. If he finds that the project is lagging behind, then corrective actions can be initiated.
Considering that software development per se is invisible, the first step in managing the risks of
schedule slippage, is to increase the visibility of the software product. Visibility of a software
product can be increased by producing relevant documents during the development process and
getting these documents reviewed by an appropriate team.
Milestones should be placed at regular intervals to provide a manager with regular indication of
progress. Completion of a phase of the development process being followed need not be the only
milestones. Every phase can be broken down to reasonable-sized tasks and milestones can be
associated with these tasks. A milestone is reached once the documentation produced as part of a
software engineering task gets successfully reviewed. Milestones need not be
placed for every activity. An approximate rule of thumb is to set a milestone every 10 to 15 days. If
milestones are placed too close to each other, then the overheads in managing the milestones would
be too much.
Q49.) Explain how you can choose the best risk reduction technique when there
are many ways of reducing a risk.
• Risk reduction involves planning ways to contain the damage due to a risk.
For example, if there is a risk that some key personnel might leave, new recruitment may be
planned. The most important risk reduction technique for technical risks is to build a
prototype that tries out the technology that you are trying to use. For example, if you are using
a compiler for recognizing user commands, you would have to construct a compiler for a small
and very primitive command language first.
There can be several strategies to cope with a risk. To choose the most appropriate strategy
for handling a risk, the project manager must consider the cost of handling the risk and the
corresponding reduction of risk. For this we may compute the risk leverage of the different
risks. Risk leverage is the difference in risk exposure divided by the cost of reducing the risk.
More formally:
risk leverage = (risk exposure before reduction - risk exposure after reduction) / (cost of risk reduction)
Even though we identified three broad ways to handle any risk, effective risk handling cannot
be achieved by mechanically following a set procedure, but requires a lot of ingenuity on the
part of the project manager.
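A minimal C sketch of the two computations above (the function and parameter names are illustrative, not from the text):

/* Risk priority p = r * s: the likelihood of the risk becoming real
   times the severity of the damage if it does. */
double risk_priority(double r, double s)
{
    return r * s;
}

/* Risk leverage: the reduction in risk exposure divided by the cost of
   achieving that reduction. A higher value indicates a more
   cost-effective risk reduction technique. */
double risk_leverage(double exposure_before, double exposure_after,
                     double cost_of_reduction)
{
    return (exposure_before - exposure_after) / cost_of_reduction;
}

Given several candidate ways of reducing the same risk, the technique with the highest leverage would be the most cost-effective choice.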
Q50.) What are the important types of risks that a project might suffer from? How would
you identify the risks that a project is susceptible to during the project planning
stage?
The different types of risks that a project might suffer from are as follows:
1) Process-related risk: These risks arise due to aggressive work schedule, budget, and
resource utilisation.
2) Product-related risks: These risks arise due to commitment to challenging product
features (e.g. response time of one second, etc.), quality, reliability, etc.
3) Technology-related risks: These risks arise due to commitment to use certain technology
(e.g., satellite communication).
In order to be able to successfully foresee and identify different risks that might affect a
software project, it is a good idea to have a company disaster list. This list would contain all
the bad events that have happened to software projects of the company over the years
including events that can be laid at the customer's door. This list can be read by the
project managers in order to be aware of some of the risks that a project might be
susceptible to. Such a disaster list has been found to help in performing better risk analysis.
Q51.) As a project manager, identify the characteristics that you would look for in a
software developer while trying to select personnel for your team.
Characteristics to look for in software developer for selecting software
development team are;
• Exposure to systematic techniques, i.e., familiarity with software engineering
principles.
• Good technical knowledge of the project areas (domain knowledge).
• Good programming abilities.
• Good communication skills. These skills comprise oral, written, and
interpersonal skills.
• High motivation.
• Sound knowledge of fundamentals of computer science
• Intelligence.
• Ability to work in a team.
• Discipline, etc.
Q52.) What is egoless programming? How can it be realized?
Egoless programming is a style of working in which programmers do not regard the code they write as their personal property or an extension of their ego, so that they readily accept criticism of their code and welcome reviews by others. It can be realized by adopting a democratic team organisation, in which code is treated as a shared team product and is openly reviewed (e.g., through code walkthroughs and inspections) rather than being owned by any one individual.
• The acronym "SCM" is also expanded as source configuration management and as software
change and configuration management. However, "configuration" is generally understood to cover
changes typically made by a system administrator.
58. What is the difference between a revision and
a version of a software product? What do you
understand by the terms change control and
version control? Why are these necessary? Explain
how change and version control are achieved using
a configuration management tool.
• A version is an iteration, something that is different than before. When
programmers develop software, a version is typically a minor software update,
something that addresses issues in the original release but does not contain
enough changes to warrant a major release of the software.
• A revision is a controlled version. Webster’s dictionary describes a “revision” as
the act of revising, which is to make a new, amended, improved, or up-to-date
version. Back to the software analogy, a revision is seen as a major release of the
software. Something that introduces new features and functionality, as well as
fixing bugs. In the engineering world we use revisions to document the changes so
that anyone can understand what was changed. Versions are usually temporary,
revisions are permanent.
59. Discuss how SCCS or RCS can be used to efficiently manage
the configuration of source code
• Using the Revision Control System (RCS) or the Source Code Control System
(SCCS) lets you keep your source files in a common library and maintain control
over them. Both systems provide easy-to-use, command-line interfaces. Knowing
the basic commands lets you check in the source file to be modified into
a version control file that contains all of the revisions of that source file. When
you want to check out a version control file for editing, the system retrieves the
revision or revisions you specify from the library and creates a working file for
you to use.
Using more advanced interface commands lets you do the following:
• Identify the current status of any file, including the name of the person editing it.
• Reconstruct earlier versions of your files. For each version, the system stores the
changes made to produce that version, the name of the person making the
changes, and the reasons for the changes.
• Prevent the problems that can occur when two people change a file at the same
time without each other's knowledge.
• Maintain multiple branch versions of your files. Branched versions can be merged
back into the original sequence.
• Protect files from unauthorized modification.
60. Consider a software project with 5 tasks T1-T5.
Duration of the 5 tasks (in days) are 15, 10, 12, 25 and 10,
respectively. T2 and T4 can start when T1 is complete. T3
can start when T2 is complete. T5 can start when both T3
and T4 are complete. When is the latest start date of the
task T3? What is the slack time of the task T4?
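One way to work this out (a sketch using the standard forward and backward pass): the earliest finish times are T1 at day 15, T2 at day 25 (15 + 10), T4 at day 40 (15 + 25), T3 at day 37 (25 + 12), and T5 at day 50 (max(37, 40) + 10), so the project takes 50 days. Working backwards, T5 must start by day 40; therefore T3 must finish by day 40, and the latest start date of T3 is day 40 - 12 = day 28. T4 must also finish by day 40, so its latest start is day 40 - 25 = day 15, which equals its earliest start (when T1 completes); hence the slack time of T4 is 0, and T1–T4–T5 is the critical path.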
• The work breakdown structure has a number of benefits in addition to defining and organizing
the project work. A project budget can be allocated to the top levels of the work breakdown
structure, and department budgets can be quickly calculated based on each project's work
breakdown structure. By allocating time and cost estimates to specific sections of the work
breakdown structure, a project schedule and budget can be quickly developed. As the project
executes, specific sections of the work breakdown structure can be tracked to identify project
cost performance and identify issues and problem areas in the project organization.
62. If you are asked to make a choice between
democratic and chief programmer team
organisations, which one would you adopt for your
team? Explain the reasoning behind your answer.
• Chief Programmer Team In this team organization, a senior engineer provides the technical leadership and is
designated as the chief programmer. The chief programmer partitions the task into small activities and
assigns them to the team members. He also verifies and integrates the products developed by different team
members. The chief programmer provides an authority, and this structure is arguably more efficient than the
democratic team for well-understood problems. However, the chief programmer team leads to lower team
morale, since team-members work under the constant supervision of the chief programmer. This also
inhibits their original thinking. The chief programmer team is subject to single point failure since too much
responsibility and authority is assigned to the chief programmer.
• The chief programmer team is probably the most efficient way of completing simple and small projects since
the chief programmer can work out a satisfactory design and ask the programmers to code different
modules of his design solution.
• Democratic Team
The democratic team structure, as the name implies, does not enforce any formal team hierarchy
(as shown in fig. 12.3). Typically, a manager provides the administrative leadership. At different
times, different members of the group provide technical leadership.
The democratic organization leads to higher morale and job satisfaction. Consequently, it suffers
from lower manpower turnover. Also, the democratic team structure is appropriate for less understood
problems, since a group of engineers can invent better solutions than a single individual, as in a chief
programmer team.
63. What do you understand by project risk? How can
risks be effectively identified by a project manager?
How can the risks be managed?
• Project risk is an uncertain event or condition that, if it occurs, has an effect on at least
one project objective. Risk management focuses on identifying and assessing the risks to
the project and managing those risks to minimize the impact on the project.
• In the software testing life cycle, there are numerous components that play a
prominent part in making the process of testing accurate and hassle-free. Every
element related to testing strives to improve its quality and helps deliver accurate
and expected results and services that are in compliance with the defined
specifications. Stubs and drivers are two such elements used in software testing
process, which act as a temporary replacement for a module. These are an
integral part of software testing process as well as general software development.
Therefore, to help you understand the significance of stubs and drivers in
software testing, here is an elaborated discussion of the same.
• In the field of software testing, the terms stubs and drivers refer to replicas of
the modules, which act as substitutes for undeveloped or missing modules.
The stubs and drivers are specifically developed to meet the necessary
requirements of the unavailable modules and are immensely useful in getting
expected results.
• Stubs and drivers are two types of test harness: a collection of software
and test data configured together in order to test a unit of a program by
simulating a variety of conditions while constantly monitoring its outputs and
behaviour. Stubs and drivers are used in top-down integration and bottom-up
integration testing respectively, and are created mainly for testing purposes.
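A minimal C sketch of the idea (the module and function names are hypothetical): the stub stands in for a called module that is not yet available, as in top-down integration, while the driver stands in for the calling environment, as in bottom-up integration.

#include <stdio.h>
#include <math.h>

/* Unit under test: computes a discounted price using a pricing module
   that has not been developed yet. */
double lookup_base_price(int item_id);   /* interface of the missing module */

double discounted_price(int item_id, double rate)
{
    return lookup_base_price(item_id) * (1.0 - rate);
}

/* Stub: a temporary replacement for the missing (lower-level) module,
   returning a canned value instead of performing the real computation. */
double lookup_base_price(int item_id)
{
    (void)item_id;    /* the real lookup is not implemented yet */
    return 100.0;
}

/* Driver: a temporary caller that invokes the unit under test with chosen
   inputs and checks its output, standing in for the higher-level caller. */
int main(void)
{
    double p = discounted_price(42, 0.10);
    printf("expected 90.00, got %.2f\n", p);
    return (fabs(p - 90.0) < 1e-9) ? 0 : 1;
}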
Chapter 10
Questions 7 to 14
What is the difference between black-box testing and white-box testing? Give an example of a bug that is detected by the black-box test
suite, but is not detected by the white-box test suite, and vice versa.
• Internal documentation consists of the comments and remarks made by the programmer in the
form of line comments; it is created within the programming department and shows the design
and implementation of the project.
• External documentation consists of things like flow charts, UML diagrams, requirements
documents, design documents, etc.; it is created by the user and the programmer/system analyst.
• McCabe’s cyclomatic complexity is a measure of the structural complexity of a program. The reason for this is
that it is computed based on the code structure (number of decision and iteration constructs used).
Intuitively, the McCabe’s complexity metric correlates with the difficulty level of understanding a program,
since one understands a program by understanding the computations carried out along all independent
paths of the program.
• This is in contrast to the computational complexity that is based on the execution of the program
statements.
Write a C function for searching an integer value from a large sorted sequence of integer values stored in an array of size
100, using the binary search method.
// An iterative binary search function. It returns the location of x in
// the given sorted array arr[l..r] if present, otherwise -1.
int binarySearch(int arr[], int l, int r, int x) {
    while (l <= r) {
        // Compute the middle index in a way that avoids overflow
        int m = l + (r - l) / 2;
        // Check if x is present at mid
        if (arr[m] == x)
            return m;
        // If x is greater, ignore the left half
        if (arr[m] < x)
            l = m + 1;
        // If x is smaller, ignore the right half
        else
            r = m - 1;
    }
    return -1;
}
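A hedged usage sketch for the sorted 100-element array mentioned in the question (the test values are illustrative):

// Example driver for an array of size 100 holding 0, 2, 4, ..., 198.
int main(void) {
    int arr[100];
    for (int i = 0; i < 100; i++)
        arr[i] = 2 * i;                          // sorted ascending
    int found = binarySearch(arr, 0, 99, 86);    // 86 = arr[43], returns 43
    int absent = binarySearch(arr, 0, 99, 87);   // odd value, returns -1
    return (found == 43 && absent == -1) ? 0 : 1;
}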
What do you understand by positive and negative test cases? Give one example of each.
• A test case is said to be a positive test case if it is designed to test whether the software correctly
performs a required functionality. A test case is said to be a negative test case if it is designed to test
whether the software carries out something that is not required of the system. As one example each
of a positive test case and a negative test case, consider a program to manage user login. A positive
test case can be designed to check if the login system validates a user with the correct user name and
password. A negative test case in this case can be a test case that checks whether the login
functionality validates and admits a user with a wrong or bogus user name or password.
Given a software and its requirements specification document, explain how would you design the system
test suite for the software.
• A system test suite is the set of all tests that have been designed by a tester to test a given program. The set
of test cases using which a program is to be tested is designed possibly using several test case design
techniques.
• The system test suite is designed based on the SRS document. The two major types of system testing are
functionality testing and performance testing. The functionality test cases are designed based on the
functional requirements, and the performance test cases are designed to check the compliance of the system
with the non-functional requirements documented in the SRS document.
What is a coding standard? Identify the problems that might occur if the engineers of an
organization do not adhere to any coding standard?
• Good software development organizations require their programmers to adhere to some well-
defined and standard style of coding which is called their coding standard.
• A coding standard gives a uniform appearance to the codes written by different engineers.
• It facilitates code understanding and code reuse.
• It promotes good programming practices.
Chapter 10
Questions 15 to 24
What is the difference between a coding standard and a coding guideline?
It is mandatory for the programmers to follow the coding standards. Compliance of their code to coding
standards is verified during code inspection. Any code that does not conform to the coding standards is
rejected during code review and the code is reworked by the concerned programmer. In contrast, coding
guidelines provide some general suggestions regarding the coding style to be followed but leave the actual
implementation of these guidelines to the discretion of the individual developers.
Why are formulation and use of suitable coding standards and guidelines considered important to a
software development organisation?
• A coding standard gives a uniform appearance to the code written by different engineers.
• It facilitates code understanding and code reuse.
• It promotes good programming practices.
Write down five important coding standards and coding guidelines that you would recommend.
• Standard headers for different modules
• Conventions regarding error return values and exception handling mechanisms
• Representative coding guidelines
• Do not use a coding style that is too clever or too difficult to understand
• Avoid obscure side effects
What do you understand by a coding standard? When during the development
process is compliance with coding standards checked?
A coding standard is a group of rules adopted by an organization to unify the style of its code.
Compliance of code with coding standards is verified during code inspection.
What do you understand by testability of a program?
Testability of a requirement denotes the extent to which it is possible
to determine whether an implementation of the requirement conforms
to it in both functionality and performance.
Between the programs written by two different programmers for essentially the
same programming problem, how can you determine which one is more
testable?
A program is more testable if it can be adequately tested with fewer test
cases. Obviously, a less complex program is more testable. The complexity of a
program can be measured using several types of metrics, such as the number of
decision statements used in the program.
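A small illustrative sketch: the two functions below solve the same problem, but the second uses no decision statements at all, so it has the lower cyclomatic complexity and can be adequately tested with fewer test cases.

/* Sum of 1..n using a loop: one decision point (the loop condition),
   so cyclomatic complexity is 2. */
int sum_loop(int n)
{
    int s = 0;
    for (int i = 1; i <= n; i++)
        s += i;
    return s;
}

/* Sum of 1..n in closed form: no decision points, cyclomatic
   complexity 1, hence the more testable of the two. */
int sum_formula(int n)
{
    return n * (n + 1) / 2;
}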
Discuss different types of code reviews. Explain when and how code review
meetings are conducted. Why is code review considered to be a more efficient
way to remove errors from code compared to testing?
After a module has been coded, a code review is usually carried out to ensure that
the coding standards are followed.
Code review is an efficient way of removing errors as compared to testing, because
code review identifies errors whereas testing identifies failures.
Distinguish between software verification and software validation. Can one be
used in place of the other? Justify your answer. In which phase(s) of the iterative
waterfall SDLC are the verification and validation activities performed?
Verification does not require execution of the software, whereas validation requires
execution of the software.
It is possible to develop highly reliable software using validation techniques
alone. However, this would cause the development cost to increase drastically.
Verification techniques help achieve phase containment of errors and provide a
means to cost-effectively remove bugs.
What are the activities carried out during the testing of a software product?
Schematically represent these activities. Which one of these activities
takes the maximum effort?
• Test suite design
• Running test cases and checking the results to detect failures
• Locate error
• Error correction
Debugging often turns out to be the most time-consuming activity
Which one of the following is the strongest structural testing technique —
statement coverage-based testing, branch coverage-based testing, or multiple
condition coverage-based testing? Justify your answer.
In the multiple condition (MC) coverage-based testing, test cases are designed to
make each component of a composite conditional expression to assume both true
and false values.
Condition testing is a stronger testing strategy than branch testing
Prove that the branch coverage-based testing technique is a stronger
testing technique than the statement coverage-based testing
technique.
Branch coverage subsumes statement coverage: if test cases exercise every branch
(every edge of the control flow graph), then every statement is necessarily executed,
since each statement lies on some branch. The converse does not hold: for an if
statement without an else clause, a single test case that makes the condition true
achieves complete statement coverage without ever exercising the false branch.
Which is a stronger testing strategy—data flow testing or path testing? Give the
reasoning behind your answer.
Clearly, all-uses criterion is stronger than all-definitions criterion. An even stronger
criterion is all definition-use-paths criterion, which requires the coverage of all
possible definition-use paths that either are cycle-free or have only simple cycles. A
simple cycle is a path in which only the end node and the start node are the same.
Briefly highlight the difference between code inspection and code walkthrough.
Compare the relative merits of code inspection and code walkthrough.
The main objective of code walkthrough is to discover the algorithmic and logical
errors in the code.
The principal aim of code inspection is to check for the presence of some common
types of errors that usually creep into code due to programmer mistakes and
oversights and to check whether coding standards have been adhered to.
Chapter 10
Questions 25 to 42
• What is meant by a code walkthrough? What are some of the
important types of errors checked during code walkthrough?
Give one example of each of these types of errors
• Code walkthrough is one of the code review techniques. Code walkthrough is
an informal code analysis technique in which each member selects some test cases
and simulates execution of the code by hand (i.e., traces the execution
through different statements and functions of the code).
• The main objective of code walkthrough is to discover the algorithmic
and logical errors in the code. For example, a logical error might be an
off-by-one mistake in a loop bound, and an algorithmic error might be the
choice of a procedure that fails for boundary inputs.
• As a guideline, the team performing the code walkthrough should not be either too big or
too small. Ideally, it should consist of between three and seven members.
Suppose two programmers are assigned the same programming problem and they
develop it independently. Explain how you can compare their programs with respect
to: (a) path testing effort, (b) understanding difficulty, (c) number of latent bugs, (d) reliability.
• Estimation of testing effort: Cyclomatic complexity is a measure of the maximum number of basis
paths. Thus, it indicates the minimum number of test cases required to achieve path coverage.
Therefore, the testing effort and the time required to test a piece of code satisfactorily are
proportional to the cyclomatic complexity of the code. To reduce testing effort, it is necessary to restrict
the cyclomatic complexity of every function to seven.
• Estimation of program reliability: Experimental studies indicate that there exists a clear relationship
between McCabe's metric and the number of errors latent in the code after testing. This
relationship exists possibly due to the correlation of cyclomatic complexity with the structural
complexity of code. Usually, the larger the structural complexity, the more difficult it is to test
and debug the code.
• Estimation of structural complexity of code: McCabe's cyclomatic complexity is a measure of the
structural complexity of a program. The reason for this is that it is computed based on the code
structure (number of decision and iteration constructs used).
Usually, large software products are tested at three different testing levels, i.e., unit
testing, integration testing, and system testing. What would be the disadvantage of
performing thorough testing only after the system has been completely developed, e.g.,
detecting all the defects of the product during system testing?
• A software product is normally tested in three levels or stages: unit testing, integration testing, system testing.
• Unit testing is referred to as testing in the small, whereas integration and system testing are referred
to as testing in the large.
• After testing all the units individually, the units are slowly integrated and tested after each step of
integration (integration testing). Finally, the fully integrated system is tested (system testing).
Integration and system testing are known as testing in the large.
• First, while testing a module, other modules with which this module needs to interface may not be
ready. Moreover, it is always a good idea to first test the module in isolation before integration,
because it makes debugging easier. If a failure is detected when an integrated set of modules is
being tested, it would be difficult to determine which module exactly has the error.
What do you understand by system testing? What are the different
kinds of system testing that are usually performed on large
software products?
• The aim of program testing is to help identify all defects in a
program. However, in practice, even after satisfactory completion of the testing
phase, it is not possible to guarantee that a program is error free.
• Integration and system testing are referred to as testing in the large.
• After testing all the units individually, the units are slowly integrated and
tested after each step of integration (integration testing). Finally, the fully
integrated system is tested (system testing). Integration and system testing
are known as testing in the large.
• The kinds of system testing usually performed on large products are alpha testing,
beta testing, and acceptance testing; based on the requirements exercised, system
tests comprise functionality tests and performance tests.
Is system testing of object-oriented programs any different from
that for procedural programs? Explain your answer.
• System testing is not really any different for object-oriented programs, because
system test cases are designed solely from the SRS document (black-box) and do
not depend on the way the system has been implemented. It is at the unit and
integration testing levels that satisfactory testing of object-oriented programs
becomes much more difficult and costly than testing similar procedural programs,
since various object-oriented features introduce additional complications and
scope for new types of bugs that are not present in procedural programs.
Is integration testing of object-oriented programs any different
from that for procedural programs? Explain your answer.
• Yes. Satisfactory testing of object-oriented programs is much more difficult and
requires much more cost and effort compared to testing similar
procedural programs. The main reason behind this situation is that
various object-oriented features introduce additional complications and
scope for new types of bugs that are not present in procedural programs.
Using suitable examples, explain how test cases can be designed for
an object-oriented program from its class diagram.
Class diagram-based testing
Testing derived classes: All derived classes of the base class have to be
instantiated and tested. In addition to testing the new methods defined
in the derive class, the inherited methods must be retested.
Using suitable examples, explain how test cases can be designed for
an object-oriented program from its sequence diagrams.
Sequence diagram-based testing
Method coverage: All methods depicted in the sequence diagrams are
covered. Message path coverage: All message paths that can be
constructed from the sequence diagrams are covered.
Distinguish between alpha, beta, and acceptance testing. How are the test
cases designed for these tests? Are the test cases for the three types of
tests necessarily identical? Explain your answer.
• System tests are designed to validate a fully developed system to assure that it meets
its requirements. The test cases are therefore designed solely based on the SRS
document.
• Alpha testing: Alpha testing refers to the system testing carried out by the test team
within the developing organisation.
• Beta testing: Beta testing is the system testing performed by a select group of
friendly customers.
• Acceptance testing: Acceptance testing is the system testing performed by the
customer to determine whether to accept the delivery of the system.
Suppose a developed software product has successfully passed all the
three levels of testing, i.e., unit testing, integration testing, and
system testing. Can we claim that the software is defect free?
Justify your answer.
• No. The aim of program testing is to help identify all defects in a
program. However, in practice, even after satisfactory completion of the
testing phase, it is not possible to guarantee that a program is error
free: testing can reveal the presence of defects, but it cannot establish their absence.
Distinguish among a test case, a test suite, a test scenario, and a test script.
• A test case is a triplet [I, S, R], where I is the data input to the program under test, S
is the state of the program at which the data is to be input, and R is the result
expected to be produced by the program.
• A test suite is the set of all test cases that have been designed by a tester to test a
given program.
• A test scenario is an abstract test case in the sense that it only identifies the aspects
of the program that are to be tested without identifying the input, state, or output.
• A test script is an encoding of a test case as a short program. Test scripts are
developed for automated execution of the test cases.
• A test case is said to be a positive test case if it is designed to test whether the
software correctly performs a required functionality. A test case is said to be a
negative test case if it is designed to test whether the software carries out
something that is not required of the system.
Usability of a software product is tested during which type of testing:
unit, integration, or system testing? How is usability tested?
Usability is tested during system testing. Usability testing concerns checking
the user interface to see if it meets all user requirements concerning the user
interface. During usability testing, the display screens, messages, report
formats, and other aspects relating to the user interface requirements are
tested. A GUI just being functionally correct is not enough.
Distinguish between the static and dynamic analysis of a program. Explain at least one metric
that a static analysis tool reports and at least one metric that a dynamic analysis tool reports.
How are these metrics useful?
• A program analysis tool is an automated tool that takes either the source code or the executable code
of a program as input and produces reports regarding several important characteristics of the program,
such as its size, complexity, adequacy of commenting, adherence to programming standards, adequacy
of testing, etc.
• Static program analysis tools assess and compute various characteristics of a program without executing
it. Typically, static analysis tools analyse the source code to compute certain metrics characterising the
source code (such as size, cyclomatic complexity, etc.) and also report certain analytical conclusions.
• Dynamic program analysis tools can be used to evaluate several program characteristics based on an
analysis of the run-time behaviour of a program. These tools usually record and analyse the actual
behaviour of a program while it is being executed. A dynamic program analysis tool (also called a
dynamic analyser) usually collects execution trace information by instrumenting the code.
• A major practical limitation of static analysis tools lies in their inability to analyse run-time
information such as dynamic memory references using pointer variables and pointer arithmetic, etc.
What are the important results that are usually reported by a static analysis tool and a dynamic
analysis tool when applied to a program under development? How are these results useful?
• Static program analysis tools assess and compute various characteristics of a program without executing
it. Typically, static analysis tools analyse the source code to compute certain metrics characterising the
source code (such as size, cyclomatic complexity, etc.) and also report certain analytical conclusions.
• Dynamic program analysis tools can be used to evaluate several program characteristics based on an
analysis of the run-time behaviour of a program. These tools usually record and analyse the actual
behaviour of a program while it is being executed. A dynamic program analysis tool (also called a
dynamic analyser) usually collects execution trace information by instrumenting the code.
• Static analysis tools often summarise the results of the analysis of every function in a polar chart known as
a Kiviat chart. A Kiviat chart typically shows the analysed values for cyclomatic complexity, number of
source lines, percentage of comment lines, Halstead's metrics, etc.
• The dynamic analysis results are reported in the form of a histogram or pie chart to describe the
structural coverage achieved for different modules of the program.
What do you understand by automatic program analysis? Give a broad
classification of the different types of program analysis tools used during program
development. What are the different types of information produced by each type of
tool?
• Automatic program analysis is carried out by an automated tool that takes either the source
code or the executable code of a program as input and produces reports regarding several
important characteristics of the program, such as its size, complexity, adequacy of
commenting, adherence to programming standards, adequacy of testing, etc.
• Static program analysis tools assess and compute various characteristics of a program
without executing it. Typically, static analysis tools analyse the source code to compute
certain metrics characterising the source code (such as size, cyclomatic complexity, etc.)
and also report certain analytical conclusions.
• Dynamic program analysis tools can be used to evaluate several program characteristics
based on an analysis of the run-time behaviour of a program. These tools usually record
and analyse the actual behaviour of a program while it is being executed. A dynamic
program analysis tool (also called a dynamic analyser) usually collects execution trace
information by instrumenting the code.
Design the black-box test suite for a function that checks whether a
character string (of up to twenty-five characters in length) is a
palindrome.
The equivalence classes are the leaf-level classes of the classification:
palindromes, non-palindromes, and invalid inputs (strings longer than
twenty-five characters). Now, selecting one representative value from each
equivalence class, we have the required test suite: {aba, abc, and one string
longer than twenty-five characters}.
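A minimal C sketch of how such a suite might be exercised (the function and its return conventions are illustrative assumptions, not from the text):

#include <stdio.h>
#include <string.h>

/* Returns 1 if s is a palindrome, 0 if it is not, and -1 for invalid
   input (a string longer than twenty-five characters). */
int is_palindrome(const char *s)
{
    size_t n = strlen(s);
    if (n > 25)
        return -1;
    for (size_t i = 0; i < n / 2; i++)
        if (s[i] != s[n - 1 - i])
            return 0;
    return 1;
}

int main(void)
{
    printf("%d\n", is_palindrome("aba"));   /* palindrome class: 1 */
    printf("%d\n", is_palindrome("abc"));   /* non-palindrome class: 0 */
    printf("%d\n", is_palindrome("abcdefghijklmnopqrstuvwxyz")); /* 26 chars: -1 */
    return 0;
}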
Design the black-box test suite for a function that takes the name of a
book as input and searches a file containing the names of the books
available in the Library and displays the details of the book if the book is
available in the library otherwise displays the message “book not
available”.
For black-box design, the equivalence classes are: (i) names of books listed in the
library catalog, for which the details of the book should be displayed, and (ii) names
of books not listed in the catalog, for which the message “book not available” should
be displayed; one representative book name is selected from each class.
As an aside on modelling the same problem: while developing the DFD model for it,
many beginners commit the mistake of drawing an arrow (as shown in Figure 6.6) to
indicate that the error function is invoked after the search-book function. But this is
control information and should not be shown on the DFD.
Chapter 10
Question 43 - 59
• 43 Why is it important to properly document software?
What are the different ways of documenting a software
product?
• For a programmer, reliable documentation is always a must.
The presence of documentation helps keep track of all
aspects of an application and improves the quality of the
software product. Its main focuses are development,
maintenance and knowledge transfer to other
developers. Successful documentation
makes information easily accessible, provides a
limited number of user entry points, helps new users
learn quickly, simplifies the product and helps cut
support costs. Documentation is usually focused on the
following components that make up an application: server
environments, business rules, databases/files,
troubleshooting, application installation and code
deployment.
• 44 What do you understand by the clean room strategy?
• The clean room technique is a process in which a new
product is developed by reverse engineering an existing
product, and then the new product is designed in such a
way that patent or copyright infringement is avoided. The
clean room technique is also known as clean room design.
(Sometimes the words "clean room" are merged into the
single word, "cleanroom.") Sometimes this process is called
the Chinese wall method, because the intent is to place a
demonstrable intellectual barrier between the reverse
engineering process and the development of the new
product.
• 45 What is Cyclomatic Complexity?
• Cyclomatic complexity is a source code complexity
measurement that correlates with the number of coding
errors. It is calculated by developing a control flow graph of
the code and measures the number of linearly independent
paths through a program module. The lower a program's
cyclomatic complexity, the lower the risk in modifying it and the easier it is to
understand. It can be represented using the formula:
• Cyclomatic complexity = E - N + 2*P
• where,
• E = number of edges in the flow graph,
• N = number of nodes in the flow graph,
• P = number of connected components of the flow graph.
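As a small worked sketch (the function is illustrative): the code below has two decision points, one loop condition and one if, so its cyclomatic complexity is 2 + 1 = 3, matching the three linearly independent paths through it.

/* Two predicates (the for condition and the if) give
   cyclomatic complexity 3. */
int count_positives(const int *a, int n)
{
    int count = 0;
    for (int i = 0; i < n; i++)   /* decision 1: loop condition */
        if (a[i] > 0)             /* decision 2: branch */
            count++;
    return count;
}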
46. What are the limitations of the error
seeding method?
• The main limitation of error seeding is that it is difficult to seed errors that are
representative of the latent errors: the types of seeded errors and the types of
actual errors remaining in the code may differ, and artificially seeded errors are
usually easier to detect than the subtle errors that remain, which can make the
estimate of residual errors inaccurate.
• Regression Testing is defined as a type of software testing to confirm that a recent program or code
change has not adversely affected existing features.
• Need for regression testing: regression testing is required when there is a
change in requirements and the code is modified according to the requirement.
• Selecting test cases for regression testing
• It was found from industry data that a good number of the defects reported by customers were due to
last-minute bug fixes creating side effects.
Effective Regression Tests can be done by selecting the following test cases -
• Test cases which have frequent defects
• Functionalities which are more visible to the users
• Test cases which verify core features of the product
• Test cases of functionalities which have undergone more frequent and recent changes
• 53 Do you agree with the following statement
—“System testing can be considered a pure black-
box test.” Justify your answer.
• BLACK BOX TESTING, also known as Behavioral
Testing, is a software testing method in which the
internal structure/design/implementation of the
item being tested is not known to the tester. These
tests can be functional or non-functional, though
usually functional.
• This method is named so because the software program, in
the eyes of the tester, is like a black box; inside which one
cannot see. This method attempts to find errors in the
following categories:
• Incorrect or missing functions
• Interface errors
• Errors in data structures or external database access
• Behavior or performance errors
• Initialization and termination errors
• Definition by ISTQB
• black box testing: Testing, either functional or non-
functional, without reference to the internal structure of
the component or system.
• black box test design technique: Procedure to derive
and/or select test cases based on an analysis of the
specification, either functional or non-functional, of a
component or system without reference to its internal
structure.
• 54 What do you understand by big-bang integration
testing? How is big-bang integration testing performed?
What are the advantages and disadvantages of the big-
bang integration testing strategy? Describe at least one
situation where big-bang integration testing is desirable.
Big Bang Integration Testing is an integration testing strategy,
wherein all units are linked at once, which results in a
complete and efficient system. In this type of integration
testing all the components as well as the modules of the
software are integrated simultaneously, after which
everything is tested as a whole. During the process of big
bang integration testing, most of the developed modules are
coupled together to form a complete software system or a
major part of the system, which is then used for integration
testing. This approach of software testing is very effective as
it enables software testers to save time as well as their
efforts during the integration testing process.
• Benefits:
• Big bang integration testing is used to test the complete system.
• The amount of planning required for this type of testing is
almost negligible.
• All the modules are completed before the inception of
integration testing.
• It does not require assistance from middle components such as
stubs and driver, on which testing is dependent.
• Big bang testing is cost effective.
• There is no need for intermediate builds and the effort they require.
• Big-bang integration testing is desirable mainly for very small systems, in which
the number of modules is small and their interfaces are simple enough that
locating the source of a failure is not a problem.
• Drawbacks:
• In Big bang integration testing, it is difficult to trace the cause of
failures as the modules are integrated late.
• This approach is quite challenging and risky, as all the modules and
components are integrated together in a single step.
• If any bug is found, it becomes difficult to detach all the modules in
order to find out its root cause.
• Defects present at the interface of components are identified at a
later stage, as all the components are integrated in one shot.
• Since all the modules are tested together chances of failure
increases.
• There is a high probability of missing some crucial defects, errors
and issues, which might pop up in the production environment.
• It is difficult and tough to cover all the cases for integration testing
without missing even a single scenario.
• Isolating any defect or bug during the testing process is difficult.
• If the test cases and their results are not recorded properly, it can
complicate the integration testing and prevent developers and
testers from achieving their desired goals.
• 55 What is the relationship between cyclomatic complexity
and program comprehensibility? Can you justify why such
an apparent relationship exists?
• Cyclomatic complexity and program comprehensibility are related terms, as
cyclomatic complexity is a software metric that measures the number of linearly
independent execution paths in an application. Introduced by Thomas McCabe
in 1976, it gauges the number of linearly independent paths through a
program module.
• Cyclomatic complexity helps engineers understand the independent paths that
can be executed and plan the unit tests needed to cover them. Since a program
is understood by understanding the computations carried out along each of its
independent paths, a program with higher cyclomatic complexity is harder to
comprehend. Developers using a cyclomatic complexity tool can ensure that
every one of the paths has been tested at least once, which is a great comfort
for the developers and their respective managers.
56. Describe the following
white-box testing strategies
• Statement coverage is a white-box testing technique which involves the execution of all
the statements in the source code at least once. It is a metric used to calculate
and measure the number of statements in the source code that have been executed.
• Branch coverage is a testing method which aims to ensure that each one of the
possible branches from each decision point is executed at least once, thereby ensuring
that all reachable code is executed. That is, every branch is taken each way, true and false.
• Condition coverage: with condition coverage, the possible outcomes ("true" or "false")
of each condition are tested at least once. This means that each individual condition is
made both true and false at some time. In other words, we cover all conditions, hence condition
coverage.
• Path coverage refers to designing test cases such that all linearly independent paths in
the program are executed at least once. A linearly independent path can be defined in
terms of what is called the control flow graph of an application.
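A small C sketch contrasting the first two strategies (the function and test values are illustrative): the single test (a = 3, b = 2) executes every statement of max_of_two, achieving statement coverage, yet it never exercises the false outcome of the decision; adding (a = 1, b = 2) makes both outcomes of the branch execute, achieving branch coverage.

/* With a = 3, b = 2 every statement runs (statement coverage), but the
   false outcome of the if is never taken; a = 1, b = 2 is also needed
   to achieve branch coverage. */
int max_of_two(int a, int b)
{
    int m = b;
    if (a > b)
        m = a;
    return m;
}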
57. What is the selection sort function?
Also draw the flowchart for the same.
Selection Sort
The selection sort algorithm sorts an array by repeatedly finding the
minimum element (considering ascending order) from unsorted part
and putting it at the beginning. The algorithm maintains two subarrays
in a given array.
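The flowchart cannot be reproduced here, but a minimal C implementation sketch of the algorithm just described is:

/* Selection sort: repeatedly find the minimum element of the unsorted
   part and put it at the beginning of that part. */
void selection_sort(int arr[], int n)
{
    for (int i = 0; i < n - 1; i++) {
        int min_idx = i;
        /* find the minimum of the unsorted part arr[i..n-1] */
        for (int j = i + 1; j < n; j++)
            if (arr[j] < arr[min_idx])
                min_idx = j;
        /* move it to the front of the unsorted part */
        int tmp = arr[min_idx];
        arr[min_idx] = arr[i];
        arr[i] = tmp;
    }
}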
58. Discuss cyclomatic
complexity for a program.
• Cyclomatic Complexity
• Cyclomatic complexity of a code section is the quantitative measure of the number of linearly
independent paths in it. It is a software metric used to indicate the complexity of a program.
It is computed using the control flow graph of the program. The nodes in the graph represent
the smallest groups of commands of a program, and a directed edge connects two
nodes if the second command might immediately follow the first command.
• cyclomatic complexity M would be defined as,
• M = E – N + 2P
• where,
E = the number of edges in the control flow graph
N = the number of nodes in the control flow graph
P = the number of connected components
How will you determine the minimum number of test cases needed for path
coverage?
• Let M be the cyclomatic complexity. M is an upper bound for the number of test cases required
to achieve complete branch coverage, and a lower bound for the number of paths through the
control flow graph (CFG). Assuming every test case takes only one path, the number of test
cases needed to achieve complete path coverage equals the number of paths through the graph
that can actually be taken. But some paths may be impossible, so even though the number of
paths through the CFG is clearly an upper bound on the number of test cases needed for path
coverage, the number of feasible paths (and hence of test cases actually needed) is sometimes
smaller. In summary:
Number of test cases required to achieve branch coverage <= cyclomatic complexity [1] [5] <= number of paths.
Number of test cases required to achieve branch coverage <= cyclomatic complexity <= number of test
cases required to achieve path coverage.
CHAPTER 10
Questions 60 to 76
Q.60.What does the Fog index signify? How is the Fog
index useful in producing good software documentation?
• Gunning’s fog index
Gunning’s fog index (developed by Robert Gunning in 1952) is a metric that has
been designed to measure the readability of a document. The computed metric
value (fog index) of a document indicates the number of years of formal education
that a person should have, in order to be able to comfortably understand that
document. The Gunning’s fog index of a document D can be computed as follows:
fog(D) = 0.4 * ((total number of words / total number of sentences) + percentage of words having three or more syllables)
Observe that the fog index is computed as the sum of two different factors. The
first factor computes the average number of words per sentence (total number of
words in the document divided by the total number of sentences). This factor
therefore accounts for the common observation that long sentences are difficult to
understand. The second factor measures the percentage of complex words in the
document.
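As a worked sketch: for a document with 200 words in 10 sentences, of which 10 words (5 per cent) have three or more syllables, fog(D) = 0.4 * (200/10 + 5) = 0.4 * 25 = 10, i.e., roughly ten years of formal education would be needed to read the document comfortably.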
61. Identify the types of defects that you would be able to
detect during the following: (a) Code inspection (b) Code
walkthrough
Following is a list of some classical programming errors which can be checked during code inspection:
• Use of un-initialized variables.
• Jumps into loops.
• Non-terminating loops.
• Incompatible assignments.
• Array indices out of bounds.
• Improper storage allocation and de-allocation.
• Mismatch between actual and formal parameters in procedure calls.
• Use of incorrect logical operators or incorrect precedence among operators.
• Improper modification of loop variables.
• Comparison of equality of floating point values.
• Dangling reference caused when the referenced memory has not been allocated
62. Design the black-box test suite for a function named quadratic-
solver. The quadratic-solver function accepts three floating point
numbers (a, b, c) representing a quadratic equation of the form
ax^2 + bx + c = 0. It computes and displays the solution.
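A hedged sketch of one possible answer: partition the inputs into equivalence classes by the discriminant b^2 - 4ac (two distinct real roots for d > 0, equal roots for d = 0, complex roots for d < 0), plus the degenerate class a = 0 (not a quadratic equation) and invalid non-numeric inputs, and pick one representative (a, b, c) triple from each class, for example:

#include <math.h>
#include <stdio.h>

/* Illustrative quadratic-solver for a*x^2 + b*x + c = 0; one call per
   equivalence class appears in main below. */
void quadratic_solver(double a, double b, double c)
{
    if (a == 0.0) {
        printf("not a quadratic equation\n");
        return;
    }
    double d = b * b - 4.0 * a * c;   /* the discriminant */
    if (d > 0.0)
        printf("roots: %g and %g\n",
               (-b + sqrt(d)) / (2 * a), (-b - sqrt(d)) / (2 * a));
    else if (d == 0.0)
        printf("equal roots: %g\n", -b / (2 * a));
    else
        printf("complex roots\n");
}

int main(void)
{
    quadratic_solver(1, -3, 2);   /* d > 0: roots 2 and 1 */
    quadratic_solver(1, 2, 1);    /* d = 0: equal root -1 */
    quadratic_solver(1, 0, 1);    /* d < 0: complex roots */
    quadratic_solver(0, 2, 1);    /* a = 0: not quadratic */
    return 0;
}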
• A piece of documentation that is produced towards the end of testing is the test
summary report.
• This report normally covers each subsystem and represents a summary of tests which
have been applied to the subsystem and their outcome.
• It normally specifies the following:
• The total number of tests that were applied to a subsystem.
• Out of the total number of tests, how many tests were successful.
• How many were unsuccessful, and the degree to which they were unsuccessful, e.g.,
whether a test was an outright failure or whether some of the expected results of the
test were actually observed.
• Other items like project information, the test summary, test objectives and defects are some
of the key things a test summary report should contain.
74. What is the difference between top-down and bottom-up integration testing approaches?
What are their advantages and disadvantages? Explain your answer using an example. Why is the
mixed integration testing approach preferred by many testers?
• Bottom-up: Large software products are often made up of several subsystems. A subsystem
might consist of many modules which communicate among each other through well-defined
interfaces. In bottom-up integration testing, first the modules for each subsystem are
integrated. Thus, the subsystems can be integrated separately and independently. The primary
purpose of carrying out the integration testing of a subsystem is to test whether the interfaces
among the various modules making up the subsystem work satisfactorily. In pure bottom-up
testing no stubs are required; only test drivers are required.
• Top-down: Top-down integration testing starts with the root module in the structure chart and one or
two subordinate modules of the root module. After the top-level ‘skeleton’ has been tested,
the modules that are at the immediately lower layer of the ‘skeleton’ are combined with it and
tested. The top-down integration testing approach requires the use of program stubs to simulate
the effect of lower-level routines that are called by the routines under test. A pure top-down
integration does not require any driver routines. An advantage of top-down integration testing
is that it requires writing only stubs, and stubs are simpler to write compared to drivers.
• The mixed approach overcomes the shortcoming of the top-down and bottom-up approaches
that testing can begin only once the corresponding top-level or bottom-level modules are available.
In the mixed testing approach, testing can start as and when modules become available after
unit testing. Therefore, this is one of the most commonly used integration testing approaches.
In this approach, both stubs and drivers are required to be designed.
75. What do you understand by “code review effectiveness”? How can review
effectiveness for an organization be measured quantitatively?
• Code reviews can be considered static analysis methods, since they aim to detect errors
based on analysing the source code. However, strictly speaking, this is not true, since
we use the term static program analysis to denote automated analysis tools.
• On the other hand, a compiler can be considered to be a type of static program
analysis tool. A major practical limitation of static analysis tools lies in their
inability to analyse run-time information such as dynamic memory references using
pointer variables and pointer arithmetic, etc.
• In high-level programming languages, pointer variables and dynamic memory
allocation provide the capability for dynamic memory references.
• However, dynamic memory referencing is a major source of programming errors in a
program. Static analysis tools often summarise the results of analysis of every
function in a polar chart known as Kiviat Chart.
• A Kiviat Chart typically shows the analysed values for cyclomatic complexity, number
of source lines, percentage of comment lines, Halstead’s metrics, etc.
76. What do you understand by cyclomatic complexity of a program? How can it be
measured? What are its applications in program development?
• McCabe’s cyclomatic complexity defines an upper bound on the number of independent
paths in a program. There are three different ways to compute the cyclomatic complexity:
• Method 1: Given a control flow graph G of a program, the cyclomatic complexity V(G) can
be computed as: V(G) = E – N + 2 where, N is the number of nodes of the control flow graph
and E is the number of edges in the control flow graph. For the example CFG shown in
Figure 10.7, E = 7 and N = 6. Therefore, the cyclomatic complexity = 7 – 6 + 2 = 3.
• Method 2: An alternative way of computing the cyclomatic complexity of a program is by
visual inspection of the control flow graph. In this method, the cyclomatic complexity V(G)
for a graph G is given by the expression: V(G) = total number of non-overlapping bounded
areas + 1.
• Method 3: The cyclomatic complexity of a program can also be easily computed by
computing the number of decision and loop statements of the program. If N is the number
of decision and loop statements of a program, then the McCabe’s metric is equal to N + 1.
• Applications include estimating the structural complexity of code, and hence the testing
effort: since V(G) bounds the number of independent paths, it indicates how many test cases
path coverage needs, and unusually high values flag modules that are hard to understand
and maintain.
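The following minimal Python sketch computes Method 1 and Method 3; the edge list is a hypothetical CFG chosen to reproduce the E = 7, N = 6 example above.

```python
# Method 1: V(G) = E - N + 2, with the CFG given as a list of edges.
def cyclomatic_complexity(edges):
    nodes = {n for edge in edges for n in edge}
    return len(edges) - len(nodes) + 2

# Hypothetical CFG with N = 6 nodes and E = 7 edges.
cfg = [(1, 2), (2, 3), (2, 4), (3, 5), (4, 5), (5, 2), (5, 6)]
print(cyclomatic_complexity(cfg))  # 7 - 6 + 2 = 3

# Method 3: count the decision and loop statements and add 1.
decisions_and_loops = 2  # e.g., one 'if' and one 'while' in the code
print(decisions_and_loops + 1)  # also 3
```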
Chapter 10
Questions 77 to 94
77
• SCM practices include revision control and the establishment of
baselines. If something goes wrong, SCM can determine what was
changed and who changed it. If a configuration is working well, SCM
can determine how to replicate it across many hosts.
• A program analysis tool is an automated tool that takes either the source code or the
executable code of a program as input and produces reports regarding several important
characteristics of the program, such as its size, complexity, adequacy of commenting,
adherence to programming standards, adequacy of testing, etc.
• Static program analysis tools assess and compute various characteristics of a program
without executing it. Typically, static analysis tools analyse the source code to compute
certain metrics characterising the source code (such as size, cyclomatic complexity, etc.)
and also report certain analytical conclusions.
• Dynamic program analysis tools can be used to evaluate several program characteristics
based on an analysis of the run time behaviour of a program. These tools usually record
and analyse the actual behaviour of a program while it is being executed. A dynamic
program analysis tool (also called a dynamic analyser) usually collects execution trace
information by instrumenting the code.
94
A book can be searched in the library catalog by inputting its name. If the book is
available in the library, then the details of the book are displayed. If the book is not
listed in the catalog, then an error message is generated. While developing the DFD
model for this simple problem, many beginners commit the mistake of drawing an
arrow (as shown in Figure 6.6) to indicate that the error function is invoked after
the search-book function. But this is control information and should not be shown
on the DFD.
Chapter 10
Questions 95 to 97
95. What is the difference between black-box and white-box testing? During unit testing, can
black-box testing be skipped, if one is planning to perform a thorough white-box testing? Justify
your answer.
Ans. Black-box test cases are designed solely based on the input-output behaviour of a program.
In contrast, white-box test cases are based on an analysis of the code. These two approaches to
test case design are complementary: a program has to be tested using test cases designed by
both approaches, and testing using one approach does not substitute for testing using the
other. Hence black-box testing cannot be skipped during unit testing, however thorough the
planned white-box testing may be.
96. Distinguish between the static and dynamic analysis of a program. Explain at least one metric
that a static analysis tool reports and at least one metric that a dynamic analysis tool reports.
How are these metrics useful?
Ans. Static program analysis tools assess and compute various characteristics of a program
without executing it; typically they analyse the source code to compute certain metrics
characterising it (such as size and cyclomatic complexity) and also report certain analytical
conclusions. Dynamic program analysis tools, in contrast, evaluate several program
characteristics based on an analysis of the run time behaviour of the program: they record
and analyse the actual behaviour of a program while it is being executed. A dynamic program
analysis tool (also called a dynamic analyser) usually collects execution trace information by
instrumenting the code. The static metrics help identify overly complex or poorly commented
modules, while the dynamic trace information helps assess the adequacy of testing.
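As a minimal sketch of how a dynamic analyser collects execution trace information by instrumenting a run, the Python snippet below uses the standard sys.settrace hook to record which lines of a hypothetical function execute, a crude form of statement coverage; real dynamic analysis tools are far more sophisticated.

```python
import sys

executed = set()  # (function name, line number) pairs reached at run time

def tracer(frame, event, arg):
    # The interpreter calls this for trace events; record 'line' events.
    if event == "line":
        executed.add((frame.f_code.co_name, frame.f_lineno))
    return tracer

def absolute(x):  # hypothetical function under test
    if x < 0:
        return -x
    return x

sys.settrace(tracer)
absolute(5)            # this run never takes the x < 0 branch
sys.settrace(None)

print(sorted(executed))  # the execution trace gathered for this run
```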
97.Suppose the cyclomatic complexities of code segments A and B (shown in Figure 10.8) are m
and n respectively. What would be the cyclomatic complexity of the code segment C which has
been obtained by juxtaposing the code segments A and B?
Ans. Given a control flow graph G of a program, the cyclomatic complexity V(G) can be
computed as: V(G) = E – N + 2, where N is the number of nodes and E is the number of edges
of the control flow graph.
Let A have E1 edges and N1 nodes, and B have E2 edges and N2 nodes, so that
m = E1 – N1 + 2 and n = E2 – N2 + 2. When A and B are juxtaposed to form C, the exit node of
A gets connected to the entry node of B by a single edge, so E = E1 + E2 + 1 and N = N1 + N2.
Therefore, V(C) = (E1 + E2 + 1) – (N1 + N2) + 2 = (m – 2) + (n – 2) + 1 + 2 = m + n – 1.
The same result is obtained if the exit node of A merges with the entry node of B.
Chapter 11
Questions 1 to 14
1. Choose the correct option:
(a) Which of the following is a practical use of reliability growth modelling?
(i) Determine the operational life of an application software
(ii) Determine when to stop testing
(iii) Incorporate reliability information while designing
(iv) Incorporate reliability growth information in the code. Ans (ii)
(b) What is the availability of a software with the following reliability figures?
Mean Time Between Failure (MTBF) = 25 days, Mean Time To Repair (MTTR) = 6 hours:
(i) 1 per cent
(ii) 24 per cent
(iii) 99 per cent
(iv) 99.009 per cent. Ans. (iv), since availability = MTBF/(MTBF + MTTR) = (25 × 24)/(25 × 24 + 6) = 600/606 ≈ 99.01 per cent.
(c) A software organisation has been assessed at SEI CMM Level 4. Which of the
following is a prerequisite to achieve Level 5:
(i) Defect Detection
(ii) Defect Prevention
(iii) Defect Isolation
(iv) Defect Propagation. Ans. (ii)
(d) Which one of the following is the focus of modern quality paradigms:
(i) Process assurance
(ii) Product assurance
(iii) Thorough testing
(iv) Thorough testing and rejection of bad products
Ans. (i), since modern quality paradigms centre on process assurance rather than on thorough testing and rejection of bad products.
(e) Which of the following is indicated by the SEI CMM repeatable
software development:
(i) Success in development of a software can be repeated
(ii) Success in development of a software can be repeated in related
software development projects.
(iii) Success in development of a software can be repeated in all
software development projects that the organisation might undertake.
(iv) When the same development team is chosen to develop another
software, they can repeat their success.
Ans. (ii)
(f) Which one of the following is the main objective of statistical testing:
(i) Use statistical techniques to design test cases
(ii) Apply statistical techniques to the results of testing to determine if
the software has been adequately tested
(iii) Estimate software reliability
(iv) Apply statistical techniques to the results of testing to determine
how long testing needs to be carried out
Ans. (iii)
2 . Define the terms software reliability and software quality. How can these be measured?
Ans: Software Reliability is the probability of failure-free software operation for a specified
period of time in a specified environment.
It is necessary that the level of reliability required for a software product should be specified in
the software requirements specification (SRS) document. In order to be able to do this, we need
some metrics to quantitatively express the reliability of a software product. A good reliability
measure should be observer-independent, so that different people can agree on the degree
of reliability a system has. However, in practice, it is very difficult to formulate a metric using
which precise reliability measurement would be possible. In the absence of such measures, we
discuss six metrics that correlate with reliability as follows:
Rate of occurrence of failure (ROCOF)
Mean time to failure (MTTF)
Mean time to repair (MTTR)
Mean time between failure (MTBF)
Probability of failure on demand (POFOD)
Availability
Software Quality is the totality of functionality and features of a software product that bear on
its ability to satisfy stated or implied needs.
Measures: Product metrics help measure the characteristics of a product being developed.
Examples of product metrics are LOC and function point to measure size,
PM (person-month) to measure the effort required to develop it, months to
measure the time required to develop the product, time complexity of the
algorithms, etc.
Process Metrics help measure how a process is performing. Examples of process metrics are
review effectiveness, average number of defects found per hour of inspection, average defect
correction time, productivity, average number of failures detected during testing per LOC,
number of latent defects per line of code in the developed product.
3. Identify the factors which make the measurement of software reliability a much harder
problem than the measurement of hardware reliability.
Ans: The main reasons that make software reliability more difficult to measure than hardware
reliability:
•The reliability improvement due to fixing a single bug depends on
where the bug is located in the code.
•The perceived reliability of a software product is observer-dependent.
•The reliability of a product keeps changing as errors are detected and
fixed.
4. Through a simple plot explain how the reliability of a software product changes over its
lifetime. Draw the reliability change for a hardware product over its life time and explain why
the two plots look so different.
Ans:
A comparison of the changes in failure rate over the product lifetime for a typical hardware
product and for a software product is sketched in Figure 11.1. Observe that the plot of
change of failure rate with time for a hardware component (Figure 11.1(a)) appears like a “bath
tub”. For a hardware component, the failure rate is initially high, but decreases as the faulty
components are identified and either repaired or replaced. The system then enters its useful
life, where the rate of failure is almost constant. After some time (called the product lifetime)
the major components wear out, and the failure rate increases. The initial failures are usually
covered by the manufacturer’s warranty. A corollary of this observation (though a digression
from our topic of discussion) is that it may be unwise to buy a product (even at a good discount
to its face value) towards the end of its lifetime. That is, one should not feel happy to buy a
ten-year-old car at one tenth of the price of a new car, since it would be near the rising edge of
the bath tub curve, and one would have to spend unduly large amounts of time, effort, and
money on repairs and end up the loser. In contrast to hardware products, a software product
shows the highest failure rate just after purchase and installation (see the initial portion of the
plot in Figure 11.1(b)). As the system is used, more and more errors are identified and removed,
resulting in a reduced failure rate. This error removal continues at a slower pace during the
useful life of the product. As the software becomes obsolete, no more error correction occurs
and the failure rate remains unchanged.
5. What do you understand by a reliability growth model? How is reliability growth modelling
useful?
Ans: A reliability growth model is a mathematical model of how software reliability improves as
errors are detected and repaired.
A reliability growth model can be used to predict when (or if at all) a particular level of reliability
is likely to be attained. Thus, reliability growth modelling can be used to determine when to stop
testing to attain a given reliability level.
6. Explain using one simple sentence each what you understand by the following reliability
measures:
• A POFOD of 0.001
Ans: A POFOD of 0.001 would mean that 1 out of every 1000 service
requests would result in a failure.
• A ROCOF of 0.002
Ans: ROCOF of 0.002 means 2 failures are likely in each 1000 operational time units e.g. 2
failures per 1000 hours of operation.
• MTBF of 200 units
Ans: An MTBF of 200 time units (say, hours) indicates that once a failure occurs, the next
failure is expected to occur only after 200 hours; that is, the average operational time between
two successive failures is 200 hours.
• Availability of 0.998
Ans: Availability of 0.998 means that the system is up and running for 99.8% of the time.
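The sketch below shows, for hypothetical observed data (an 800-hour observation window with four failures and 1.6 hours of total down time), how such measures would be computed from a simple failure log; the figures are invented so that they tie in with the values above.

```python
failure_times = [120, 310, 505, 720]  # hours at which failures occurred
observation_period = 800              # total operational hours observed
total_repair_time = 1.6               # down time across all repairs (hours)

rocof = len(failure_times) / observation_period      # failures per hour
mtbf = observation_period / len(failure_times)       # hours between failures
availability = (observation_period - total_repair_time) / observation_period

print(f"ROCOF        = {rocof:.4f} failures/hour")   # 0.0050
print(f"MTBF         = {mtbf:.0f} hours")            # 200
print(f"Availability = {availability:.3f}")          # 0.998
```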
7. What is statistical testing? In what way is it useful during software development? Explain
the different steps of statistical testing.
Ans: Statistical testing is a testing process whose objective is to determine the reliability of the
product rather than to discover errors. The test cases for statistical testing are designed with an
entirely different objective from that of conventional testing. To carry out statistical testing, we
need to first define the operation profile of the product.
Statistical testing allows one to concentrate on testing those parts of the system that are most
likely to be used. Therefore, it results in a system that the users find to be more reliable (than
it actually is!). Also, the reliability estimate arrived at by statistical testing is more accurate
compared to those of the other methods discussed.
Steps:
The first step is to determine the operation profile of the software. The next step is to generate a
set of test data corresponding to the determined operation profile. The third step is to apply the
test cases to the software and record the time between each failure. After a statistically
significant number of failures have been observed, the reliability can be computed.
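A minimal Python sketch of these steps is given below, using a hypothetical operation profile for a library system (search, issue, and return operations) and a stand-in run_operation function with an invented failure rate; it estimates POFOD from the recorded outcomes.

```python
import random

# Step 1: hypothetical operation profile -- the relative frequencies
# with which users invoke each operation.
operation_profile = {"search_book": 0.70, "issue_book": 0.20, "return_book": 0.10}

def run_operation(operation):
    # Stand-in for invoking the software under test; True means the
    # request succeeded. The 0.1% failure chance is purely illustrative.
    return random.random() >= 0.001

# Steps 2 and 3: generate test data matching the profile, apply it,
# and record the outcomes.
random.seed(42)  # fixed seed so the sketch is reproducible
trials, failures = 10_000, 0
for _ in range(trials):
    operation = random.choices(list(operation_profile),
                               weights=operation_profile.values())[0]
    if not run_operation(operation):
        failures += 1

# Step 4: after enough failures have been observed, estimate reliability.
print(f"Estimated POFOD = {failures / trials:.4f}")
```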
8. Define three metrics to measure software reliability. Do you consider these metrics entirely
satisfactory to provide measure of the reliability of a system? Justify your answer.
Ans: Metrics-
1. Probability of failure on demand
POFOD measures the likelihood of the system failing when a service request is made.
2. Rate of occurrence of failures/Mean time to failure
ROCOF measures the frequency of occurrence of failures. ROCOF measure of a software
product can be obtained by observing the behavior of a software product in operation over
a specified time interval and then calculating the ROCOF value as the ratio of the total
number of failures observed and the duration of observation.
3. Availability
Availability of a system is a measure of how likely the system would be available for use over
a given period of time. This metric not only considers the number of failures occurring during
a time interval, but also takes into account the repair time (down time) of the system when a
failure occurs. This metric is important for systems such as telecommunication systems,
operating systems, and embedded controllers, which are supposed to be never down and
where repair and restart times are significant and loss of service during that time cannot be
overlooked.
All the above reliability metrics suffer from several shortcomings as far as their use in
software reliability measurement is concerned. One of the reasons is that these metrics are
centered around the probability of occurrence of system failures but take no account of the
consequences of failures. That is, these reliability models do not distinguish the relative
severity of different failures. Failures which are transient and whose consequences are not
serious are in practice of little concern in the operational use of a software product.
These types of failures can at best be minor irritants. On the other hand, more severe types of
failures may render the system totally unusable. In order to estimate the reliability of a software
product more accurately, it is necessary to classify various types of failures.
9 . How can you determine the number of latent defects in a software product during the
testing phase?
Ans: The number of latent defects in a software product can be estimated through error
seeding. A known number of artificial defects are seeded into the program, and during testing
both the seeded and the unseeded (original) defects detected are counted. If S defects were
seeded and testing detects s of the seeded defects along with n original defects, the total
number of original defects can be estimated as n × (S/s), and the number of latent defects
remaining after testing as n × (S − s)/s. The estimate is only as good as the degree to which
the seeded defects resemble the defects that occur naturally in the program.
10. State TRUE or FALSE of the following. Support your answer with proper reasoning:
(a) The reliability of a software product increases almost linearly, each time a defect gets
detected and fixed. F — the reliability improvement due to fixing a bug depends on where in
the code the bug is located and how frequently that part is executed.
(b) As testing continues, the rate of growth of reliability slows down representing a diminishing
return of reliability growth with testing effort. T — the errors remaining after prolonged testing
tend to lie in rarely executed parts of the code, so fixing them improves reliability only
marginally.
(c) Modern quality assurance paradigms are centered around carrying out thorough product
testing. F — modern quality paradigms focus on process assurance rather than on product
assurance through testing.
(d) An important use of receiving an ISO 9001 certification by a software organisation is that it
can improve its sales efforts by advertising its products as conforming to ISO 9001
certification. T
(e) A highly reliable software can be termed as a good quality software. F — reliability is only
one quality attribute; a highly reliable product with, say, an almost unusable user interface
cannot be called a good quality product.
(f) If an organisation assessed at SEI CMM level 1 has developed one software product
successfully, then it is expected to repeat its success on similar products. F — at level 1,
success depends on individual effort and heroics rather than on a defined process, so it
cannot be repeated systematically.
11. What does the quality parameter “fitness of purpose” mean in the context of software
products? Why is this not a satisfactory criterion for determining the quality of software
products?
Ans. The “fitness of purpose” criterion holds that a product is of good quality if it satisfies the
purpose for which it is intended, i.e., it correctly supports the functions stated in its SRS
document. However, this is not a wholly satisfactory definition of quality for software products.
To give an example of why this is so, consider a software product that is functionally correct.
That is, it correctly performs all the functions that have been specified in its SRS document.
Even though it may be functionally correct, we cannot consider it to be a quality product if it
has an almost unusable user interface.
12. Can reliability of a software product be determined by estimating the number of latent
defects in the software? If your answer is “yes”, explain how reliability can be determined
from an estimation of the number of latent defects in a software product. If your answer is
“no”, explain why can’t reliability of a software product be determined from an estimate of
the number of latent defects?
Ans. Unfortunately, it is very difficult to characterise the observed reliability of a system in terms
of the number of latent defects in the system using a simple mathematical expression. Consider
the following: removing errors from those parts of a software product that are very infrequently
executed makes little difference to the perceived reliability of the product. It has been
experimentally observed, by analysing the behaviour of a large number of programs, that
roughly 90 per cent of the execution time of a typical program is spent executing only about
10 per cent of its instructions. Based on this discussion we can say that the reliability of a
product depends not only on the number of latent errors but also on the exact location of
those errors. Apart from this, reliability also depends upon how the product is used, i.e., on its
execution profile.
13. Why is it important for a software development organisation to obtain ISO 9001
certification?
Ans. 1.Confidence of customers in an organisation increases when the organisation qualifies for
ISO 9001 certification. This is especially true in the international market.
2. ISO 9001 makes the development process focused, efficient, and cost effective.
3. ISO 9001 sets the basic framework for the development of an optimal process and TQM.
14. Discuss the relative merits of ISO 9001 certification and SEI CMM-based quality assessment
Ans. We identified ISO 9000 and SEI CMM as two sets of guidelines for setting up a quality
system. ISO 9000 series is a standard applicable to a broad spectrum of industries, whereas SEI
CMM model is a set of guidelines for setting up a quality system specifically addressing the
needs of the software development organisations. Therefore, SEI CMM model addresses various
issues pertaining to software industry in a more focussed manner. For example, SEI CMM model
suggests a 5-tier structure. On the other hand, ISO 9000 has been formulated by a standards
body and therefore the certificate can be used as a contract between externally independent
parties, whereas SEI CMM addresses step by step improvements of an organisation’s quality
practices.
Chapter 11
Questions 15 to 31
15. List five salient requirements that a software
development organisation
must comply with before it can be awarded the ISO 9001
certificate.
• Five salient requirements are:
• Document control
• Planning
• Review
• Testing
• Organisational aspects
• Quality software is reasonably bug or defect free, delivered on time and within budget,
meets requirements and/or expectations, and is maintainable.
• ISO 8402-1986 standard defines quality as “the totality of features and characteristics of a
product or service that bears its ability to satisfy stated or implied needs.”
Whenever an organizational task can be effectively automated, it eventually will be. Classical
statistical process control (SPC) is an example where human intervention has been historically
required because diagnosis and corrective action could not be effectively automated. The
need for classical, human-centered SPC will diminish with advances in automation, feedback
control, and automated diagnosis.
TQM's focus on the customer is only a half-truth; for the most part, organizations focus on
segments or cliques of customers, not individual customers. The growth of “one-to-one”
marketing, increasing flexibility in production and logistics, product postponement, and e-
commerce all support the goal of mass customization: being able to serve the needs of
individual customers. Quality systems will need to increasingly focus on the management of
individual customer requirements.
The constant improvement of quality in a particular market segment makes it increasingly
difficult for a firm to create new value with its products. As firms get better at understanding
what customers want and delivering it, this skill will no longer be a differentiator; it will simply be
required to remain in business. In order to enhance competitive stance, companies will focus
on getting better at understanding the unarticulated needs of their customers, and develop
solutions aimed at “total value creation”.
21. Which standard is applicable to software industry, ISO
9001, ISO 9002,
or ISO 9003?
• The ISO 9000 standard which applies to software industry is ISO 9001, since it applies
to "quality assurance in design, development, production, installation and servicing".
This standard is written for manufacturing industry, and this poses some problems
when applying it to development and maintenance of software.
22. In a software development organization, identify the persons responsible for carrying out
quality assurance activities. Explain the principal tasks they perform to meet this
responsibility.
• The Head of QA role is a senior position within an organization and is normally the next
level up from a QA manager role.
• Depending on the role and the organization, the Head of the QA role can either be hands-on from a technical
point of view or hands-off with a focus on strategy and processes, or it could be a mixture of both.
• Responsible for Defining QA strategy, approach and execution in development projects.
• Responsible for Leading and directing the QA leadership team.
• Provide leadership and technical expertise within Test Automation and Quality Assurance.
• Be accountable for the test automation projects, mentor, and provide leadership to the QA automation
developers and managers.
• Provide technical leadership and expertise within the field of Quality Assurance and Testing.
• Ensuring that the development teams adhere to the principles, guidelines and best practices of the QA
strategy as defined.
• Focus on continuous QA improvements including usage of appropriate testing tools, test techniques, test
automation.
• Building and maintenance of quality standards as well as enforcing technical and testing standards.
• Monitoring of all the QA activities, test results, leaked defects, root cause analysis and identifying areas of
improvement. Implement the steps required to improve the processes.
• Ensure the proper usage of available tools to gain the maximum benefit of the QA effort. This includes testing
tools for functional, performance, automation, etc.
23. Suppose an organisation mentions in its job
advertisement that it has
been assessed at level 3 of SEI CMM, what can you infer
about the
current quality practices at the organisation? What does
this organisation
have to do to reach SEI CMM level 4?
• At this level, the processes for both management and development activities are
defined and documented. There is a common organisation-wide understanding of
activities, roles, and responsibilities.
• Though the processes are defined, the process and product qualities are not yet measured.
At this level, the organisation builds up the capabilities of its employees through periodic
training programs. Also, review techniques are emphasized and documented to achieve
phase containment of errors.
• To reach level 4, the organisation must collect both process and product metrics.
Quantitative quality goals are set for the products, and at the time of completion of
development it is checked whether the quantitative quality goals for the product have been
met. Various tools like Pareto charts, fishbone diagrams, etc. are used to measure the
product and process quality.
24. Suppose as the president of a company, you have the
choice to either
go for ISO 9000 based quality model or SEI CMM based
model, which
one would you prefer? Give the reasoning behind your
choice.
CMM focuses strictly on software, while ISO 9001 covers hardware, software, processed
materials, and services.
Every Level 2 KPA is strongly related to ISO 9001, and every KPA is at least weakly related to
ISO 9001. A CMM Level-1 organization can be ISO 9001 certified; such an organization would
have significant Level-2 process strengths and noticeable Level-3 strengths. Given a
reasonable implementation of the software process, an ISO 9001-certified organization
should be at least close to CMM Level 2.
Since CMM addresses the step-by-step improvement of software development practices in a
more focused manner, CMM should be chosen.
25. What do you understand by total quality management
(TQM)? What
are the advantages of TQM? Does ISO 9000 standard aim
for TQM?
• Total quality management (TQM) advocates that the process followed by an
organisation must continuously be improved through process measurements. TQM goes
a step further than quality assurance and aims at continuous process improvement.
TQM goes beyond documenting processes to optimizing them through redesign.
Advantages of TQM:
• Cost reduction
• Customer satisfaction
• Defect reduction
• Morale
26. What are the principal activities of a modern quality
system?
• Principle 1: customer focus
customer focus is the first principle, right where it should be. It covers both customer needs and customer service. This principle stresses that a business should
understand its customers, what they need and when, while trying to meet, and preferably exceed, customers’ expectations.
• Principle 2: leadership
Without clear and strong leadership, a business flounders. Principle 2 is concerned with the direction of the organization. The business should have clear goals
and objectives, and ensure its employees are actively involved in achieving those targets.
• Principle 3: people involvement
People at all levels are the essence of an organization. This principle stresses involving employees fully and making use of their abilities; engaged, competent people deliver better quality.
• Principle 4: a process approach
The process approach is all about efficiency and effectiveness. Well-managed processes reduce costs, improve consistency, eliminate waste and promote continuous improvement; good processes also speed up activities.
• Principle 5: a systematic approach to management
“Identifying, understanding and managing interrelated processes as a system contributes to the organization’s effectiveness and efficiency in achieving its objectives.”
A business focuses its efforts on the key processes as well as aligning complementary processes to get better efficiency. This means that multiple processes are managed together as a system, which should lead to greater efficiency.
• Principle 6: continual improvement
This principle is very straightforward: continual improvement should be an active business objective.
• Principle 7: factual approach to decision making
A logical approach, based on data and analysis, is good business sense. Unfortunately, in a fast-paced workplace, decisions can often be made rashly, without proper thought.
• Principle 8: mutually beneficial supplier relationships
This principle deals with supply chains. It promotes the relationship between the company and its suppliers, recognizing that they are interdependent. A strong relationship enhances productivity and encourages seamless working practices.
27. In a software development organisation whose
responsibility is it to
ensure that the products are of high quality? Explain the
principal tasks
they perform to meet this responsibility.
• Top management is responsible for high quality of the software product.
• Principal tasks they perform to maintain quality are:
• Establishing and updating the organization’s software quality policy.
• Assigning one of the executives such as Vice President for SQA to be in charge of
software quality issues
• Conducting regular management reviews of performance with respect to software
quality issues
28. What do you understand by repeatable software
development?
Organizations assessed at which level SEI CMM maturity
achieves repeatable software development?
• Repeatable processes reduce variability through measurement and constant process correction.
The term originated in manufacturing, where results were well defined and repeatability meant
that if a process had consistent inputs, then defined outputs would be produced. Repeatable
means that the conversion of inputs to outputs can be replicated with little variation. It implies
that no new information can be generated during the process because we have to know all the
information in advance to predict the output results accurately.
• CMM was developed by the Software Engineering Institute (SEI) at Carnegie Mellon University in
1987.
• It is not a software process model. It is a framework which is used to analyze the approach and
techniques followed by any organization to develop a software product.
• It also provides guidelines to further enhance the maturity of those software products.
• It is based on profound feedback and development practices adopted by the most successful
organizations worldwide.
• This model describes a strategy that should be followed by moving through 5 different levels.
• Each level of maturity shows a process capability level. All the levels except level-1 are further
described by Key Process Areas (KPA’s).
29. What do you understand by key process area (KPA), in
the context of
SEI CMM? Would there be any problem if an organisation
tries to
implement higher level SEI CMM KPAs before achieving
lower level KPAs?
Justify your answer using suitable examples.
• Key process area identifies a cluster of related activities that, when performed
collectively, achieve a set of goals considered important for enhancing process
capability.
• Key process areas are building blocks that indicate the areas an organization should
focus on to improve its software process. Since the KPAs build on one another, attempting
to implement a higher-level KPA before the lower-level KPAs are in place is problematic.
For example, collecting quantitative process metrics (a level 4 concern) before the
processes themselves are defined and documented organization-wide (a level 3 concern)
yields measurements of ad hoc activities that are neither meaningful nor comparable.
30. What is the Six Sigma quality initiative? To which
category of industries is it applicable? Explain the Six Sigma
technique adopted by software organizations with respect
to the goal, the procedure, and the outcome.
• Six Sigma strategies seek to improve the quality of the output of a process by identifying and removing
the causes of defects and minimizing variability in manufacturing and business processes. It uses a set of
quality management methods, mainly empirical, statistical methods, and creates a special infrastructure
of people within the organization who are experts in these methods. Each Six Sigma project carried out
within an organization follows a defined sequence of steps and has specific value targets, for example:
reduce process cycle time, reduce pollution, reduce costs, increase customer satisfaction, and increase
profits.
• Continuous efforts to achieve stable and predictable process results (e.g. by reducing process variation)
are of vital importance to business success.
• Manufacturing and business processes have characteristics that can be defined, measured, analysed,
improved, and controlled.
• Achieving sustained quality improvement requires commitment from the entire organization, particularly
from top-level management.
• Features that set Six Sigma apart from previous quality-improvement initiatives include:
• A clear focus on achieving measurable and quantifiable financial returns from any Six Sigma project.
• An increased emphasis on strong and passionate management leadership and support.
• A clear commitment to making decisions on the basis of verifiable data and statistical methods, rather
than assumptions and guesswork.
31. What is the difference between process metrics and
product metrics?
Give four examples of each.
• Process metrics - These pertain to process quality. They are used to measure the
efficiency and effectiveness of various processes. Examples: review effectiveness, average
number of defects found per hour of inspection, average defect correction time, and
productivity.
• Product metrics - These pertain to product quality. They are used to measure cost,
quality, and the product’s time-to-market. Examples: LOC and function point (size),
person-months (effort), time complexity of the algorithms, and number of latent defects
per line of code.
Chapter 13
Question 1-19
Choose the correct option:
(a) Which of the following is not a cause for software maintenance for a typical
product?
(i) It is not possible to guarantee that a software is defect-free even after thorough
testing.
(ii) The deployment platform may change over time.
(iii) The user’s needs may change over time.
(iv) Software undergoes wear and tear after long usage. Ans. (iv), since software, unlike hardware, does not wear out with use.
(b) A legacy software product refers to a software that is:
(i) Developed at least 50 years ago.
(ii) Obsolete software product.
(iii) Software product that has poor design structure and code.
(iv) Software product that could not be tested properly before product delivery.
Ans. (iii), since a legacy product is best characterised as one that is hard to maintain owing
to poor design structure and code, regardless of its age.
(c) Which of the following assertions is true?
(i) Legacy products automatically imply very old products.
(ii) The total effort spent in maintaining an average product typically
exceeds the effort in developing it.
(iii) Reverse engineering encompasses re-engineering.
(iv) Re-engineering encompasses reverse engineering. Ans. (iv), since re-engineering consists of a reverse engineering cycle followed by a forward engineering cycle.
(d) Which of the following types of maintenance consumes the
maximum effort for a typical software?
(i) Adaptive
(ii) Corrective
(iii) Preventive
(iv) Perfective. Ans. (iv), since enhancements to functionality and performance requested by users dominate the maintenance effort for a typical product.
Chapter 13: Q2
Q) What are the different types of maintenance that a software product might need? Why are these types of
maintenance required?
Answer:
Types of Software Maintenance
There are three types of software maintenance, which are described as
follows:
Corrective: Corrective maintenance of a software product is necessary to
rectify the bugs observed while the system is in use.
Adaptive: A software product might need maintenance when the customers
need the product to run on new platforms, on new operating systems, or
when they need the product to interface with new hardware or software.
Perfective: A software product needs maintenance to support the new
features that users want it to support, to change different functionalities of
the system according to customer demands, or to enhance the performance
of the system.
Chapter 13: Q3
Q) Explain why every software system must undergo maintenance or
progressively become less useful.
Answer :
Software maintenance is becoming an important activity of a large number of organisations. This is no
surprise, given the rate of hardware obsolescence, the immortality of a software product per se, and the
demand of the user community to see the existing software products run on newer platforms, run in newer
environments, and/or with enhanced features. When the hardware platform changes, and a software product
performs some low-level functions, maintenance is necessary. Also, whenever the support environment of a
software product changes, the software product requires rework to cope with the newer interface. For
instance, a software product may need to be maintained when the operating system changes. Thus, every
software product must either continue to evolve through maintenance efforts or progressively become less
useful in its changed operating environment.
Chapter 13: Q4
Q) Discuss the process models for software maintenance and indicate how you would select an
appropriate maintenance model for a maintenance project at hand.
• First model
The first model is preferred for projects involving small reworks where the code is changed directly and
the changes are reflected in the relevant documents later. This maintenance process is graphically
presented in Figure 13.3. In this approach, the project starts by gathering the requirements for changes.
The requirements are next analysed to formulate the strategies to be adopted for code change. At this
stage, the association of at least a few members of the original development team goes a long way in
reducing the cycle time, especially for projects involving unstructured and inadequately documented
code. The availability of a working old system to the maintenance engineers at the maintenance site
greatly facilitates the task of the maintenance team as they get a good insight into the working of the
old system and also can compare the working of their modified system with the old system. Also,
debugging of the reengineered system becomes easier as the program traces of both the systems can
be compared to localise the bugs.
• Second model
The second model is preferred for projects where the amount of rework required
is significant. This approach can be represented by a reverse engineering cycle followed by a
forward engineering cycle. Such an approach is also known as software re-engineering. The
reverse engineering cycle is required for legacy products. During the reverse engineering, the old
code is analysed (abstracted) to extract the module specifications. The module specifications are
then analysed to produce the design. The design is analysed (abstracted) to produce the original
requirements specification. The change requests are then applied to this requirements
specification to arrive at the new requirements specification. At this point a forward engineering
is carried out to produce the new code. At the design, module specification, and coding a
substantial reuse is made from the reverse engineered products. An important advantage of this
approach is that it produces a more structured design compared to what the original product
had, produces good documentation, and very often results in increased efficiency. The efficiency
improvements are brought about by a more efficient design. However, this approach is more
costly than the first approach. An empirical study indicates that process 1 is preferable when the
amount of rework is no more than 15 per cent (see Figure 13.5).
Chapter 13: Q5
Q) State whether the following statements are TRUE or FALSE. Give
reasons for your answer.
(a) Legacy software products are those products which have been
developed a long time back. - False. A legacy product is best characterised as any product
that is hard to maintain; even a recently developed product with poor design and
documentation can be a legacy product, so age alone does not make a product legacy.
(b) Corrective maintenance is the type of maintenance that is most
frequently carried out on a typical software product. - False. Perfective maintenance,
carried out to support new and changed user requirements, accounts for the largest share
of maintenance effort.
Chapter 13: Q6
Q) What do you mean by the term software reverse engineering? Why is it
required? Explain the different activities undertaken during reverse engineering.
Answer:
Software reverse engineering is the process of recovering the design and the
requirements specification of a product from an analysis of its code. The purpose of
reverse engineering is to facilitate maintenance work improving the
understandability of a system and to produce the necessary documents for a legacy
system. Reverse engineering is becoming important, since legacy software products
lack proper documentation, and are highly unstructured. Even well-designed
products become legacy software as their structure degrades through a series of
maintenance efforts.
Chapter 13: Q7
Q) What do you mean by the term software re-engineering? Why is it required?
Explain the different activities undertaken during re-engineering.
Answer:
Software re-engineering is the reworking of an existing product through a reverse
engineering cycle followed by a forward engineering cycle. It is required for legacy
products when the amount of rework needed is significant. During the reverse engineering
cycle, the old code is analysed (abstracted) to extract the module specifications, the module
specifications are analysed to produce the design, and the design is analysed to produce the
original requirements specification. The change requests are then applied to this
requirements specification to arrive at the new requirements specification. Forward
engineering is then carried out to produce the new code, with substantial reuse of the
reverse engineered products at the design, module specification, and coding stages.
Re-engineering produces a more structured design than the original product had, good
documentation, and very often increased efficiency.
Chapter 13: Q8
Q) If a software product costed Rs. 10,000,000 for development, compute the annual
maintenance cost given that every year approximately 5 per cent of the code needs
modification. Identify the factors which render the maintenance cost estimation inaccurate?
Answer:
Every year approximately 5 per cent of the code is modified, i.e., KLOC added = 5% and
KLOC deleted = 5% of the total code.
Annual change traffic (ACT) = (KLOC added + KLOC deleted)/KLOC total = 0.05 + 0.05 = 10%
Maintenance cost = ACT × development cost = 10% × Rs. 10 million = Rs. 1 million per annum
• Most maintenance cost estimation models, however, give only approximate results because
they do not take into account several factors such as experience level of the engineers, and
familiarity of the engineers with the product, hardware requirements, software complexity, etc.
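A one-function Python sketch of this annual change traffic (ACT) estimate is given below; the figures are those of the question, and the function name is just illustrative.

```python
def annual_maintenance_cost(development_cost, fraction_added, fraction_deleted):
    # ACT = (KLOC added + KLOC deleted) / KLOC total, here as fractions.
    act = fraction_added + fraction_deleted
    return act * development_cost  # annual maintenance cost

# Rs. 10,000,000 development cost, 5% added + 5% deleted each year.
print(annual_maintenance_cost(10_000_000, 0.05, 0.05))  # 1000000.0 per annum
```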
Chapter 13: Q9
Q) What is a legacy software product? Explain the problems one would encounter while maintaining a legacy product.
Answer :
Software maintenance work currently is typically much more expensive than what it should be and takes more time than
required. The reasons for this situation are the following: Software maintenance work in organisations is mostly carried out using
ad hoc techniques. The primary reason being that software maintenance is one of the most neglected areas of software
engineering. Even though software maintenance is fast becoming an important area of work for many companies as the
software products of yester years age, still software maintenance is mostly being carried out as fire-fighting operations, rather
than through systematic and planned activities. Software maintenance has a very poor image in industry. Therefore, an
organisation often cannot employ bright engineers to carry out maintenance work. Even though maintenance suffers from a
poor image, the work involved is often more challenging than development work. During maintenance it is necessary to
thoroughly understand someone else’s work, and then carry out the required modifications and extensions. Another problem
associated with maintenance work is that the majority of software products needing maintenance are legacy products. Though
the word legacy implies “aged” software, there is no agreement on what exactly a legacy system is. It is prudent to define a
legacy system as any software system that is hard to maintain. The typical problems associated with legacy systems are poor
documentation, unstructured code (spaghetti code with ugly control structure), and lack of personnel knowledgeable in the product.
Many legacy systems were developed a long time back, but it is possible that a recently developed system having poor
design and documentation can also be considered a legacy system.