
UNIT –IV

TESTING AND MAINTENANCE


Software testing fundamentals

This unit covers the basics of software testing. Before studying the techniques in
detail, it helps to master a few fundamental terms, which are summarized below.

 Software Quality: the degree of conformance to explicit or implicit requirements
and expectations.

 Dimensions of Quality: software quality has dimensions such as accessibility,
compatibility, concurrency, efficiency, and so on.

 Software Quality Assurance (SQA): a set of activities for ensuring quality in
software engineering processes.

 Software Quality Control (SQC): a set of activities for ensuring quality in
software products.

 SQA and SQC differences: SQA is process-focused and prevention-oriented,
whereas SQC is product-focused and detection-oriented.

 Software Development Life Cycle (SDLC): defines the steps/stages/phases in the
building of software.

 Definition of "test": Merriam-Webster defines a test as "a critical examination,
observation, or evaluation".

Internal and external views of Testing

Inferences are said to possess internal validity if a causal relation between two variables is
properly demonstrated. A causal inference may be based on a relation when three criteria are
satisfied:

1. the "cause" precedes the "effect" in time (temporal precedence),

2. the "cause" and the "effect" are related (covariation), and


3. there are no plausible alternative explanations for the observed covariation
(nonspuriousness).

In scientific experimental settings, researchers often manipulate a variable
(the independent variable) to see what effect it has on a second variable
(the dependent variable). For example, a researcher might, for different
experimental groups, manipulate the dosage of a particular drug between groups to
see what effect it has on health. In this example, the researcher wants to make a
causal inference, namely, that different doses of the drug may be held
responsible for observed changes or differences. When the researcher can
confidently attribute the observed changes or differences in the dependent variable
to the independent variable, and can rule out other explanations (or rival
hypotheses), then the causal inference is said to be internally valid.

In many cases, however, the magnitude of effects found in the dependent variable
may not just depend on

 variations in the independent variable,



 the power of the instruments and statistical procedures used to measure and
detect the effects, and

 the choice of statistical methods (see: Statistical conclusion validity).

Rather, a number of variables or circumstances uncontrolled for (or uncontrollable)
may lead to additional or alternative explanations (a) for the effects found and/or
(b) for the magnitude of the effects found. Internal validity, therefore, is more a
matter of degree than of either-or, and that is exactly why research designs other
than true experiments may also yield results with a high degree of internal validity.

In order to allow for inferences with a high degree of internal validity, precautions
may be taken during the design of the scientific study. As a rule of thumb,
conclusions based on correlations or associations may only allow for lesser degrees
of internal validity than conclusions drawn on the basis of direct manipulation of
the independent variable. And, when viewed only from the perspective of Internal
Validity, highly controlled true experimental designs (i.e. with random selection,
random assignment to either the control or experimental groups, reliable
instruments, reliable manipulation processes, and safeguards against confounding
factors) may be the "gold standard" of scientific research. By contrast, however,
the very strategies employed to control these factors may also limit the
generalizability or External Validity of the findings.

External validity is the validity of generalized (causal) inferences in scientific
research, usually based on experiments as experimental validity. In other words, it
is the extent to which the results of a study can be generalized to other situations
and to other people. For example, inferences based on comparative psychotherapy
studies often employ specific samples (e.g. volunteers, highly depressed, no
comorbidity). If psychotherapy is found effective for these sample patients, will it
also be effective for non-volunteers, or the mildly depressed, or patients with
other concurrent disorders?

 Situation: All situational specifics (e.g. treatment conditions, time, location,
lighting, noise, treatment administration, investigator, timing, scope and
extent of measurement, etc.) of a study potentially limit generalizability.

 Pre-test effects: If cause-effect relationships can only be found when pre-
tests are carried out, then this also limits the generality of the findings.

 Post-test effects: If cause-effect relationships can only be found when post-
tests are carried out, then this also limits the generality of the findings.

 Reactivity (placebo, novelty, and Hawthorne effects): If cause-effect
relationships are found they might not be generalizable to other settings or
situations if the effects found only occurred as an effect of studying the
situation.

 Rosenthal effects: Inferences about cause-consequence relationships may
not be generalizable to other investigators or researchers.
White box testing - basis path testing
In software engineering, basis path testing, or structured testing, is a white
box method for designing test cases. The method analyzes the control flow
graph of a program to find a set of linearly independent paths of execution. The
method normally uses McCabe's cyclomatic complexity to determine the number of
linearly independent paths and then generates test cases for each path thus
obtained. Basis path testing guarantees complete branch coverage (all CFG edges),
but achieves that without covering all possible CFG paths—the latter is usually too
costly. Basis path testing has been widely used and studied.
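
As a minimal illustrative sketch (the method and numbers below are assumptions,
not taken from the text), consider a small Java method with two decision points.
Its cyclomatic complexity is V(G) = 2 + 1 = 3, so a basis set of three linearly
independent paths is enough to cover every branch.

// Hypothetical unit used to illustrate basis path testing.
public class Grader {
    // Counts how many scores are at or above the passing mark.
    public static int countPassing(int[] scores, int passMark) {
        int passed = 0;
        for (int score : scores) {      // decision 1: continue or exit the loop
            if (score >= passMark) {    // decision 2: passing or failing score
                passed++;
            }
        }
        return passed;
    }

    public static void main(String[] args) {
        // Three test cases, one per basis path:
        System.out.println(countPassing(new int[] {}, 50));    // path 1: loop body never entered
        System.out.println(countPassing(new int[] {60}, 50));  // path 2: loop entered, condition true
        System.out.println(countPassing(new int[] {40}, 50));  // path 3: loop entered, condition false
    }
}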

A decision-to-decision path, or DD-path, is a path of execution (usually through
a flow graph representing a program, such as a flow chart) between two decisions.
More recent versions of the concept also include the decisions themselves in their
own DD-paths.
Control Structure Testing

Control structure testing is a group of white-box testing methods.

 Branch Testing

 Condition Testing

 Data Flow Testing

 Loop Testing

1. Branch Testing

 also called Decision Testing



 definition: "For every decision, each branch needs to be executed at least
once."

 shortcoming - ignores implicit paths that result from compound
conditionals.

 Treats a compound conditional as a single statement. (We count each branch
taken out of the decision, regardless of which condition led to the branch.)

 This example has two branches to be executed:



IF ( a equals b) THEN statement 1

ELSE statement 2

END IF

 This example also has just two branches to be executed, despite the
compound conditional:

IF ( a equals b AND c less than d ) THEN statement 1

ELSE statement 2

END IF

 This example has four branches to be executed:

IF ( a equals b) THEN statement 1

ELSE

IF ( c equals d) THEN statement 2

ELSE statement 3

END IF
END IF

 Obvious decision statements are if, for, while, switch.

 Subtle decisions include returning a boolean expression, ternary (conditional)
expressions, and try-catch blocks.

 For this course you don't need to write test cases for IOException and
OutOfMemoryError.

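A small Java sketch of branch (decision) testing for the compound conditional
shown above; the variable values are assumptions chosen only to drive each branch.

public class BranchTestingExample {
    // Mirrors the compound conditional from the pseudocode above.
    static String classify(int a, int b, int c, int d) {
        if (a == b && c < d) {
            return "statement 1";
        } else {
            return "statement 2";
        }
    }

    public static void main(String[] args) {
        // Branch testing needs only two cases: one per branch of the decision.
        System.out.println(classify(1, 1, 2, 3));  // whole condition true  -> "statement 1"
        System.out.println(classify(1, 2, 2, 3));  // whole condition false -> "statement 2"
    }
}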



2. Condition Testing

Condition testing is a test construction method that focuses on exercising the
logical conditions in a program module.

Errors in conditions can be due to:

 Boolean operator error



 Boolean variable error

 Boolean parenthesis error

 Relational operator error

 Arithmetic expression error

definition: "For a compound condition C, the true and false branches of C and
every simple condition in C need to be executed at least once."

Multiple-condition testing requires that all true-false combinations of simple
conditions be exercised at least once. Therefore, all statements, branches, and
conditions are necessarily covered.
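
A minimal sketch of multiple-condition testing for the compound condition
(a equals b AND c less than d): all four true/false combinations of the two simple
conditions are exercised. The values are assumptions chosen only to produce each
combination.

public class ConditionTestingExample {
    static boolean compound(int a, int b, int c, int d) {
        return a == b && c < d;
    }

    public static void main(String[] args) {
        // (a == b, c < d) combinations: TT, TF, FT, FF
        System.out.println(compound(1, 1, 2, 3));  // true,  true
        System.out.println(compound(1, 1, 4, 3));  // true,  false
        System.out.println(compound(1, 2, 2, 3));  // false, true
        System.out.println(compound(1, 2, 4, 3));  // false, false
    }
}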

3. Data Flow Testing


Data flow testing selects test paths according to the locations of definitions and
uses of variables. This is a somewhat sophisticated technique and is not practical
for extensive use. Its use should be targeted at modules with nested if and loop
statements.
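
A small hypothetical sketch of the definition-use (du) pairs that data flow testing
targets; the method below is an assumption used only for illustration.

public class DataFlowExample {
    // 'rate' is defined at two points and used at one; data flow testing selects
    // paths that cover each definition-use pair for 'rate'.
    static double discountedPrice(double amount, boolean member) {
        double rate = 0.0;              // def 1 of rate
        if (member) {
            rate = 0.10;                // def 2 of rate (member branch)
        }
        return amount * (1 - rate);     // use of rate
    }

    public static void main(String[] args) {
        System.out.println(discountedPrice(100.0, false));  // covers def 1 -> use
        System.out.println(discountedPrice(100.0, true));   // covers def 2 -> use
    }
}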

4. Loop Testing

Loops are fundamental to many algorithms and need thorough testing.

There are four different classes of loops: simple, concatenated, nested, and
unstructured.

Examples:
Create a set of tests that force the following situations:

 Simple Loops, where n is the maximum number of allowable passes
through the loop (a sketch follows this list):

o Skip the loop entirely.
o Only one pass through the loop.
o Two passes through the loop.
o m passes through the loop, where m < n.
o (n-1), n, and (n+1) passes through the loop.

 Nested Loops
o Start with inner loop. Set all other loops to minimum values.
o Conduct simple loop testing on inner loop.
o Work outwards
o Continue until all loops tested.

 Concatenated Loops

o If independent loops, use simple loop testing.
o If dependent, treat as nested loops.

 Unstructured loops

o Don't test - redesign.
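
A minimal Java sketch of simple-loop testing as listed above; n = 10 and the loop
body are assumptions used only for illustration.

public class SimpleLoopTesting {
    // Sums the first 'count' values, looping at most values.length times.
    static int sumFirst(int[] values, int count) {
        int total = 0;
        for (int i = 0; i < count && i < values.length; i++) {
            total += values[i];
        }
        return total;
    }

    public static void main(String[] args) {
        int n = 10;                      // assumed maximum number of passes
        int[] data = new int[n];         // values are irrelevant to the pass counts
        int[] passCounts = {0, 1, 2, 5, n - 1, n, n + 1};  // skip, one, two, m<n, n-1, n, n+1
        for (int passes : passCounts) {
            System.out.println(passes + " requested passes -> sum = " + sumFirst(data, passes));
        }
    }
}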


Black box testing

Black-box testing is a method of software testing that examines the functionality
of an application without peering into its internal structures or workings. This
method of test can be applied to virtually every level of software
testing: unit, integration, system and acceptance. It typically comprises most if not
all higher level testing, but can also dominate unit testing.

TEST PROCEDURES

Specific knowledge of the application's code or internal structure, and programming
knowledge in general, is not required. The tester is aware of what the software is
supposed to do but is not aware of how it does it. For instance, the tester is aware
that a particular input returns a certain, invariable output but is not aware
of how the software produces that output in the first place.

Test cases

Test cases are built around specifications and requirements, i.e., what the
application is supposed to do. Test cases are generally derived from external
descriptions of the software, including specifications, requirements and design
parameters. Although the tests used are primarily functional in nature, non-
functional tests may also be used. The test designer selects both valid and invalid
inputs and determines the correct output without any knowledge of the test object's
internal structure.

Test design technique

Typical black-box test design techniques include:

 Decision table testing



 All-pairs testing

 State transition analysis

 Equivalence partitioning

 Boundary value analysis

 Cause–effect graph

 Error guessing
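
As a small illustration of two of the techniques listed above, the Java sketch below
applies equivalence partitioning and boundary value analysis to a hypothetical rule
that a quantity must lie between 1 and 100 inclusive; the rule and the values are
assumptions, not taken from the text.

public class BoundaryValueExample {
    // Hypothetical specification: a valid order quantity is between 1 and 100 inclusive.
    static boolean isValidQuantity(int quantity) {
        return quantity >= 1 && quantity <= 100;
    }

    public static void main(String[] args) {
        // Equivalence partitions: below range, in range, above range.
        // Boundary values: 0, 1, 2 around the lower bound; 99, 100, 101 around the upper bound.
        int[] inputs = {-5, 0, 1, 2, 50, 99, 100, 101, 150};
        for (int q : inputs) {
            System.out.println(q + " -> " + isValidQuantity(q));
        }
    }
}
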
Regression Testing

Regression testing is a type of software testing that seeks to uncover new software
bugs, or regressions, in existing functional and non-functional areas of a system
after changes, such as enhancements, patches or configuration changes, have been
made to them.

The intent of regression testing is to ensure that changes such as those mentioned
above have not introduced new faults. One of the main reasons for regression
testing is to determine whether a change in one part of the software affects other
parts of the software.

Common methods of regression testing include rerunning previously completed
tests and checking whether program behavior has changed and whether previously
fixed faults have re-emerged. Regression testing can be performed to test a system
efficiently by systematically selecting the appropriate minimum set of tests needed
to adequately cover a particular change.
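
A minimal sketch of a test kept in the suite to guard a previously fixed fault, in the
spirit of the paragraph above; the defect and the code are hypothetical.

public class RegressionSuite {
    // Hypothetical defect: parsing "  42 " once failed because of surrounding whitespace.
    static int parseQuantity(String raw) {
        return Integer.parseInt(raw.trim());
    }

    public static void main(String[] args) {
        // Pinned regression check: rerun the input that originally exposed the fault.
        if (parseQuantity("  42 ") != 42) {
            throw new AssertionError("Regression: whitespace handling has broken again");
        }
        System.out.println("regression check passed");
    }
}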

Contrast with non-regression testing (usually validation-test for a new issue),
which aims to verify whether, after introducing or updating a given software
application, the change has had the intended effect.

Unit Testing

In computer programming, unit testing is a software testing method by which
individual units of source code, sets of one or more computer program modules
together with associated control data, usage procedures, and operating procedures
are tested to determine if they are fit for use. Intuitively, one can view a unit as the
smallest testable part of an application. In procedural programming, a unit could be
an entire module, but it is more commonly an individual function or procedure.
In object-oriented programming, a unit is often an entire interface, such as a class,
but could be an individual method. Unit tests are short code fragments created by
programmers or occasionally by white box testers during the development process.

Ideally, each test case is independent from the others. Substitutes such as method
stubs, mock objects, fakes, and test harnesses can be used to assist testing a module
in isolation. Unit tests are typically written and run by software developers to
ensure that code meets its design and behaves as intended.
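
A minimal Java sketch of a unit test that isolates the unit under test with a
hand-written stub, as described above; the class and interface names are
assumptions, not taken from the text.

interface TaxRateProvider {
    double rateFor(String region);
}

class PriceService {
    private final TaxRateProvider taxRates;

    PriceService(TaxRateProvider taxRates) {
        this.taxRates = taxRates;
    }

    double grossPrice(double net, String region) {
        return net * (1 + taxRates.rateFor(region));
    }
}

public class PriceServiceTest {
    public static void main(String[] args) {
        // Stub: returns a fixed rate so the unit is tested in isolation from any real dependency.
        TaxRateProvider stub = region -> 0.20;
        PriceService service = new PriceService(stub);

        double result = service.grossPrice(100.0, "anywhere");
        if (Math.abs(result - 120.0) > 1e-9) {
            throw new AssertionError("expected 120.0 but got " + result);
        }
        System.out.println("unit test passed");
    }
}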

UNIT TESTING LIMITATIONS


Testing will not catch every error in the program, since it cannot evaluate every
execution path in any but the most trivial programs. The same is true for unit
testing. Additionally, unit testing by definition only tests the functionality of the
units themselves. Therefore, it will not catch integration errors or broader system-
level errors (such as functions performed across multiple units, or non-functional
test areas such as performance). Unit testing should be done in conjunction with
other software testing activities, as they can only show the presence or absence of
particular errors; they cannot prove a complete absence of errors. In order to
guarantee correct behavior for every execution path and every possible input, and
ensure the absence of errors, other techniques are required, namely the application
of formal methods to proving that a software component has no unexpected
behavior.

Software testing is a combinatorial problem. For example, every boolean decision
statement requires at least two tests: one with an outcome of "true" and one with an
outcome of "false". As a result, for every line of code written, programmers often
need 3 to 5 lines of test code. This obviously takes time and its investment may not
be worth the effort. There are also many problems that cannot easily be tested at all
– for example those that are nondeterministic or involve multiple threads. In
addition, code for a unit test is likely to be at least as buggy as the code it is
testing. Fred Brooks in The Mythical Man-Month quotes: "Never go to sea with
two chronometers; take one or three." Meaning: if two chronometers contradict, how
do you know which one is correct?

Another challenge related to writing the unit tests is the difficulty of setting up
realistic and useful tests. It is necessary to create relevant initial conditions so the
part of the application being tested behaves like part of the complete system. If
these initial conditions are not set correctly, the test will not be exercising the code
in a realistic context, which diminishes the value and accuracy of unit test results.

To obtain the intended benefits from unit testing, rigorous discipline is needed
throughout the software development process. It is essential to keep careful records
not only of the tests that have been performed, but also of all changes that have
been made to the source code of this or any other unit in the software. Use of
a version control system is essential. If a later version of the unit fails a particular
test that it had previously passed, the version-control software can provide a list of
the source code changes (if any) that have been applied to the unit since that time.
It is also essential to implement a sustainable process for ensuring that test case
failures are reviewed daily and addressed immediately. If such a process is not
implemented and ingrained into the team's workflow, the application will evolve
out of sync with the unit test suite, increasing false positives and reducing the
effectiveness of the test suite.

Unit testing embedded system software presents a unique challenge: Since the
software is being developed on a different platform than the one it will eventually
run on, you cannot readily run a test program in the actual deployment
environment, as is possible with desktop programs.
Integration Testing

Integration testing (sometimes called integration and testing, abbreviated I&T)
is the phase in software testing in which individual software modules are combined
and tested as a group. It occurs after unit testing and before validation testing.
Integration testing takes as its input modules that have been unit tested, groups
them in larger aggregates, applies tests defined in an integration test plan to those
aggregates, and delivers as its output the integrated system ready for system
testing.

The purpose of integration testing is to verify functional, performance, and
reliability requirements placed on major design items. These "design items", i.e.
assemblages (or groups of units), are exercised through their interfaces using black
box testing, success and error cases being simulated via appropriate parameter and
data inputs. Simulated usage of shared data areas and inter-process
communication is tested and individual subsystems are exercised through their
input interface. Test cases are constructed to test whether all the components
within assemblages interact correctly, for example across procedure calls or
process activations, and this is done after testing individual modules, i.e. unit
testing. The overall idea is a "building block" approach, in which verified
assemblages are added to a verified base which is then used to support the
integration testing of further assemblages.

Some different types of integration testing are big bang, top-down, and bottom-up.
Other Integration Patterns are: Collaboration Integration, Backbone Integration,
Layer Integration, Client/Server Integration, Distributed Services Integration and
High-frequency Integration.

Big Bang

In this approach, all or most of the developed modules are coupled together to form
a complete software system or major part of the system and then used for
integration testing. The Big Bang method is very effective for saving time in the
integration testing process. However, if the test cases and their results are not
recorded properly, the entire integration process will be more complicated and may
prevent the testing team from achieving the goal of integration testing.

A type of Big Bang Integration testing is called Usage Model testing. Usage
Model Testing can be used in both software and hardware integration testing. The
basis behind this type of integration testing is to run user-like workloads in
integrated user-like environments. In doing the testing in this manner, the
environment is proofed, while the individual components are proofed indirectly
through their use. Usage Model testing takes an optimistic approach to testing,
because it expects to have few problems with the individual components. The
strategy relies heavily on the component developers to do the isolated unit testing
for their product. The goal of the strategy is to avoid redoing the testing done by
the developers, and instead flesh-out problems caused by the interaction of the
components in the environment. For integration testing, Usage Model testing can
be more efficient and provides better test coverage than traditional focused
functional integration testing. To be more efficient and accurate, care must be used
in defining the user-like workloads for creating realistic scenarios in exercising the
environment. This gives confidence that the integrated environment will work as
expected for the target customers.

Top-down and Bottom-up

Bottom Up Testing is an approach to integrated testing where the lowest level
components are tested first, then used to facilitate the testing of higher level
components. The process is repeated until the component at the top of the
hierarchy is tested.

All the bottom or low-level modules, procedures or functions are integrated and
then tested. After the integration testing of lower level integrated modules, the next
level of modules will be formed and can be used for integration testing. This
approach is helpful only when all or most of the modules of the same development
level are ready. This method also helps to determine the levels of software
developed and makes it easier to report testing progress in the form of a
percentage.

Top Down Testing is an approach to integrated testing where the top integrated
modules are tested and the branch of the module is tested step by step until the end
of the related module.

Sandwich Testing is an approach to combine top down testing with bottom up
testing.

The main advantage of the Bottom-Up approach is that bugs are more easily found.
With Top-Down, it is easier to find a missing branch link.
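
A minimal Java sketch of top-down integration: the top-level module is exercised
against a stub that stands in for a lower-level module that is not yet integrated.
The module names are assumptions, not taken from the text.

interface InventoryModule {
    int stockLevel(String sku);
}

class OrderModule {
    private final InventoryModule inventory;

    OrderModule(InventoryModule inventory) {
        this.inventory = inventory;
    }

    boolean canFulfil(String sku, int quantity) {
        return inventory.stockLevel(sku) >= quantity;
    }
}

public class TopDownIntegrationDemo {
    public static void main(String[] args) {
        // Stub for the lower-level module; it is replaced by the real module once that is ready.
        InventoryModule stub = sku -> 5;
        OrderModule top = new OrderModule(stub);

        System.out.println(top.canFulfil("ABC-1", 3));  // true with the stubbed stock level of 5
        System.out.println(top.canFulfil("ABC-1", 9));  // false
    }
}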

LIMITATIONS

Any conditions not stated in specified integration tests, outside of the confirmation
of the execution of design items, will generally not be tested.
In software project management, software testing, and software
engineering, verification and validation (V&V) is the process of checking that a
software system meets specifications and that it fulfills its intended purpose. It may
also be referred to as software quality control. It is normally the responsibility
of software testers as part of the software development lifecycle.

Validation checks that the product design satisfies or fits the intended use (high-
level checking), i.e., the software meets the user requirements. This is done
through dynamic testing and other forms of review.

Verification and validation are not the same thing, although they are often
confused. Boehm succinctly expressed the difference between them:

 Validation: Are we building the right product? (This is a dynamic process of
checking and testing the real product. Software validation always involves
executing the code.)

 Verification: Are we building the product right? (This is a static method of
verifying the design and code. Software verification is human-based checking of
documents and files.)

According to the Capability Maturity Model:

 Software Validation: The process of evaluating software during or at the
end of the development process to determine whether it satisfies specified
requirements.

 Software Verification: The process of evaluating software to determine
whether the products of a given development phase satisfy the conditions
imposed at the start of that phase.

In other words, software validation ensures that the product actually meets the
user's needs, and that the specifications were correct in the first place, while
software verification ensures that the product has been built according to the
requirements and design specifications. Software validation ensures that "you built
the right thing"; software verification ensures that "you built it right". Software
validation confirms that the product, as provided, will fulfill its intended use.

From a testing perspective:


 Fault – wrong or missing function in the code.

 Failure – the manifestation of a fault during execution.

 Malfunction – according to its specification, the system does not meet its
specified functionality.

RELATED CONCEPTS

Both verification and validation are related to the concepts of quality and
of software quality assurance. By themselves, verification and validation do not
guarantee software quality; planning, traceability, configuration management and
other aspects of software engineering are required.

Within the modeling and simulation (M&S) community, the definitions of
validation, verification and accreditation are similar:

 M&S Validation is the process of determining the degree to which a model,
simulation, or federation of models and simulations, and their associated
data, are accurate representations of the real world from the perspective of
the intended use(s).

 M&S Accreditation is the formal certification that a model or simulation is
acceptable to be used for a specific purpose.

 Verification is the process of determining that a computer model,
simulation, or federation of models and simulations implementations and
their associated data accurately represent the developer's conceptual
description and specifications.

The definition of M&S validation focuses on the accuracy with which the M&S
represents the real-world intended use(s). Determining the degree of M&S
accuracy is required because all M&S are approximations of reality, and it is
usually critical to determine if the degree of approximation is acceptable for the
intended use(s). This stands in contrast to software validation.
CLASSIFICATION OF METHODS

In mission-critical software systems, where flawless performance is absolutely
necessary, formal methods may be used to ensure the correct operation of a system.
However, often for non-mission-critical software systems, formal methods prove to
be very costly and an alternative method of software V&V must be sought out. In
such cases, syntactic methods are often used.

Test cases

A test case is a tool used in the process. Test cases may be prepared for software
verification and software validation to determine if the product was built according
to the requirements of the user. Other methods, such as reviews, may be used early
in the life cycle to provide for software validation.
System Testing And Debugging

Debugging is a methodical process of finding and reducing the number of bugs, or
defects, in a computer program or a piece of electronic hardware, thus making it
behave as expected. Debugging tends to be harder when various subsystems
are tightly coupled, as changes in one may cause bugs to emerge in another.

Numerous books have been written about debugging, as it involves numerous
aspects, including interactive debugging, control flow, integration testing, log
files, monitoring (application, system), memory dumps, profiling, statistical
process control, and special design tactics to improve detection while simplifying
changes.

Normally the first step in debugging is to attempt to reproduce the problem. This
can be a non-trivial task, for example as with parallel processes or some unusual
software bugs. Also, specific user environment and usage history can make it
difficult to reproduce the problem.
After the bug is reproduced, the input of the program may need to be simplified to
make it easier to debug. For example, a bug in a compiler can make it crash when
parsing some large source file. However, after simplification of the test case, a
few lines from the original source file can be sufficient to reproduce the same
crash. Such simplification can be made manually, using a divide-and-
conquer approach. The programmer will try to remove some parts of the original
test case and check if the problem still exists. When debugging the problem in a
GUI, the programmer can try to skip some user interaction from the original
problem description and check if the remaining actions are sufficient for bugs to
appear.

After the test case is sufficiently simplified, a programmer can use a debugger tool
to examine program states (values of variables, plus the call stack) and track down
the origin of the problem(s). Alternatively, tracing can be used. In simple cases,
tracing is just a few print statements, which output the values of variables at certain
points of program execution.

TECHNIQUES

 Print debugging (or tracing) is the act of watching (live or recorded) trace
statements, or print statements, that indicate the flow of execution of a
process. This is sometimes called printf debugging, due to the use of
the printf function in C. This kind of debugging was turned on by
the command TRON in the original versions of the novice-
oriented BASIC programming language. TRON stood for, "Trace On."
TRON caused the line numbers of each BASIC command line to print as the
program ran.

 Remote debugging is the process of debugging a program running on a
system different from the debugger. To start remote debugging, a debugger
connects to a remote system over a network. The debugger can then control
the execution of the program on the remote system and retrieve information
about its state.

 Post-mortem debugging is debugging of the program after it has
already crashed. Related techniques often include various tracing techniques
and/or analysis of the memory dump (or core dump) of the crashed process.
The dump of the process could be obtained automatically by the system (for
example, when the process has terminated due to an unhandled exception), by a
programmer-inserted instruction, or manually by the interactive user.

 "Wolf fence" algorithm: Edward Gauss described this simple but very useful
and now famous algorithm in a 1982 article for Communications of the ACM
as follows: "There's one wolf in Alaska; how do you find it? First build a
fence down the middle of the state, wait for the wolf to howl, determine
which side of the fence it is on. Repeat the process on that side only, until you
get to the point where you can see the wolf." This is implemented e.g. in
the Git version control system as the command git bisect, which uses the
above algorithm to determine which commit introduced a particular bug.

 Delta Debugging – a technique of automating test case simplification.
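
A minimal Java sketch of the "wolf fence" bisection idea described above, applied
to an ordered version history; the version numbers and the predicate are
assumptions (git bisect automates the same search over commits).

public class WolfFence {
    // Assumption for the sketch: versions 0..9 exist and the bug first appeared in version 6.
    static boolean isBad(int version) {
        return version >= 6;
    }

    public static void main(String[] args) {
        int good = 0;   // known good version
        int bad = 9;    // known bad version
        // Repeatedly halve the range and test the midpoint ("which side of the fence is the wolf on?").
        while (good + 1 < bad) {
            int mid = (good + bad) / 2;
            if (isBad(mid)) {
                bad = mid;
            } else {
                good = mid;
            }
        }
        System.out.println("first bad version: " + bad);
    }
}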

Software Implementation Techniques: Coding practices

Best coding practices are a set of informal rules that the software
development community has learned over time, which can help improve the quality
of software.

Many computer programs remain in use for far longer than the original authors
ever envisaged (sometimes 40 years or more) so any rules need to facilitate both
initial development and subsequent maintenance and enhancement by people other
than the original authors.

The ninety-ninety rule: Tim Cargill is credited with this explanation as to why
programming projects often run late: "The first 90% of the code accounts for the
first 90% of the development time. The remaining 10% of the code accounts for
the other 90% of the development time." Any guidance which can redress this lack
of foresight is worth considering.

The size of a project or program has a significant effect on error rates, programmer
productivity, and the amount of management needed. Desirable quality attributes of
the resulting software include:

 Maintainability.

 Dependability.

 Efficiency.

 Usability.
Refactoring

Refactoring is usually motivated by noticing a code smell. For example, the method
at hand may be very long, or it may be a near duplicate of another nearby method.
Once recognized, such problems can be addressed by refactoring the source code,
or transforming it into a new form that behaves the same as before but that no
longer "smells". For a long routine, one or more smaller subroutines can be
extracted; or for duplicate routines, the duplication can be removed and replaced
with one shared function. Failure to perform refactoring can result in
accumulating technical debt.

There are two general categories of benefits to the activity of refactoring.

1. Maintainability. It is easier to fix bugs because the source code is easy to
read and the intent of its author is easy to grasp. This might be achieved by
reducing large monolithic routines into a set of individually concise, well-
named, single-purpose methods. It might be achieved by moving a method
to a more appropriate class, or by removing misleading comments.

2. Extensibility. It is easier to extend the capabilities of the application if it
uses recognizable design patterns, and it provides some flexibility where
none before may have existed.

Before applying a refactoring to a section of code, a solid set of automatic unit
tests is needed. The tests are used to demonstrate that the behavior of the module is
correct before the refactoring. If it inadvertently turns out that a test fails, then it's
generally best to fix the test first, because otherwise it is hard to distinguish
between failures introduced by refactoring and failures that were already there.
After the refactoring, the tests are run again to verify the refactoring didn't break
the tests. Of course, the tests can never prove that there are no bugs, but the
important point is that this process can be cost-effective: good unit tests can catch
enough errors to make them worthwhile and to make refactoring safe enough.

The process is then an iterative cycle of making a small program transformation,
testing it to ensure correctness, and making another small transformation. If at any
point a test fails, the last small change is undone and repeated in a different way.
Through many small steps the program moves from where it was to where you
want it to be. For this very iterative process to be practical, the tests must run very
quickly, or the programmer would have to spend a large fraction of his or her time
waiting for the tests to finish. Proponents of extreme programming and other agile
methodologies describe this activity as an integral part of the software
development cycle.

 Techniques that allow for more abstraction

o Encapsulate Field – force code to access the field with getter and setter
methods

o Generalize Type – create more general types to allow for more code
sharing
o Replace type-checking code with State/Strategy
o Replace conditional with polymorphism

 Techniques for breaking code apart into more logical pieces



o Componentization breaks code down into reusable semantic units that
present clear, well-defined, simple-to-use interfaces.


o Extract Class moves part of the code from an existing class into a new
class.
o Extract Method, to turn part of a larger method into a new method. By
breaking down code in smaller pieces, it is more easily
understandable. This is also applicable to functions.

 Techniques for improving names and location of code



o Move Method or Move Field – move to a more appropriate Class or
source file
o Rename Method or Rename Field – changing the name into a new one
that better reveals its purpose
o Pull Up – in OOP, move to a superclass
o Push Down – in OOP, move to a subclass
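
A small Extract Method sketch in Java, showing the kind of transformation listed
above; the class and its code are hypothetical, not taken from the text. Behaviour
is unchanged, which is verified by re-running the unit tests after the change.

class ReportPrinter {
    // After the refactoring: the averaging logic has been extracted from a longer
    // printing method into its own small, well-named, single-purpose method.
    void printReport(int[] samples) {
        System.out.println("Report");
        System.out.println("Average: " + average(samples));
    }

    private double average(int[] samples) {   // extracted method
        int total = 0;
        for (int s : samples) {
            total += s;
        }
        return samples.length == 0 ? 0 : (double) total / samples.length;
    }
}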

UNIT-V
PROJECT MANAGEMENT

Estimation – FP Based, LOC Based, Make/Buy Decision, COCOMO II

A function point is a unit of measurement to express the amount of business
functionality an information system (as a product) provides to a user. Function
points measure software size. The cost (in dollars or hours) of a single unit is
calculated from past projects.
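
As a hedged worked example (the figures are assumptions, not values from the
text): if historical data show that one function point has cost about $600 to
deliver, and a new system is counted at 200 function points, the FP-based estimate
is roughly 200 x $600 = $120,000; an hours-per-function-point rate from past
projects can be used in the same way.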

Function points are often contrasted with lines-of-code measures:

 There is a risk of "inflation" of the created lines of code, and thus a reduced
value of the measurement system, if developers are incentivized to be more
productive. FP advocates refer to this as measuring the size of the solution
instead of the size of the problem.

 Lines of Code (LOC) measures reward low-level languages because more
lines of code are needed to deliver a similar amount of functionality than in a
higher-level language. C. Jones offers a method of correcting this in his
work.

 LOC measures are not useful during early project phases where estimating
the number of lines of code that will be delivered is challenging. However,
Function Points can be derived from requirements and therefore are useful in
methods such as estimation by proxy.

LOC:

Source lines of code (SLOC), also known as lines of code (LOC), is a software
metric used to measure the size of a computer program by counting the number of
lines in the text of the program's source code. SLOC is typically used to predict the
amount of effort that will be required to develop a program, as well as to
estimate programming productivity or maintainability once the software is
produced.

SLOC measures are somewhat controversial, particularly in the way that they are
sometimes misused. Experiments have repeatedly confirmed that effort is highly
correlated with SLOC, that is, programs with larger SLOC values take more time
to develop. Thus, SLOC can be very effective in estimating effort. However,
functionality is less well correlated with SLOC: skilled developers may be able to
develop the same functionality with far less code, so one program with fewer
SLOC may exhibit more functionality than another similar program. In particular,
SLOC is a poor productivity measure of individuals, since a developer can develop
only a few lines and yet be far more productive in terms of functionality than a
developer who ends up creating more lines (and generally spending more effort).
Good developers may merge multiple code modules into a single module,
improving the system yet appearing to have negative productivity because they
remove code. Also, especially skilled developers tend to be assigned the most
difficult tasks, and thus may sometimes appear less "productive" than other
developers on a task by this measure. Furthermore, inexperienced developers often
resort to code duplication, which is highly discouraged as it is more bug-prone and
costly to maintain, but it results in higher SLOC.
Make/buy decision:

The act of choosing between manufacturing a product in-house or purchasing it from an external
supplier.

In a make-or-buy decision, the two most important factors to consider are cost and
availability of production capacity.

An enterprise may decide to purchase the product rather than producing it, if it is
cheaper to buy than to make or if it does not have sufficient production capacity to
produce it in-house. With the phenomenal surge in global outsourcing over the past
decades, the make-or-buy decision is one that managers have to grapple with very
frequently.

Factors that may influence a firm's decision to buy a part rather than produce it
internally include lack of in-house expertise, small volume requirements, desire for
multiple sourcing, and the fact that the item may not be critical to its strategy.
Similarly, factors that may tilt a firm towards making an item in-house include
existing idle production capacity, better quality control or proprietary technology
that needs to be protected.
COCOMO II

COnstructive COst MOdel II (COCOMO® II) is a model that allows one to
estimate the cost, effort, and schedule when planning a new software development
activity. COCOMO® II is the latest major extension to the original COCOMO®
(COCOMO® 81) model published in 1981. It consists of three submodels, each
one offering increased fidelity the further along one is in the project planning and
design process. Listed in increasing fidelity, these submodels are called the
Applications Composition, Early Design, and Post-architecture models.

COCOMO® II can be used for the following major decision situations:

 Making investment or other financial decisions involving a software
development effort

 Setting project budgets and schedules as a basis for planning and control

 Deciding on or negotiating tradeoffs among software cost, schedule,
functionality, performance or quality factors
 Making software cost and schedule risk management decisions

 Deciding which parts of a software system to develop, reuse, lease, or
purchase

 Making legacy software inventory decisions: what parts to modify, phase
out, outsource, etc.

 Setting mixed investment strategies to improve an organization's software
capability, via reuse, tools, process maturity, outsourcing, etc.

 Deciding how to implement a process improvement strategy, such as that
provided in the SEI CMM.

The original COCOMO® model was first published by Dr. Barry Boehm in 1981,
and reflected the software development practices of the day. In the ensuing decade
and a half, software development techniques changed dramatically. These changes
included a move away from mainframe overnight batch processing to desktop-
based real-time turnaround; a greatly increased emphasis on reusing existing
software and building new systems using off-the-shelf software components; and
spending as much effort to design and manage the software development process
as was once spent creating the software product.

These changes and others began to make applying the original COCOMO® model
problematic. The solution to the problem was to reinvent the model for the 1990s.
After several years and the combined efforts of USC-CSSE, ISR at UC Irvine, and
the COCOMO® II Project Affiliate Organizations , the result is COCOMO® II, a
revised cost estimation model reflecting the changes in professional software
development practice that have come about since the 1970s. This new, improved
COCOMO® is now ready to assist professional software cost estimators for many
years to come.

About the Nomenclature


The original model published in 1981 went by the simple name of COCOMO®.
This is an acronym derived from the first two letters of each word in the longer
phrase COnstructive COst MOdel. The word constructive refers to the fact that the
model helps an estimator better understand the complexities of the software job to
be done, and by its openness permits the estimator to know exactly why the model
gives the estimate it does. Not surprisingly, the new model (composed of all three
submodels) was initially given the name COCOMO® 2.0. However, after some
confusion in how to designate subsequent releases of the software implementation
of the new model, the name was permanently changed to COCOMO® II. To
further avoid confusion, the original COCOMO® model was also then re-designated
COCOMO® 81. All references to COCOMO® found in books and
literature published before 1995 refer to what is now called COCOMO® 81. Most
references to COCOMO® published from 1995 onward refer to what is now called
COCOMO® II.

If in examining a reference you are still unsure as to which model is being
discussed, there are a few obvious clues. If in the context of discussing
COCOMO® these terms are used: Basic, Intermediate, or Detailed for model
names; Organic, Semidetached, or Embedded for development mode, then the
model being discussed is COCOMO® 81. However, if the model names mentioned
are Application Composition, Early Design, or Post-architecture; or if there is
mention of scale factors Precedentedness (PREC), Development Flexibility
(FLEX), Architecture/Risk Resolution (RESL), Team Cohesion (TEAM), or
Process Maturity (PMAT), then the model being discussed is COCOMO® II.

Intermediate COCOMO computes software development effort as a function of program size and a
set of "cost drivers" that include subjective assessment of product, hardware, personnel and
project attributes. This extension considers a set of four "cost drivers", each with a number of
subsidiary attributes:
 Product attributes
o Required software reliability
o Size of application database
o Complexity of the product
 Hardware attributes
o Run-time performance constraints
o Memory constraints
o Volatility of the virtual machine environment
o Required turnabout time
 Personnel attributes
o Analyst capability
o Software engineering capability
o Applications experience
o Virtual machine experience
o Programming language experience

 Project attributes

o Use of software tools

o Application of software engineering methods
o Required development schedule

Each of the 15 attributes receives a rating on a six-point scale that ranges
from "very low" to "extra high" (in importance or value). An effort
multiplier from the table below applies to the rating. The product of all effort
multipliers results in an effort adjustment factor (EAF). Typical values for
EAF range from 0.9 to 1.4.

The Intermediate COCOMO formula now takes the form:

E = ai × (KLoC)^bi × EAF

where E is the effort applied in person-months, KLoC is the estimated number of
thousands of delivered lines of code for the project, and EAF is the effort
adjustment factor calculated above. The coefficient ai and the exponent bi depend
on the development mode:

Mode            ai     bi
Organic         3.2    1.05
Semi-detached   3.0    1.12
Embedded        2.8    1.20

The development time D calculation uses E in the same way as in the Basic
COCOMO.
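
A minimal worked sketch of the formula above in Java; the inputs (semi-detached
mode with ai = 3.0 and bi = 1.12, 32 KLoC, EAF = 1.10) are illustrative
assumptions.

public class IntermediateCocomo {
    // E = ai * (KLoC)^bi * EAF, in person-months.
    static double effort(double ai, double kloc, double bi, double eaf) {
        return ai * Math.pow(kloc, bi) * eaf;
    }

    public static void main(String[] args) {
        double e = effort(3.0, 32.0, 1.12, 1.10);   // assumed semi-detached project, 32 KLoC
        System.out.printf("Estimated effort: %.1f person-months%n", e);
    }
}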

DETAILED COCOMO

Detailed COCOMO incorporates all characteristics of the intermediate version
with an assessment of the cost driver's impact on each step (analysis, design, etc.)
of the software engineering process.

The detailed model uses different effort multipliers for each cost driver attribute.
These phase-sensitive effort multipliers are each used to determine the amount of
effort required to complete each phase. In detailed COCOMO, the whole software
is divided into different modules; COCOMO is then applied to the different modules
to estimate effort, and the module efforts are summed. In detailed COCOMO, the
effort is calculated as a function of program size and a set of cost drivers given
according to each phase of the software life cycle.

A Detailed project schedule is never static.

The five phases of detailed COCOMO are:

 plan and requirement.



 system design.

 detailed design.

 module code and test.

 integration and test.

Planning – Project Plan, Planning Process, RFP, Risk Management

The Project Planning Phase is the second phase in the project life cycle. It
involves creating a set of plans to help guide your team through the execution
and closure phases of the project.

The plans created during this phase will help you to manage time, cost, quality,
change, risk and issues. They will also help you manage staff and external
suppliers, to ensure that you deliver the project on time and within budget.

There are 10 Project Planning steps you need to take to complete the
Project Planning Phase efficiently. These steps, and the templates needed to
perform them, are listed below.

Project Plan

Resource Plan

Financial Plan

Quality Plan
Risk Plan

Acceptance Plan

Communications Plan

Procurement Plan

Tender Process

Statement of Work

Request for Information

Request for Proposal

Supplier Contract

Tender Register

Phase Review Form

The Project Planning Phase is often the most challenging phase for a Project
Manager, as you need to make an educated guess of the staff, resources and
equipment needed to complete your project. You may also need to plan your
communications and procurement activities, as well as contract any 3rd party
suppliers.

In short, you need to create a comprehensive suite of project plans which set out a
clear project roadmap ahead.

The Project Planning Template suite will help you to do this, by giving you a
comprehensive collection of Project Planning templates.
Identification, Projection, RMMM

Stakeholders (SH) are people or organizations (legal entities such as companies,
standards bodies) which have a valid interest in the system. They may be affected
by it either directly or indirectly. A major new emphasis in the 1990s was a focus
on the identification of stakeholders. It is increasingly recognized that stakeholders
are not limited to the organization employing the analyst. Other stakeholders will
include:

 anyone who operates the system (normal and maintenance operators)



 anyone who benefits from the system (functional, political, financial and
social beneficiaries)

 anyone involved in purchasing or procuring the system. In a mass-market
product organization, product management, marketing and sometimes sales
act as surrogate consumers (mass-market customers) to guide development
of the product
 organizations which regulate aspects of the system (financial, safety, and
other regulators)

 people or organizations opposed to the system (negative stakeholders; see
also Misuse case)
 organizations responsible for systems which interface with the system under
design

 those organizations who integrate horizontally with the organization for
whom the analyst is designing the system

Projection

Risk Projection (aka Risk Estimation)

Attempts to rate each risk in two ways:

 The probability that the risk is real

 The consequences of the problems associated with the risk, should it occur.

The project planner, along with other managers and technical staff, performs four
risk projection activities:

(1) establish a measure that reflects the perceived likelihood of a risk;

(2) delineate the consequences of the risk;

(3) estimate the impact of the risk on the project and the product; and

(4) note the overall accuracy of the risk projection so that there will be no
misunderstandings.

Developing a Risk Table

The risk table provides a project manager with a simple technique for risk projection.

Steps in Setting up the Risk Table

(1) The project team begins by listing all risks in the first column of the table,
accomplished with the help of the risk item checklists.

(2) Each risk is categorized in the second column
(e.g., PS implies a project size risk, BU implies a business risk).

(3) The probability of occurrence of each risk is entered in the next column
of the table. The probability value for each risk can be estimated by team
members individually.

(4) Individual team members are polled in round-robin fashion until their
assessment of risk probability begins to converge.

Assessing Impact of Each Risk

(1) Each risk component is assessed using the Risk Characterization Table
(Figure 1) and an impact category is determined.

(2) Categories for each of the four risk components—performance, support,
cost, and schedule—are averaged to determine an overall impact value.

Notes:

1. A risk that is 100 percent probable is a constraint on the software
project.

2. The risk table should be implemented as a spreadsheet model. This
enables easy manipulation and sorting of the entries.

3. A weighted average can be used if one risk component has more
significance for the project.

(3) Once the first four columns of the risk table have been completed, the table
is sorted by probability and by impact.
 High-probability, high-impact risks percolate to the top of the table,
and low-probability risks drop to the bottom.

(4) The project manager studies the resultant sorted table and defines a cutoff line.

 The cutoff line (drawn horizontally at some point in the table) implies that
only risks that lie above the line will be given further attention.

 Risks below the line are re-evaluated to accomplish second-order
prioritization.

 Risk impact and probability have a distinct influence on management
concern.

o A risk factor with a high impact but a very low probability of
occurrence should not absorb a significant amount of management
time.

o High-impact risks with moderate to high probability and low-
impact risks with high probability should be carried forward into
the risk analysis steps that follow.

 All risks that lie above the cutoff line must be managed.

 The column labeled RMMM contains a pointer into a Risk Mitigation,
Monitoring and Management Plan.
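
A minimal Java sketch of the risk table held as sortable records, in the spirit of the
spreadsheet model described above; the risks, probabilities and impact values are
assumptions, and the impact convention assumed here is that a lower number means
a more severe impact.

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class RiskTable {
    record Risk(String name, String category, double probability, int impact) {}

    public static void main(String[] args) {
        List<Risk> risks = new ArrayList<>(List.of(
            new Risk("High staff turnover", "PS", 0.70, 2),
            new Risk("Funding will be lost", "BU", 0.40, 1),
            new Risk("Late delivery of tools", "DE", 0.30, 3)
        ));

        // Sort by descending probability, then by impact (most severe first).
        risks.sort(Comparator.comparingDouble(Risk::probability).reversed()
                             .thenComparingInt(Risk::impact));

        for (Risk r : risks) {
            System.out.printf("%-24s %-3s p=%.2f impact=%d%n",
                    r.name(), r.category(), r.probability(), r.impact());
        }
    }
}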

Assessing Risk Impact

 Three factors determine the consequences if a risk occurs:

o Nature of the risk - the problems that are likely if it occurs; e.g., a poorly
defined external interface to customer hardware (a technical risk) will preclude
early design and testing and will likely lead to system integration problems
late in a project.

o Scope of a risk - combines the severity with its overall distribution (how
much of the project will be affected or how many customers are harmed?).

o Timing of a risk - when and how long the impact will be felt.

 Steps recommended to determine the overall consequences of a risk:
1. Determine the average probability of occurrence value for each risk
component.

2. Using Figure 1, determine the impact for each component based on the
criteria shown.
3. Complete the risk table and analyze the results as described in the preceding
sections.

Overall risk exposure, RE, is determined using:

RE = P x C

where P is the probability of occurrence for a risk, and

C is the cost to the project should the risk occur.

Example
Assume the software team defines a project risk in the following manner:

Risk identification.

 Only 70 percent of the software components scheduled for reuse will be
integrated into the application.

 The remaining functionality will have to be custom developed.

Risk probability. 80% (likely).

Risk impact.

 60 reusable software components were planned.

 If only 70 percent can be used, 18 components would have to be developed
from scratch (in addition to other custom software that has been scheduled for
development).

 Since the average component is 100 LOC and local data indicate that the
software engineering cost for each LOC is $14.00, the overall cost (impact) to
develop the components would be 18 x 100 x 14 = $25,200.

Risk exposure. RE = 0.80 x 25,200 ~ $20,200.

Risk Assessment
 Risk projection has established a set of triplets of the form:

[ri, li, xi]

where ri is the risk, li is the likelihood (probability) of the risk, and xi is the
impact of the risk.

 During risk assessment:

o further examine the accuracy of the estimates that were made during risk
projection

o attempt to rank the risks that have been uncovered

o begin thinking about ways to control and/or avert risks that are likely to
occur.

 Must define a risk referent level:

o performance, cost, support, and schedule represent risk referent levels.

o There is a level of performance degradation, cost overrun, support
difficulty, or schedule slippage (or any combination of the four) that will
cause the project to be terminated.

o A risk referent level has a single point, called the referent point or break
point, at which the decision to proceed with the project and the decision to
terminate it are equally weighted.

 The referent level is rarely represented as a smooth line on a graph; in most
cases it is a region in which there are areas of uncertainty.

 Therefore, during risk assessment, perform the following steps:

1. Define the risk referent levels for the project.

2. Attempt to develop a relationship between each (ri, li, xi) and each of
the referent levels.

3. Predict the set of referent points that define a region of termination,
bounded by a curve or areas of uncertainty.

4. Try to predict how compound combinations of risks will affect a
referent level.

Risk Refinement

 A risk may be stated generally during the early stages of project planning.

 With time, more is learned about the project and the risk, and it may be
possible to refine the risk into a set of more detailed risks.

 Represent the risk in condition-transition-consequence (CTC) format,
stated in the following form:

Given that <condition> then there is concern that (possibly) <consequence>.

 Using the CTC format for the reuse risk, we can write:

Given that all reusable software components must conform to specific design
standards and that some do not conform, then there is concern that (possibly)
only 70 percent of the planned reusable modules may actually be integrated into
the as-built system, resulting in the need to custom engineer the remaining 30
percent of components.


 · This general condition can be refined in the following manner:


Subcondition 1. Certain reusable components were developed by a third party
with no knowledge of internal design standards.

Subcondition 2. The design standard for component interfaces has not been
solidified and may not conform to certain existing reusable components.

Subcondition 3. Certain reusable components have been implemented in a
language that is not supported on the target environment.

Risk Mitigation, Monitoring, and Management

An effective strategy must consider three issues:

 risk avoidance
 risk monitoring
 risk management and contingency planning


 · Proactive approach to risk - avoidance strategy.
 · Develop risk mitigation plan.
 · e.g. assume high staff turnover is noted as a project risk, r1.
 · Based on past history:
o the likelihood, l1, of high turnover is estimated to be 0.70
o the impact, x1, is projected at level 2
o so high turnover will have a critical impact on project cost and schedule.

 · Develop a strategy to mitigate this risk and reduce turnover.
 · Possible steps to be taken:

o Meet with current staff to determine causes for turnover (e.g., poor working
conditions, low pay, competitive job market).
o Mitigate those causes that are under our control before the project starts.

o Once the project commences, assume turnover will occur and develop
techniques to ensure continuity when people leave.

o Organize project teams so that information about each development activity is widely dispersed.

o Define documentation standards and establish mechanisms to be sure that documents are developed in a timely manner.

o Conduct peer reviews of all work (so that more than one person is "up to speed").

o Assign a backup staff member for every critical technologist.

 · The project manager monitors factors that may indicate whether the risk is becoming more or less likely.



 · For high staff turnover, the following factors can be monitored:
o General attitude of team members based on project pressures.
o The degree to which the team has jelled.
o Interpersonal relationships among team members.
o Potential problems with compensation and benefits.
o The availability of jobs within the company and outside it.
 · Project manager should monitor the effectiveness of risk mitigation steps.

 · Risk management and contingency planning assumes that mitigation efforts
have failed and that the risk has become a reality.

e.g., the project is underway and a number of people announce that they will be
leaving.

 · Mitigation strategy makes sure:
 § backup is available

 § information is documented
 § knowledge has been dispersed across the team.
 · RMMM steps incur additional project cost

e.g. spending time to "backup" every critical technologist costs money.

 · Large project - 30 or 40 risks.

 · 80 percent of the overall project risk (i.e., 80 percent of the potential for
project failure) can be accounted for by only 20 percent of the identified risks.

 · Work performed during earlier risk analysis steps will help the planner to
determine which of the risks reside in that 20 percent (e.g., risks that lead to the
highest risk exposure).

The RMMM Plan

 · Risk Mitigation, Monitoring and Management Plan (RMMM) - documents


all work performed as part of risk analysis and is used by the project manager as
part of the overall project plan.

 · Alternative to RMMM - risk information sheet (RIS)

RIS is maintained using a database system, so that creation and information
entry, priority ordering, searches, and other analysis may be accomplished
easily.
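
The source does not prescribe a schema for the risk information sheet; as a hedged sketch, an RIS kept in a lightweight database (SQLite here, with assumed field names) might look like this:

# Hypothetical RIS schema kept in SQLite so that entry, priority ordering, and
# searches can be done easily. Field names and values are assumptions; the
# source only says that a database system is used.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE ris (
        risk_id      TEXT PRIMARY KEY,
        description  TEXT,
        probability  REAL,   -- li
        impact       REAL,   -- xi, expressed here as cost in dollars
        mitigation   TEXT,
        monitoring   TEXT,
        contingency  TEXT
    )
""")
conn.execute(
    "INSERT INTO ris VALUES (?, ?, ?, ?, ?, ?, ?)",
    ("R1", "Only 70% of reusable components integrate", 0.80, 25_200,
     "Enforce design standards early", "Track reuse rate per build",
     "Budget for custom development"),
)

# Priority ordering by risk exposure RE = probability x impact:
for row in conn.execute(
        "SELECT risk_id, probability * impact AS RE FROM ris ORDER BY RE DESC"):
    print(row)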
1. Risk monitoring is a project tracking activity

2. Three primary objectives:

1. assess whether predicted risks do, in fact, occur


2. ensure that risk aversion steps defined for the risk are being properly
applied
3. collect information that can be used for future risk analysis.

 Problems that occur during a project can be traced to more than one risk.

o Another job of risk monitoring is to attempt to allocate origin (what risk(s) caused which problems throughout the project).

Scheduling and Tracking

Software project scheduling is an activity that distributes estimated effort across


the planned project duration by allocating the effort to specific software engineering
tasks.

First, a macroscopic schedule is developed.

---> a detailed schedule is then refined for each entry in the macroscopic schedule.

A schedule evolves over time.

Basic principles guide software project scheduling:

- Compartmentalization
- Interdependency

- Time allocation

- Effort allocation

- Effort validation

- Defined responsibilities

- Defined outcomes

- Defined milestones

There is no single set of tasks that is appropriate for all projects.

An effective software process should define a collection of task sets, each

designed to meet the needs of different types of projects.

A task set is a collection of software engineering work

-> tasks, milestones, and deliverables.

Tasks sets are designed to accommodate different types of projects and different
degrees of rigor.

Typical project types:

- Concept Development Projects


- New Application Development Projects

- Application Enhancement Projects

- Application Maintenance Projects

- Reengineering Projects

Degree of Rigor:

- Casual

- Structured

- Strict

- Quick Reaction


Defining Adaptation Criteria:

-- This is used to determine the recommended degree of rigor.

Eleven criteria are defined for software projects:

- Size of the project


- Number of potential users

- Mission criticality

- Application longevity

- Stability of requirements

- Ease of customer/developer communication

- Maturity of applicable technology

- Performance constraints

- Embedded/non-embedded characteristics

- Project staffing

- Reengineering factors
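
One way (not mandated by the source) to turn the eleven adaptation criteria above into a recommended degree of rigor is to grade each criterion and map the average grade to a rigor level; the 1-to-5 scale, the thresholds, and the grades below are assumptions for illustration only.

# Hypothetical sketch: grade each adaptation criterion on a 1-5 scale and map
# the average grade to a recommended degree of rigor. The scale, thresholds,
# and grades are assumptions, not values taken from the source.
# ("quick reaction" is typically reserved for emergencies and is not modeled here.)

def recommended_rigor(grades):
    """grades: dict mapping criterion name -> grade in the range 1..5."""
    average = sum(grades.values()) / len(grades)
    if average < 1.5:
        return "casual"
    if average < 3.0:
        return "structured"
    return "strict"

grades = {
    "size of the project": 2,
    "number of potential users": 3,
    "mission criticality": 4,
    "application longevity": 3,
    "stability of requirements": 2,
    "ease of customer/developer communication": 2,
    "maturity of applicable technology": 3,
    "performance constraints": 4,
    "embedded/non-embedded characteristics": 3,
    "project staffing": 2,
    "reengineering factors": 1,
}
print(recommended_rigor(grades))   # -> "structured" for these assumed grades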

Individual tasks and subtasks have interdependencies based on their sequence.

A task network is a graphic representation of the task flow for a project.

Figure 7.3 shows a schematic network for a concept development project.

Critical path:
-- the tasks on the critical path must be completed on schedule for the whole project to finish on schedule.
Scheduling of a software project does not differ greatly from scheduling of any
multitask

engineering effort.

Two project scheduling methods:

- Program Evaluation and Review Technique (PERT)

- Critical Path Method (CPM)

Both methods are driven by information developed in earlier project planning


activities:

- Estimates of effort

- A decomposition of product function

- The selection of the appropriate process model

- The selection of project type and task set

Both methods allow a planner to:

- determine the critical path

- establish time estimates for each task

- calculate boundary times for each task

Boundary times:

- the earliest time and latest time to begin a task


- the earliest time and latest time to complete a task

- the total float.
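
A minimal sketch (tasks, durations, and dependencies are invented for illustration) of how the boundary times and the critical path fall out of a forward and backward pass over a small task network:

# Minimal CPM sketch: forward/backward pass over a small task network to get
# earliest/latest start times, total float, and the critical path. The tasks,
# durations, and dependencies are invented for this illustration.

tasks = {          # task: (duration in days, list of predecessors)
    "concept scoping":      (3, []),
    "preliminary design":   (5, ["concept scoping"]),
    "risk assessment":      (2, ["concept scoping"]),
    "prototype build":      (7, ["preliminary design", "risk assessment"]),
    "customer evaluation":  (2, ["prototype build"]),
}

# Forward pass: earliest start (ES) and earliest finish (EF).
ES, EF = {}, {}
for t in tasks:                          # insertion order is already topological here
    dur, preds = tasks[t]
    ES[t] = max((EF[p] for p in preds), default=0)
    EF[t] = ES[t] + dur

project_length = max(EF.values())

# Backward pass: latest finish (LF), latest start (LS), then total float.
LF, LS = {}, {}
for t in reversed(list(tasks)):
    successors = [s for s in tasks if t in tasks[s][1]]
    LF[t] = min((LS[s] for s in successors), default=project_length)
    LS[t] = LF[t] - tasks[t][0]

for t in tasks:
    total_float = LS[t] - ES[t]
    tag = "CRITICAL" if total_float == 0 else f"float={total_float}"
    print(f"{t:22s} ES={ES[t]:2d} EF={EF[t]:2d} LS={LS[t]:2d} LF={LF[t]:2d}  {tag}")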

Relationship between people and effort, Task Set &


Network, Scheduling, EVA

Software engineering must manage the relationship between the people assigned to a project and the effort applied during the product development phase. The main points are as follows:

1. When the software is small, a single person can handle the entire project, performing requirements analysis, design, code generation, and testing.

2. If the project is large, additional people are required to complete it in the stipulated time; distributing the work among more people makes it possible to finish the project earlier.

3. However, as people are added, the number of communication paths increases and the project becomes more complicated; newcomers add further confusion until they come up to speed.

4. It is possible to shorten a desired project completion date, up to a point, by adding more people. It is also possible to extend the completion date by reducing the number of resources.

5. The Putnam-Norden-Rayleigh (PNR) curve indicates the relationship that exists between the effort applied and the delivery time for a software project.

6. The curve indicates a minimum value, to, that represents the least-cost delivery time. Moving from right to left (that is, trying to accelerate delivery), the curve rises non-linearly.

7. When delivery is accelerated, the curve rises very sharply to the left of td. The PNR curve indicates that the project delivery time should not be compressed much below td.

8. The number of delivered lines of code (source statements), L, is related to development effort and development time by the software equation:

L = P x E^(1/3) x t^(4/3)

Here E is the development effort (in person-months or person-years), P is a productivity parameter, and t is the development time in the corresponding unit.

Rearranging the software equation gives an expression for development effort E:

E = L^3 / (P^3 x t^4)

where E is the effort expended over the entire life cycle for software development and maintenance, and t is the development time in years. For the values used in the original example, this works out to approximately 3.8 person-years.

This shows that, by extending the project end date by six months, for example, the number of people needed can be reduced from eight to four. The benefit comes from using fewer people over a longer time to achieve the same objective.
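
A hedged sketch of the rearranged software equation follows; the values of L and P are assumptions chosen only to show how the required effort falls as the development time t is stretched (they are not the values behind the 3.8 person-year figure above).

# Sketch of the rearranged software equation E = L^3 / (P^3 * t^4), where
# L = delivered lines of code, P = productivity parameter, t = development time
# in years. The L and P values below are assumptions for illustration only.

def effort_person_years(L, P, t):
    return L**3 / (P**3 * t**4)

L = 33_000    # assumed delivered LOC
P = 12_000    # assumed productivity parameter

for t in (1.0, 1.5, 2.0):   # stretching the schedule lowers the required effort
    print(f"t = {t:.1f} years -> E = {effort_person_years(L, P, t):.1f} person-years")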

These are all about the relationship between people and effort in software
engineering.
Description of People and process management spectrum

The people:-

The People Management Capability Maturity Model (PM-CMM) has been developed by the Software Engineering Institute.

The PM-CMM defines the following key practice areas for software people:

 Recruiting

 Selection

 Performance

 Training

 Team / culture development

 Career development

It guides organizations in the creation of mature people-management practices. Several groups of people are involved in the communication and coordination required for any effective software project. These groups can be categorized as follows:

1) Stakeholders: the people who are involved in the software process and in every software project. Stakeholders include senior managers, project managers, practitioners, customers, and end users.

2) Team leaders: the MOI model of leadership states the characteristics that define an effective project manager: motivation, organization, and ideas (innovation). Successful project leaders apply a problem-solving management style.

3) The software team: the people directly involved in a software project fall within the software project manager's scope. Seven project factors should be considered when planning the structure of a software engineering team; these are as follows:

 Difficulty of the problem to be solved.

 Size of the resultant program (in lines of code or function points).

 Time that the team will stay together.

 The degree to which the problem can be modularized.

 The required quality and reliability of the system to be built.

 The rigidity of the delivery date.

 The degree of communication required for the project.
4) Agile teams: very small, highly motivated project teams that use informal methods, resulting in overall development simplicity. This concludes the description of the people involved.

The Process:-

A comprehensive plan for software development can be established from a software process framework. A number of different task sets (tasks, milestones, work products, and quality assurance points) enable the framework activities to be adapted to the characteristics of the software project. The framework activities that characterize the software process are applicable to all software projects. The problem is to select the process model that is appropriate for the software to be engineered by the software team.

The project manager must decide which process model is appropriate, based on: 1) the customer who has requested the product, 2) the characteristics of the product itself, and 3) the project environment.

Once a preliminary plan is established, process decomposition begins: a complete plan, reflecting the work tasks required to populate the framework activities, must be created. This concludes the description of the process.

Process Decomposition:-

A software team should have a degree of flexibility in choosing the software process model that is best for the project and its engineering tasks. A relatively small project might best be accomplished using a linear sequential approach. If very tight time constraints are imposed and the problem can be heavily compartmentalized, the RAD model may be the right choice. Projects with other characteristics will lead to the selection of other process models. Once a process model is chosen, the generic framework activities (communication, planning, modeling, construction, and deployment) can be used. The framework works for the linear model, for iterative and incremental models, for evolutionary models, and for concurrent or concurrent-assembly models.
Process decomposition commences when the project manager asks how the team will accomplish each framework activity. A simple decomposition is appropriate for a relatively small and simple project; in every case, the work tasks must be adapted to the specific needs of the project. This concludes the description of process decomposition for product development activities.
Process and Project Metrics

A software metric is a measure of some property of a piece of software or its


specifications. Since quantitative measurements are essential in all sciences, there
is a continuous effort by computer science practitioners and theoreticians to bring
similar approaches to software development. The goal is obtaining objective,
reproducible and quantifiable measurements, which may have numerous valuable
applications in schedule and budget planning, cost estimation, quality assurance
testing, software debugging, software performance optimization, and optimal
personnel task assignments.

COMMON SOFTWARE MEASUREMENTS

Common software measurements include:

• Balanced scorecard

• Bugs per line of code

• Code coverage

• Cohesion
• Comment density

• Connascent software components


• Coupling

• Cyclomatic complexity (McCabe's complexity)

• DSQI (design structure quality index)

• Function point analysis

• Halstead Complexity
• Instruction path length
• Maintainability index

• Number of classes and interfaces


• Number of lines of code

• Number of lines of customer requirements


• Program execution time
• Program load time

• Program size (binary)


• Robert Cecil Martin's software package metrics

• Weighted Micro Function Points


• Function Points and Automated Function Points, an Object Management
Group standard
• CISQ automated quality characteristics measures
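
Several of the size-oriented measures in the list above (lines of code, comment density) can be computed directly from source text; the counting rules in this sketch are simplifying assumptions rather than an established standard.

# Minimal sketch: count non-blank, non-comment lines (a crude LOC measure) and
# comment density for a Python source file. Real tools use stricter definitions;
# the rules here are assumptions for illustration.

def simple_metrics(path):
    loc = comment_lines = blank_lines = 0
    with open(path, encoding="utf-8") as f:
        for line in f:
            stripped = line.strip()
            if not stripped:
                blank_lines += 1
            elif stripped.startswith("#"):
                comment_lines += 1
            else:
                loc += 1
    total = loc + comment_lines
    density = comment_lines / total if total else 0.0
    return {"loc": loc, "comment_lines": comment_lines,
            "blank_lines": blank_lines, "comment_density": round(density, 2)}

# Example usage (hypothetical file name):
# print(simple_metrics("my_module.py"))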

LIMITATIONS
As software development is a complex process, with high variance in both methodologies and objectives, it is difficult to define or measure software qualities and quantities and to determine a valid and concurrent measurement metric, especially when making such a prediction prior to detailed design. Another source of difficulty and debate is determining which metrics matter, and what they mean. The practical utility of software measurements has therefore been limited to the following domains:
• Scheduling

• Software sizing

• Programming complexity

• Software development effort estimation

• Software quality

A specific measurement may target one or more of the above aspects, or the
balance between them, for example as an indicator of team motivation or project
performance.

ACCEPTANCE AND PUBLIC OPINION

Some software development practitioners point out that simplistic measurements can cause more harm than good. Others have noted that metrics have become an integral part of the software development process. The impact of measurement on programmer psychology has raised concerns about harmful effects on performance due to stress, performance anxiety, and attempts to cheat the metrics, while others find that it has a positive impact on the value developers place on their own work and prevents them from being undervalued. Some argue that the definitions of many measurement methodologies are imprecise, and consequently it is often unclear how the tools for computing them arrive at a particular result, while others argue that imperfect quantification is better than none ("You can't control what you can't measure"). Evidence shows that software metrics are widely used by government agencies, the US military, NASA, IT consultants, academic institutions, and commercial and academic development estimation software.
