SPM Unit 3 Notes
A successful project is one in which the system is delivered on time, within budget and with the required quality. This implies that targets are set which the project manager then tries to meet. Deadlines are sometimes missed because the initial estimates were incorrect, so realistic estimates are crucial.
Estimation is made harder by political implications (estimates may be distorted by what stakeholders want to hear) and by changing technology (historical productivity data may not apply when tools and platforms change).
Barry Boehm, in his classic work on software effort models, identified the main ways of
deriving estimates of software development effort as:
• algorithmic models - which use 'effort drivers' representing characteristics of the target
system and the implementation environment to predict effort;
• expert judgement - where the advice of knowledgeable staff is solicited;
• analogy - where a similar, completed, project is identified and its actual effort is used as a
basis for the new project;
• Parkinson - which identifies the staff effort available to do a project and uses that as the 'estimate';
• price to win - where the 'estimate' is a figure that appears to be sufficiently low to win a
contract;
• top-down - where an overall estimate is formulated for the whole project and is then broken
down into the effort required for component tasks;
• bottom-up - where component tasks are identified and sized and these individual estimates
are aggregated.
Bottom-up estimating
Estimating methods can be generally divided into bottom-up and top-down approaches. With
the bottom-up approach, the estimator breaks the project into its component tasks and then
estimates how much effort will be required to carry out each task. With a large project, the
process of breaking down into tasks would be a repetitive one: each task would be analysed
into its component sub-tasks and these in turn would be further analysed. This is repeated
until you get to components that can be executed by a single person in about a week or two.
This approach is used when a project is completely novel or when no historical data is available. The steps are:
• Envisage the number and type of software modules in the final system.
• Estimate the SLOC of each identified module.
• Estimate the work content, taking into account complexity and technical difficulty.
• Calculate the work-days effort.
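As a rough sketch, the bottom-up steps above can be expressed in code; the module names, SLOC figures, complexity factors and productivity rate here are all hypothetical:

```python
# Bottom-up estimate: per-module sizes and difficulty, aggregated.
# Module names, SLOC figures, complexity factors and the productivity
# rate are all hypothetical.
modules = [
    # (module name, estimated SLOC, complexity factor)
    ("login", 400, 1.0),
    ("reporting", 1200, 1.3),   # technically harder, so weighted up
    ("database", 800, 1.1),
]

PRODUCTIVITY = 25  # assumed SLOC produced per work-day

def workdays(sloc, complexity):
    """Effort for one module: size adjusted for difficulty, divided by rate."""
    return sloc * complexity / PRODUCTIVITY

total = sum(workdays(sloc, c) for _, sloc, c in modules)
print(f"Total effort: {total:.1f} work-days")
```

Each leaf-level component is estimated separately and the individual estimates are then aggregated, which is exactly what the final sum does.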
Top-down estimating
The top-down approach is normally associated with parametric (or algorithmic) models. The
effort needed to implement a project will be related mainly to variables associated with
characteristics of the final system. The form of the parametric model will normally be one or
more formulae in the form:
effort = (system size) x (productivity rate)
For example, Amanda at IOE might estimate that the first software module to be constructed
is 2 KLOC. She might then judge that if Kate undertook the development of the code, with
her expertise she could work at a rate of 40 days per KLOC and complete the work in 2 x 40
days, that is, 80 days, while Ken, who is less experienced, would need 55 days per KLOC and
take 2 x 55 that is, 110 days to complete the task.
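The same calculation in code, using the figures from the example above:

```python
# effort = (system size) x (productivity rate), with the figures
# from the worked example: a 2 KLOC module and two developers
# working at different rates.
module_size_kloc = 2              # estimated size of the first module

rates = {                          # days per KLOC for each developer
    "Kate": 40,                    # experienced
    "Ken": 55,                     # less experienced
}

for name, days_per_kloc in rates.items():
    effort = module_size_kloc * days_per_kloc
    print(f"{name}: {effort} days")   # Kate: 80 days, Ken: 110 days
```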
Some parametric models, such as that implied by function points, are focused on system or
task size, while others, such as COCOMO, are more concerned with productivity factors.
Expert judgment:
This is asking someone who is knowledgeable about either the application area or the
development environment to give an estimate of the effort needed to carry out a task. This
method will most likely be used when estimating the effort needed to change an existing
piece of software. The estimator would have to carry out some kind of impact analysis in
order to judge the proportion of code that would be affected and from that derive an estimate.
Someone already familiar with the software would be in the best position to do this.
Some have suggested that expert judgement is simply a matter of guessing, but our own
research has shown that experts tend to use a combination of an informal analogy approach
where similar projects from the past are identified (see below), and bottom-up estimating.
Estimating By Analogy:
Analogous estimating is the act of using former projects to estimate how long or how
much a current project will take or cost. In other words, it is a technique that centres on
comparison. This means that the more data that is available, the better the estimate will
be. Thus, collecting data for each project will build up a database that can be used in
future projects for comparisons of cost and time.
Albrecht Function Point Analysis:
This is a top-down method that was devised by Allan Albrecht when he worked for IBM. Albrecht was investigating programming productivity and needed some way to measure the size of the work involved independently of the programming language used. He developed the idea of function points (FPs).
The basis of function point analysis is that computer-based information systems comprise
five major components:
• External input types are input transactions that update internal computer files.
• External output types are transactions where data is output to the user. Typically
these would be printed reports, since screen displays would come under external
inquiry types .
• Logical internal file types are the standing files used by the system. The term 'file'
does not sit easily with modern information systems. It refers to a group of data that is
usually accessed together. It might be made up of one or more record types.
• External interface file types allow for output and input that might pass to and from
other computer applications.
• External inquiry types - are transactions initiated by the user that provide
information but do not update the internal files. The user inputs some information that
directs the system to the details required.
The analyst has to identify each instance of each external user type in the projected system.
Each component is then classified as having either high, average or low complexity. The
counts of each external user type in each complexity band are multiplied by specified weights
(see the table below) to get FP scores, which are summed to obtain an overall FP count, which
indicates the information processing size.
Example: A logical internal file contains purchase orders organized into two separate record
types: the main purchase order details (purchase order number, supplier reference, purchase
order date) and the purchase order item details (product code, unit price, number ordered).
In the Albrecht method this would be counted as a single logical internal file, regardless of
its number of record types.
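A sketch of the FP calculation: the weights used here are the commonly published Albrecht values (the weights table itself is not reproduced in these notes), and the component counts are hypothetical:

```python
# Unadjusted FP count using the commonly published Albrecht weights;
# the component counts below are hypothetical.
WEIGHTS = {
    #  type                     (low, average, high)
    "external input":          (3, 4, 6),
    "external output":         (4, 5, 7),
    "external inquiry":        (3, 4, 6),
    "logical internal file":   (7, 10, 15),
    "external interface file": (5, 7, 10),
}

# counts of (low, average, high) instances of each type in the projected system
counts = {
    "external input":          (2, 1, 0),
    "external output":         (1, 2, 1),
    "external inquiry":        (3, 0, 0),
    "logical internal file":   (0, 2, 0),
    "external interface file": (1, 0, 0),
}

# multiply each count by its weight and sum everything into one FP score
fp = sum(n * w
         for t in WEIGHTS
         for n, w in zip(counts[t], WEIGHTS[t]))
print(f"Unadjusted FP count: {fp}")
```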
COCOMO:
Boehm originally based his models in the late 1970s on a study of 63 projects. Of these, only
seven were business systems, so the models could be applied to applications other than
information systems. The basic model was built around the equation

effort = c x (size)^k

where effort is measured in person-months, size is measured in thousands of delivered source
code instructions (kdsi), and c and k are constants. The first step was to derive an estimate
of the system size in terms of kdsi. The constants c and k (see the table below) depended on
whether the system could be classified, in Boehm's terms, as 'organic', 'semi-detached' or
'embedded'. These related to the technical nature of the system and the development
environment.
Organic mode - this would typically be the case when relatively small teams
developed software in a highly familiar in-house environment and when the system
being developed was small and the interface requirements were flexible.
Embedded mode - this meant the product being developed had to operate within very
tight constraints and changes to the system were very costly.
Semi-detached mode - this combined elements of the organic and the embedded
modes or had characteristics that came between the two.
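The basic model can be sketched as follows; the c and k values are the constants Boehm published for the basic model:

```python
# Basic COCOMO: effort = c x (size)^k, with effort in person-months
# and size in kdsi. The (c, k) pairs are Boehm's basic-model constants.
MODES = {
    "organic":       (2.4, 1.05),
    "semi-detached": (3.0, 1.12),
    "embedded":      (3.6, 1.20),
}

def cocomo_effort(size_kdsi, mode):
    """Estimated effort in person-months for a system of the given size."""
    c, k = MODES[mode]
    return c * size_kdsi ** k

# The same 32-kdsi system estimated under each mode: the tighter the
# constraints (embedded), the higher the predicted effort.
for mode in MODES:
    print(f"{mode}: {cocomo_effort(32, mode):.0f} person-months")
```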
Use Case Based Estimation:
Use Case Points (UCP) is a software estimation technique used to forecast the software size
for software development projects. UCP is used when the Unified Modeling
Language (UML) and Rational Unified Process (RUP) methodologies are being used for the
software design and development. The concept of UCP is based on the requirements for the
system being written using use cases, which is part of the UML set of modeling techniques.
The software size (UCP) is calculated based on elements of the system use cases with
factoring to account for technical and environmental considerations. The UCP for a project
can then be used to calculate the estimated effort for a project.
The method for determining the size estimate to develop a system is based on a calculation
with the following elements:
Unadjusted Use Case Weight (UUCW) – the point size of the software that accounts
for the number and complexity of use cases.
Unadjusted Actor Weight (UAW) – the point size of the software that accounts for the
number and complexity of actors.
Technical Complexity Factor (TCF) – factor that is used to adjust the size based on
technical considerations.
Environmental Complexity Factor (ECF) – factor that is used to adjust the size based
on environmental considerations.
Once the previous four elements have been calculated, the final size estimate can be
calculated. This final number is known as the Use Case Points or UCP for a software
development project.
Unadjusted Use Case Weight (UUCW)
The UUCW is one of the factors that contribute to the size of the software being developed. It
is calculated based on the number and complexity of the use cases for the system. To find the
UUCW for a system, each of the use cases must be identified and classified as Simple,
Average or Complex based on the number of transactions the use case contains. Each
classification has a predefined weight assigned. Once all use cases have been classified as
simple, average or complex, the total weight (UUCW) is determined by summing the
corresponding weights for each use case. The standard classifications are:
• Simple - 3 or fewer transactions, weight 5
• Average - 4 to 7 transactions, weight 10
• Complex - 8 or more transactions, weight 15
UUCW = (Total No. of Simple Use Cases x 5) + (Total No. Average Use Cases x 10)
+ (Total No. Complex Use Cases x 15)
Unadjusted Actor Weight (UAW)
The UAW is calculated in the same way from the actors: each actor is classified as Simple (e.g. another system with a defined API), Average (another system interacting through a protocol, or a person via a text interface) or Complex (a person interacting through a graphical user interface), and the counts are multiplied by the corresponding weights:
UAW = (Total No. of Simple actors x 1) + (Total No. Average actors x 2) + (Total
No. Complex actors x 3)
Technical Complexity Factor (TCF)
The TCF is one of the factors applied to the estimated size of the software in order to account
for technical considerations of the system. It is determined by assigning a score between 0
(factor is irrelevant) and 5 (factor is essential) to each of the 13 technical factors listed in the
table below. This score is then multiplied by the defined weighted value for each factor. The
total of all calculated values is the technical factor (TF). The TF is then used to compute the
TCF with the following formula:
TCF = 0.6 + (TF/100)
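Putting the pieces together, a sketch of the full UCP calculation; the counts and factor scores below are hypothetical, and the ECF formula (1.4 - 0.03 x EF) is Karner's standard form, which is not shown in the notes above:

```python
# UCP sketch. UUCW/UAW weights and the TCF formula are those given in
# the text; the ECF formula (1.4 - 0.03 * EF) is Karner's standard
# form. All counts and factor scores are hypothetical.
simple_uc, average_uc, complex_uc = 5, 10, 3   # hypothetical use case counts
simple_a, average_a, complex_a = 2, 2, 1       # hypothetical actor counts

uucw = simple_uc * 5 + average_uc * 10 + complex_uc * 15
uaw = simple_a * 1 + average_a * 2 + complex_a * 3

tf = 30   # assumed sum of weighted technical factor scores
ef = 20   # assumed sum of weighted environmental factor scores

tcf = 0.6 + tf / 100        # technical complexity factor
ecf = 1.4 - 0.03 * ef       # environmental complexity factor

# final size estimate: unadjusted size scaled by both adjustment factors
ucp = (uucw + uaw) * tcf * ecf
print(f"UUCW={uucw}, UAW={uaw}, UCP={ucp:.1f}")
```

The UCP figure can then be multiplied by an effort rate (hours per use case point) to obtain an effort estimate for the project.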
Objectives of Activity Planning:
• avoid different activities competing for the same resources at the same time;
• produce a detailed schedule showing which staff carry out each activity;
• replan the project during its life to correct drift from the target.
Resource allocation - What are the most effective ways of allocating resources to the project and when should they be available?
Project Schedule
A project plan is a detailed plan which shows the dates when each activity should start and
finish and when, and how much of, each resource will be required. Once the plan has been
refined to this level of detail we call it a project schedule.
Producing the schedule involves three further stages:
• Risk analysis
• Resource allocation
• Schedule production
Identifying Activities: Essentially there are three approaches to identifying the activities or
tasks that make up a project .
The activity-based approach consists of creating a list of all the activities that the project is
thought to involve.
When listing activities, particularly for a large project, it might be helpful to subdivide the
project into the main life cycle stages and consider each of these separately.
Rather than doing this in an ad hoc manner, with the obvious risks of omitting or double-counting tasks, a much favoured way
of generating a task list is to create a Work Breakdown Structure (WBS). This involves identifying the main (or high-level)
tasks required to complete a project and then breaking each of these down into a set of lower-level tasks.
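A WBS can be represented as a simple nested structure and flattened into the task list used for estimation; the phase and task names here are hypothetical:

```python
# A Work Breakdown Structure as a nested mapping of high-level tasks
# to lower-level tasks; phase and task names are hypothetical.
wbs = {
    "analysis": ["gather requirements", "write specification"],
    "design":   ["outline design", "detailed design"],
    "build":    ["code modules", "unit test", "integration test"],
}

# Flatten the structure into the single task list used for estimation;
# deriving the list from one structure avoids omitting or double-counting.
tasks = [f"{phase}: {task}"
         for phase, subtasks in wbs.items()
         for task in subtasks]

for t in tasks:
    print(t)
```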
Product Based Approach:
The Product Flow Diagram (PFD) indicates, for each product, which other products are
required as inputs. The PFD can therefore be easily transformed into an ordered list of
activities by identifying the transformations that turn some products into others.
The figure below is based on a simple list of final deliverables and, for each deliverable, a set
of activities required to produce that product.