Software Project Management
STUDY MATERIAL
This document contains the study material for Software Project Management as per the JNTU syllabus.
CHAPTER #    CHAPTER    PAGE NUMBER    NO. OF QUESTIONS
UNIT - I
1 Conventional Software Management 4 3
2 Evolution of Software Economics 13 4
UNIT – II
3 Improving Software Economics 18 10
UNIT – III
4 The Old Way and the New 32 4
5 Life-Cycle Phases 40 3
UNIT – IV
6 Artifacts of the Process 46 10
7 Model-Based Software Architecture 63 6
UNIT – V
8 Workflows of the Process 67 3
9 Checkpoints of the Process 72 4
10 Iterative Process Planning 79 4
UNIT – VI
11 Project Organizations and Responsibilities 89 4
12 Process Automation 97 7
UNIT – VII
13 Project Control and Process Instrumentation 111 6
14 Tailoring the Process 125 9
UNIT – VIII
15 Modern Project Profiles 133 7
16 Next-Generation Software Economics 140 4
17 Modern Process Transitions 146 2
Appendix D CCPDS-R Case Study 150 18
III V SEMESTER
MC 5.5.1 SOFTWARE PROJECT MANAGEMENT
TEACHING PLAN
Unit/Item No.    Topic    Book Reference (pages)    No. of Periods
UNIT I
1 Conventional Software Management
1.1 The Waterfall Model 6 – 17 2
1.2 Conventional Software Management Performance 17 – 20 1
2 Evolution of Software Economics
2.1 Software Economics 21 – 26 1
2.2 Pragmatic Software Cost Estimation 26 – 30 2
UNIT II
3 Improving Software Economics
3.1 Reducing Software Product Size 33 – 40 2
3.2 Improving Software Processes 40 – 43 1
3.3 Improving Team Effectiveness 43 – 46 1
3.4 Improving Automation through Software Environments 46 – 48 1
3.5 Achieving Required Quality 48 – 51 1
3.6 Peer Inspections: A Pragmatic View 51 – 54 2
UNIT III
4 The Old Way and the New
4.1 The Principles of Conventional Software Engineering 55 – 63 2
4.2 The Principles of Modern Software Management 63 – 66 2
4.3 Transitioning to an Iterative Process 66 – 68 1
5 Life-Cycle Phases
5.1 Engineering and Production Stages 74 – 76 1
5.2 Inception Phase 76 – 77 1
5.3 Elaboration Phase 77 – 79 1
5.4 Construction Phase 79 – 80 1
5.5 Transition Phase 80 – 82 2
UNIT IV
6 Artifacts of the Process
6.1 The Artifact Sets 84 – 96 3
6.2 Management Artifacts 96 – 103 2
6.3 Engineering Artifacts 103 – 105 2
6.4 Pragmatic Artifacts 105 – 108 1
7 Model-Based Software Architecture
7.1 Architecture: A Management Perspective 110 – 111 1
7.2 Architecture: A Technical Perspective 111 – 116 1
UNIT V
8 Workflows of the Process
8.1 Software Process Workflows 118 – 121 1
8.2 Iteration Workflows 121 – 124 1
9 Checkpoints of the Process
9.1 Major Milestones 126 – 132 2
9.2 Minor Milestones 132 – 133 1
9.3 Periodic Status Assessments 133 – 134 1
10 Iterative Process Planning
10.1 Work Breakdown Structures 139 – 146 2
10.2 Planning Guidelines 146 – 149 1
10.3 The Cost and Schedule Estimating Process 149 – 150 1
10.4 The Iteration Planning Process 150 – 153 1
10.5 Pragmatic Planning 153 – 154 1
UNIT VI
11 Project Organizations and Responsibilities
11.1 Line-of-Business Organizations 156 – 158 1
11.2 Project Organizations 158 – 165 2
11.3 Evolution of Organizations 165 – 166 1
12 Process Automation
12.1 Tools: Automation Building Blocks 168 – 172 1
12.2 The Project Environment 172 – 186 2
UNIT VII
13 Project Control and Process Instrumentation
13.1 The Seven Core Metrics 188 – 190 1
13.2 Management Indicators 190 – 196 2
13.3 Quality Indicators 196 – 199 1
13.4 Life-Cycle Expectations 199 – 201 1
13.5 Pragmatic Software Metrics 201 – 202 1
13.6 Metrics Automation 202 – 208 1
14 Tailoring the Process
14.1 Process Discriminants 209 – 218 2
14.2 Example: Small-Scale Project versus Large-Scale Project 218 – 220 1
UNIT VIII
15 Modern Project Profiles
15.1 Continuous Integration 226 – 227 1
15.2 Early Risk Resolution 227 – 228 1
15.3 Evolutionary Requirements 228 – 229 1
15.4 Teamwork among Stakeholders 229 – 231 1
15.5 Top 10 Software Management Principles 231 – 232 1
15.6 Software Management Best Practices 232 – 236 1
16 Next-Generation Software Economics
16.1 Next-Generation Cost Models 237 – 242 2
16.2 Modern Software Economics 242 – 247 1
17 Modern Process Transitions
17.1 Culture Shifts 248 – 251 1
17.2 Denouement 251 – 254 1
Total 75
PART – I  SOFTWARE MANAGEMENT RENAISSANCE
Chapter – 1  CONVENTIONAL SOFTWARE MANAGEMENT
Software Crisis:
Flexibility of the software is both a boon and a bane.
Boon: it can be programmed to do anything.
Bane: because of the “anything” factor, it becomes difficult to plan, monitor, and control
software development.
This unpredictability is the basis of what is known as the “software crisis”.
A number of analyses have been done on the state of the software engineering industry over the last few decades.
They concluded that the success rate of software projects is very low.
Their other findings can be summarized as:
1. Software development is highly unpredictable.
Only about 10% of projects are delivered successfully within initial budget and schedule
estimates.
2. It is the management discipline, more than technology advances, that is responsible for the success or failure of projects.
3. The level of software scrap and rework is indicative of an immature process.
These three conclusions, while showing the magnitude of the problem and the state of current software management, prove that there is much room for improvement.
1.1 THE WATERFALL MODEL
The conventional software process is based on the waterfall model.
The waterfall model can be taken as a benchmark of the software development process.
In retrospect, we shall examine the theory behind the waterfall model to analyze critically how the industry ignored much of that theory, yet still managed to evolve good and not-so-good practices, particularly while using modern technologies.
1.1.1 IN THEORY
Winston Royce’s paper – Managing the Development of Large Scale Software Systems –
based
on lessons learned while managing large software projects, provides a summary of
conventional
software management philosophy.
Three primary points presented in the above paper are:
1. There are two essential steps common to the development of computer programs –
analysis
and coding.
2. In addition to these steps, several other “overhead” steps must be introduced: system requirements definition, software requirements definition, program design, and testing.
These steps help in managing and controlling the intellectual freedom associated with software development [in comparison with physical development processes].
(Royce’s paper illustrates the corresponding project profile and the basic steps in developing a large-scale program.)
3. The basic framework described in the waterfall model is risky and failure-prone.
The testing phase – taken up towards the end of the development life cycle – provides the first opportunity to physically try out the timing, storage, input/output transfers, etc. against what was analyzed theoretically.
If changes are then required in the design, they disrupt the software requirements on which the design was based, causing those requirements to be violated.
Most of these development risks can be eliminated by following five improvements to the
basic
waterfall process.
The proposed five improvements are:
1. Program design comes first. The first improvement is in terms of introducing a
preliminary program design phase between the software requirements generation and the
analysis phases.
This ensures that the software will not fail because of storage, timing, and data flux.
As analysis proceeds in the succeeding phase, the designer should make the analyst aware
of the consequences of the storage, timing, and operational constraints.
If the total resources required are insufficient, or if the nascent operational design is wrong, this will be recognized at a very early stage.
Thereby, the iteration/redoing of the requirements analysis or preliminary design can be
taken up
without adversely affecting the final design, coding, and testing activities.
The criticism should be targeted at the practice of the approach, which incorporated
various
unsound and unworkable elements.
Past and current practice of the waterfall model approach is referred to as the
“conventional”
software management approach or process.
The waterfall process is no longer a good framework for modern software engineering
practices
and technologies.
It can, however, serve as a reality benchmark against which to define an improved process that is free of the fundamental flaws of the conventional process.
1.1.2 IN PRACTICE
Projects using the conventional process exhibited the following symptoms
characterizing their failure:
Protracted integration and late design breakage
Late risk resolution
Requirements-driven functional decomposition
Adversarial stakeholder relationships
Focus on documents and review meetings
Protracted Integration and Late Design Breakage
Figure 1-2 illustrates development progress versus time for a typical development project
using
the waterfall model management process.
Progress is defined as percent coded – that is, demonstrable in its target form.
Software that is compilable and executable need not necessarily be complete, compliant,
or up to
specifications.
From the figure we can notice, regarding the development activities, that:
Early success via paper designs and thorough briefings
Commitment to code late in the life cycle
Integration difficulties due to unforeseen implementation issues and interface ambiguities
Heavy budget and schedule pressure to get the system working
Late and last-minute efforts of non-optimal fixes, with no time for redesign
A very fragile, unmaintainable product delivered late
Given the immature languages and technologies used in the conventional approach, there was substantial emphasis on perfecting the design before committing it to code, because the code was difficult to understand and change.
This practice resulted in the use of multiple formats –
requirements in English
preliminary design in flowcharts
detailed design in program design languages
implementations in the target languages like FORTRAN, COBOL, or C
and error-prone, labor-intensive translations between formats.
Conventional techniques imposed a waterfall model on the design process.
This resulted in late integration and lower performance levels.
In this scenario, the entire system was designed on paper, then implemented all at once,
then
integrated.
Only at the end of this process was there scope for system testing to verify the soundness of the fundamental architecture – its interfaces and structure.
Generally, in conventional processes 40% or more of life-cycle resources are consumed
by
testing.
Consequently, projects tend to have a protracted integration phase (Fig 1-2) as major
redesign
initiatives are implemented.
This process tends to resolve the important risks, but at the expense of quality and maintainability.
Redesigning may also include tying up loose ends at the last minute and patching bits and pieces into a single coherent whole.
Such changes do not preserve the overall design integrity and its maintainability.
Requirements–Driven Functional Decomposition
Traditionally, the software development process has been requirements-driven:
An attempt is made to provide a precise requirements definition, and
then to implement exactly those requirements.
This approach depends on specifying requirements completely and unambiguously before
other
development activities can begin.
It naively treats all requirements as equally important, and depends on those requirements
remaining constant over the software development life cycle.
These conditions rarely occur in the real world.
Specification of requirements is a difficult and important part of the software
development
process.
Virtually every major software program suffers from severe difficulties in requirements
specification.
Treating all requirements as equal drains substantial engineering hours away from the driving requirements and onto less important ones, and wastes effort on the paperwork associated with traceability, testability, logistics support, and so on.
Much of this paperwork is discarded later as the driving requirements and the understanding of the problem evolve.
Another property of the conventional approach is that the requirements are typically
specified in
a functional manner.
The classic waterfall process is built upon the fundamental assumption that the software
itself is
decomposed into functions.
Requirements are then allocated to the resulting components.
This decomposition is different from a decomposition based on OOD and the use of
existing
components.
Moreover, the functional decomposition becomes anchored in contracts, subcontracts, and work breakdown structures, precluding an architecture-driven approach.
Adversarial Stakeholder Relationships
The conventional process resulted in adversarial stakeholder relationships because of
1. the difficulties of requirements specification, and
2. the exchange of information solely through paper documents carrying information in ad hoc formats.
The lack of a uniform, standard notation resulted in subjective reviews and opinionated exchanges of information.
[Figure 1-3. Risk profile of a conventional software project across its life cycle: project risk exposure remains high through the risk exploration and risk elaboration periods (requirements, design, coding) and falls only during the focused risk resolution and controlled risk management periods (integration, testing).]
Typical sequence of events for most of contractual software efforts:
1. The contractor prepared a draft contract-deliverable document capturing an
intermediate
artifact and delivered it to the customer for approval.
2. The customer was expected to provide comments within 2 to 4 weeks.
3. The contractor incorporated these comments and submitted – in 2 to 4 weeks – a final
version for approval.
This type of one-time review process encouraged high levels of sensitivity on the part of both customers and contractors.
The overhead of such a paper-exchange review process was intolerable.
It bred mistrust between customer and contractor, and made balancing requirements, schedule, and cost a difficult proposition.
Focus on Documents and Review Meetings
The conventional process focused more on producing documents that attempted to describe the software product than on producing tangible increments of the product itself.
Even milestones were discussed in meetings in terms of documents.
Contractors ended up spending their energy on producing documentary evidence of meeting milestones and demonstrating progress to stakeholders, instead of on reducing risk and producing quality software.
Most design reviews resulted in low engineering value and high cost in terms of the effort
and
schedule involved in their preparation and conduct.
TABLE 1-2. Results of conventional software project design reviews

APPARENT RESULT: Big briefing to a diverse audience
REAL RESULTS: Only a small percentage of the audience understands the software. Briefings and documents expose few of the important assets and risks of complex software systems.

APPARENT RESULT: A design that appears to be compliant
REAL RESULTS: There is no tangible evidence of compliance. Compliance with ambiguous requirements is of little value.

APPARENT RESULT: Coverage of requirements (typically hundreds)
REAL RESULTS: Few (tens) are design drivers. Dealing with all requirements dilutes the focus on the critical drivers.

APPARENT RESULT: A design considered “innocent until proven guilty”
REAL RESULTS: The design is always guilty. Design flaws are exposed later in the life cycle.
Diagnosing these five symptoms of:
Protracted integration and late design breakage
Late risk resolution
Requirements-driven functional decomposition
Adversarial stakeholder relationships
Focus on documents and review meetings
can be difficult, particularly in the early phases of the life cycle, when the problems of the conventional model are still curable.
Hence, a modern software process must use mechanisms that assess project status early in the life cycle and continue with objective, periodic checkups.
1.2 CONVENTIONAL SOFTWARE MANAGEMENT PERFORMANCE
Barry Boehm’s “Industrial Software Metrics Top 10 List” is a good, objective
characterization of
the state of software development.
Many of the metrics are gross generalizations, yet they accurately describe some of the fundamental economic relationships that resulted from the conventional process as practiced in the past.
The following are the metrics in Boehm’s top 10 list:
1. Finding and fixing a software problem after delivery costs 100 times more than finding
and fixing the problem in early design phases.
This metric applies to every dimension of process improvement, and it holds for other processes as well as for software development.
2. You can compress software development schedules by at most 25% of nominal, but no more.
One reason for this is: an N% reduction in schedule requires an M% (M > N) increase in
human resources.
This entails additional management overhead.
Given the flexibility obtainable by scheduling activities concurrently, conserving sequential activities, and working within other resource constraints, the general limit of this compression is about 25%.
For example, say optimally, a 100-staff-month effort may be achievable in 10 months by
10 people.
Could the job be done in one month with 100 people?
Two months with 50 people?
These alternatives are unrealistic.
The 25% compression metric says the limit here is 7.5 months – requiring additional
staff-months to the tune of 20.
Any further schedule compression is doomed to fail.
An optimal schedule, on the other hand, can be extended almost arbitrarily: depending on the staff, the job could be performed over a much longer time with fewer human resources.
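A minimal arithmetic sketch of this example (the 25% limit, the 100-staff-month baseline, and the roughly 20 extra staff-months come from the text above; the variable names and output format are illustrative):

```python
# Sketch of Boehm's schedule-compression limit, using the example above.
nominal_effort = 100        # staff-months (10 people for 10 months)
nominal_schedule = 10.0     # months
max_compression = 0.25      # schedules compress at most 25% of nominal

limit_schedule = nominal_schedule * (1 - max_compression)  # 7.5 months
extra_effort = 20           # additional staff-months needed at the limit
total_effort = nominal_effort + extra_effort               # ~120

print(f"Shortest feasible schedule: {limit_schedule:.1f} months")
print(f"Effort at that schedule:   ~{total_effort} staff-months "
      f"(~{total_effort / limit_schedule:.0f} people)")
# One month with 100 people, or two months with 50, falls below the
# 7.5-month floor and is therefore unrealistic under this rule.
```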
3. For every $1 spent on development, $2 is spent on maintenance.
Boehm calls this the “iron law of software development.”
Whether it is a long-lived commercial product requiring half-yearly upgrades or a custom software system, twice as much money will be spent over the maintenance life cycle as was spent in the development life cycle.
4. Software development and maintenance costs are primarily a function of the number
of
source lines of code (SLOC).
This metric is more applicable to custom software development in the absence of
commercial components, and lack of reuse as was the case in the conventional era.
5. Variations among people account for the biggest differences in software productivity.
This is a key piece of conventional wisdom:
Hire good people.
When objective knowledge of the reasons for success or failure is not available, the
obvious scapegoat is the quality of the people.
This judgment is subjective and difficult to challenge.
6. The overall ratio of software to hardware costs is still growing: in 1955 it was 15:85; in 1985, 85:15.
The 85% figure reflects the level of functionality allocated to software in system solutions, not just software productivity.
7. Only 15% of software development effort is devoted to programming.
This is an indicator of the need for balance among other activities besides coding like
requirements management, design, testing, planning, project control, change
management,
and tool preparation and selection.
8. Software systems and products typically cost 3 times as much per SLOC as individual
software programs. Software-system products – system of systems – cost 9 times as
much.
This exponential relationship is the essence of what is called the diseconomy of scale.
Unlike other commodities, the more software you build, the more expensive it becomes per SLOC.
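The diseconomy can be pictured with a power-law cost model in which effort grows faster than size. This is a hedged illustration only: the coefficient and the COCOMO-style exponent of 1.2 below are assumed values, not figures from this material:

```python
# Illustration of diseconomy of scale: with a process exponent p > 1.0,
# cost per SLOC rises as the product grows. The coefficient and the
# exponent are assumed, COCOMO-style values, not from the text.
def effort(size_ksloc: float, c: float = 3.0, p: float = 1.2) -> float:
    """Effort = c * size^p, in staff-months."""
    return c * size_ksloc ** p

for size in (10, 100, 1000):  # thousands of SLOC
    e = effort(size)
    print(f"{size:5d} KSLOC -> {e:9.0f} staff-months "
          f"({e / size:5.2f} staff-months per KSLOC)")
# Unit cost rises with size: the more software built, the more it
# costs per SLOC, unlike commodities that enjoy economies of scale.
```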
9. Walkthroughs catch 60% of errors.
Read together with metric 1, walkthroughs – though they catch 60% of errors – are not catching the errors that matter, and certainly not early enough in the life cycle.
All defects are not created equal.
Human inspection methods like walkthroughs are good at catching surface problems and
style issues.
When ad hoc notations are used, human methods may be useful as a quality assurance
method.
For uncovering issues like resource contention, performance bottlenecks, control
conflicts, and other higher order issues, human methods are not efficient.
Engineering and feedback cycles now take only a few days or weeks – a great reduction from the months required earlier.
Further, the old process could not afford reruns: designs were completed – after thorough analysis and design – in a single construction cycle.
The new GUI process is geared to take the user interface through a few realistic versions,
incorporating user feedback all along the way.
It also achieves a stable understanding of the requirements and the design issues in
balance with one another.
The ever-increasing advances in the hardware technology also have been influencing the
software technology improvements.
The availability of higher CPU speeds, more memory, and more network bandwidth has
eliminated many complexities.
Simpler, brute-force solutions are now possible – all this because of advances in
hardware technology.
3.1 REDUCING SOFTWARE PRODUCT SIZE
Producing a product that achieves the design goals with the minimum amount of human-generated source material is the most significant way to improve return on investment (ROI) and affordability.
Component-based development is the way to reduce the “source” language size.
Reuse, OO technology, automatic code generation, and higher-order programming languages are all focused on achieving a system with fewer lines of human-specified source directives/statements.
This size reduction is the primary motivation behind improvements in
higher order languages – like C++, Ada 95, Java, Visual Basic, and 4GLs
automatic code generators – CASE tools, visual modeling tools, GUI builders
reuse of commercial components – OSs, windowing environments, DBMSs,
middleware, networks
object-oriented technologies – UML, visual modeling tools, architecture
frameworks.
There is one limitation to this type of code/size reduction.
The recommendation comes from a simple observation: code that isn’t there need not be developed and can’t break. This is not entirely true.
Size-reducing technologies reduce the number of human-generated source lines, but all of them tend to increase the amount of computer-executable code, which negates the second part of the observation.
Mature and reliable size reduction technologies are powerful at producing economic
benefits.
Immature technologies may reduce the development size but require more investment in
achieving required levels of quality and performance.
This may have a negative impact on overall project performance.
3.1.1 LANGUAGES
Universal function points (UFPs) are useful metrics for language-independent, early life-cycle estimates.
UFPs indicate the relative program sizes required to implement a given functionality.
The basic units of FPs are
external user inputs
external outputs,
internal logical data groups,
external data interfaces, and
external inquiries.
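As a sketch of how a count over these five units might be tallied into an unadjusted function point total. The weights below are the standard IFPUG average-complexity weights, which this material does not give, and the sample counts are hypothetical:

```python
# Unadjusted function point (UFP) tally over the five basic units named
# above. The weights are standard IFPUG average-complexity weights (an
# assumption here); the sample counts are purely hypothetical.
WEIGHTS = {
    "external user inputs": 4,
    "external outputs": 5,
    "external inquiries": 4,
    "internal logical data groups": 10,
    "external data interfaces": 7,
}

def unadjusted_fp(counts: dict) -> int:
    """Weighted sum of the five basic function point units."""
    return sum(WEIGHTS[unit] * n for unit, n in counts.items())

counts = {
    "external user inputs": 20,
    "external outputs": 15,
    "external inquiries": 10,
    "internal logical data groups": 8,
    "external data interfaces": 4,
}
print(unadjusted_fp(counts))  # 20*4 + 15*5 + 10*4 + 8*10 + 4*7 = 303
```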
SLOC metrics are useful as estimators after a candidate solution is formulated and an
implementation language is known.
Substantial data is documented relating SLOC to FPs as shown below:
TABLE 3-2. Language expressiveness of some of the popular languages
LANGUAGE SLOC PER UFP
Assembly 320
C 128
FORTRAN 77 105
COBOL 85 91
Ada 83 71
C++ 56
Ada 95 55
Java 55
Visual Basic 35
Visual Basic: useful for building simple interactive applications, not useful for real-time,
embedded programs.
Ada 95: useful for mission-critical real-time applications; not suitable for parallel, scientific, number-crunching applications on high-performance configurations.
Data such as this – spanning application domains, corporations, and technology
generations – should be interpreted and used with great care.
Two observations within the data concern the differences and relationships between Ada
83 and Ada 95, and C and C++.
The difference in expressiveness between the two versions of Ada is mainly due to the
features added to support OOP.
The difference between the two versions of C is more profound.
C++ incorporated several of the advanced features of Ada with more support for OOP.
C++ was developed as a superset of C.
This has its pros and cons.
The C compatibility made it easy for C programmers to migrate to C++.
On the downside, many C++ users were effectively still programming in C, so the expressiveness of OOP-based C++ was not being exploited.
The evolution of Java eliminated many of the problems in the C++ language.
It conserves the OO features and adds further support for portability and distribution.
UFPs can be used to indicate the relative program sizes required to implement a given
functionality.
For example, to achieve a given application with a fixed number of function points, one
of the following program sizes would be required:
1,000,000 lines of assembly language
400,000 lines of C
220,000 lines of Ada 83
175,000 lines of Ada 95 or C++
Reduction in the size of human-generated code, in turn reduces the size of the team and
the time needed for development.
Adding a commercial DBMS, a commercial GUI builder, and commercial middleware can reduce the effective development size further, to a final size of:
75,000 lines of Ada 95 or C++, with integration of several commercial components.
The use of the highest level language and appropriate commercial components has a sizable impact on cost – particularly for large projects, which have a higher life-cycle cost.
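The size figures above follow directly from Table 3-2: the application works out to roughly 3,125 UFPs (1,000,000 assembly SLOC at 320 SLOC per UFP), and multiplying by each language’s ratio reproduces the list. A sketch of that arithmetic:

```python
# Reproducing the size estimates above from Table 3-2. The ~3,125-UFP
# figure is implied by the text's own numbers (1,000,000 / 320).
SLOC_PER_UFP = {
    "Assembly": 320, "C": 128, "FORTRAN 77": 105, "COBOL 85": 91,
    "Ada 83": 71, "C++": 56, "Ada 95": 55, "Java": 55,
    "Visual Basic": 35,
}

ufps = 1_000_000 / SLOC_PER_UFP["Assembly"]  # ~3,125 UFPs

for lang in ("Assembly", "C", "Ada 83", "C++"):
    print(f"{lang:8s}: {ufps * SLOC_PER_UFP[lang]:>9,.0f} SLOC")
# Assembly: 1,000,000   C: 400,000   Ada 83: 221,875 (~220,000)
# C++: 175,000 -- commercial components can cut this to ~75,000.
```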
Generally, simpler is better: reducing size increases understandability, changeability, and
reliability.
The data in the table illustrate why modern languages like C++, Ada 95, Java, and Visual Basic are preferred: their level of expressiveness is attractive.
There is, however, a risk of misuse in applying the data in the table. These figures are a precise average of several imprecise numbers, and each language has its own domain of usage; the values indicate only the relative expressive power of the various languages.
Commercial components and code generators can further reduce the size of human-generated code.
PART – I SOFTWARE MANAGEMENT RENAISSANCE Page 21 of 187
Chapter – 3 IMPROVING SOFTWARE ECONOMICS
However, the higher level abstraction technologies tend to degrade performance and increase resource consumption.
These drawbacks can mostly be overcome by hardware performance improvements and optimization, though such improvements may not be as effective in embedded systems.
3.1.2 OBJECT-ORIENTED METHODS AND VISUAL MODELING
The latter part of the 1990s saw a shift towards OO technologies.
Studies have concluded that OO programming languages appear to benefit both software
productivity and software quality, but an economic benefit has yet to be demonstrated.
One reason for the lack of this proof could be the high cost of training in OO design
methods like the UML.
OO technology provides more formalized notations for capturing and visualizing
software abstractions.
This has an impact in reducing the overall size of the product to be developed.
Grady Booch proposed three other reasons for the success of the OO projects. These are
good examples of the interrelationships among the dimensions of improving software
economics:
1.An OO model of the problem and its solution encourages a common vocabulary
between the end users of a system and its developers, thus creating a shared
understanding of the problem being solved.
This is an example of how the use of OO technology improves teamwork and interpersonal communications.
2.The use of continuous integration creates opportunities to recognize risk early and
make incremental corrections without destabilizing the entire development effort.
This aspect of OO technology enables an architecture-first process in which
integration is an early and continuous life-cycle activity.
3.OO architecture provides a clear separation of concerns among disparate elements of a
system, creating firewalls that prevent a change in one part of the system from rending
the fabric of the entire architecture.
This feature is crucial to the supporting languages and environments to implement
OO architectures.
Booch also summarized five characteristics of a successful OO project:
1.A ruthless focus on the development of a system that provides a well understood
collection of essential minimal characteristics.
2.The existence of a culture that is centered on results, encourages communication, and
yet is not afraid to fail.
3.The effective use of OO modeling.
4.The existence of a strong architectural vision.
5.The application of a well-managed iterative and incremental development life cycle.
OO methods, notations, and visual modeling provide strong technology support for the
process framework.
3.1.3 REUSE
Reusing existing components and building reusable components have been natural software engineering activities ever since the earliest improvements in programming languages.
Software design methods implicitly dealt with reuse in order to minimize development
costs while achieving all the other required attributes of performance, feature set, and
quality.
Reuse should be treated as a routine part of achieving a return on investment.
Common architectures, common processes, precedent experience, and common
environments are all instances of reuse.
An obstacle to reuse has been fragmentation of languages, operating systems, notations,
machine architectures, tools and standards.
Since the trade-offs have global effects on quality, cost, and supportability, the selection
of commercial components over development of custom components has significant
impact on a project’s overall architecture.
TABLE 3-3. Advantages and disadvantages of commercial components versus custom software

Commercial components
Advantages: predictable license costs; broadly used, mature technology; available now; dedicated support organization; hardware/software independence; rich in functionality.
Disadvantages: frequent upgrades; up-front license fees; recurring maintenance fees; dependency on vendor; run-time efficiency sacrifices; functionality constraints; integration not always trivial; no control over upgrades and maintenance; unnecessary features that consume extra resources; often inadequate reliability and stability; multiple-vendor incompatibilities.

Custom development
Advantages: complete change freedom; smaller, often simpler implementations; often better performance; control of development and enhancement.
Disadvantages: expensive, unpredictable development; unpredictable availability date; undefined maintenance model; immature and fragile; single-platform dependency; drain on expert resources.
The paramount message here is: these decisions must be made early in the life cycle as
part of the architectural design.
3.2 IMPROVING SOFTWARE PROCESSES
Process is an overloaded term.
For software-oriented organizations there are many processes and sub-processes.
The main and distinct process perspectives are:
Metaprocess: an organization’s policies, procedures, and practices for pursuing a
software-intensive line of business.
The focus of this process is on organizational economics, long-term
strategies, and a software ROI.
Macroprocess: a project’s policies, procedures, and practices for producing a complete
software product within certain cost, schedule, and quality constraints.
The focus of the macroprocess is on creating an adequate instance of the
metaprocess for a specific set of constraints.
Microprocess: a project team’s policies, procedures, and practices for achieving an
artifact of the software process.
The focus of the microprocess is on achieving an intermediate product
baseline with adequate quality and adequate functionality as economically
and rapidly as possible.
Although these three levels of process overlap somewhat, they have different objectives,
audiences, metrics, concerns, and time scales.
These are shown in Table 3-4.
TABLE 3-4. Three levels of process and their attributes

Subject:
Metaprocess – line of business. Macroprocess – project. Microprocess – iteration.

Objectives:
Metaprocess – line-of-business profitability; competitiveness. Macroprocess – project profitability; risk management; project budget, schedule, quality. Microprocess – resource management; risk resolution; milestone budget, schedule, quality.

Audience:
Metaprocess – acquisition authorities, customers, organizational management. Macroprocess – software project managers, software engineers. Microprocess – subproject managers, software engineers.

Metrics:
Metaprocess – project predictability; revenue, market share. Macroprocess – on budget, on schedule; major milestone success. Microprocess – on budget, on schedule; major milestone progress.
Significant or substantial design errors or architecture issues are rarely obvious unless the
inspection is narrowly focused on a particular issue.
Most inspections are superficial.
When systems are highly complex – with innumerable components, concurrent execution, distributed resources, and other equally demanding dimensions of complexity – it is very difficult to comprehend the dynamic interactions within a software system even under simple use cases.
So, random human inspections tend to degenerate into comments on style and first-order semantic issues.
They rarely result in the discovery of real performance bottlenecks, serious control issues such as deadlocks, race conditions, or resource contention, or architectural weaknesses such as flaws in scalability, reliability, or interoperability.
Architectural issues are exposed only through more rigorous engineering activities like:
Analysis, prototyping, or experimentation
Constructing design models
Committing the current state of the design model to an executable implementation
Demonstrating the current implementation’s strengths and weaknesses in the context of critical subsets of the use cases and scenarios
Incorporating lessons learned back into the models, use cases, implementations, and
plans
Architectural quality achievement is inherent in an iterative process that evolves the
artifact sets together in balance.
The checkpoints along the way are numerous, including human review and inspections
focused on critical issues.
Focusing a large percentage of a project’s resources on human inspections is bad practice
and only perpetuates the existence of low-value-added box checkers who have no stake in
the project’s success.
Quality assurance is everyone’s responsibility and should be integral to almost all process
activities instead of a separate discipline performed by quality assurance specialists.
Questions on this chapter:
1. The key to substantial improvement of software economics is a balanced attack
across several interrelated dimensions. Comment in detail.
2. Explain how reducing software product size contributes to the improvement of
software economics.
3. Explain Booch’s reasons for the success of object-oriented projects. Clearly bring
out the interrelationships among the dimensions of improving software economics.
4. Explain the relative advantages and disadvantages of using commercial
components versus custom software.
5. Explain how software economics is improved by improving software processes.
6. Explain how improvement of team effectiveness contributes to software
economics.
7. Explain Boehm’s staffing principles.
8. Explain how software environments help in improving automation as a way of
improving software economics.
9. Explain the key practices that improve overall software quality, in view of the
general quality improvements with a modern process in comparison with that of
conventional processes.
10. Comment on the relative merits and demerits of peer inspections for quality
assurance.
Difficult words: caveat – caution; trivial – small/inconsequential.
Chapter – 4  THE OLD WAY AND THE NEW
22. Avoid tricks. Many programmers love to create programs with tricks – constructs
that perform a function correctly, but in an obscure way. Show the world how smart
you are by avoiding tricky code.
It is difficult to draw the line between a trick and an innovative solution.
Obfuscated coding techniques should be avoided unless there are compelling reasons
for their use.
23. Encapsulate. Information-hiding is a simple, proven concept that results in software
that is easier to test and much easier to maintain.
Component-based design, OO-design, and modern design and programming notations
have advanced this principle into mainstream practice.
24. Use coupling and cohesion. Coupling and cohesion are the best ways to measure
software’s inherent maintainability and adaptability.
Coupling and cohesion, however, are abstract descriptions of components for which there are no objective definitions, and they are therefore difficult to measure.
Modern metrics for maintainability and adaptability are instead centered on measuring the amount of software scrap and rework.
25. Use the McCabe complexity measure. Although there are many metrics available to
report inherent complexity of software, none is as intuitive and easy to use as Tom
McCabe’s.
Complexity metrics help in identifying the critical components that need special
attention.
It is rare to see these complexity measures being used in the field.
They are more of theoretical or academic interest; they become most useful when their collection is automated as part of project management.
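For reference, McCabe’s measure for a single-entry, single-exit routine is the number of binary decision points plus one (equivalently, V(G) = E - N + 2P over the control-flow graph). The toy sketch below approximates it by counting decision keywords in source text; a real tool would build the actual control-flow graph:

```python
# Toy approximation of McCabe's cyclomatic complexity:
# V(G) = (number of binary decision points) + 1.
# Counting keywords is a crude stand-in for a real control-flow graph.
import re

DECISIONS = re.compile(r"\b(if|elif|for|while|and|or|case|except)\b")

def cyclomatic_complexity(source: str) -> int:
    """Approximate V(G) by counting decision keywords, plus one."""
    return len(DECISIONS.findall(source)) + 1

sample = '''
def classify(x):
    if x < 0:
        return "negative"
    elif x == 0:
        return "zero"
    for d in range(2, x):
        if x % d == 0 and d > 1:
            return "composite"
    return "other"
'''
print(cyclomatic_complexity(sample))  # if, elif, for, if, and -> 6
```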
26. Don’t test your own software. Software developers should never be the primary
testers of their own software.
An independent test team offers an objective perspective.
On the other hand, software developers need to take ownership of the quality of their
products.
So, developers should test their own software, and so should an independent team.
27. Analyze causes for errors. It is far more cost-effective to reduce the effect of an
error by preventing it than it is to find and fix it. One way to do this is to analyze the
causes of errors as they are detected.
This is a good principle during the construction phase, when errors are likely to repeat.
In the early stages of a project, however, analyzing the causes of errors in complex systems can degenerate into over-analysis and over-design on paper.
Such “error-preventive” activities yield a lower return on investment than prototyping and construction activities, which would have made the errors more obvious and tangible.
This can be restated as (1) don’t be afraid to make errors in the engineering stage and
(2) analyze the cause for errors in the production stage.
28. Realize that software’s entropy increases. Any software system that undergoes
continuous change will grow in complexity and will become more and more
disorganized.
The sign of a poor architecture is that its entropy increases in a way that is difficult to
manage.
Entropy tends to increase dangerously when interfaces are changed for tactical
reasons.
The integrity of an architecture is primarily strategic and inherent in its interfaces.
It must be controlled with intense scrutiny.
Modern change management tools force a project to respect and enforce interface
integrity.
A quality architecture is characterized by minimal increase in entropy and change can
be accommodated with stable, predictable results.
So, an ideal architecture permits change without any abnormal increase in entropy.
29. People and time are not interchangeable. Measuring a project solely by person-months makes little sense.
30. Expect excellence. Your employees will do much better if you have high
expectations for them.
Early iterations in the life cycle establish precedents from which the product, the
process, and the plans can be elaborated in evolving levels of detail.
(B) Process flexibility.
Software development is characterized by a broad solution space and a number of
interrelated concerns.
This results in need for continuous incorporation of change(s).
These changes may be inherent in the problem understanding, the solution space, or
the plans.
Project artifacts must be supported by efficient change management in tune with
project needs.
A rigid process and a chaotically changing process are both destined to fail.
A configurable process that allows a common framework to be adapted across a
range of projects is necessary to achieve software ROI.
(C) Architecture risk resolution.
Architecture-first development is crucial for a successful development process.
A team develops and stabilizes an architecture before developing all the components.
An architecture-first and component-based development approach forces the
infrastructure, common mechanisms, and control mechanisms to be elaborated early
in the life cycle and drives all component make/buy decisions into the architecture
process.
It initiates integration activity early in the life cycle as the verification activity of
design process and product.
It also enforces the development environment to be configured and exercised early to
ensure early attention to testability and a foundation for demonstration-based
assessment.
(D) Team Cohesion.
Successful teams are cohesive, and cohesive teams are successful.
Successful teams and cohesive teams share common objectives and priorities.
Cohesive teams avoid sources of turbulence and entropy due to difficulties in
synchronizing stakeholder expectations.
Miscommunication – in exchanging information solely through paper documents
containing subjective descriptions – is one of the primary reasons for turbulence.
Advances in technology – programming languages, UML, and visual modeling –
have enabled more rigorous and understandable notations for communicating
engineering information, particularly in the requirements and design artifacts.
[Previously ad hoc paper-based methods were in use.]
These model-based formats have also enabled the round-trip engineering support
needed to establish change freedom.
(E) Software process maturity.
Just as domain experience is crucial for avoiding application risks and exploiting available domain assets and lessons learned, software process maturity is crucial for avoiding software development risks and exploiting the organization’s software assets and lessons learned.
Truly mature processes are enabled through an integrated environment providing the
appropriate level of automation to instrument the process for objective quality control.
Questions on this chapter:
1. List out and explain Davis’ principles of conventional software engineering.
2. List out and explain the principles of modern software management.
3. Explain how modern process approaches can be used for solving conventional
problems.
4. Explain the mapping of process exponent parameters of COCOMO II model to
the top 10 principles of a modern process.
Difficult words: insatiable – unsatisfiable; trivial – small/inconsequential; meager – scanty/not enough; ramification – result/consequence; sage – wise; trivialize – belittle/underestimate; obfuscate – disguise/conceal; entropy – measure of degradation.
PART – II  A SOFTWARE MANAGEMENT PROCESS FRAMEWORK
CHAPTER – 5  LIFE-CYCLE PHASES
For a project to be successful, there must be a well-defined separation between “research and development” activities and “production” activities.
A failure to define and execute these two stages with proper balance and appropriate
emphasis leads to the failure of the project.
Most unsuccessful projects exhibit one of the following characteristics:
An overemphasis on research and development.
Too many analyses or paper studies are performed.
The construction of engineering baselines is delayed.
An overemphasis on production.
Rush-to-judgment designs, premature work by overeager coders, and continuous
hacking are typical.
Successful projects have a well-defined project milestone when there is a transition from
a research attitude to a product attitude.
Earlier phases focus on achieving functionality.
Later phases revolve around achieving a product that can be shipped to a customer, with
explicit attention to robustness, performance, fit, and finish.
This life-cycle balance, subtle and intangible, is the foundation for successful software
management.
A modern software development process must be defined to support the following:
Evolution of the plans, requirements, and architecture, together with well-defined
synchronization points.
Risk management and objective measures of progress and quality.
Evolution of system capabilities through demonstrations of increasing
functionality.
5.1 ENGINEERING AND PRODUCTION STAGES
To achieve economies of scale and higher ROI, a software manufacturing process should
be driven by technological improvements in process automation and component-based
development.
At first order, the two stages of the life cycle are:
1. The engineering stage, driven by smaller, less predictable teams doing design and synthesis activities.
2. The production stage, driven by larger, more predictable teams doing construction, test, and deployment activities.
TABLE 5-1. The two stages of the life cycle: engineering and production

LIFE-CYCLE ASPECT   ENGINEERING STAGE EMPHASIS            PRODUCTION STAGE EMPHASIS
Risk reduction      Schedule, technical feasibility       Cost
Products            Architecture baseline                 Product release baselines
Activities          Analysis, design, planning            Implementation, testing
Assessment          Demonstration, inspection, analysis   Testing
Economics           Resolving diseconomies of scale       Exploiting economies of scale
Management          Planning                              Operations
Table 5-1 summarizes the differences in emphasis between the two stages – engineering and production.
The transition between engineering and production is very crucial for the stakeholders.
Depending on the specifics of a project the time and resources dedicated to the two stages
can be highly variable.
Having only two stages in a life cycle is a little coarse – too simplistic – for most applications.
So, the engineering stage is decomposed into two distinct phases, inception and
elaboration, and the production stage into construction and transition.
These four phases of the life-cycle process are loosely mapped to the conceptual
framework of the spiral model.
CHAPTER – 6  ARTIFACTS OF THE PROCESS
The UML is a suitable representation format – visual models with a well-defined syntax and semantics – for requirements and design artifacts. Visual modeling using UML is a primitive notation for early life-cycle artifacts.
FIGURE 6-1. Overview of the artifact sets

Requirements Set: 1. Vision document; 2. Requirements model(s)
Design Set: 1. Design model(s); 2. Test model; 3. Software architecture description
Implementation Set: 1. Source code baselines; 2. Associated compile-time files; 3. Component executables
Deployment Set: 1. Integrated product executable baselines; 2. Associated run-time files; 3. User manual

Management Set
Planning artifacts: 1. Work breakdown structure; 2. Business case; 3. Release specifications; 4. Software development plan
Operational artifacts: 5. Release descriptions; 6. Status assessments; 7. Software change order database; 8. Deployment documents; 9. Environment
6.1.1 THE MANAGEMENT SET
The management set captures the artifacts associated with process planning and
execution.
These artifacts use ad hoc notations, including text, graphics, etc., to capture the
“contracts”
a) among project personnel – project management, architects, developers, testers,
marketers, administrators
b) among stakeholders – funding authority, user, software project manager,
organization manager, regulatory agency
c) and between project personnel and stakeholders.
Specific artifacts included in this set are:
a) The work breakdown structure – activity breakdown, and financial tracking
mechanism
b) The business case – cost, schedule, profit expectations
c) The release specifications – scope, plan, objectives for release baselines
d) The software development plan – project process instance
e) The release descriptions – results of release baselines
f) The status assessments – periodic snapshots of project progress
g) The software change orders – descriptions of discrete baseline changes
h) The deployment documents – cutover plan, training course, sales rollout kit
i) The environment – hardware and software tools, process automation,
documentation, training collateral necessary to support the execution
Management set artifacts are evaluated, assessed, and measured through a combination of
the following:
Relevant stakeholder review
Analysis of changes between the current version of the artifact and previous
versions – management trends and project performance changes in terms of cost,
schedule, and quality
Major milestone demonstrations of the balance among all artifacts and, in
particular the accuracy of the business case and vision artifacts
6.1.2 THE ENGINEERING SETS
The engineering sets consist of
1) The requirement set
2) The design set
3) The implementation set
4) The deployment set.
The primary mechanism for evaluating the evolving quality of each artifact set is the transitioning of information from set to set, thereby maintaining a balance of understanding among the requirements, design, implementation, and deployment artifacts.
Each of these components of the system description evolves over time.
Requirement Set
Structured text is used for the vision statement to document the project scope that
supports the contract between the funding authority and the project team.
Ad hoc formats may also be used for
Supplementary specifications – such as regulatory requirements
User mockups or other prototypes that capture requirements
UML notation is used for engineering representations of requirements model – use case
models, domain models.
The requirements set is the primary engineering context for evaluating the other three
engineering artifact sets and is the basis for test cases.
Requirements artifacts are evaluated, assessed, and measured through a combination of
the following:
Analysis of consistency with the release specifications of the management set.
Analysis of consistency between the vision and the requirements models
Mapping against design, implementation, and deployment sets to evaluate the
consistency and completeness and the semantic balance between information in the
different sets.
Analysis of changes between the current version of requirements artifacts and
previous versions – scrap, rework, and defect elimination trends
Subjective review of other dimensions of quality
Design Set
UML notation is used to engineer the design models for the solution.
The design set contains varying levels of abstraction that represent the components of the
solution space – their identities, attributes, static relationships, dynamic interactions.
The design models include enough structural and behavioral information to ascertain a bill of materials – the quantity and specification of primitive parts and materials, labor, and other direct costs.
Design model information can be straightforwardly and automatically translated into a
subset of the implementation and deployment set artifacts.
Specific design set artifacts include
The design model
The test model
The software architecture description – an extract of information from the design
model that is pertinent to describing an architecture.
The design set is evaluated, assessed and measured through a combination of the
following:
Analysis of the internal consistency and quality of the design model
Analysis of consistency with requirements models
Translation into implementation and deployment sets and notations – traceability,
source code generation, compilation, linking – to evaluate the consistency and
completeness and the semantic balance between information in the sets.
Analysis of changes between the current version of the design model and previous
versions – scrap, rework, and defect elimination trends
Subjective review of other dimensions of quality
Human analysis is required as the level of automated analysis of design models is limited.
Automated analysis will improve with the maturity of design model analysis tools that
support metrics collection, complexity analysis, style analysis, heuristic analysis, and
consistency analysis.
Implementation Set
Implementation sets are in human-readable formats.
The implementation set includes:
1. Source code – programming language notations – that represents the tangible implementations of components: their form, interface, and dependency relationships.
2. Executables necessary for stand-alone testing of components.
These executables are the primitive parts needed to construct the end product, including (a) custom components, (b) application programming interfaces (APIs) of commercial components, and (c) APIs of reusable or legacy components, in a programming language source such as Ada 95, C++, Visual Basic, Java, or assembly.
Implementation set artifacts can also be translated – compiled and linked – into a subset
of deployment set – end-target executables.
Specific artifacts include
Self-documenting product source code baselines and associated files –
compilation scripts, configuration management infrastructure, data files
Self-documenting test source code baselines and associated files – input test
data files, test result files
Standalone component executables
Component test driver executables
The implementation sets are evaluated, assessed and measured through a combination of
the following:
Analysis of consistency with the design models
Translation into deployment set notations – compilation and linking – to evaluate the
consistency and completeness among the artifact sets
Assessment of component source or executable files against relevant evaluation
criteria through inspection, analysis, demonstration, or testing
Execution of standalone component test cases that automatically compare expected
results with the actual results
Analysis of changes between the current version of the implementation set and
previous versions – scrap, rework, and defect elimination trends
Subjective review of other dimensions of quality
Deployment Set
The deployment set includes:
1. User deliverables and machine language notations
2. Executable software
3. Build scripts
4. Installation scripts
5. Executable target-specific data necessary to use the product in its environment
The machine language notations represent the product components in the target form intended for distribution to the user.
Deployment set information can be:
1. Installed
2. Executed against (test) scenarios of use
3. Dynamically configured to support the features required in the end product
Specific artifacts include:
Executable baselines and associated run-time files
The user manual
The deployment sets are evaluated, assessed and measured through a combination of the
following:
Testing against the usage scenarios and quality attributes defined in the requirements
set to evaluate the consistency and completeness and the semantic balance between
information in the two sets
Testing the partitioning, replication, and allocation strategies in mapping components of the implementation set to physical resources of the deployment system – platform type, number, network topology
Testing against the defined usage scenarios in the user manual such as installation,
user-oriented dynamic reconfiguration, mainstream usage, and anomaly management.
Analysis of changes between the current version of the deployment set and previous
versions –defect elimination trends, performance changes
Subjective review of other dimensions of quality
The goal of selecting the management, requirements, design, implementation, and
deployment sets – though not scientific – is to optimize presentation of the process
activities, artifacts, and objectives.
These generalizations – with minor exceptions – as part of the conceptual framework are
useful in understanding the overall artifact sets.
Each artifact set uses different notations to capture the relevant artifacts.
Management set notations – ad hoc text, graphics, use case notation – capture the plans,
process, objectives, and acceptance criteria.
Requirements notations – structured text and UML models – capture the engineering
context and operational concept.
Design notations – in UML – capture the engineering blueprints of architectural design,
and component design.
Implementation notations – software languages – capture the building blocks of the
solution in human-readable formats.
Deployment notations – executables and data files – capture the solution in
machine-readable formats.
Each artifact set is the predominant development focus of one phase of the life cycle; the
other sets take on check and balance roles.
FIGURE 6-2. Life-cycle focus on artifact sets
(The figure shows requirements, design, implementation, and deployment as the predominant focus of the inception, elaboration, construction, and transition phases respectively, with the management set at a constant level of attention throughout.)
Referring to the figure, each phase has a predominant focus: requirements are the focus
of the inception phase, design of the elaboration phase, implementation of the
construction phase, and deployment of the transition phase.
The product configurations support various compilers and languages as well as various
implementations of network software.
The heterogeneity of all the various target configurations results in the need for a highly
sophisticated source code structure and a huge suite of different deployment artifacts.
6.1.3 ARTIFACT EVOLUTION OVER THE LIFE CYCLE
Each state of development represents a certain amount of precision in the final system
description.
Early in the life cycle, precision is low and the representation is general and abstract.
Eventually, the precision of representation is high and everything is specified in full
detail.
At any point in the life cycle, the five sets will be in different states of completeness.
They should be at compatible levels of detail and reasonably traceable to one another.
Performing detailed traceability and consistency analyses early in the life cycle has a low
return on investment.
As development proceeds, the architecture stabilizes, and maintaining traceability linkage
among artifact sets is worth the effort.
Each phase of development focuses on a particular artifact set.
At the end of each phase, the overall system state will have progressed on all sets, as
illustrated in figure 6-3.
The inception phase focuses mainly on critical requirements, usually with a secondary
focus on an initial deployment view, little focus on implementation except perhaps choice
of language and commercial components, and possibly some high-level focus on the
design architecture but not on design detail.
During the elaboration phase, there is more depth in requirements and more breadth in
the design set, and further work on implementation and deployment issues such as
performance trade-offs under primary scenarios and make/buy analyses.
FIGURE 6-3. Life-cycle evolution of the artifact sets
Elaboration phase activities include the generation of an executable prototype.
This prototype involves subsets of development in all four sets and specifically assesses
whether the interfaces and collaborations among components are consistent and complete
within the context of the system’s primary requirements and scenarios.
A portion of all four sets must be evolved to some level of completion before an
architecture baseline is established.
This requires sufficient assessment of the design set, implementation set, and deployment
set artifacts against the critical use cases of the requirements set to suggest that the
project can proceed predictably with well-understood risks.
The main focus of the construction phase is design and implementation.
Early in this phase, the focus should be the depth of the design artifacts.
Later on, the emphasis is on realizing the design in source code and individually tested
components.
This phase should drive the requirements, design, and implementation sets almost to
completion.
Substantial work is also done on the deployment set, at least to test one or a few instances
of the programmed system through a mechanism such as an alpha or beta release.
The main focus of the transition phase is on achieving consistency and completeness of
the deployment set in the context of other sets.
Residual defects are resolved, and feedback from alpha, beta, and system testing is
incorporated.
Unlike the conventional approach, the full requirements are not specified first, followed
by the design, and so forth.
Instead, the entire system evolves; a decision about the deployment may affect the
requirements, not just the other way around.
The key emphasis here is to break the conventional mold, in which the default
interpretation is that one set precedes another.
Instead, one state of the entire system evolves into a more elaborate state of the system,
involving evolution in each of the parts.
During the transition phase, traceability between the requirements set and the deployment
set is extremely important.
The evolving requirements set captures a mature and precise representation of the
stakeholders’ acceptance criteria, and the deployment set represents the actual end-user
product.
So, during the transition phase, completeness and consistency between the two sets is
important.
Traceability among the other sets is necessary only to the extent that it aids the
engineering or management activities.
6.1.4 TEST ARTIFACTS
How was testing handled in the conventional process?
Conventional testing followed the same document-driven approach as for development.
Development teams built requirements documents, top-level design documents, and
detailed design documents before constructing any source files or executables.
Test teams built system test plan documents, system test procedure documents,
integration test plan documents, unit test plan documents, and unit test procedure
documents before building any test drivers, stubs, or instrumentation.
This document-driven approach caused the same problems for the test activities that it did
for the development activities.
In the modern process the same sets, notations, and artifacts for the products of test
activities are used as for the product development.
The necessary test infrastructure is being identified as a required subset of the end
product.
This approach forces several engineering disciplines into the process:
The test artifacts must be developed concurrently with the product from inception
through deployment.
So, testing is a full-life-cycle activity, not a late life-cycle activity.
The test artifacts are communicated, engineered, and developed within the same
artifact set as the developed product.
The test artifacts are implemented in programmable and repeatable format like the
software.
The test artifacts are documented in the same way as the product is documented.
Developers of the test artifacts use the same tools, techniques, and training as the
software engineers developing the product.
These disciplines allow for significant levels of homogenization across project
workflows.
All the activities are carried out within the notations and techniques of the four sets used
for engineering artifacts. They do not use separate sequences of design and test
documents.
Interpersonal communications, stakeholder reviews, and engineering analyses can be
performed with fewer distinct formats, fewer ad hoc notations, less ambiguity, and higher
efficiency.
For assessment workflow, in addition to testing, inspection, analysis, and demonstration
are also used.
Testing refers to the explicit evaluation, through execution, of deployment set components
under a controlled scenario with an expected and objective outcome.
Tests can be automated.
The test artifacts are highly project-specific. But, there is a relationship between test
artifacts and the other artifact sets.
For example: consider a project to perform seismic data processing for the purpose of oil
exploration.
This system has three fundamental subsystems:
(1) a sensor subsystem that captures raw seismic data in real time and delivers these
data to:
(2) a technical operations subsystem that converts raw data into an organized database
and manages queries to this database from
(3) a display subsystem that allows workstation operators to examine seismic data in
human-readable form.
Such a system would result in the following test artifacts:
∎ Management set.
The release specifications and release descriptions capture the objectives,
evaluation criteria, and results of an intermediate milestone.
These artifacts are the test plans and test results negotiated among internal
project teams.
The software change orders capture test results – defects, testability changes,
requirements ambiguities, and enhancements – and the closure criteria associated
with making a discrete change to a baseline.
∎ Requirements set.
The system-level use cases capture the operational concept for the system and the
acceptance test case descriptions, including the expected behavior of the system
and its quality attributes.
The entire requirements set is a test artifact, as it is the basis of all assessment
activities across the life cycle.
∎ Design set.
A test model for non-deliverable components needed to test the product baselines
is captured in the design set.
These components include such design set artifacts as a seismic event simulation
for creating realistic sensor data; a “virtual operator” that can support unattended,
after-hours test cases; specific instrumentation suites for early demonstration of
resource usage; transaction rates or response times; and use case test drivers and
component stand-alone test drivers.
∎ Implementation set.
Self-documenting source code representations for test components and test drivers
provide the equivalent test procedures and test scripts.
These source files include human-readable data files representing certain
statically defined data sets that are explicit test source files.
Output files from test drivers provide the equivalent of test reports.
∎ Deployment set.
Executable versions of test components, test drivers, and data files are provided.
For any release, all the test artifacts and product artifacts are maintained using the same
baseline version identifier.
They are created, changed, and obsolesced as a consistent unit.
As test artifacts are captured using the same notations, methods, and tools, the approach
to testing is consistent with design and development.
This approach forces the evolving test artifacts to be maintained so that regression
testing can be automated easily.
6.2 MANAGEMENT ARTIFACTS
The management set includes several artifacts that capture intermediate results and
ancillary information necessary to document the product/process legacy, maintain the
product, improve the product, and improve the process.
Here we look at a summary of the artifacts.
The word document can mean paper document or electronically transmitted information
in the form of processed data, reviews, etc.
Business Case
The business case artifact provides all the information necessary to determine whether the
project is worth investing in.
It details the expected revenue, expected cost, technical and management plans, and
backup data necessary to demonstrate risks and realism of the plans.
In large contractual procurements, the business case may be implemented in a full-scale
proposal with multiple volumes of information.
In a small-scale endeavor for a commercial product, it may be implemented in a brief
plan with an attached spreadsheet.
The main purpose is to transform the vision into economic terms so that an organization
can make an accurate ROI assessment.
The financial forecasts are evolutionary, updated with more accurate forecasts as the life
cycle progresses.
FIGURE 6-4. A typical/default outline for the business case
I. Context (domain, market, scope)
II. Technical approach
   A. Feature set achievement plan
   B. Quality achievement plan
   C. Engineering trade-offs and technical risks
III. Management approach
   A. Schedule and schedule risk assessment
   B. Objective measures of success
IV. Evolutionary appendixes
   A. Financial forecast
      1. Cost estimate
      2. Revenue estimate
      3. Bases of estimates
Software Development Plan
The software development plan (SDP) elaborates the process framework into a fully
detailed plan.
It is the defining document for the project's success.
It must comply with the contract, comply with organization standards, evolve along with
the design and requirements, and be used consistently across all subordinate
organizations doing software development.
Two indications of a useful SDP are periodic updating (it is not stagnant shelfware)
and understanding and acceptance by managers and practitioners.
FIGURE 6-5. A default/typical outline for a software development plan
I. Context (scope, objectives)
II. Software development process
   A. Project primitives
      1. Life-cycle phases
      2. Artifacts
      3. Workflows
      4. Checkpoints
   B. Major milestone scope and content
   C. Process improvement procedures
III. Software engineering environment
   A. Process automation (hardware and software resource configuration)
   B. Resource allocation procedures (sharing across organization, security access)
IV. Software change management
   A. Configuration control board plan and procedures
   B. Software change order definitions and procedures
   C. Configuration baseline definitions and procedures
V. Software assessment
   A. Metrics collection and reporting procedures
   B. Risk management procedures (risk identification, tracking, and resolution)
   C. Status assessment plan
   D. Acceptance test plan
VI. Standards and procedures
   A. Standards and procedures for technical artifacts
VII. Evolutionary appendixes
   A. Minor milestone scope and content
   B. Human resources (organization, staffing plan, training plan)
Work Breakdown Structure
A work breakdown structure (WBS) is the vehicle for budgeting and collecting costs. To
monitor and control a project's financial performance, the software project manager
must have insight into project costs and how they are expended.
If the WBS is structured improperly, it can drive the evolving design and product
structure in the wrong direction.
Lower levels of a WBS should not be laid out until a commensurate level of stability in
the product structure is achieved; otherwise, specific boundaries of accountability will
not be well defined.
A functional breakdown in the WBS will result in functional decomposition in the
software.
Software Change Order Database
Managing change is one of the fundamental primitives of an iterative development
process.
With greater change freedom, a project can iterate more productively.
This flexibility increases the content, quality, and number of iterations that a project can
achieve within a given schedule.
Change freedom is achieved through automation.
Current iterative development environments carry the burden of change management;
manual techniques for change management are quite inefficient.
For this reason, the change management data have been elevated to a first-class
management artifact, described as a database to emphasize the need for automation.
Once software is placed in a controlled baseline, all change must be formally tracked and
managed.
By automating data entry and maintaining change records on-line, most of the change
management bureaucracy and metrics collection and reporting activities can be
automated.
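To make the idea of an automated change database concrete, here is a minimal sketch; the schema and fields are assumptions for illustration, not the book's prescription:

import sqlite3

# Minimal software change order (SCO) records: one row per discrete
# change against a controlled baseline.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE sco (
    id INTEGER PRIMARY KEY,
    baseline TEXT,           -- baseline version identifier
    category TEXT,           -- 'defect', 'enhancement', 'testability', ...
    breakage_sloc INTEGER,   -- lines changed (scrap/rework)
    rework_hours REAL,       -- effort to close the change
    closed INTEGER           -- 0 = open, 1 = closed
)""")
conn.executemany(
    "INSERT INTO sco (baseline, category, breakage_sloc, rework_hours, closed) "
    "VALUES (?, ?, ?, ?, ?)",
    [("2.0", "defect", 120, 8.0, 1),
     ("2.0", "enhancement", 300, 24.0, 1),
     ("2.1", "defect", 40, 2.5, 0)])

# Because data entry is automated, trend reports become a query, not a chore.
for row in conn.execute("SELECT baseline, SUM(breakage_sloc), SUM(rework_hours) "
                        "FROM sco GROUP BY baseline"):
    print("baseline %s: %d SLOC of breakage, %.1f rework hours" % row)

A query like this yields the scrap, rework, and defect elimination trends discussed earlier without any manual bookkeeping.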
Release Specifications
The scope, plan, and objective evaluation criteria for each baseline release are derived
from the vision statement and other sources like make/buy analyses, risk management
concerns, architectural considerations, shots in the dark, implementation constraints,
and quality thresholds.
These artifacts are to evolve along with the process, achieving greater fidelity as the life
cycle progresses and requirements understanding matures.
FIGURE 6-6. Default/typical release specification outline
I. Iteration content
II. Measurable objectives
   A. Evaluation criteria
   B. Follow-through approach
III. Demonstration plan
   A. Schedule of activities
   B. Team responsibilities
IV. Operational scenarios (use cases demonstrated)
   A. Demonstration procedures
   B. Traceability to vision and business case
Status Assessments
Status assessments provide periodic snapshots of project health and status, including the
software project manager’s risk assessment, quality indicators, and management
indicators.
Although the period may vary, the forcing function of the status assessment must persist.
A good management process must ensure that the expectations of all stakeholders –
contractor, subcontractor, customer, and user – are synchronized and consistent.
The periodic assessment documents provide the critical mechanism
a) for managing everyone’s expectations throughout the life cycle;
b) for addressing, communicating, and resolving management issues, technical
issues, and project risks
c) for capturing project history.
They are the periodic heartbeat for management attention.
A typical status assessment should include a review of resources, personnel staffing,
financial data (cost and revenue), top 10 risks, technical progress (metrics snapshots),
major milestone plans and results, total project/product scope, action items, and
follow-through.
Continuous open communications with objective data derived directly from on-going
activities and evolving product configurations are mandatory in any project.
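Purely as an illustration of how the recurring content of a status assessment could be captured in a uniform, machine-checkable form – the fields below are assumptions based on the typical content listed above:

from dataclasses import dataclass, field

@dataclass
class StatusAssessment:
    # Periodic snapshot of project health: staffing, financial data,
    # top 10 risks, and metrics, echoing the typical content above.
    period: str                     # e.g., "2024-Q2" (hypothetical)
    staffing: int                   # current personnel count
    cost_to_date: float
    revenue_to_date: float
    top_risks: list = field(default_factory=list)     # at most the top 10
    metrics_snapshot: dict = field(default_factory=dict)
    action_items: list = field(default_factory=list)

    def summary(self) -> str:
        net = self.revenue_to_date - self.cost_to_date
        return (f"{self.period}: staff={self.staffing}, net={net:+,.0f}, "
                f"risks tracked={len(self.top_risks)}")

print(StatusAssessment("2024-Q2", 14, 410_000.0, 500_000.0,
                       top_risks=["late COTS delivery"]).summary())

Keeping assessments in a uniform structure makes the periodic heartbeat easy to generate directly from on-going project data.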
Environment
An important emphasis is to define the development and maintenance environment as a
first-class artifact of the process.
A robust, integrated development environment must support automation of the
development process.
This should include requirements management, visual modeling, document automation,
host and target programming tools, automated regression testing, continuous and
integrated change management, and feature and defect tracking.
Hiring good people and equipping them with good tools is a must for success.
Automation of the software development process provides payback in quality, the ability
to estimate costs and schedules, and overall productivity using a smaller team.
By allowing the designers to traverse quickly among development artifacts and easily
keep the artifacts up-to-date, integrated toolsets play an increasingly important role in
incremental and iterative development.
Deployment
A deployment document can take many forms.
Depending on the project, it could involve several document subsets for transitioning
the product into operational status.
In big contractual efforts in which the system is delivered to a separate maintenance
organization, deployment artifacts may include computer system operations manuals,
software installation manuals, plans and procedures for cutover, site surveys, and so on.
For commercial software products, deployment artifacts may include marketing plans,
sales rollout kits, and training courses.
Management Artifact Sequences
In each phase of the life cycle, new artifacts are produced and previously developed ones
are updated to incorporate lessons learned and to capture further depth and breadth of the
solution.
Some artifacts are updated at each major milestone, others at each minor milestone.
FIGURE 6-8. Artifact sequences across a typical life cycle
(The figure maps each artifact listed below across seven iterations spanning the inception, elaboration, construction, and transition phases, showing where informal versions give way to controlled baselines.)
Management set
1. WBS
2. Business case
3. Release specifications
4. Software development plan
5. Release descriptions
6. Status assessments
7. Software change order data
8. Deployment documents
9. Environment
Requirements set
1. Vision document
2. Requirements model(s)
Design set
1. Design model(s)
2. Test model
3. Architecture description
Implementation set
1. Source code baselines
2. Associated compile-time files
3. Component executables
Deployment Set
1. Integrated product-executable baselines
2. Associated run-time files
3. User manual
6.3 ENGINEERING ARTIFACTS
Most of the engineering artifacts are captured in rigorous engineering notations such as
UML, programming languages, or executable code.
A subset form is satisfied by the table of contents. This description of the
architecture of the book can be derived directly from the book itself.
An abstraction form could be satisfied by a "Cliffs Notes" treatment. Cliffs Notes
are condensed versions of classic books used as study guides. This format is an
abstraction developed separately, and it includes supplementary material that is not
directly derivable from the evolving product.
FIGURE 6-10. Default/typical architecture description outline
I. Architecture overview
   A. Objectives
   B. Constraints
   C. Freedoms
II. Architecture views
   A. Design view
   B. Process view
   C. Component view
   D. Deployment view
III. Architectural interactions
   A. Operational concept under primary scenarios
   B. Operational concept under secondary scenarios
   C. Operational concept under anomalous conditions
IV. Architecture performance
V. Rationale, trade-offs, and other substantiation
Software User Manual
The software user manual provides the user with the reference documentation necessary
to support the delivered software.
Although content varies from project to project, the manual should include, at a
minimum, installation procedures, usage procedures and guidance, operational
constraints, and a user interface description.
The manual should be developed early for products with user interfaces, as it is a
necessary mechanism for communicating and stabilizing an important subset of
requirements.
The user manual should be written by members of the test team, who are more likely to
understand the user's perspective than the development team.
If the test team is responsible for the manual, it can be generated in parallel with
development and evolved early as a tangible and relevant perspective of evaluation
criteria.
It also provides a necessary basis for test plans and test cases, and for the construction
of automated test suites.
6.4 PRAGMATIC ARTIFACTS
Conventional document-driven approaches squandered enormous amounts of engineering
time on developing, refining, formatting, reviewing, updating, and distributing documents.
There are several reasons that documents became so important to the process:
First, there were no rigorous engineering methods or languages for requirements
specification or design, so paper documents with ad hoc text and graphical
representations were the default format.
Second, conventional languages of implementation and deployment were extremely
cryptic and highly unstructured; to present the details of software structure and
behavior to other reviewers – testers, maintainers, managers – a more human-readable
format was needed.
Third, and most important, software progress needed to be "credibly" assessed;
documents represented a tangible but misleading mechanism for demonstrating progress.
Document-driven approaches have degenerated into major obstacles to process
improvement: the quality of the documents became more important than the quality of
the engineering information they represented.
It is important that the information inherent in the artifact be emphasized, not the paper
on which it is written.
Short documents are more useful than long ones.
Software is the primary product; documentation is merely support material.
Questions on this chapter:
1. Using a neat diagram give an overview of the artifact sets in the software
development process.
2. Give a brief outline of the management artifact sets.
3. Give a brief outline of the engineering artifact sets.
4. Discuss the life-cycle focus on artifact sets in each phase.
5. Explain how test artifacts have evolved from the conventional process to the modern
process in terms of the test artifacts in each of the phases of the life cycle.
6. Trace the artifact evolution over the life cycle.
7. Describe different management artifacts, giving brief outlines of each.
8. Using a neat diagram, explain the sequences of artifacts across a typical life cycle.
9. Describe different engineering artifacts, giving brief outlines of each.
10. Explain the evolution of documentation from paper-based to electronic-based,
bringing out the cultural issues involved.
PART – II A SOFTWARE MANAGEMENT PROCESS FRAMEWORK
CHAPTER – 7 MODEL-BASED SOFTWARE ARCHITECTURES
The component view, an abstraction of the design model, addresses the software source
code realization of the system from the perspective of the project's integrators and
developers, especially with regard to releases and configuration management.
It is modeled statically using component diagrams, and dynamically using the UML
behavioral diagrams.
The deployment view addresses the executable realization of the system, including the
allocation of logical processes in distribution view to physical resources of the
deployment network.
It is modeled statically using deployment diagrams, and dynamically using the UML
behavioral diagrams.
ARCHITECTURAL DESCRIPTIONS take on different forms and styles in different
organizations and domains.
An architecture requires a subset of artifacts in each engineering set.
The actual level of content in each set is situation-dependent, and there are few good
heuristics for describing objectively what is architecturally significant.
Generally, an architecture baseline should include the following:
Requirements: critical use cases, system-level quality objectives, and priority
relationships among features and qualities.
Design: names, attributes, structures, behaviors, groupings, and relationships of
significant classes and components.
Implementation: the source component inventory and the bill of materials of all
primitive components.
Deployment: executable components sufficient to demonstrate the critical use cases
and the risk associated with achieving the system qualities.
FIGURE 7-1. Architecture, an organized and abstracted view into the design models
(The figure shows that the requirements set may include UML models describing the problem space, and that the design set includes all UML design models describing the solution; depending on its complexity, a system may require several models or partitions of a single model – a design model, process model, component model, deployment model, and use case model. The design, process, and use case models provide for visualization of the logical and behavioral aspects of the design; the component model provides for visualization of the implementation set; and the deployment model provides for visualization of the deployment set. An architecture is described through several views – design, process, use case, component, and deployment views – which are extracts of the design models that capture the significant structures and collaborations; the architecture description document collects these views together with other optional views and other material such as rationale and constraints.)
PART – II A SOFTWARE MANAGEMENT PROCESS FRAMEWORK
CHAPTER – 8 WORKFLOWS OF THE PROCESS
The following table shows the allocation of artifacts and the emphasis of each workflow
in
each of the life-cycle phases of inception, elaboration, construction, and transition.
TABLE 8-1. The artifacts and life-cycle emphases associated with each workflow
Management
   Artifacts: business case; software development plan; status assessments; vision; work breakdown structure
   Inception: prepare business case and vision
   Elaboration: plan development
   Construction: monitor and control development
   Transition: monitor and control deployment
Environment
   Artifacts: environment; software change order database
   Inception: define development environment and change management infrastructure
   Elaboration: install development environment and establish change management database
   Construction: maintain development environment and software change order database
   Transition: transition maintenance environment and software change order database
Requirements
   Artifacts: requirements set; release specifications; vision
   Inception: define operational concept
   Elaboration: define architecture objectives
   Construction: define iteration objectives
   Transition: refine release objectives
Design
   Artifacts: design set; architecture description
   Inception: formulate architecture concept
   Elaboration: achieve architecture baseline
   Construction: design components
   Transition: refine architecture and components
Implementation
   Artifacts: implementation set; deployment set
   Inception: support architecture prototypes
   Elaboration: produce architecture baseline
   Construction: produce complete componentry
   Transition: maintain components
FIGURE 8-1. Activity levels across the life-cycle phases
(The figure plots the relative activity level of each of the seven workflows – management, environment, requirements, design, implementation, assessment, and deployment – across the inception, elaboration, construction, and transition phases.)
TABLE 8-1. The artifacts and life-cycle emphases associated with each workflow (continued)
Assessment
   Artifacts: release specifications; release descriptions; user manual; deployment set
   Inception: assess plans, vision, prototypes
   Elaboration: assess architecture
   Construction: assess interim releases
   Transition: assess product releases
Deployment
   Artifacts: deployment set
   Inception: analyze user community
   Elaboration: define user manual
   Construction: prepare transition materials
   Transition: transition product to user
8.2 ITERATION WORKFLOWS
An iteration consists of a loosely sequential set of activities in various proportions
depending
on where the iteration is located in the development cycle.
Each iteration is defined in terms of a set of allocated usage scenarios.
The components needed to implement the selected scenarios are developed and integrated
with the results of previous iterations.
An individual iteration’s workflow, generally, includes the following sequence:
Management:
o iteration planning to determine the content of the release and develop the
detailed plan for iteration
o assignment of work packages/tasks to the development team
Environment:
o evolving the software change order database to reflect all new baselines
and changes to existing baselines for all product, test, and environment
components.
Requirements:
o analyzing the baseline plan, the baseline architecture, and the baseline
requirements set artifacts to fully elaborate the use cases to be
demonstrated at the end of this iteration and their evaluation criteria
o updating any requirements set artifacts to reflect changes necessitated by
results of this iteration's engineering activities.
Design:
o evolving the baseline architecture and the baseline design set artifacts to
elaborate fully the design model and test model components necessary to
demonstrate against the evaluation criteria allocated to this iteration
o updating design set artifacts to reflect changes necessitated by the results
of this iteration’s engineering activities
FIGURE 8-2. The workflow of an iteration
(The figure shows allocated usage scenarios and the results of the previous iteration feeding the management, requirements, design, implementation, assessment, and deployment activities of the current iteration, whose results feed the next iteration: an up-to-date risk assessment, controlled baseline artifacts, and demonstrable results – requirements understanding, design features/performance, and plan credibility.)
Implementation:
o developing or acquiring any new components, and enhancing/modifying
any existing components, to demonstrate the evaluation criteria allocated
to this iteration
o integrating and testing all new/modified components with
existing baselines of the previous versions
Assessment:
o evaluating the results of the iteration, including compliance with the
allocated evaluation criteria and the quality of the current baselines
o identifying any rework required and determining if it should be performed
before deployment of this release or allocated to the next release
o assessing results to improve the basis of the subsequent iteration’s plan
Deployment:
o transitioning the release to an external organization or to internal closure
by conducting a post-mortem so that lessons learned can be captured and
reflected in the next iteration
Many of the activities in this sequence also occur concurrently.
For example, requirements analysis is not done all in one continuous lump; it
intermingles
with management, design, implementation, and so on.
Iterations in the inception and elaboration phases focus on management, requirements,
and
design activities.
Iterations in the construction phase focus on design, implementation, and assessment.
Iterations in the transition phase focus on assessment and deployment.
In practice, the various sequences and overlaps among iterations become more complex.
The terms iteration and increment deal with some of the pragmatic considerations.
An iteration represents the state of the overall architecture and the complete deliverable
system.
An increment represents the current work in progress that will be combined with the
preceding iteration to form the next iteration.
Figure 8-4, an example of a simple development life cycle, illustrates the difference
between
iterations and increments.
A typical build sequence from the perspective of an abstract layered architecture is also
illustrated therein.
FIGURE 8-3. Iteration emphasis across the life cycle
(The figure shows the relative emphasis among the management, requirements, design, implementation, assessment, and deployment activities within an iteration: iterations in the inception and elaboration phases emphasize requirements and design; construction phase iterations emphasize design, implementation, and assessment; transition phase iterations emphasize assessment and deployment.)
PART – II A SOFTWARE MANAGEMENT PROCESS FRAMEWORK
CHAPTER – 9 CHECKPOINTS OF THE PROCESS
Visible milestones in the life cycle help focus the discussions in stakeholder meetings.
The purposes of such meetings are:
a) to demonstrate the performance of the project
b) to synchronize stakeholder expectations and achieve concurrence on the three
evolving perspectives – the requirements, the design, and the plan
c) to synchronize related artifacts into a consistent and balanced state
d) to identify the important risks, issues, and out-of-tolerance conditions
e) to perform a global assessment for the whole life cycle.
Milestones must have well-defined expectations and provide tangible results.
This can include renegotiation of the milestone’s objectives after gaining understanding
of the trade-offs among the requirements, the design, and the plan.
Three types of joint management reviews are conducted throughout the process:
1. Major milestones: these system-wide events are held at the end of each
development phase.
They provide visibility to system-wide issues, synchronize the management and
engineering perspectives, and verify that the aims of the phase have been
achieved.
2. Minor milestones: these iteration-focused events are conducted to review the
content of an iteration in detail and to authorize continued work.
3. Status assessments: these periodic events provide management with frequent and
regular insight into the progress being made.
Each of the four phases – inception, elaboration, construction, and transition – consists of
one or more iterations and concludes with a major milestone when a planned technical
capability is produced in demonstrable form.
An iteration represents a cycle of activities for which there is a well-defined intermediate
result as a minor milestone. This is captured with two artifacts: a release specification –
the evaluation criteria and plan, and a release description – the results.
Major milestones at the end of each phase use formal stakeholder-approved evaluation
criteria and release descriptions.
Minor milestones use informal, development-team-controlled versions of these artifacts.
The number of milestones depends on such parameters as scale, number of stakeholders,
business context, technical risk, and sensitivity of cost and schedule perturbations.
9.1 MAJOR MILESTONES
As can be seen from figure 9-1, the four major milestones occur at the transition points
between life-cycle phases.
In an iterative model, the major milestones are used to achieve concurrence among all
stakeholders on the current state of the project.
The milestones can be conducted in one continuous meeting or incrementally through
online
reviews.
The essence of each major milestone is to ensure that the requirements understanding, the
life-cycle plans, and the product’s form, function, and quality are evolving in balanced
levels of detail and to ensure consistency among the various artifacts.
FIGURE 9-1. A typical sequence of stakeholder expectations
(Across the iterations of the inception, elaboration, construction, and transition phases, the figure shows: the four major milestones – the life-cycle objectives, life-cycle architecture, initial operational capability, and product release milestones – with a strategic focus on global concerns of the entire software project; minor milestones at each iteration, with a tactical focus on local concerns of the current iteration; and frequent status assessments providing periodic synchronization of stakeholder expectations.)
Table 9-1 summarizes the balance of information across the major milestones.
Concerns of different stakeholders:
Customer: schedule and budget estimates, feasibility, risk assessment, requirements understanding, progress, product line compatibility
Users: consistency with requirements and usage scenarios, potential for accommodating growth
Architects and systems engineers: product line compatibility, requirements changes, trade-off analyses, completeness and consistency, balance among risk, quality, and usability
Developers: sufficiency of requirements detail and usage scenario descriptions, frameworks for component selection or development, resolution of development risk, product line compatibility, sufficiency of the development environment
Maintainers: sufficiency of product and documentation artifacts, understandability, interoperability with existing systems, sufficiency of maintenance environment
Others: perspectives of stakeholders such as regulatory agencies, independent verification and validation contractors, investors, contractors, and sales and marketing teams
TABLE 9-1. The general status of plans, requirements, and products across the major milestones
Life-cycle objectives milestone
   Plans: definition of stakeholder responsibilities; low-fidelity life-cycle plan; high-fidelity elaboration phase plan
   Understanding of problem space (requirements): baseline vision, including growth vectors, quality attributes, and priorities; use case model
   Solution space progress (software product): demonstration of at least one feasible architecture; make/buy/reuse trade-offs; initial design model
Life-cycle architecture milestone
   Plans: high-fidelity construction phase plan; low-fidelity transition phase plan
   Requirements: stable vision and use case model; evaluation criteria for construction releases and initial operational capability; draft user manual
   Product: stable design set; make/buy/reuse decisions; critical component prototypes
Initial operational capability milestone
   Plans: high-fidelity transition phase plan
   Requirements: acceptance criteria for product release; releasable user manual
   Product: stable implementation set; critical features and core capabilities; objective insight into product qualities
Product release milestone
   Plans: next-generation product plan
   Requirements: final user manual
   Product: stable deployment set; full features; compliant quality
All open issues – like installation instructions, software version descriptions, user
and operator manuals, software support manuals, and the installation of the
development environment at the support sites – are addressed.
Software quality metrics are reviewed to determine whether quality is sufficient for
transition to the support organization.
9.2 MINOR MILESTONES
The number of iteration-specific, informal milestones needed depends on the content and
length of the iteration.
For iterations of one to six months' duration, only two minor milestones are needed: the
iteration readiness review and the iteration assessment review.
For longer iterations, more intermediate review points may be necessary, such as test
readiness reviews and intermediate design walkthroughs.
Iterations take different forms and priorities in different phases of the life cycle.
Early iterations focus on analysis and design; later iterations focus more on
completeness, consistency, usability, and change management.
The milestones of an iteration and its associated evaluation criteria must focus the
engineering activities as defined in the software development plan, business case, and
vision.
Iteration Readiness Review.
o This informal milestone is conducted at the start of each iteration.
o To review the detailed iteration plan and the evaluation criteria allocated to
the iteration.
Iteration Assessment Review.
o This informal milestone is conducted at the end of each iteration.
o To assess whether the iteration achieved its objectives and satisfied the evaluation
criteria.
o To review iteration results
o To review qualification test results
o To determine the amount of rework to be done
o To review the impact of the iteration results on the plan for subsequent
iterations.
The project and the organizational culture determine the format and the content of these
informal milestones.
FIGURE 9-4. Typical minor milestones in the life cycle of an iteration
(The figure shows iteration N, between iterations N-1 and N+1, proceeding from iteration initiation through an iteration readiness review, an intermediate iteration design walkthrough, and an iteration assessment review to iteration close-out, with the management, requirements, design, implementation, assessment, and deployment activities spanning the iteration.)
PART – III SOFTWARE MANAGEMENT DISCIPLINES
CHAPTER – 10 ITERATIVE PROCESS PLANNING
A WBS consistent with the process framework – phases, workflows, and artifacts –
should show
how the elements of the process framework can be integrated into a plan.
It should provide a framework for estimating the costs and schedules of each element,
allocating
them across a project organization, and tracking expenditures.
The structure should be tailored to the specifics of a project in the following ways:
1. Scale. Larger projects will have more levels and substructures.
2. Organizational structure. Projects that span multiple organizational entities may
introduce
constraints that necessitate different WBS allocations.
3. Degree of custom development. Depending on the character of the project, there can be
different emphases in the requirements, design, and implementation workflows.
A business process re-engineering project based primarily on existing components
would have much more depth in the requirements element and a fairly shallow design
and implementation element.
A fully custom development of a one-of-a-kind technical application requires fairly
deep design and implementation elements to manage the risks associated with the
custom, first-generation components.
4. Business context. Contractual projects require more elaborate management and
assessment elements.
Commercial products delivered to a broad customer base may require elaborate
substructures for the deployment element, whereas an application deployed to a single
site may have a trivial one.
5. Precedent experience. Most projects are developed as new generations of a legacy
system, or in the context of existing organizational standards, rather than from scratch.
It is important to accommodate these constraints to ensure that new projects exploit the
existing experience base and benchmarks of project performance.
The WBS decomposes the character of the project and maps it to the life cycle, the
budget, and
the personnel.
A. Management
   AA Inception phase management
      AAA Business case development
      AAB Elaboration phase release specifications
      AAC Elaboration phase WBS baselining
      AAD Software development plan
      AAE Inception phase project control and status assessments
   AB Elaboration phase management
      ABA Construction phase release specifications
      ABB Construction phase WBS baselining
      ABC Elaboration phase project control and status assessments
   AC Construction phase management
      ACA Deployment phase planning
      ACB Deployment phase WBS baselining
      ACC Construction phase project control and status assessments
Second-level WBS elements are defined for each phase of the life cycle.
These elements allow the fidelity of the plan to evolve more naturally with the level of
understanding of the requirements and architecture, and the risks therein.
Third-level WBS elements are defined for the focus of activities that produce the
artifacts of each phase.
These elements may be the lowest level in the hierarchy that collects the cost of a
discrete artifact for a given phase, or they may be decomposed further into several lower
level activities that, taken together, produce a single artifact.
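As a sketch of how such a hierarchy supports budgeting and cost collection, the fragment below rolls leaf budgets up through the WBS; the dollar figures are invented for illustration:

# Each WBS element maps either to a leaf budget or to its sub-elements.
wbs = {
    "AA Inception phase management": {
        "AAA Business case development": 30_000,
        "AAB Elaboration phase release specifications": 10_000,
        "AAC Elaboration phase WBS baselining": 5_000,
        "AAD Software development plan": 20_000,
        "AAE Inception phase project control and status assessments": 15_000,
    },
    "AB Elaboration phase management": {
        "ABA Construction phase release specifications": 12_000,
        "ABB Construction phase WBS baselining": 6_000,
        "ABC Elaboration phase project control and status assessments": 22_000,
    },
}

def rollup(element):
    # Sum leaf budgets bottom-up, mirroring how costs are collected.
    if isinstance(element, dict):
        return sum(rollup(child) for child in element.values())
    return element

for name, children in wbs.items():
    print(f"{name}: {rollup(children):,}")
print(f"A Management (total): {rollup(wbs):,}")

Because each lower level element collects the cost of a discrete artifact or activity, financial performance can be monitored at whatever level of the hierarchy is stable.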
The software development plan and the business case provide a context for review; the
WBS and the relative budgets of its elements provide the indicators of the management
approach, priorities, and concerns.
Another important attribute of a good WBS is that the planning fidelity inherent in each
element is commensurate with the current life-cycle phase and project state.
This idea is illustrated in the above figure.
The above method allows for planning elements that range from planning packages
through fully
planned activity networks.
10.2 PLANNING GUIDELINES
Software projects span a broad range of application domains.
Making specific planning recommendations is risky when made independent of context.
At the same time, such advice is valuable, as most people look for a starting point: a
skeleton to flesh out with project-specific details.
Initial planning guidelines capture the expertise and experience of many other people.
Such guidelines are therefore considered credible bases of estimates and instill some
confidence
in the stakeholders.
Project-independent planning advice is also risky.
The risk is that the guidelines may be adopted blindly without being adapted to specific
project
circumstances.
This may lead to an incompetent management team.
Another risk is that of misinterpretation.
The variability of project parameters, project business contexts, organizational cultures,
and
project processes makes it extremely easy to make mistakes that have significant
potential
impact.
Two simple planning guidelines should be considered when a project plan is being
initiated or
assessed.
The first guideline, captured in Table 10-1, prescribes a default allocation of costs among
the first-level WBS elements. The second guideline, captured in Table 10-2, prescribes a
default allocation of effort and schedule across the life-cycle phases.
Given an initial estimate of total project cost and these two tables, developing a staffing
profile,
an allocation of staff resources to teams, a top-level schedule, and an initial WBS with
task
budgets and schedules is relatively straightforward.
The data in Table 10-1 and Table 10-2 come mostly from software cost estimation
efforts.
TABLE 10-1. WBS budgeting defaults
FIRST-LEVEL WBS ELEMENT   DEFAULT BUDGET
Management                10%
Environment               10%
Requirements              10%
Design                    15%
Implementation            25%
Assessment                25%
Deployment                 5%
Total                    100%
TABLE 10-2. Default distributions of effort and schedule by phase
DOMAIN     INCEPTION   ELABORATION   CONSTRUCTION   TRANSITION
Effort         5%          20%           65%           10%
Schedule      10%          30%           50%           10%
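As a worked example of the two guidelines, the sketch below applies Table 10-1 and Table 10-2 to a hypothetical project; the $2,000,000 total cost and 20-month schedule are invented inputs, not defaults from the text:

total_cost = 2_000_000          # hypothetical macro-level cost estimate
total_schedule_months = 20      # hypothetical total schedule

wbs_defaults = {"Management": 0.10, "Environment": 0.10, "Requirements": 0.10,
                "Design": 0.15, "Implementation": 0.25, "Assessment": 0.25,
                "Deployment": 0.05}                         # Table 10-1
phase_effort = {"Inception": 0.05, "Elaboration": 0.20,
                "Construction": 0.65, "Transition": 0.10}   # Table 10-2
phase_schedule = {"Inception": 0.10, "Elaboration": 0.30,
                  "Construction": 0.50, "Transition": 0.10}

for element, share in wbs_defaults.items():
    print(f"{element:15s} budget: ${share * total_cost:>9,.0f}")
for phase, share in phase_effort.items():
    months = phase_schedule[phase] * total_schedule_months
    print(f"{phase:13s} effort ${share * total_cost:>9,.0f} over {months:.0f} months")

The output is exactly the kind of first-cut budgeting and scheduling skeleton the text describes: for example, construction would receive 65% of the effort spread over 50% of the schedule.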
The default allocations for budgeted costs given in Table 10-1 vary across projects, but
they provide a good benchmark for assessing any plan, provided the rationale for
deviations from these guidelines is understood.
The entries in Table 10-1 are cost allocations, not effort allocations.
The difference between the two arises for two reasons (a numerical sketch follows this list):
1. The cost of labor is inherent in these numbers.
For example, the management, requirements, and design elements tend to use more
personnel who are senior and more highly paid than the other elements use, so an
element's share of the cost overstates its share of the staff; the number of people will
not be required in the same proportion.
2. The cost of hardware and software assets that support the process automation and
development teams is also included in the environment element.
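To see why a cost percentage is not a headcount percentage, consider this numerical sketch; the labor rates are invented for illustration:

# Invented average fully loaded labor rates (cost per staff-month).
rates = {"Management": 20_000, "Requirements": 18_000, "Design": 18_000,
         "Implementation": 12_000, "Assessment": 12_000}
budget = {"Management": 0.10, "Requirements": 0.10, "Design": 0.15,
          "Implementation": 0.25, "Assessment": 0.25}   # Table 10-1 shares
total_cost = 1_000_000                                  # hypothetical

# Staff-months = cost / rate: cheaper labor buys more effort per dollar,
# so effort shares differ from cost shares.
months = {e: budget[e] * total_cost / rates[e] for e in rates}
total_months = sum(months.values())
for e in rates:
    print(f"{e:15s} cost share {budget[e]:4.0%} -> "
          f"effort share {months[e] / total_months:4.0%}")

Under these assumed rates, implementation's 25% of cost corresponds to roughly a third of the staff-months, while management's 10% of cost corresponds to under 10% of the effort.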
Table 10-2 provides guidelines for allocating effort and schedule across the life-cycle
phases.
These values represent an average expectation across a spectrum of application domains.
10.3 THE COST AND SCHEDULE ESTIMATING PROCESS
Project plans need to be derived from two perspectives:
First, a forward-looking, top-down approach:
It starts with an understanding of the general requirements and constraints, derives a
macro-level
budget and schedule, then decomposes these elements into lower level budgets and
intermediate
milestones.
From this perspective, the following planning sequence occurs:
1. The software project manager develops a characterization of the overall size,
process, environment, people and quality required for the project.
2. A macro-level estimate of the total effort and schedule is developed using a
software cost estimation model.
3. The software project manager partitions the estimate for the effort into a top-level
WBS.
The project manager also partitions the schedule into major milestone dates and
partitions the effort into a staffing profile.
These types of estimates tend to ignore many detailed project-specific parameters.
4. At this point, subproject managers are given the responsibility for decomposing
each of the WBS elements into lower levels using their top-level allocation, staffing
profile, and major milestone dates as constraints.
Second, a backward-looking, bottom-up approach:
Keeping the end in mind, the micro-level budgets and schedules are analyzed and then
they are
all summed into the higher level budgets and intermediate milestones.
This approach tends to define and populate the WBS from the lowest levels upward.
From this perspective the planning sequence is:
1. The lowest level WBS elements are elaborated into detailed tasks, for which budgets
and
schedules are estimated by the responsible WBS element manager.
These estimates tend to incorporate the project-specific parameters in an exaggerated
way.
2. Estimates are combined and integrated into higher level budgets and milestones.
The biases of individual estimators need to be homogenized so that there is a consistent
basis of negotiation.
3. Comparisons are made with the top-down budgets and schedule milestones.
Gross differences are assessed and adjustments are made in order to converge on
agreement between the top-down and bottom-up estimates.
Milestone scheduling or budget allocation using top-down estimating tends to exaggerate
the
project management biases and results in an overly optimistic plan.
Bottom-up estimates exaggerate the performer biases and result in an overly pessimistic
plan.
Iteration is necessary, using the results of one approach to validate and refine the results
of the
other approach, thereby evolving the plan through multiple versions.
This process instills ownership of the plan in all levels of management.
These two planning approaches should be used together, in balance, throughout the life
cycle of
the project.
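A schematic sketch of the convergence step – all numbers are invented – comparing the macro-level estimate with the sum of the micro-level estimates and iterating until the variance is acceptable:

top_down = 1_000_000    # macro-level estimate (tends to be optimistic)
bottom_up = sum([180_000, 240_000, 320_000, 160_000, 260_000])  # WBS leaves
# Here bottom_up = 1,160,000: performer estimates tend to be pessimistic.

for round_number in range(1, 6):
    variance = (bottom_up - top_down) / top_down
    print(f"round {round_number}: top-down {top_down:,.0f}, "
          f"bottom-up {bottom_up:,.0f}, variance {variance:+.1%}")
    if abs(variance) < 0.05:    # assumed threshold for acceptable agreement
        break
    # Negotiation: each side moves a quarter of the gap toward the other.
    adjustment = 0.25 * (bottom_up - top_down)
    top_down += adjustment
    bottom_up -= adjustment

In practice the adjustment is a negotiation over scope, assumptions, and biases rather than mechanical averaging, but the loop captures the back-and-forth that evolves the plan through multiple versions.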
During the engineering stage, the top-down perspective will dominate as there is usually
not
enough depth of understanding or stability in the detailed task sequences to perform
credible
bottom-up planning.
During the production stage, there should be enough precedent experience and planning
fidelity
that the bottom-up planning perspective will dominate.
By then, the top-down approach should be well tuned to the project-specific parameters,
so it
should be used more as a global assessment technique.
Figure 10-4 illustrates this life-cycle planning balance.
10.4 THE ITERATION PLANNING PROCESS
In addition to the application-independent aspects of budgeting and scheduling, another
dimension of planning concerns defining the actual sequence of intermediate results.
Planning the content and schedule of the major milestones and their intermediate
iterations is the
most tangible form of the overall risk management plan.
An evolutionary build plan is important as there are always adjustments in build content
and
schedule as early conjecture evolves into well-understood project circumstances.
A description of a generic build progression and general guidelines on the number of
iterations in
each phase:
(The engineering stage – inception and elaboration – comprises feasibility iterations and architecture iterations; the production stage – construction and transition – comprises usable iterations and product releases.)
FIGURE 10-4. Planning balance throughout the life cycle
(The figure contrasts top-down project-level planning, based on macro-analysis of previous projects and dominant in the engineering stage, with bottom-up task-level planning, based on metrics from previous iterations and dominant in the production stage.)
Engineering stage planning emphasis:
   Macro-level task estimation for production-stage artifacts
   Micro-level task estimation for engineering artifacts
   Stakeholder concurrence
   Coarse-grained variance analysis of actual vs. planned expenditures
   Turning the top-down project-independent planning guidelines into project-specific planning guidelines
   WBS definition and elaboration
Production stage planning emphasis:
   Micro-level task estimation for production-stage artifacts
   Macro-level task estimation for maintenance of engineering artifacts
   Stakeholder concurrence
   Fine-grained variance analysis of actual vs. planned expenditures
Iteration is used to mean a complete synchronization across the project, with a well-
orchestrated
global assessment of the entire project baseline.
Other micro-operations – monthly, weekly or daily builds – are performed en route to
these
project-level synchronization points.
Inception iterations.
The early prototyping activities integrate the foundation components of a candidate
architecture and provide an executable framework for elaborating critical use cases of
the system.
This framework includes existing components, commercial components, and custom
prototypes sufficient to demonstrate a candidate architecture and sufficient
requirements understanding to establish a credible business case, vision, and software
development plan.
To achieve an acceptable prototype, two iterations may be necessary, depending on the
size of the project.
Elaboration iterations.
These iterations result in an architecture, including a complete framework and
infrastructure for execution.
Upon completion of the architecture iteration, a few critical use cases should be
demonstrable:
(1) initializing the architecture,
(2) injecting a scenario to drive the worst-case data processing flow through the
system – for example, the peak load scenario, and
(3) injecting a scenario to drive the worst-case control flow through the system – for
example, orchestrating the fault-tolerance use cases.
Two iterations should be planned for, to achieve an acceptable architectural baseline.
More iterations may be required in exceptional cases.
Construction iterations.
Most projects require at least two major construction iterations:
(1) An alpha release includes executable capability for all the critical use cases.
It represents about 70% of the total product breadth and performs at quality –
performance and reliability – levels below the final expectations.
(2) A beta release provides 95% of the total product capability breadth and achieves some of the important quality attributes.
A few more features need to be completed, and improvements in robustness and
performance are necessary for the final product release to be acceptable.
To manage risks or optimize resource consumption, a few more iterations may be
necessary, in some cases.
Transition iterations.
Most projects use a single iteration to transition a beta release into the final product.
A number of small-scale iterations may be necessary to resolve defects, incorporate
beta feedback, and incorporate performance improvements.
Because of the overhead associated with a full-scale transition to the user community, most projects make do with a single iteration between a beta release and the final product release.
A typical project would have the following six-iteration profile:
o One iteration in inception: an architecture prototype
o Two iterations in elaboration: architecture prototype and
architecture baseline
o Two iterations in construction: alpha and beta releases
o One iteration in transition: product release
The resulting management overhead may be well worth the cost to ensure proper risk
management and stakeholder synchronization.
10.5 PRAGMATIC PLANNING
Good planning is more dynamic in an iterative process. Doing it accurately is also far
easier.
The software project manager, while executing iteration N of any phase, must:
(1) monitor and control against a plan initiated in iteration N-1, and
(2) plan iteration N+1.
Making good trade-offs in the current and next iteration plans, based on objective results from the current and previous iterations, is a good management practice.
In addition to bad architectures and misunderstood requirements, inadequate planning is
one of
the most common reasons for project failures.
The success of every successful project can be attributed in part to good planning.
Plans are not just for managers.
The more open and visible the planning process and results, the more ownership there is among the team members who need to execute it.
Bad, closely held plans cause attrition.
Good, open plans shape cultures and encourage teamwork.
Questions on this chapter
1. Define a WBS. Compare the issues related to the conventional and evolutionary
WBSs.
2. Explain the two planning guidelines.
3. Explain how planning balance is achieved throughout the life cycle in cost and
schedule
estimating process.
4. Explain the iteration planning process in the four phases of the life cycle.
CHAPTER-11 PROJECT ORGANIZATIONS AND RESPONSIBILITIES
Database:
Specialists with experience in the organization, storage, and retrieval of data
GUI:
Specialists with experience in the display organization, data presentation, and user
interaction to support human input, output, and control needs
Operating systems and networking:
Specialists with experience in the execution of multiple software objects on a network
of hardware resources, including all the control issues associated with initialization,
synchronization, resource sharing, name space management, reconfiguration,
termination, and inter-object communications
Domain applications:
Specialists with experience in algorithms, application processing, or business rules
specific to the system
The software development team is responsible for the quality of individual components,
including all component development, testing, and maintenance.
The development team decides how any design or implementation issue local to a single component is resolved.
Software Assessment Team
There are two reasons for using an independent team for software assessment:
(1) to ensure an independent quality perspective.
(2) to exploit the concurrency of activities
Schedules can be accelerated by preparing for testing in parallel with development activities.
Change management, test planning, and test scenario development can be performed in
parallel with design and development.
A modern process should employ use-case-oriented or capability-based testing organized
as a sequence of builds and mechanized via two artifacts:
1. release specification – the plan and evaluation criteria for a release
2. release description – the results of a release
FIGURE 11-5. Software development team activities (component teams)

Responsibilities:
o Component design
o Component implementation
o Component stand-alone test
o Component maintenance
o Component documentation

Artifacts: design set, implementation set, deployment set

Life-cycle focus:
Inception: prototyping support; make/buy trade-offs.
Elaboration: critical component design; critical component implementation and test; critical component baseline.
Construction: component design; component implementation; component stand-alone test; component maintenance; component documentation.
Transition: component maintenance; component documentation.
The following figure shows the focus of software assessment team activities over the life
cycle:
Each release may encompass several components, because integration proceeds
continuously.
Evaluation criteria will document the customer’s expectations at each major milestone,
and release descriptions will substantiate the test results.
The final iterations will be equivalent to acceptance testing and include levels of detail
similar to the levels of detail of software test plans, procedures, and reports.
The artifacts evolve from brief, abstract versions in early iterations into more detailed and
more rigorous documents, with detailed completeness and traceability discussions in later
releases.
These scenarios should be subjected to change management like other software and are
always maintained up-to-date for automated regression testing.
The assessment team is responsible for the quality of baseline releases with respect to the
requirements and customer expectations.
The assessment team is therefore responsible for exposing any quality issues that affect
the customer’s expectations, whether or not the expectations are captured in the
requirements.
11.3 EVOLUTION OF ORGANIZATIONS
The project organization represents the architecture of the team and needs to evolve
consistent with the project plan captured in the WBS.
The following figure illustrates how the team’s center of gravity shifts over the life cycle, with about 50% of the staff assigned to one set of activities in each phase.
FIGURE 11-6. Software assessment team activities

Responsibilities:
o Project infrastructure
o Independent testing
o Requirements verification
o Metrics analysis
o Configuration control
o Change management
o User deployment

Artifacts: deployment set, SCO database, user manual, environment, release specifications, release descriptions, deployment documents

Life-cycle focus:
Inception: infrastructure planning; primary scenario prototyping.
Elaboration: infrastructure baseline; architecture release testing; change management; initial user manual.
Construction: infrastructure upgrades; release testing; change management; user manual baseline; requirements verification.
Transition: infrastructure maintenance; release baseline; change management; deployment to users; requirements verification.
CHAPTER-12 PROCESS AUTOMATION
Each of the process workflows has a distinct need for automation support.
In some cases, automation is needed to generate an artifact; in others, it is needed for bookkeeping.
Critical concerns associated with each workflow are:
Management
There are many opportunities for automating the project planning and control activities of
the management workflow.
Software cost estimation tools and WBS tools are useful for generating the planning
artifacts.
For managing against a plan, workflow management tools and a software project
control panel that can maintain an on-line version of the status assessments are
advantageous.
This automation support can improve insight into the metrics collection and reporting concepts.
Environment
Configuration management and version control are essential in a modern iterative
development process.
The metrics approach is dependent on measuring changes in software artifact baselines.
The environment must support the change management automation.
Requirements
Conventional approaches decomposed system requirements into subsystem requirements, subsystem requirements into component requirements, and component requirements into unit requirements.
The equal treatment of all requirements wasted time on non-driving requirements, and also on the corresponding paperwork that was ultimately discarded.
In a modern process:
o The requirements are captured in the vision statement.
FIGURE 12-1. Automation and tool components that support the process workflows

Management: workflow automation, metrics automation
Environment: change management, document automation
Requirements: requirements management
Design: visual modeling
Implementation: editor, compiler, debugger
Assessment: test automation, defect tracking
Deployment: defect tracking
o Lower level requirements are driven by the process – organized by iteration instead of by lower level component – in the form of evaluation criteria.
o The vision statement captures the contract between the development group and the customer.
o This information should be evolving, but slowly varying, across the life cycle, and should be represented in a customer-understandable form.
o The evaluation criteria are captured in the release specification artifacts, which are transient snapshots of objectives for a given iteration.
o Evaluation criteria are derived from the vision statement plus other sources such as make/buy analyses, risk management concerns, architectural considerations, implementation constraints, and quality thresholds.
Iterative models allow the customer and the developer to work with tangible, evolving
versions of the system.
Pragmatically, requirements must evolve along with an architecture and an evolving set
of application increments.
Instead of focusing on the consistency, completeness, and traceability of immature requirements specifications, projects need to focus on achieving the proper specification of the vision and on evolving the lower level specifications through successive sets of evaluation criteria against the evolving design iterations.
The consequences of this approach on the environment’s support for requirements
management are twofold:
(1) The recommended requirements approach is dependent on both textual and
model-based representations.
So, the environment should provide integrated document automation and visual modeling for capturing textual specifications and use case models.
It is necessary to manage and track changes to either format and present them in
human-readable format – electronic or paper-based.
(2) Traceability between requirements and other artifacts needs to be automated.
The requirements set artifacts need well-defined traceability to the test artifacts, because the assessment team is responsible for demonstrating the product’s level of compliance with the requirements.
If full traceability is forced between the problem space description – given in the requirements set – and the solution space description – represented in the other technical artifact sets – the architecture is likely to evolve in a way that optimizes requirements traceability rather than design integrity.
This effect is even more pronounced if tools are used to automate the process.
Design
The tools that support the requirements, design, implementation, and assessment
workflows are all used together.
The less separable they are, the better.
The primary support required for the design workflow is visual modeling, which is used
for capturing design models, presenting them in human-readable format, and translating
them into source code.
An architecture-first and demonstration-based process is enabled by existing architecture
components and middleware.
Implementation
The implementation workflow relies on a programming environment – editor, compiler,
debugger, linker, runtime.
It should also include substantial integration with the change management tools, visual
modeling tools, and test automation tools to support productive iteration.
Assessment and Deployment
The assessment workflow requires additional tools to support test automation and test management.
To increase change freedom, testing and document production must be automated.
Defect tracking is another important tool that supports assessment: It provides the change
management instrumentation to automate metrics and control release baselines.
It is also needed to support the deployment workflow throughout the life cycle.
As Figure 12-2, below, illustrates, the automated translation of design models to source code – both forward and reverse engineering – is well established.
The automated translation of design models to process/distribution models is also straightforward through technologies such as ActiveX and the Common Object Request Broker Architecture (CORBA).
Compilers and linkers automate the translation of source code into executable code.
As architectures use heterogeneous components, platforms, and languages, the
complexity of building, controlling, and maintaining large-scale webs of components
introduces new needs for configuration control and automation of build management.
The primary reason for round-trip engineering is to allow freedom in changing software
engineering data sources.
This configuration control of all the technical artifacts is crucial to maintaining a
consistent and error-free representation of the evolving product.
Bidirectional translation is not necessary in all cases.
Translation from one data source to another may not provide 100% completeness. For example, translating design models into C++ source code may provide only the structural and declarative aspects of the source code representation.
The code components may still need to be fleshed out with the specifics of certain object attributes and methods.
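As an illustrative sketch of that last point (everything here – the model, the class, and the generator – is invented, not from the text), a trivial forward-engineering step can emit only the structural and declarative parts of a component, leaving the method bodies to be fleshed out by hand:

    # Invented mini-example of forward engineering in Python: only the
    # structural and declarative aspects are generated from a design model;
    # the operation bodies still have to be completed manually.
    design_model = {
        "class": "FlightPlan",
        "attributes": ["call_sign", "route", "altitude"],
        "operations": ["validate", "amend"],
    }

    def generate_skeleton(model: dict) -> str:
        lines = [f"class {model['class']}:"]
        args = ", ".join(model["attributes"])
        lines.append(f"    def __init__(self, {args}):")
        for attr in model["attributes"]:
            lines.append(f"        self.{attr} = {attr}")
        for op in model["operations"]:
            lines.append(f"    def {op}(self):")
            lines.append("        raise NotImplementedError  # body not generated")
        return "\n".join(lines)

    print(generate_skeleton(design_model))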
12.2.2 Change Management
Change management is as critical to iterative processes as planning.
Tracking changes in the technical artifacts is crucial to understanding the true technical progress trends and quality trends toward delivering an acceptable end product or interim release.
FIGURE 12-2. Round-trip engineering

Requirements set (UML models) <-> design set (UML models): traceability links.
Design set (UML models) <-> implementation set (source code): forward engineering (source generation from models) and reverse engineering (model generation from source).
Implementation set (source code) -> deployment set (executable code): automated production, with automated build management, automated distribution links, and portability among platforms and network topologies.
Disposition.
The SCO (software change order) is assigned one of the following states by the CCB (configuration control board):
Proposed: written, pending CCB review
Accepted: CCB-approved for resolution
Rejected: closed, with rationale, such as not a problem, duplicate, obsolete
change, resolved by another SCO
Archived: accepted but postponed until a later release
In progress: assigned and actively being resolved by the development
organization
In assessment: resolved by the development organization; being assessed
by a test organization
Closed: completely resolved, with the concurrence of all CCB members
A priority and release identifier can also be assigned by the CCB to guide the
prioritization and organization of concurrent development activities.
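As a minimal sketch, the disposition life cycle above can be modeled as a small state machine. The state names come from the list; the allowed transitions are assumptions for illustration:

    # The SCO states come from the text; the transition map is assumed.
    ALLOWED_TRANSITIONS = {
        "proposed":      {"accepted", "rejected"},
        "accepted":      {"in_progress", "archived"},
        "archived":      {"in_progress"},            # taken up in a later release
        "in_progress":   {"in_assessment"},
        "in_assessment": {"closed", "in_progress"},  # failed assessment reopens work
        "rejected":      set(),
        "closed":        set(),
    }

    def transition(state: str, new_state: str) -> str:
        """Apply a CCB-approved state change, rejecting illegal ones."""
        if new_state not in ALLOWED_TRANSITIONS[state]:
            raise ValueError(f"illegal transition: {state} -> {new_state}")
        return new_state

    state = "proposed"
    for step in ("accepted", "in_progress", "in_assessment", "closed"):
        state = transition(state, step)
    print(state)  # closed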
Configuration Baseline
A configuration baseline is a named collection of software components and supporting
documentation that is subject to change management and is upgraded, maintained, tested,
status-assessed, and obsolesced as a unit.
There are generally two classes of baselines: external product releases and internal testing
releases.
A configuration baseline is controlled formally as it is a packaged exchange between
groups.
For example, the development organization may release a configuration baseline to the
test organization.
A project may release a configuration baseline to the user for beta testing.
Generally, three levels of baseline releases are required for most systems: major, minor,
and interim.
Each level corresponds to a numbered identifier such as N.M.X, where N is the major
release number, M the minor release number, and X the interim release identifier.
A major release represents a new generation of the product or project.
A minor release represents the same basic product with enhanced features, performance,
or quality.
Major and minor releases are intended to be external product releases that are persistent
and supported for a period of time.
An interim release corresponds to a development configuration intended to be transient.
The shorter its life cycle, the better.
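A small sketch of the N.M.X scheme: parsing the identifier into a (major, minor, interim) tuple makes baselines sort naturally. The helper is hypothetical, and a missing interim field is assumed to mean zero:

    # Parse an N.M.X release identifier; "1.0" is treated as "1.0.0".
    def parse_release(identifier: str) -> tuple:
        parts = [int(p) for p in identifier.split(".")]
        while len(parts) < 3:
            parts.append(0)
        return tuple(parts[:3])

    releases = ["0.3.2", "1.0", "0.3.1", "2.0.1", "1.0.1"]
    print(sorted(releases, key=parse_release))
    # ['0.3.1', '0.3.2', '1.0', '1.0.1', '2.0.1']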
Figure 12-4, later in this chapter, shows examples of release name histories for two different situations.
Once software is placed in a controlled baseline, all changes are tracked.
A distinction must be made for the cause of a change.
Change categories are:
Type 0:
Critical failures, which are defects that are nearly always fixed before any external release
These changes represent show-stoppers with an impact on the usability of the
software in its critical use cases.
Type 1:
A bug or defect that either does not impair the usefulness of the system or can be worked around.
These errors tend to correspond to nuisances in critical use cases or to serious defects in secondary use cases that have a low probability of occurrence.
PART – III SOFTWARE MANAGEMENT DISCIPLINE Page 105 of 187
CHAPTER-12 PROCESS AUTOMATION
Type 2:
A change that is an enhancement rather than a response to a defect
Its purpose is to improve performance, testability, usability, or some aspect of
quality that represents good value engineering.
Type 3:
A change that is necessitated by an update to the requirements
Such an update could be new features or capabilities that are outside the scope of the
current vision and business case.
Type 4:
Changes not accommodated by the other categories
Examples: document only or a version upgrade to commercial components
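A minimal sketch of how a defect-tracking tool might tag each SCO with one of these categories so that downstream metrics (for example, the type 0/1 counts used later for MTBF) can be extracted automatically; the class and helper names are assumptions:

    from enum import IntEnum

    class ChangeType(IntEnum):            # categories as defined above
        CRITICAL_FAILURE = 0              # show-stopper in a critical use case
        DEFECT = 1                        # bug with a workaround or low impact
        ENHANCEMENT = 2                   # value-engineering improvement
        REQUIREMENTS_UPDATE = 3           # new feature outside the current vision
        OTHER = 4                         # document-only, COTS upgrade, etc.

    def is_reliability_relevant(change: ChangeType) -> bool:
        """Type 0 and type 1 changes are the ones counted for MTBF."""
        return change <= ChangeType.DEFECT

    print(is_reliability_relevant(ChangeType.CRITICAL_FAILURE))  # True
    print(is_reliability_relevant(ChangeType.ENHANCEMENT))       # False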
The following table provides examples of these changes in the context of two different
project domains: a large-scale, reliable air traffic control system and a packaged software
development tool.
CHANGE TYPE | AIR TRAFFIC CONTROL PROJECT | PACKAGED VISUAL MODELING TOOL
Type 0 | Control deadlock and loss of flight data | Loss of user data
Type 1 | Display response time that exceeds the requirement by 0.5 second | Browser expands but does not collapse displayed entries
Type 2 | Add internal message field for response time instrumentation | Use of color to differentiate updates from previous version of visual model
Type 3 | Increase air traffic management capacity from 1,200 to 2,400 simultaneous flights | Port to new platform such as Windows NT
Type 4 | Upgrade from Oracle 7 to Oracle 8 to improve query performance | Exception raised when interfacing to MS Excel 5.0 due to Windows resource management bug
TABLE 12-1. Representative examples of changes at opposite ends of the project spectrum
FIGURE 12-4. Example release histories for a typical project and a typical product
(Two release-name timelines across inception, elaboration, construction, and transition: a typical release sequence for a large-scale, one-of-a-kind project and one for a small commercial product. Each progresses from a prototype 0.1 through architecture releases 0.2 and 0.3, internal test, alpha, and IOC beta releases, to product and upgrade releases, with interim releases identified by N.M.X numbers.)
CHAPTER-13 PROJECT CONTROL AND PROCESS INSTRUMENTATION
The main purpose of the other core metrics is to provide management and engineering teams with an objective approach for assessing actual progress more accurately.
For software projects, the culture of the team, the experience of the team, and the style of development (the process, its rigor, and its maturity) should drive the criteria used to assess progress objectively.
13.2.3 STAFFING AND TEAM DYNAMICS
An iterative development project should start with a small team until the risks in the
requirements and architecture are resolved.
Staffing can vary based on the overlap of iterations and other project-specific
circumstances.
For discrete, one-of-a-kind development efforts – such as building a corporate
information system – the staffing profile in the following figure is typical:
In such development projects, the maintenance team is expected to be smaller than the development team.
For a commercial product development, the sizes of the maintenance and development
teams may be the same.
In case of long-lived, continuously improved products, maintenance is a continuous
construction of new and better releases.
Tracking actual vs. planned staffing is a necessary and well-understood management
metric.
Another important management indicator of changes in project momentum is the relationship between attrition and additions.
Low attrition of good people is a sign of success.
Engineers are highly motivated by making progress in getting something to work; this is
the recurring theme underlying an efficient iterative development process.
If this motivation is not there, good engineers will migrate elsewhere.
An increase in unplanned attrition is a sure sign of trouble.
The causes of such attrition can vary, but they are usually dissatisfaction with management methods, lack of teamwork, or a perceived probability of failure in meeting the planned objectives.
FIGURE 13-4. Typical staffing profile (staffing level plotted against project schedule)
Inception: effort 5%, schedule 10%
Elaboration: effort 20%, schedule 30%
Construction: effort 65%, schedule 50%
Transition: effort 10%, schedule 10%
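A minimal sketch of how the default percentages in Figure 13-4 translate into per-phase budgets; the 100-staff-month, 20-month project is an invented example:

    # Effort/schedule allocations from Figure 13-4.
    PHASES = {                      # phase: (effort fraction, schedule fraction)
        "inception":    (0.05, 0.10),
        "elaboration":  (0.20, 0.30),
        "construction": (0.65, 0.50),
        "transition":   (0.10, 0.10),
    }

    total_effort_sm = 100.0   # staff-months (assumed)
    total_schedule_mo = 20.0  # months (assumed)

    for phase, (effort, schedule) in PHASES.items():
        staff = (effort * total_effort_sm) / (schedule * total_schedule_mo)
        print(f"{phase:12s}: {effort * total_effort_sm:5.1f} staff-months "
              f"over {schedule * total_schedule_mo:4.1f} months "
              f"(about {staff:.1f} people)")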
The indicator provides insight into the benign or malignant character of software change.
In a mature iterative process, earlier changes are expected to result in more scrap than
later changes.
Breakage trends that increase with time indicate that product maintainability is suspect.
13.3.3 REWORK AND ADAPTABILITY
Rework is defined as the average cost of change, which is the effort to analyze, resolve, and retest all changes to software baselines.
Adaptability is defined as the rework trend over time.
For a healthy project, the trend expectation is decreasing or stable, as in Figure 13-6, with the y-axis representing rework.
Not all changes are created equal.
Some changes can be made in a staff-hour, while others take staff-weeks.
This metric provides insight into rework measurement.
In a mature iterative process, earlier changes – architectural changes that affect multiple
components and people – require more rework than later changes – implementation
changes that are confined to a single person or component.
Rework trends that are increasing with time clearly indicate that product maintainability
is suspect.
13.3.4 MTBF AND MATURITY
MTBF is defined as the average usage time between software failures.
MTBF is computed by dividing the test hours by the number of type 0 and type 1 SCOs.
Maturity is defined as the MTBF trend over time.
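A worked sketch of the computation just stated – test hours divided by the count of type 0 and type 1 SCOs – with the per-baseline numbers invented for illustration; an increasing trend indicates growing maturity:

    def mtbf(test_hours: float, type0_and_type1_scos: int) -> float:
        # MTBF = test hours / number of type 0 and type 1 SCOs (per the text)
        return test_hours / max(type0_and_type1_scos, 1)

    baselines = [("release 1.0", 500, 25),   # (name, test hours, type 0/1 SCOs)
                 ("release 2.0", 800, 16),
                 ("release 3.0", 1200, 8)]
    for name, hours, failures in baselines:
        print(f"{name}: MTBF = {mtbf(hours, failures):.0f} usage hours")
    # 20, 50, 150: the rising trend is the maturity indicator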
Effective test infrastructure must be established for early insight into maturity.
For monolithic software, conventional approaches focused on complete test coverage of every line of code, every branch, and so forth.
In distributed and componentized software systems, complete test coverage is achievable only for discrete components.
Systems of components are more efficiently tested by using statistical techniques.
So, the maturity metrics measure statistics over usage time rather than product coverage.
FIGURE 13-8. Maturity expectation over a healthy project’s life cycle (MTBF of released baselines increasing across the project schedule)
Software errors are categorized into two types: (1) deterministic and (2) nondeterministic.
Physicists categorize them as (1) Bohr-bugs and (2) Heisen-bugs, respectively.
(1) Bohr-bugs are a class of errors that always result when the software is stimulated in a certain way.
These errors are caused by coding errors, and changes are isolated to single components.
Conventional software executing a single program on a single processor typically contained only Bohr-bugs.
(2) Heisen-bugs are software faults that are coincidental with a certain probabilistic occurrence of a given situation.
These errors are caused by design errors and are not repeatable even when the software is stimulated in the same apparent way.
To provide test coverage and resolve the statistically significant Heisen-bugs, extensive statistical testing under realistic and randomized usage scenarios is necessary.
Modern, distributed systems with many interoperating components executing across a
network of processors are vulnerable to Heisen-bugs.
Establishing an initial test infrastructure that allows execution of randomized usage
scenarios early in the life cycle and continuously evolves the breadth and depth of usage
scenarios to optimize coverage across the reliability-critical components is the way to
mature a software product.
The established baselines should be continuously subjected to test scenarios.
From this base of testing, reliability metrics can be extracted.
Maximizing test time increases meaningful insight into product maturity.
This testing approach provides a powerful mechanism for encouraging automation in the
test activities early in the life cycle.
This helps in monitoring performance improvements and measuring reliability.
13.4 LIFE-CYCLE EXPECTATIONS
There is no mathematical or formal derivation for using the seven core metrics.
The reasons for selecting the seven core metrics are:
o The quality indicators are derived from the evolving product rather than from the artifacts.
o They provide insight into the waste generated by the process. Scrap and rework metrics are a standard measurement perspective for manufacturing processes.
o They recognize the inherently dynamic nature of an iterative process. They explicitly concentrate on trends – changes with respect to time – rather than focusing on absolute values.
o The combination of insight from the current value and the current trend provides tangible indicators for management action.
The actual values of these metrics vary across projects, organizations, and domains.
The relative trends, however, should follow a general pattern.
Metrics collection agents: data extraction from the environment tools that
maintain the engineering notations for the artifact sets.
Metrics data management server: data management support for populating the
metrics displays of the GUI and storing the data extracted by the agents
Metrics definitions: actual metrics presentations for requirements progress –
extracted from requirements set artifacts, design progress – extracted from design
set artifacts, implementation progress – extracted from implementation set
artifacts, assessment progress – extracted from deployment set artifacts, and other
progress dimensions – extracted from manual sources, financial management
systems, management artifacts, etc.
Actors: the monitor and the administrator
Specific monitors – called roles – include project managers, software development team
leads, software architects, and customers.
For every role, there is a specific panel configuration and scope of data presented.
Each role performs the same general use cases, with a different focus.
Monitor: defines panel layouts from existing mechanisms, graphical objects, and
linkages to project data; queries data to be displayed at different levels of
abstraction.
Administrator: installs the system; defines new mechanisms, graphical objects, and linkages; handles archiving functions; defines composition and decomposition structures for displaying multiple levels of abstraction
The whole display is called a panel.
Within a panel are graphical objects, which are types of
layouts – such as dials and bar charts – for information.
Each graphical object displays a metric.
A panel contains a number of graphical objects
positioned in a particular geometric layout.
A metric shown in a graphical object is labeled with the metric type, the summary
level, and the instance name – such as lines of code, subsystem, server1.
Metrics can be displayed in two modes: value, referring to a point in
time, or graphs, referring to multiple and consecutive points in time.
Only some of the display types are applicable to graph metrics.
Metrics can be displayed with or without control values.
A control value is an existing expectation – absolute or relative –
used for comparison with a dynamically changing metric.
For example, a plan for a given progress metric is a
control value for comparing the actuals of that metric.
Thresholds are another example of control values.
Crossing a threshold may result in a state change
that needs to be obvious to a user.
Control values can be shown in the same graphical object as
the corresponding metric, for visual comparison by the user.
Indicators may display data in formats that are binary – such as black and white; tertiary – such as red, yellow, and green; digital – integer or float; or some enumerated data type – a sequence of discrete values such as Sun ... Sat, ready-aim-fire, or Jan ... Dec.
Indicators also provide a mechanism for summarizing a condition or circumstance associated with another metric, or relationships between metrics and their associated control values.
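A small sketch of a control value in action: a metric compared against assumed upper and lower thresholds, with the result summarized as a two-state indicator. All numbers and names here are invented:

    # Crossing a threshold flips the linked indicator (see trend graphs below).
    def threshold_indicator(value: float, lower: float, upper: float) -> str:
        return "green" if lower <= value <= upper else "red"

    change_traffic = [12, 18, 25, 41, 30]   # invented weekly design-change counts
    for week, value in enumerate(change_traffic, start=1):
        print(f"week {week}: {value:3d} -> {threshold_indicator(value, 5, 35)}")
    # week 4 crosses the upper threshold of 35 and turns red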
A trend graph presents values over time and permits upper and lower thresholds to be
defined.
Crossing a threshold could be linked to an associated indicator to depict a noticeable state
change from green to red or vice versa.
Trends support user-selected time increments – such as day, week, month, quarter, year.
A comparison graph presents multiple values together, over time.
Convergence or divergence among values may be linked to an indicator.
A progression graph presents percent complete, where elements of progress are shown as
transitions between states and an earned value is associated with each state.
Trends, comparisons, and progressions are illustrated in the following figure:
FIGURE 13-9. Examples of the fundamental metric graph classes
(a) Trend: comparison of a metric value over time against known thresholds, with the actual value plotted between an upper threshold and a lower threshold. Example: design model change traffic.
(b) Comparison: comparison of N values with the same units over time. Example: open action items vs. closed action items.
(c) Progression: plan vs. actuals over time, with expected value and actual value plotted as % complete.
Metric information can be summarized with the help of user-defined, linear structures.
For example, lines of code can be summarized by unit, subsystem, and project.
The project is the top-level qualifier for all data of the top-level context set.
Summary structures can be defined for lower levels; display levels can be selected based on previously defined structures and drilled down to lower levels.
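An illustrative sketch of such a summary structure: lines of code rolled up from units to subsystems to the project total. The data and names are invented:

    # Invented example: SLOC summarized by unit -> subsystem -> project.
    project = {
        "server": {"parser.py": 1200, "scheduler.py": 800},
        "client": {"gui.py": 2500, "reports.py": 600},
    }

    subsystem_totals = {name: sum(units.values())
                        for name, units in project.items()}
    project_total = sum(subsystem_totals.values())

    print(subsystem_totals)   # {'server': 2000, 'client': 3100}
    print(project_total)      # 5100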
The following figure illustrates a simple example of an SPCP for a project.
In this example, the project manager role has defined a top-level display with four
graphical objects.
1. Project activity status.
The graphical object in the upper left provides an overview of the status of the
top-level WBS elements.
The seven elements are coded red, yellow, and green to reflect current earned value status: green represents ahead of plan, yellow within 10% of plan, and red a greater than 10% cost or schedule variance.
This graphical object provides several examples of indicators: tertiary colors, the actual percentage, and the current first derivative (an up arrow means getting better, a down arrow means getting worse); a minimal sketch of this color coding appears after this list.
2. Technical artifact status.
The graphical object in the upper right provides an overview of the status of the
evolving technical artifacts.
The Req light displays an assessment of the current state of the use case models and requirements specifications; the Des light does the same for the design models, the Imp light for the source code baseline, and the Dep light for the test program.
3. Milestone progress.
The graphical object in the lower left provides a progress assessment of the
achievement of milestones against plan and provides indicators of the current
values.
4. Action item progress.
The graphical object in the lower right provides a different perspective of
progress, showing the current number of open and closed issues.
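As promised in item 1, a minimal sketch of the color coding: the 10% rule comes from the text, while the exact mapping of positive variance to green and the function names are assumptions:

    # Green = ahead of plan, yellow = within 10% of plan,
    # red = greater than 10% cost or schedule variance (assumed mapping).
    def status_color(variance_pct: float) -> str:
        if variance_pct > 0:
            return "green"
        return "yellow" if variance_pct >= -10 else "red"

    def trend_arrow(current_pct: float, previous_pct: float) -> str:
        return "up" if current_pct > previous_pct else "down"

    # The Implementation element below: -25% and assumed to be worsening.
    print(status_color(-25), trend_arrow(-25, -20))   # red down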
FIGURE 13-10. Example SPCP display for a top-level project situation
Top-level WBS activities (earned value status): Management -4% ↓, Environment +1% ↑, Requirements +6% ↑, Design -5% ↓, Implementation -25% ↓, Assessment -2% ↑, Deployment -2% ↑
Technical artifacts: Req, Des, Imp, and Dep status lights
Milestone progress: actuals (32) vs. plan (27)
Action item progress: open (12) vs. closed
CHAPTER-14 TAILORING THE PROCESS
It is necessary to tailor the software management effort to the specific needs of the project at hand.
A commercial software tool developer needs a different management process from that of a software integrator automating the security system for a nuclear plant.
A mature process and effective software management approaches offer greater value to the large-scale software integrator than to a small-scale tool developer.
All the same, the ROI realized through better software management approaches is worthwhile for any software organization.
14.1 PROCESS DISCRIMINANTS
In tailoring the management process to a specific domain or project, there are two
dimensions of
discriminating factors:
technical complexity, and
management complexity.
The following figure illustrates these two dimensions of process variability and shows an
example project application:
The formality of reviews, the quality control of artifacts, the priorities of concerns, and
other
process instantiation parameters are governed by the point a project occupies in these two
dimensions.
The priorities along the two dimensions are summarized in the figure on the following
page.
A process framework is not a project-specific process implementation with a well-defined recipe for success.
Judgment must be injected, and the methods, techniques, culture, formality, and organization must be tailored to the specific domain for a process implementation to succeed.
There are six process parameters that cause major differences among project processes: scale, stakeholder cohesion, process flexibility or rigor, process maturity, architectural risk, and domain experience.
These are the critical dimensions that a software project manager must consider when tailoring a process framework to create a practical process implementation.
FIGURE 14-1. The two primary dimensions of process variability

Higher technical complexity: embedded, real-time, distributed, fault-tolerant; high-performance, portable; unprecedented, architecture re-engineering.
Lower technical complexity: straightforward automation, single-threaded; interactive performance, single platform; many precedent systems, application re-engineering.
Lower management complexity: smaller scale, informal, few stakeholders, “products”.
Higher management complexity: larger scale, contractual, many stakeholders, “projects”.

Example applications positioned across the two dimensions: embedded automotive application, commercial compiler, telecom switch, DOD weapon system, air traffic control system, business spreadsheet, small business simulation, enterprise application (e.g., an order entry system), large-scale simulation, enterprise information system, and DOD MIS.
An average software project sits near the middle: 5 to 10 people, 10 to 12 months, 3 to 5 external interfaces, and some unknown risks.
14.1.1 SCALE
The single most important factor in tailoring a software process framework is the total
scale of
the application.
There are many ways to measure scale:
Number of SLOC,
Number of function points,
Number of use cases, and
Number of dollars
From a process tailoring perspective, the primary measure of scale is the size of the team.
As the headcount increases, the importance of consistent interpersonal communications
becomes
paramount.
Generally, five people are an optimal size for an engineering team. Most people can
manage four
to seven things at a time.
There are fundamentally different management approaches for a team of 1 (trivial), a team of 5 (small), a team of 25 (moderate), a team of 125 (large), a team of 625 (huge), and so on.
As the team size grows by each factor of 5, a new level of personnel management is introduced.
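A small sketch of this factor-of-5 rule, interpreting it as: each new factor of 5 in headcount adds one level of personnel management (an interpretation, not a formula from the text):

    def management_levels(team_size: int) -> int:
        # Each management level multiplies the manageable span by 5.
        levels, capacity = 0, 1
        while capacity < team_size:
            capacity *= 5
            levels += 1
        return levels

    for size in (1, 5, 25, 125, 625):
        print(f"team of {size:3d}: {management_levels(size)} management level(s)")
    # 0, 1, 2, 3, 4 levels for trivial, small, moderate, large, huge teams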
This model can be used to describe some of the differences among projects of different sizes.
Trivial-sized projects require almost no management overhead – planning,
communication,
coordination, progress assessment, review, administration.
There is no need to document the intermediate artifacts.
FIGURE 14-2. Priorities for tailoring the process framework

Lower management complexity: less emphasis on risk management; less process formality; more emphasis on individual skills; longer production and transition phases.
Higher management complexity: more emphasis on risk management; more process formality; more emphasis on teamwork; longer inception and elaboration phases.
Higher technical complexity: more domain experience required; longer inception and elaboration phases; more iterations for risk management; less-predictable costs and schedules.
Lower technical complexity: more emphasis on existing assets; shorter inception and elaboration phases; fewer iterations; more-predictable costs and schedules.
Workflow is single-threaded.
Performance is highly dependent on personnel skills.
Small projects – 5 people – require very little management overhead, but team leadership
toward
a common objective is crucial.
There is some need to communicate the intermediate artifacts among team members.
Project milestones are easily planned, informally conducted, and easily changed.
There is a small number of individual workflows.
Performance depends primarily on personnel skills.
Process maturity is relatively unimportant.
Individual tools can have a considerable impact on performance.
Moderate-sized projects – 25 people – require moderate management overhead, including a dedicated software project manager to synchronize team workflows and balance resources.
Overhead workflows across all team leads are necessary for review, coordination, and
assessment.
There is a definite need to communicate the intermediate artifacts among teams.
Project milestones are formally planned and conducted, and the impacts of changes are
benign.
There is a small number of concurrent team workflows, each with multiple individual
workflows.
Performance is highly dependent on the skills of key personnel – the team leads.
Process maturity is valuable.
An environment can have a considerable impact on performance, but success can be
achieved
with certain key tools in place.
Large projects – 125 people – require substantial management overhead, including a
dedicated
software project manager and several subproject managers to synchronize project-level
and
subproject-level workflows and to balance resources.
There is significant expenditure in overhead workflows across all team leads for
dissemination,
review, coordination, and assessment.
Intermediate artifacts are explicitly emphasized to communicate engineering results
across many
diverse teams.
Project milestones are formally planned and conducted, and changes to milestone plans
are
expensive.
Large numbers of concurrent team workflows are necessary, each with multiple
individual
workflows.
Performance is highly dependent on the skills of key personnel – subproject managers
and team
leads.
Project performance is dependent on average people, for two reasons:
1. There are numerous mundane tasks in any large project, especially in the overhead
workflows.
2. The probability of recruiting, maintaining, and retaining a large number of exceptional
people is small.
Process maturity is necessary, particularly the planning and control aspects of managing
project
commitments, progress, and stakeholder expectations.
An integrated environment is required to manage change, automate artifact production,
and
maintain consistency among the evolving artifacts.
Huge projects – 625 people – require substantial management overhead, including
multiple
software project managers and many subproject managers to synchronize project-level
and
subproject-level workflows and balance resources.
There is significant expenditure in overhead workflows across all team leads for dissemination, review, coordination, and assessment.
Intermediate artifacts are explicitly emphasized to communicate engineering results
across many
diverse teams.
Project milestones are very formally planned and conducted, and changes to milestone
plans
cause malignant re-planning.
There are very large numbers of concurrent team workflows, each with multiple
individual
workflows.
Performance is highly dependent on the skills of key personnel – subproject managers
and team
leads.
The following list elaborates some of the key differences in the discriminators of success.
All the components have relative importance; none of them is unimportant.
① Design is key in both domains.
Good design of a product is a key differentiator and is the foundation for efficient new releases as well as for predictable, cost-efficient construction.
② Management is paramount in large projects, and less important in small projects.
In large projects, the consequences of planning errors, resource allocation errors,
inconsistent stakeholder expectations, and other out-of-balance factors can affect the
overall team dynamics.
In small projects opportunities for miscommunications are fewer and their consequences
less significant.
③ Deployment plays a greater role for a small product because there is a broad user base of diverse individuals and environments.
A large, one-of-a-kind, complex project has a single deployment site.
Legacy systems and continuous operations may pose several risks, but these problems are well understood and have a fairly static set of objectives.
Another key set of differences is inherent in the implementation of the various artifacts of the process.
The following table provides a conceptual example of these differences:
TABLE 14-9. Differences in artifacts between small and large projects

ARTIFACT | SMALL COMMERCIAL PROJECT | LARGE, COMPLEX PROJECT
Work breakdown structure | 1-page spreadsheet with 2 levels of WBS elements | Financial management system with 5 or 6 levels of WBS elements
Business case | Spreadsheet and short memo | 3-volume proposal including technical volume, cost volume, and related experience
Vision statement | 10-page concept paper | 200-page subsystem specification
Development plan | 10-page plan | 200-page development plan
Release specifications and number of releases | 3 interim release specifications | 8 to 10 interim release specifications
Architecture description | 5 critical use cases, 50 UML diagrams, 20 pages of text, other graphics | 25 critical use cases, 200 UML diagrams, 100 pages of text, other graphics
Software | 50,000 lines of Visual Basic code | 300,000 lines of C++ code
Release description | 10-page release notes | 100-page summary
Deployment | User training course; sales rollout kit | Transition plan; installation plan
User manual | On-line help and 100-page user manual | 200-page user manual
Status assessment | Quarterly project reviews | Monthly project management reviews
Questions on this chapter
1) Explain the two primary dimensions of process variability.
2) Discuss the major differences among project processes around the six process
parameters.
3) Explain the process discriminators resulting from differences in project size.
4) Explain the process discriminators resulting from differences in stakeholder cohesion.
5) Explain the process discriminators resulting from differences in process
flexibility/rigor.
6) Explain the process discriminators resulting from differences in process maturity.
7) Explain the process discriminators resulting from differences in architecture risk.
8) Explain the process discriminators resulting from differences in domain experience.
9) Illustrate the key differences between the phases, workflows, and artifacts of two
projects
on opposite ends of the management complexity spectrum.
PART – IV LOOKING FORWARD
CHAPTER-15 MODERN PROJECT PROFILES
There are five recurring issues of conventional projects that the modern process framework resolves by exploiting several critical approaches:
1. Protracted integration and late design breakage are resolved by forcing integration into the engineering stage. This is achieved through continuous integration of an architecture baseline supported by executable demonstrations of the primary scenarios.
2. Late risk resolution is resolved by emphasizing an architecture-first approach, in which the high-leverage elements of the system are elaborated early in the life cycle.
3. The analysis paralysis of a requirements-driven functional decomposition is avoided by organizing lower level specifications along the content of releases rather than along the product decomposition – by subsystem, by component, and so on.
4. Adversarial stakeholder relationships are avoided by providing much more tangible and objective results throughout the life cycle.
5. The conventional focus on documents and review meetings is replaced by a focus on demonstrable results and well-defined sets of artifacts, with more-rigorous notations and extensive automation supporting a paperless environment.
The following are the ways in which healthy modern projects resolve these five issues:
15.1 CONTINUOUS INTEGRATION
Iterative development produces the architecture first, allowing integration to occur as the verification activity of the design phase and enabling design flaws to be detected and resolved earlier in the life cycle.
This approach avoids the big-bang integration at the end of a project by stressing
continuous
integration throughout the project.
Figure 15-1, below, illustrates the differences between the progress profile of a modern project and that of a conventional project.
The architecture-first approach forces integration into the design phase through the
construction
of demonstrations.
The demonstrations don’t eliminate the design breakage; instead, they make it happen in
the
engineering stage, when it can be resolved efficiently in the context of life-cycle goals.
The downstream integration nightmare, late patches, and shoe-horned software fixes are avoided.
The result is a more robust and maintainable design.
FIGURE 15-1. Progress profile of a modern software project
(Development progress, % coded, plotted against project schedule: the conventional project profile suffers late, large-scale design breakage, which an iterative development project avoids through continuous integration. Across inception, elaboration, construction, and transition, the activities are iterative and the management and engineering artifacts evolve, while the product matures from prototypes to architecture to usable releases to product releases.)
The continuous integration of the iterative process also enables better insight into quality trade-offs.
System characteristics, largely inherent in the architecture – performance, fault tolerance, maintainability – are tangible earlier in the process, reducing the jeopardy of missing target costs and schedules.
A recurring theme of successful iterative projects is a different cost profile.
The following table identifies how a modern process profile differs from a conventional one in the distribution of costs among the various project workflows:
TABLE 15-1. Differences in workflow cost allocations between a conventional process and a modern process

SOFTWARE ENGINEERING WORKFLOW | CONVENTIONAL PROCESS EXPENDITURES | MODERN PROCESS EXPENDITURES
Management | 5% | 10%
Environment | 5% | 10%
Requirements | 5% | 10%
Design | 10% | 15%
Implementation | 30% | 25%
Assessment | 40% | 25%
Deployment | 5% | 5%
Total | 100% | 100%
The primary discriminator of a successful modern process is inherent in the life-cycle expenditures for assessment and testing.
Conventional projects, because of inefficient integration and late discovery of substantial design issues, expend about 40% or more of their total resources in integration and test activities.
With a mature iterative process, modern projects require only about 25% of the total budget for these activities.
15.2 EARLY RISK RESOLUTION
The engineering stage – inception and elaboration phases – of the life cycle focuses on
confronting the risks and resolving them before the production stage.
Conventional projects do the easy stuff first, thereby demonstrating early progress.
A modern process tackles the important 20% of the requirements, use cases, components, and risks first.
This is the essence of the principle of architecture first.
Defining the architecture does not include simple steps for which visible progress can be
achieved easily.
The 80/20 lessons learned from past software management experience provide a useful risk management perspective:
80% of the engineering is consumed by 20% of the requirements.
Before committing resources to full-scale development, the driving requirements
complexity should be understood.
High fidelity and full traceability of the requirements should not be expected
prematurely.
80% of the software cost is consumed by 20% of the components.
Elaborate the cost-critical components first so that planning and control of cost drivers
are well understood early in the life cycle.
80% of the errors are caused by 20% of the components.
Elaborate the reliability-critical components first so that assessment activities have
enough time to achieve the necessary level of maturity.
80% of the software scrap and rework is caused by 20% of the errors.
Elaborate the change-critical components first so that broad-impact changes occur when
the project is nimble or agile.
80% of the resources – execution time, disk space, memory – are consumed by 20% of the components.
Elaborate the performance-critical components first so that engineering trade-offs with
reliability, changeability, and cost-effectiveness can be resolved early in the life cycle.
80% of the progress is made by 20% of the people.
The initial team for planning the project and designing the architecture should be of the
highest quality.
An adequate plan and adequate architecture can succeed with an average construction
team.
An inadequate plan or inadequate architecture will not succeed even with an expert
construction team.
Figure 15-2, below, compares the risk management profile of a modern project with the profile of a conventional project.
15.3 EVOLUTIONARY REQUIREMENTS
Conventional approaches decomposed
system requirements into subsystem requirements ,
subsystem requirements into component requirements, and
component requirements into unit requirements
The organization of requirements was structured such that traceability is simple.
With an early life-cycle emphasis on requirements first, followed by complete traceability between requirements and design components, the natural tendency was for the design structure to evolve into an organization that paralleled the structure of the requirements.
So, functional decomposition of the problem space led to functional decomposition of the solution space.
Modern architectures using commercial components, legacy components, distributed
resources,
and object-oriented methods are not easily traced to the requirements they satisfy.
There are complex relationships between requirements statements and design elements,
including 1 to 1, many to 1, 1 to many, conditional, time-based, and state-based.
Top-level requirements are retained as the vision.
Lower level requirements are captured in evaluation criteria attached to each intermediate
release.
These artifacts are intended to evolve along with the process, with more and more fidelity as the life cycle progresses and requirements understanding matures.
Figure 15-2. Risk profile of a modern software project across its life cycle
(Project risk exposure, high to low, plotted over the project life cycle. The modern project risk profile falls through a risk exploration period in inception and a risk elaboration period in elaboration into a controlled risk management period in construction and transition; the conventional project risk profile remains high far later into the life cycle.)
CHAPTER-16 NEXT-GENERATION SOFTWARE ECONOMICS
This chapter introduces several provocative hypotheses about the future of software
economics.
16.1 NEXT-GENERATION COST MODELS
Software experts have different opinions about software economics and its manifestation
in
software cost estimation models:
Source line of code versus function points
Productivity measures versus quality measures
Java versus C++
Object-oriented versus functionally oriented
Commercial components versus custom development
A problem today is a continuing inability to predict with precision the resources required for a given software endeavor.
Accurate estimates are possible today, although they are imprecise.
It will be difficult to improve empirical estimation models when the data is noisy and
highly
uncorrelated, and is based on differing process and technology foundations.
There are no exactly matching cost models for an iterative software process focused on
an
architecture-first approach.
Many cost estimators still use a conventional process experience base to estimate a
modern
project profile.
The following discussion presents a perspective on how a software cost model should be
structured to support the estimation of a modern software process.
A next-generation software cost model should explicitly separate architectural
engineering from
application production.
The cost of designing, producing, testing, and maintaining the architecture baseline is a
function
of scale, quality, technology, process, and team skill.
An architecture cost model is inherently driven by research and development-oriented concerns, so some diseconomy of scale (exponent greater than 1.0) still exists.
For an organization having achieved a stable architecture, the production costs should be
an
exponential function of size, quality, and complexity, with a much more stable range of
process
and personnel influence.
The production stage cost model should reflect an economy of scale (exponent less than
1.0)
similar to that of conventional economic models.
The following figure summarizes a hypothesized cost model for an architecture-first
development process.
Next-generation software cost models should estimate large-scale architectures with
economy of
scale.
This implies that the process exponent during the production stage will be less than 1.0.
The reason is that the larger the system, the more opportunity there is to exploit automation and to reuse common processes, components, and architectures.
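A hedged sketch of the hypothesized two-stage model: architecture effort with an exponent above 1.0 (diseconomy of scale) and production effort with an exponent below 1.0 (economy of scale). All coefficients and exponents are invented for illustration:

    # Invented calibration: the shapes, not the numbers, are the point.
    def architecture_effort(scale: float) -> float:
        return 3.0 * scale ** 1.2        # exponent > 1.0: diseconomy of scale

    def production_effort(size_ksloc: float) -> float:
        return 2.5 * size_ksloc ** 0.9   # exponent < 1.0: economy of scale

    for size in (100, 200, 400):
        ratio = production_effort(size) / production_effort(size / 2)
        print(f"{size} KSLOC: doubling size multiplies production effort "
              f"by {ratio:.2f}x")
    # A ratio below 2.0 is the signature of an economy of scale.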
In the conventional process, the minimal level of automation for the overhead activities
of
planning, project control, and change management led to labor-intensive workflows and a
diseconomy of scale.
This lack of automation was as true for multiple-project, line-of-business organizations as it was for individual projects.
Most reuse of components reduces the size of the production effort. The reuse of
processes, tools,
and experience has a direct impact on the economies of scale.
Another important difference is that architectures and applications have different units of mass – scale versus size – and both are representations of the solution space.
Scale might be measured in terms of architecturally significant elements – classes,
components,
processes, nodes – and size might be measured in SLOC or MB of executable code.
These measures differ from measures of the problem spaces such as discrete requirements
or use
cases.
The problem space description drives the definition of the solution space.
There are many solutions to any given problem, as illustrated in the following figure,
each with a
different value proposition.
Cost is a key discriminator among potential solutions.
Cost estimates that are more accurate and more precise can be derived from specific
solutions to
problems.
So, the cost estimation model must be governed by the basic parameters of a candidate
solution.
If the value propositions are not acceptable solutions to the problem, more candidate
solutions
need to be pursued or the problem statement needs to change.
The debate between function point users and source line users is an indication of the need
for
measures of both scale and size.
Function points are more accurate at quantifying the scale of the architecture required, and SLOC is more accurate at depicting the size of the components that make up the total implementation.
The advantage of SLOC is that its collection can easily be automated and precision can easily be achieved.
The accuracy of SLOC as a measure of size is ambiguous and can lead to
misinterpretation when
SLOC is used in absolute comparisons among different projects and organizations.
SLOC is a successful measure of size in the later phases of the life cycle, when the most
important measures are the relative changes from month to month as the project
converges on
releasable versions.
[FIGURE 16-2. Differentiating potential solutions through cost estimation.
The problem space is measured in units such as the number and complexity of requirements and use cases. It maps to many candidate solutions (solution 1 through solution N) in the solution space, each characterized along the dimensions features (F), qualities (Q), life-cycle savings (L), risk (R), schedule (S), and cost (C), with

Value = (F + Q + L) / (R + S + C)]
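A minimal worked example of this value ratio, with invented scores for two candidate solutions, illustrates how cost discriminates between them:

    # Value = (F + Q + L) / (R + S + C), per Figure 16-2.
    # All scores below are invented for illustration.

    def value(f: float, q: float, l: float, r: float, s: float, c: float) -> float:
        return (f + q + l) / (r + s + c)

    solution_1 = value(f=8, q=6, l=4, r=3, s=4, c=5)  # 18 / 12 = 1.5
    solution_2 = value(f=9, q=7, l=2, r=6, s=5, c=7)  # 18 / 18 = 1.0
    # Solution 1 offers the better value proposition despite fewer features,
    # because its risk, schedule, and cost denominator is smaller.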
The value of function points is that they are better at depicting the overall scale of the
solution,
independently of the actual size and implementation language.
Function points are not easily extracted from any rigorous representation format, so automation and change tracking are difficult or ambiguous.
A rigorous notation for design artifacts is a necessary prerequisite to improvements in the
fidelity
with which the scale of a design can be estimated.
In the future, there will be an opportunity to automate a new measure of scale derived from design representations in UML.
Two major improvements in next-generation software cost estimation models can be
expected:
1. Separation of the engineering stage from the production stage will force estimators to
differentiate between architectural scale and implementation size.
This will permit greater accuracy and more precision in life-cycle estimates.
2. Rigorous design notations such as UML will offer an opportunity to define units of
measure for scale that are more standardized and therefore can be automated and tracked.
These measures can also be traced more straightforwardly into the costs of production.
Technology advances are going to make two breakthroughs possible in the software
process:
1. The availability of integrated tools that automate the transition of information between
requirements, design, implementation, and deployment elements.
These tools allow comprehensive round-trip engineering among the engineering artifacts.
2. The current four sets of fundamental technical artifacts will collapse into three sets, eliminating the need for a separate implementation set.
This technology advance is illustrated in the following figure:
The technology advance would allow executable programs to be produced directly from
UML representations without any hand-coding.
The first breakthrough will be risky but straightforward.
The second one is a major paradigm shift.
When a software engineering team can produce implementation and deployment artifacts
in
an error-free, automated environment, the software development process can change
dramatically.
[FIGURE 16-3. Automation of the construction process in next-generation environments.
Conventional experience (all engineering, document-based): requirements, design, implementation, deployment, and management artifact sets. Current software engineering experience (engineering separate from production, on-line artifacts): the same sets linked by round-trip engineering. Next-generation environment expectation (engineering with automated production): the implementation set disappears, leaving requirements, design, deployment, and management.]
CHAPTER-17 MODERN PROCESS TRANSITIONS
Technical breakthroughs, process breakthroughs, and new tools will make software
management
easier.
Management discipline will remain the backbone of project success.
New technology means:
♥ New opportunities for software applications,
♥ New dimensions of complexity,
♥ New avenues of automation, and
♥ New customers with new/different priorities
Accommodating these changes will require changes in the traditional, ingrained software management priorities and principles.
Striking a balance among requirements, designs, and plans will remain the main objective
of
software management efforts.
Many of the techniques and disciplines required for a modern process will necessitate a
significant paradigm shift.
As usual, changes will be resisted by some stakeholders or certain intra-organizational factions.
It is equally important to separate cultural resistance from objective resistance.
In this chapter we consider the important culture shifts to be prepared for in order to
avoid the
sources of friction in transitioning to and practicing a modern process.
17.1 CULTURE SHIFTS
The following are indicators to look for in order to differentiate projects that have made a genuine cultural transition from projects that have only put up a façade:
Lower level and mid-level managers are performers.
The need for “pure managers” arises only when the personnel being managed number more than about 25 people.
Competent managers spend their time performing, especially with regard to
understanding the status of the project firsthand and developing plans and estimates.
The person managing an effort should plan it: the manager should participate in developing the plan, not merely approve it.
A good indicator of trouble is a manager who did not author the plan or does not own it.
The stakeholders affected by this transition are the software project managers.
Requirements and design are fluid and tangible.
An iterative process requires actual construction of a sequence of progressively more comprehensive systems that demonstrate the architecture, enable objective requirements negotiations, validate the technical approach, and address resolution of key risks.
All stakeholders focus on these “real” milestones, with incremental deliveries of useful functionality rather than speculative paper descriptions.
The transition to a less document-driven environment will be embraced by the engineering teams and resisted by traditional contract monitors.
Ambitious demonstrations are encouraged.
Early life-cycle demonstrations should be used to expose design flaws, not for show.
Stakeholders should not overreact to early mistakes, digressions, or immature designs.
Otherwise, organizations will set up future iterations to be less ambitious.
In early release plans, evaluation criteria are to be taken as goals, not as requirements.
At the same time, lack of follow-through should not be tolerated.
Negative trends that are not resolved with rigor may become serious downstream perturbations.
Open and attentive follow-through is necessary to resolve issues.
If the management team resists this transition, it implies some engineering or process
issues are being hidden.
Good and bad project performance is much more obvious earlier in the life cycle.
Success breeds success.
Early failures are risky to turn around or difficult to solve downstream.
It is the early phases that make or break a project.
So, the very best team should perform the planning and architecture phases.
Then, projects can be completed successfully even with average teams for the later phases.
Otherwise, even all the expert programmers and testers may not be able to achieve success.
Early staffing with the right team should not be resisted.
Early increments will be immature.
External stakeholders – customers and end-users – cannot expect initial deliveries to
perform up to specifications, to be complete, to be fully reliable, or to have end-target
levels of quality or performance.
Development organizations must demonstrate tangible improvement in successive
increments in a responsible way.
The trends will indicate convergence toward specification.
Objectively quantifying changes, fixes, and upgrades will indicate the quality of the
process and environment for future activities.
Management and the development team will accept immaturity as a natural part of the process; customers will be impressed by later increments even when the early flaws are difficult to accept.
Artifacts are less important early, more important later.
A baseline should be achieved that is useful and stable enough to warrant the time-consuming analyses of quality factors such as traceability, thoroughness, and completeness.
Until that baseline is achieved, such details need not be considered.
Otherwise, early engineering cycles and resources will be wasted in adding content and
quality to artifacts which would soon be obsolete.
The development team will embrace this transition; the traditional contract monitors will resist the early de-emphasis on completeness.
Real issues are surfaced and resolved systematically.
Requirements and designs evolve together, with continuous negotiation, trade-off, and
bartering toward best value.
This continuous negotiation should be recognized as essential to the success of a project.
The difference between real and apparent issues can easily be spotted on healthy
projects.
This culture shift could affect almost any team.
Quality assurance is everyone’s job, not a separate discipline.
Surfacing of early performance issues is a sign of an immature design but a mature design process.
Stakeholders will be concerned over early performance issues.
Development engineers will emphasize early demonstrations and the ability to assess and evaluate performance trade-offs in subsequent releases.
Quality assurance should be part of every role, every activity, and every artifact.
Quality assurance is measured by tangible progress and objective data, not by
checklists,
meetings, and human inspections.
The person responsible for the design or management should ensure that quality
assurance is integrated into the process.
Inspections by a separate team, as in the traditional process, are replaced by the self-assuring teamwork of an organization with a mature process, common objectives, and common incentives.
Performance issues arise early in the life cycle.
On every successful project, early performance issues arise.
These issues are a sign of an immature design but a mature design process.
In Figure 17-1 of the text, progress is defined as percent coded, that is, demonstrable in its target form.
Organizations that succeed should be capable of deploying software products that are constructed largely from existing components in 50% less time, with 50% fewer development resources, and maintained by teams 50% the size of those required by today's systems.
To avoid the risk of failure in transitioning to new techniques and technologies, the seemingly safe path is to maintain the status quo and rely on existing methods.
But where higher levels of success are demanded, maintaining the status quo is not really safe.
To make a transition, two points of wisdom from champions and change agents are:
1. Pioneer any new techniques on a small pilot program, and
2. Be prepared to spend more resources – money and time – on the first project that
makes
the transition.
But both recommendations are counterproductive.
Small pilot programs rarely achieve any paradigm shift of consequence.
Trying a new technique, tool, or method on a very rapid, small-scale effort can show good results, initial momentum, or proof of concept.
But pilot projects are rarely on the critical path of the organization, so the best teams, adequate resources, and management attention are seldom allocated to them.
Organizational paradigm shifts tend to result when an organization takes on its most critical project with its highest caliber personnel, allocates adequate resources, and demands better results.
Conversely, when an organization expects a new method, tool, or technology to have an adverse impact on the results of an innovative project, the expectation tends to come true, because the innovation is tried only on a non-critical project that is not allocated adequate resources.
A better way to transition to a more mature iterative development process that supports
automation technologies and modern architectures is to:
Ready. Do your homework.
Analyze modern approaches and technologies
Define/improve/optimize the process
Support it with mature environments, tools, and components
Plan thoroughly
Aim. Select a critical project
Staff it with the right team, adequate resources, and demand improved results
Fire. Execute the organizational and project-level plans with vigor and follow-through.
Questions on this chapter
1. List out and explain the culture shifts to be overcome while transitioning to modern
processes. [P 248-251]
2. Explain the issues in transitioning from conventional practices to modern iterative
methods, with reference to performance and the strategies to be adopted for transitioning.
[P 251-253, Fig 17-1/P 252]
PART – V CASE STUDIES AND BACKUP MATERIAL
APPENDIX D CCPDS-R CASE STUDY
This appendix presents a detailed case study of a successful software project.
The success here is in terms of being on budget, on schedule, and satisfying the customer.
The COMMAND CENTER PROCESSING and DISPLAY SYSTEM-REPLACEMENT
(CCPDS-R) project was performed for the U. S. Air Force by TRW Space and Defense in
Redondo Beach, California.
The project included systems engineering, hardware procurement, and software development, with each of these three components consuming roughly one third of the budget.
The schedule spanned 1987 to 1994.
The software effort included the development of three distinct software systems totaling more than one million SLOC.
This case study focuses on the initial software development, called the Common Subsystem, of about 355,000 SLOC.
The Common Subsystem effort also produced a reusable architecture, a mature process,
and an
integrated environment for efficient development of the two software subsystems of
similar size
that followed.
So, this case study represents about 1/6 of the overall CCPDS-R project effort.
D.1 CONTEXT FOR THE CASE STUDY
The data given here are derived from published papers, internal TRW guidebooks, and contract-deliverable documents.
D.2 COMMON SUBSYSTEM OVERVIEW
The CCPDS-R project produced a large-scale, highly reliable command and control
system that
provides missile warning information used by the National Command Authority.
The procurement agency was Air Force Systems Command Headquarters, Electronic
Systems
Division, at Hanscom Air Force Base, Massachusetts.
The primary user was US Space Command, and the full-scale development contract was
awarded
to TRW’s Systems Integration Groups.
The CCPDS-R contract called for the development of three subsystems:
1. The Common Subsystem was the primary missile warning system within the upgrade
program.
355,000 SLOC
48-month software development schedule
provided reusable components, tools, environment, process, and procedures for
the following subsystems
included a primary installation with a backup system
2. The Processing and Display Subsystem (PDS) was a scaled-down missile warning
display system for all nuclear-capable commanders-in-chief.
250,000 SLOC
fielded on remote, read-only workstations distributed worldwide
3. The STRATCOM Subsystem provided both missile warning and force management
capability for the backup missile warning center.
450,000 SLOC
Overall Software Acquisition Process
The CCPDS-R acquisition included two distinct phases:
① A concept definition (CD) phase, and
② A full-scale development (FSD) phase
The following figure summarizes the overall acquisition process and the products of each
phase:
The CD phase was similar in intent to an inception phase.
The primary products and events were:
♥ A system specification document – vision document
♥ An FSD phase proposal – a business case including the technical approach and a fixed-price-incentive, award-fee cost proposal
♥ A software development plan
♥ A system design review
♥ Technical interchange meetings with the stakeholders – customer and user
♥ Several contract-deliverable documents
These events and products enabled the FSD source selection to be based on demonstrated performance of the contractor-proposed team and on the FSD proposal.
From the software perspective, one source selection criterion included in the FSD proposal activities was a software engineering exercise.
This was a unique and effective approach for assessing the abilities of the competing contractors in software development.
The customer was concerned with the overall software risk of the project.
CCPDS-R was a large software development activity and one of the first to use the Ada
programming language.
There was apprehension about whether the Ada development environments, contractor processes, and contractor training programs were mature enough to use on a full-scale development effort.
The software engineering exercise was intended to address these apprehensions.
The software engineering exercise occurred immediately after the FSD proposals were
submitted.
The customer provided the bidders with a simple two-page specification of a “missile
warning
simulator” with some of the same fundamental requirements as the CCPDS-R system.
It included a distributed architecture, a flexible user interface, and the basic processing
scenarios
of a simple CCPDS-R missile warning thread.
[FIGURE D-1. CCPDS-R life-cycle overview.
Competitive design phase (inception): the CD phase ran 12 months under a firm-fixed-price contract; products included the vision, the business case, the software development plan, and the software engineering exercise; milestones were ISRR (Initial System Requirements Review) and ISDR (Initial System Design Review).
Full-scale development phase (elaboration – construction – transition): the FSD phase ran 48 months under a firm-fixed-price, award-fee contract; products included the 2167A software documents, six software configuration items, the major milestones, and a beta delivery (EOC); milestones were SRR (System Requirements Review), IPDR (Interim Preliminary Design Review), PDR (Preliminary Design Review), CDR (Critical Design Review), EOC (Early Operational Capability), and FQT (Final Qualification Test).]
The instantiation of the NAS generic tasks, processes, sockets, and circuits into a run-time infrastructure was called a software architecture skeleton (SAS).
The software engineering associated with the Common Subsystem SAS was the focus of
early
builds and demonstrations – an example of architecture-first process.
The SAS encompasses the declarative view of the solution, including all the top-level
control
structures, interfaces, and data types passed across these interfaces.
In the CCPDS-R definition of an architecture, the declarative view included the following:
♦ All Ada main programs
♦ All Ada tasks and task attributes
♦ All sockets (asynchronous task-to-task communications), socket attributes, and connections to other sockets
♦ Data types for objects passed across sockets
♦ NAS components for initialization, state management of processes and tasks, fault handling, interprocess communications, health and performance monitoring, instrumentation, network management, logging, and network control
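As a toy illustration of what such a declarative view contains (the names and the Python rendering are invented; CCPDS-R expressed its SAS in Ada):

    # Invented miniature of a software architecture skeleton (SAS): a purely
    # declarative description of processes, tasks, sockets, and the data types
    # passed across sockets -- no application logic.
    from dataclasses import dataclass, field

    @dataclass
    class Socket:
        name: str
        message_type: str   # data type of objects passed across the socket
        connects_to: str    # name of the peer socket

    @dataclass
    class Task:
        name: str
        sockets: list = field(default_factory=list)

    @dataclass
    class Process:
        name: str
        tasks: list = field(default_factory=list)

    primary = Process("mw_primary", tasks=[
        Task("sensor_input", sockets=[Socket("raw_out", "SensorMsg", "filter_in")]),
        Task("filter", sockets=[Socket("filter_in", "SensorMsg", "raw_out"),
                                Socket("display_out", "DisplayMsg", "display_in")]),
    ])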
Even though a SAS will compile, it executes meaningful scenarios only after software is added that reads messages, processes them, and writes them within the application tasks.
The purpose of a SAS is to provide the structure and interface network for integrating
components into threads of capability.
There are two aspects of SAS verification and assessment: (1) compilation, and (2)
execution.
Construction and compilation of all the SAS objects together is a non-trivial assessment
that
provides feedback about the consistency and quality of the SAS.
Constructing the components and executing the stimuli and response threads within the
SAS
provide further feedback about structural integrity and run-time semantics.
Then, the SAS provides the forum for integration and architecture evolution.
Constructing the SAS early helped evolve it into a stable baseline in which change was managed and measured for feedback about architectural stability.
CCPDS-R installed its first SAS baseline – after three informal iterations – around month 13, before the PDR milestone.
All subsequent changes were performed under rigorous configuration control.
The changes the SAS underwent after its first baseline were scrutinized. The SAS dynamics converged on an acceptable architecture with solid substantiation early in the life cycle.
So the SAS was useful in assessing the volatility in the overall software interfaces and
captured
the conceptual architecture of the Common Subsystem.
The following figure provides a perspective of the software architecture stability:
The graphs in the figure show that there was significant architectural change over the first
20
months of the project, after which the architecture remained stable.
[FIGURE D-3. Common Subsystem SAS evolution.
Four panels plot the numbers of processes, tasks, sockets, and message types against months, with the IPDR and PDR milestones marked, converging on final values of 65 processes, 251 tasks, and 1,148 message types.]
The large spike in processes and tasks around month 5 corresponded to an attempt at a
more
distributed approach. As this architecture experimentation exposed the design trade-offs
in
distribution strategies, the SAS process design was changed back to the original number
of
processes. The SAS task-level design converged on an increased number of tasks.
The basic problems being examined by the architecture team were the trade-offs in
concurrency,
operating system process overhead, run-time library tasking overhead, paging, context
switching,
and the mix of interprocess, inter-task, and inter-node message exchange.
The complexity of such run-time interactions made modeling and simulation ineffective.
Only the early run-time demonstrations of multiple distribution configurations allowed
the
architecture team to achieve the understanding of technical trade-offs necessary to select
an
adequate solution.
If the change in the distribution design had occurred late in the project, the impact could have been immense.
Because sockets and messages were simple to change and corresponded to lower level
application interfaces, changes to these numbers continued at a low level through the
critical
design review (CDR) milestone.
This experimentation helped in the achievement of an architecture baseline early in the
life cycle.
This was enabled by the flexibility of the NAS CSCI.
D.5 PROCESS OVERVIEW
CCPDS-R software development followed a Department of Defense life cycle with a
software
requirement review, preliminary design review, critical design review, and final
qualification test.
[FIGURE D-4. Overview of the CCPDS-R macroprocess, milestones, and schedule.
The CD phase (inception) covered requirements analysis, architecture analysis, and the building blocks. After contract award, the FSD phase proceeded through architecture synthesis and critical-thread demonstration (SRR demo at month 5; build 0 and IPDR demo at month 9; build 1 and PDR demo at month 14), architecture maintenance and application construction (CDR demo at month 24), and architecture and application maintenance through month 45 (FQT). Builds 0 through 5 covered, respectively: primitives and support software; architecture, test scenarios, and models; critical-thread algorithms, applications, and displays; non-critical-thread algorithms, applications, and displays; communications interfaces and final test scenarios; and the associate contractor interface. Builds overlapped, with completeness, maturation, and tuning continuing through the later builds.]
The figure above illustrates the mapping of the life cycles of the design phase and full-scale development phase onto the phases of the iterative process framework.
In all, six incremental builds were defined to manage the project; the figure also summarizes the build content and overlap.
The conclusion of each build corresponded to a new baseline of the overall Common
Subsystem.
From a macroprocess view, the initial milestones focused on achieving a baseline
architecture.
The PDR baseline required three major architecture iterations:
1. The Software Requirements Review (SRR) demonstration: initial feasibility of the
foundation components and basic use cases of initialization and interprocess
communications
2. The Interim Preliminary Design Review (IPDR) demonstration: the feasibility of the
architecture infrastructure under the riskiest use cases, including the following:
♦ A peak data load missile warning scenario of a mass raid
♦ A peak control load scenario of a system failover and recovery from the primary thread
of processing to a backup thread with no loss of data
3. The Preliminary Design Review (PDR) demonstration: adequate achievement of the
peak
load scenarios and the other primary use cases within a full-scale architectural
infrastructure, including the other critical-thread components
The CDR demonstration updated the architecture baseline.
This was equivalent to an alpha test for the complete architectural infrastructure and the critical-thread scenarios.
By providing a set of complete use cases it enabled the user to perform a subset of the
mission.
The CCPDS-R software process had a well-defined macroprocess.
Each major milestone was accompanied by a major demonstration of capability with
contributions from on-going builds.
The design process used was more incremental than iterative, although it was clearly both.
D.5.1 RISK MANAGEMENT: BUILD CONTENT
Planning the content and schedule of the Common Subsystem builds resulted in a useful
and
accurate representation of the overall risk management plan.
The build plan was thought out early in the inception phase by the management team.
The management team set the expectation for reallocating build content as the life cycle
progressed and more-accurate assessments of complexity, risk, personnel, and
engineering tradeoffs
were achieved.
There were several adjustments in the build-content and schedule as early conjecture
evolved
into objective fact.
The following figure illustrates the detailed schedule and CSCI content of the Common Subsystem:
FIGURE D-5(a). Common Subsystem builds (Part – 1): build size in KSLOC by CSCI

CSCI      Build 0   Build 1   Build 2   Build 3   Build 4   Build 5   Total
NAS           8         8         2         2         -         -      20
SSV           -        33        25       102         -         -     160
DCO           -         -        23        27        20         -      70
TAS           -         2         3         5         -         -      10
CMP           -         -         3         6         6         -      15
CCO           -         -         5        31        37         7      80
Totals        8        43        61       173        63         7     355
The details of the build content of the Common Subsystem are:
♦ Build 0. This build comprised the foundation components necessary to build a software
architecture skeleton.
The inter-task/interprocess communications, generic task and process executives, and
common error reporting components were included.
This build was also the conclusion of the research and development project executed in parallel with the CD (inception) phase.
These NAS components were the cornerstone of the architectural framework and were
built
to be reusable by all the CCPDS-R subsystems.
They represented very complex, high-risk components with stringent performance,
reliability, and reusability demands.
♦ Build 1. This was essentially the “architecture.”
It included a complete set of instantiated tasks [300], processes [70], interconnections
[1000],
states, and state transitions for the structural solution of the CCPDS-R software
architecture.
All the NAS components for initialization, state management (configuration), and
instrumentation were added to achieve a cycling architecture.
To support the initial demonstration, a trivial user interface and test scenario injection
capability were also added.
Upon completion of this build, only a few critical use cases were demonstrable:
initializing
the architecture, injecting a test scenario to drive the data flow through the system, and
orchestrating reconfigurations such as primary thread switchover to backup thread.
♦ Build 2. This was the first build of mission-critical components; it achieved the initial capability to execute real mission scenarios.
The three primary risks in the mission scenarios were: the timeliness of the display
database
distribution, the performance (resource consumption and accuracy) of the missile
warning
radar algorithms, and the performance of the user interface for several complex displays.
Upon completion of this build, several mission-oriented use cases could be executed, including the worst-case data processing thread and the worst-case control processing thread – primary-to-backup switchover.
♦ Build 3. This build contained the largest volume of code, including display format
definitions,
global type definitions, and representation specifications needed for validation of external
interface transactions.
Most of the voluminous code was generated automatically in a cookbook manner by
constructing code generation tools.
[FIGURE D-5(b). Common Subsystem builds (Part – 2).
A schedule chart of builds 0 through 5 (8, 43, 61, 173, 63, and 7 KSLOC, respectively; 355 KSLOC in total) against months after contract award (0 through 34), with the SRR, IPDR, PDR, and CDR milestones marked. Each build progressed through PDW (Preliminary Design Walkthrough), CDW (Critical Design Walkthrough), development and stand-alone test, and TOR (Turnover Review – turnover for configuration control); build 3 had two turnover reviews.]
[FIGURE D-6. Basic activity sequence for an individual build.
Each build progressed from structure (PDW, followed by an integrated structural demonstration), through behavior (CDW, followed by an integrated performance demonstration), to completeness and accuracy (TOR, with a baseline turnover assessment), and then into maintenance.]
The overall subsystem build plan was driven by allocating all reliability-critical components – those whose failures cause type 0 errors – to builds 0, 1, and 2.
The following figure illustrates the overall flow of test activities and test baselines
supporting
this build plan.
The sequence of baselines allowed maximum time for the early-build, critical-thread
components
to mature.
To increase confidence in their readiness for operational use, further extensive testing was done on these components.
Testing continued until an empirical software mean time between failures (MTBF) was demonstrable and acceptable to the customer.
The CCPDS-R build sequence and test program are good examples of confronting the
most
important risks first.
A stable architecture was also achieved early in the life cycle so that substantial reliability
testing
could be performed.
This strategy allowed useful maturity metrics to be established to demonstrate a realistic
software MTBF to the customer.
D.5.5 DOD-STD-2167A ARTIFACTS
CCPDS-R software development was required to comply with the DOD-STD-2167A documentation standard.
Data item descriptions in 2167A specified document format and content.
Substantial tailoring was allowed to match the development approach and to
accommodate the
use of Ada as a design language and also as the implementation language.
Dev/SAT: Development and Stand-Alone Test – component-level testing.
BIT: Build Integration Test – informal smoke testing in the integrated architecture.
EST: Engineering String Test – formal scenario test demonstrating requirements compliance.
[FIGURE D-7. Incremental baseline evolution and test activity flow.
Each build flowed from Dev/SAT into its baseline, then through BIT and EST as later builds were developed in parallel. Each subsequent build baseline provided a controlled configuration for: maintenance of stand-alone-tested components; testing of additional capabilities; regression testing of previous capabilities; and after-hours reliability stress testing.]
These demonstrations provided acute insight into the integrity of the architecture and its subordinate components, the run-time performance risks, and the understanding of the system's operational concept and key use cases.
Lessons from the design walkthroughs and their demonstrations were tracked via action
items.
Major milestone design reviews provided both a briefing and a demonstration.
The briefing summarized the overall design and the important results of the design
walkthroughs,
and presented an overview of the demonstration goals, scenarios, and expectations.
The demonstration at the design review was a culmination of the real design review
process.
The sequence of demonstration activities included
♦ the development of a plan
♦ definition of a set of evaluation criteria
♦ integration of components into an executable capability
♦ generation of test drivers, scenarios, and throw-away components
The demonstration plans were not elaborate, yet, for each demonstration, they captured
the purpose of the demonstration
the evaluation criteria for assessing the results
the scenarios of execution
the overall hardware and software configuration
A modern demonstration-based approach frequently starts with a pessimistic assessment
and
then gets better.
The following key lessons were learned in the CCPDS-R demonstration activities:
Early construction of test scenarios has a high ROI.
♦ The early investment in building some of the critical test scenarios served two invaluable purposes:
It forced implementation of an important subset of the requirements into a tangible form.
These test scenarios caused several interactions with the users that increased the
understanding of requirements early in the life cycle.
These implementation activities got the test team involved early in building an
environment for demonstration and testing that was highly mature by the time the
project reached full-scale testing.
Demonstration planning and execution expose the important risks.
♦ Negotiating the content of each demonstration and the associated evaluation criteria
served to focus the architecture team, management team, and other stakeholders on the
critical priorities of the early requirements and architecture activities.
♦ Instead of dealing with the full elaboration and traceability of all 2,000 requirements, the team focused on understanding the 20 or so design drivers.
Demonstration infrastructure, instrumentation, and scaffolding have a high ROI.
♦ Initially, there was a concern that these demonstrations would require a significant
investment in throw-away components which were only needed for demonstration.
♦ In most cases, very little of this work ended up being thrown away.
♦ Most components were reused in stand-alone tests, build integration tests, or
engineering
string tests.
♦ As a benchmark of the level of throw-away components, the IPDR demonstration amounted to 72,000 SLOC. Of this, only about 2,000 SLOC – smart stubs and dummy messages – were thrown away.
Demonstration activities expose the crucial design trade-offs.
♦ The integration of the demonstration provided timely feedback on the important design
attributes and the level of design maturity.
♦ The demonstration efforts required 10 to 12 designers integrating components into the
architecture.
♦ They ran into a number of obstacles, built many workarounds, and performed several component redesigns and a few architecture redesigns.
♦ Most of this work took place over a month, much of it late at night.
♦ This late-night work was very detailed integration-debug-rebuild-redesign activity, and it acted as a very effective design review.
♦ It gave a first-hand understanding of the architectural strengths and weaknesses, the mature and fragile components, and the priorities for post-demonstration improvements.
The first three months of planning – encompassing a draft plan, government review and
comment, and final plan production – could be achieved with a collaborative team of all
interested stakeholders.
The review sequence was a contractual requirement.
This demonstration was the first attempt at constructing a full-scale SAS.
So, this was the first major integration effort for the Common Subsystem.
The subsequent demonstrations were shorter, but equally intense integration activities.
IPDR Demonstration Scope
The basic scope of the IPDR demonstration was defined in the CCPDS-R statement of
work:
The contractor shall demonstrate the following capabilities at the NORAD Demo 1:
system initialization, system failover and recovery, system reconfiguration, and data
logging.
The customer and TRW understood these capabilities.
The capabilities represented the key components and use cases necessary to meet the
objectives.
1. System services – interprocess communication services, generic applications control
(generic task and process executives), NAS utilities (list routines, name services, string
services), and common error reporting and monitoring services – were the general utility
software components of NAS.
These services were to be reused across all three subsystems.
They were the foundation of the architectural infrastructure.
They were the building blocks needed to demonstrate an executable thread.
2. Data logging (SSV CSCI) was a capability to instrument some of the results of the
demonstration and was a performance concern.
3. Test message injection (TAS CSCI) components permitted messages to be injected
into
an object in the system to provide a general test driver capability.
4. System initialization was the fundamental use case (called phase 1 in Figure D-8) that
would illustrate the existence of a consistent software architecture skeleton and error-free
operation of a substantial set of the system services.
A performance risk was the requirement of initializing a large distributed software
architecture – both custom and commercial components – within a given time.
5. The second scenario (phase 2) was to inject the peak message traffic load into the
architecture and cause all the internal message traffic to cascade through the system in a
realistic way.
Executing this scenario required that all the software objects be “modeled” with simple and smart message processing stubs.
These simple Ada programs completed the thread with dummy message traffic, reading and writing messages as expected under peak load.
Prototype message processing software was constructed to accept incoming messages and
forward them through the strings of components of the SAS.
This included all significant expected traffic, from receipt of external sensor messages to
missile warning display updates.
It also included all overhead traffic associated with status monitoring, error reporting,
performance monitoring, and data logging.
6. System failover and recovery (phase 3) was a risky scenario as it required a very
sophisticated set of state management and state transition control interfaces to be
executed across a logical network of hundreds of software objects.
The basic operation of this use case was to inject a simulated fault into a primary thread
operational object to exercise the following sequence of events:
♦ Fault detection
♦ Fault notification
♦ Orchestrated state transition from primary thread to backup thread
♦ Shutdown of primary thread
All these network state transitions needed to occur without interruption of service to the
missile warning operators.
Reconfiguration meant recovering from a degraded mode.
Following the system failover defined above, a new backup thread would be initialized so that there was minimum exposure to single-point failures. In the delivered system, repair immediately followed failover.
There are several instances of progress metrics and quality indicators of scrap, rework,
and
maturity.
The basis for automation required some interesting technical approaches embedded
directly in
the evolving design and code artifacts.
D.7.1 DEVELOPMENT PROGRESS
Measuring development progress accurately with several concurrent builds in various
states was
a complex undertaking for the Common Subsystem management team.
A consistent approach providing accurate insight into subsystem-level status and build
status was
devised.
The goal was a balanced assessment that included the following:
Ada/ADL metrics provided good insight into the direct indicators of technical progress.
They were accurate at depicting true progress in design and implementation but weak at depicting completed contract deliverables and financial status.
Earned value metrics provided good insight into financial status and contract deliverables.
They were weak indicators of true technical progress.
Like other software metrics, these two perspectives were initially inaccurate assessments of absolute progress, but they were excellent assessments of relative progress when tracked periodically.
With more experience, the absolute assessments also became well-tuned predictors of success or risk.
The following chart illustrates the overall assessment:
The above figure depicts the top-level progress summary for each build and for the
Common
Subsystem as a whole.
The length of the shading within each build relative to the dashed line – current month –
identifies whether progress was ahead of or behind schedule.
[FIGURE D-9. Development progress summary.
The build schedule chart of Figure D-5(b) – builds 0 through 5 with their PDW, CDW, and TOR milestones against months after contract award – annotated with shaded progress bars for Common Subsystem design and Common Subsystem SAT status at time = month 17.]
The shading was a judgment by the software chief engineer, who combined the monthly
progress
metrics and the monthly financial metrics into a consolidated assessment.
Monthly collection of metrics provided detailed management insight into build progress,
code
growth, and other indicators.
To provide multiple perspectives, the metrics were collected by build and by CSCI.
Individual CSCI managers collected and assessed their metrics before the metrics were
incorporated into a project-level summary.
This process was objective, efficient, and meaningful.
The lowest level estimates of TBD_Statements were subjective; they were determined by the most knowledgeable people, the designers.
The estimates were maintained in the evolving source code itself, the format the designers used daily, which increased the likelihood that the artifact would be kept up to date.
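Because the estimates lived in the source code itself, collecting them could be automated; the sketch below shows one way such a collector might work (the directory layout, file extensions, and TBD_ marker convention are assumptions for illustration):

    # Count TBD_Statements markers embedded in evolving Ada source files.
    import os

    def count_tbd_statements(root: str, extensions=(".ada", ".ads", ".adb")) -> int:
        total = 0
        for dirpath, _, filenames in os.walk(root):
            for name in filenames:
                if name.endswith(extensions):
                    path = os.path.join(dirpath, name)
                    with open(path, errors="ignore") as source:
                        total += source.read().count("TBD_")
        return total

    # Example: print(count_tbd_statements("common_subsystem/src"))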
This process also assured consistent and uniform communication of progress across the
project.
The following figure illustrates the monthly progress assessments for the Common
Subsystem
and each build:
[FIGURE D-10. Common Subsystem development progress.
The left panel plots individual build progress (percent coded, 0 to 100%) for builds 0 through 5 against contract months 3 through 30. The right panel plots overall Common Subsystem progress, actuals against plan, with the IPDR, PDR, and CDR milestones marked.]
In the above figure, the planned evolution was based on weighting the SLOC counts for each build, with the guideline that a build is 30% done by PDW and 70% done by CDW.
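A sketch of that weighted-average computation, using the build sizes from Figure D-5 (the per-build completion states below are an invented snapshot, not project data):

    # Subsystem progress as the SLOC-weighted average of build progress.
    # Guideline: a build counts as 30% done at PDW and 70% done at CDW.
    build_ksloc    = {0: 8, 1: 43, 2: 61, 3: 173, 4: 63, 5: 7}         # 355 total
    build_progress = {0: 1.0, 1: 1.0, 2: 0.7, 3: 0.3, 4: 0.0, 5: 0.0}  # invented

    total_ksloc = sum(build_ksloc.values())
    subsystem_progress = sum(
        build_ksloc[b] * build_progress[b] for b in build_ksloc
    ) / total_ksloc
    print(f"{subsystem_progress:.0%}")  # ~41% coded at this snapshot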
Overall, the Common Subsystem performed very close to its plan, with one exception: the progress achieved at IPDR reflected the unexpected positive impact of the source code generation tools, particularly the SAS generation of 50,000+ SLOC.
Performance against plans varied for the individual builds. Each build tracked its plan
fairly well.
The progress of the subsystem and each build was assessed monthly with internal
management
and the customer in the project management reviews.
This objective approach was a key contributor to the non-adversarial relationship that evolved among all the stakeholders.
D.7.2 TEST PROGRESS
The test organization was responsible for build integration tests and requirements
verification
testing – SATs, ESTs, and FQT.
Build integration testing (BIT) was less effective in uncovering problems.
BITs were to be a complete set of integration test procedures from the most basic
capability to
off-nominal boundary conditions.
Most of this work was redundant with demonstration integration efforts.
So, the BITs were frequently redundant with demonstration preparation and were less cost-effective than they would have been had the demonstration preparation activities been combined with BIT and made a responsibility of the test organization.
The following table summarizes the build 2 BIT results:

TABLE D-6. SCO characteristics for build 2 BIT testing

PROBLEM SOURCE                         MINOR      MODERATE   MAJOR     TOTAL
                                       (<1 hour)  (<1 day)   (>1 day)
Requirement interpretation                 5          -          -        5
Inadequate stand-alone test                3          4          2        9
Interface problem                          9          2          1       12
Inadequate performance                     1          -          -        1
Desired enhancement (not a problem)        3          -          -        3
Inconsistent configuration                 3          2          -        5
Total                                     24          8          3       35
The entries in the above table reflect a highly integrated product state.
Considerable effort was allocated to BIT planning, preparation, and conduct.
Merging the demonstration preparation and BIT activities would have enabled more integration before turnover and more efficient regression testing after turnover, to ensure that all previous issues were resolved.
The following table and figure provide perspectives on the progress metrics used to plan
and
track the CCPDS-R test program:
TABLE D-7. Requirements verification work by test type and CSCI

Test type         NAS    SSV    DCO    TAS    CMP    CCO    QPR    TOTAL
Build 0/1 SAT      42      -      -      5      -      -      -       47
Build 2 SAT        11     52     63     15     12      -      -      153
Build 3/4/5 SAT     -     65     62     18    198     46      -      389
EST 1/2           131     39     77     94      -      -      -      341
EST 3              32     49    117     42      -      -      -      240
EST 4              16    172    219      5      4      6      -      422
EST 5/FQT           5    105     84     42     54    207     46      543
Totals            237    482    622    221    268    259     46    2,135
The figure plots the progress against the plan for requirements verification tests.
SATs, ESTs, and FQTs were sources of test cases used.
SATs were the responsibility of the development teams, executed in the formal
configuration
management environment and peer-reviewed by the test personnel.
ESTs consisted of functionally related groups of scenarios that demonstrated
requirements
spanning multiple components.
FQTs were tests for requirements compliance that were not demonstrated until a
complete
system existed.
Quantitative performance requirements (QPRs) spanned all CSCIs.
D.7.4 MODULARITY
The following figure identifies the total breakage as a ratio of the entire software
subsystem:
This metric identifies the total scrap generated by the Common Subsystem software development process as about 25% of the whole product, whereas industry-average software scrap runs in the 40% to 60% range.
The initial configuration management baseline was established around the time of PDR (month 14).
Thereafter, 1,600 discrete changes were processed against the configuration baseline.
D.7.5 ADAPTABILITY
Overall, about 5% of effort was expended in rework activities against software baselines.
The average cost of change was about 24 hours per SCO.
These values demonstrate the ease with which the software baselines could be changed; the level of adaptability achieved by CCPDS-R was four times better than that of the typical project.
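As simple arithmetic on these figures (a sketch assuming the 24-hour average applies uniformly across all 1,600+ SCOs, and an assumed 2,000 staff-hours per staff-year):

    # Rough rework volume implied by the adaptability figures.
    scos = 1600                      # discrete changes against the baseline
    avg_hours_per_sco = 24           # average cost of change
    total_rework_hours = scos * avg_hours_per_sco     # 38,400 hours
    staff_years = total_rework_hours / 2000           # assumed hours per staff-year
    print(total_rework_hours, round(staff_years, 1))  # 38400 19.2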
[FIGURE D-13. Common Subsystem modularity.
Cumulative SLOC broken into closed rework and currently open rework, plotted against contract months 5 through 45: about 25% of all SLOC were scrapped and reworked after their initial baseline.]
[FIGURE D-14. Common Subsystem adaptability.
Average hours per SCO plotted against the Common Subsystem schedule, with the PDR (month 14), CDR (month 24), and FQT (month 48) milestones marked. Design changes are architecture changes that typically span multiple components and teams; implementation changes are pre-FQT changes, typically isolated to a single component and team; maintenance changes are out-of-scope changes performed under separate contract.]
The above figure plots the average cost of change across the Common Subsystem
schedule.
The 1,600+ SCOs processed against the evolving configuration baseline by FQT resulted in a stable cost of change.
CCPDS-R proved to be a counterexample to the maxim “the later you are in the life cycle, the more expensive things are to fix.”
Most of the early SCO trends were changes that affected multiple people and multiple
components – design changes in the above figure.
The later SCO trends were usually localized to a single person and a single component –
implementation changes in the above figure.
The final phase of SCOs reflected an uncharacteristic increase in breakage, the result of a large engineering change proposal that completely changed the input message set of the Common Subsystem.
This area turned out to be more difficult than anticipated.
Although the design was robust and adaptable for a number of premeditated change
scenarios, an
overhaul of the message set was never foreseen nor accommodated in the design.
D.7.6 MATURITY
CCPDS-R had a specific reliability requirement with a specific allocation in the software.
The independent test team constructed an automated test suite to exercise the evolving
software
baseline with randomized message scenarios.
Extensive testing was done under realistic conditions to substantiate software MTBF in a
credible way.
The reliability-critical components were subjected to the most reliability stress testing.
This plan ensured early insight into maturity and software reliability issues.
The following figure illustrates the results:
With modern distributed architectures, statistical testing serves two purposes: (1) it ensures maximum coverage, and (2) it uncovers significant issues such as races, deadlocks, resource overruns, memory leakage, and other Heisenbugs (uncertainty conditions).
Overall system integrity can be tested by executing randomized and accelerated scenarios for long periods of time.
[FIGURE D-15. Common Subsystem maturity.
MTBF (hours) plotted against cumulative test hours: builds 0, 1, and 2 achieved a mean time between critical failures of 108,528/4 = 27,132 hours for the reliability-critical components.]

Test suite    Software builds      Test hours    Critical failures    Cumulative failures
4             0, 1, 2, 3, 4, 5         19,400            2                   17
3             0, 1, 2, 3, 4            23,068            2                   17
2             0, 1, 2, 3, 4            20,600            2                   18
1             0, 1, 2                 108,528            4                   26
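The MTBF computation behind these numbers is straightforward (values transcribed from the table above):

    # Empirical MTBF per test suite: test hours divided by critical failures.
    test_suites = {
        # suite: (test_hours, critical_failures)
        1: (108528, 4),
        2: (20600, 2),
        3: (23068, 2),
        4: (19400, 2),
    }
    for suite, (hours, failures) in sorted(test_suites.items()):
        print(f"suite {suite}: MTBF = {hours / failures:,.0f} hours")
    # Suite 1 (builds 0-2, reliability-critical components):
    # 108,528 / 4 = 27,132 hours, matching Figure D-15.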
D.7.7 COST/EFFORT EXPENDITURES BY ACTIVITY
The following table provides the overall cost breakdown for the CCPDS-R Common
Subsystem:
TABLE D-8. Common Subsystem cost expenditures by top-level WBS element

WBS ELEMENT                           COST (%)   ACTIVITIES AND ARTIFACTS
Management and administration              9     Deliverable plans, administrative support, financial administration, customer interface contracts, overall control and leadership
Process/product specification              7     Technical requirements, demonstration plans and evaluation criteria, iteration plans, software process, metrics analysis
Software engineering                      11     Architecture engineering, design walkthrough coordination, NAS CSCI development, metrics definition and assessment, demonstration planning and integration
Development                               38     Development, testing, documentation, and maintenance of application components
Testing, assessment, and deployment       24     Release management; formal test preparation, conduct, and reporting; test scenario development; change management; deployment
Infrastructure                            11     System administration, hardware and software resources, toolsmithing, tool integration
Total software activities                100     All cost expenditures, including hardware and software tools (in the infrastructure element), travel, and other direct costs
These data were extracted from the final WBS cost collection runs.
The next-level elements are described in the following table:
TABLE D-9. Common Subsystem cost expenditures by lower level WBS element

WBS ELEMENT                                      COST (%)   ACTIVITIES AND ARTIFACTS
Software project management                          6      Customer interface, contracts, administration
Software engineering                                 5      Requirements coordination, chief engineer
Specifications                                       4      CSCI SRS development
Demonstrations                                       3      Plans, integration, reports
Tools/metrics                                        3      Tools, metrics collection
NAS CSCI                                             3      Middleware, 20 KSLOC
Integration and test management                      4      Test coordination, management
BIT testing                                          3      Integration smoke testing
EST testing                                          9      Formal test plans, testing, reports
FQT testing                                          6      Formal test plans, testing, reports
Configuration management and test-bed control        3      Release management, integration
Environment                                         11      Hardware, software, system administration
Development management                               5      CSCI applications management
SSV CSCI                                            11      Architecture, system software, 160 KSLOC
DCO CSCI                                             9      Display interface applications, 70 KSLOC
CCO CSCI                                             9      Communications applications, 80 KSLOC
TAS CSCI                                             2      Test and exercise applications, 10 KSLOC
CMP CSCI                                             4      Mission algorithm applications, 15 KSLOC
Total software activities                          100      All software-related expenses
The following are some noteworthy data points:
Some of the elements in table D-9 were split across elements in table D-8 to extract the
activities at the project management level.
The overall test team effort is relatively low compared with that of projects using a conventional process.
The main reason is that the architecture team delivered an integrated software product
to the test and assessment team, which was responsible for testing the integrated quality
of the evolving product.
CCPDS-R used an efficient environment representing 11% of the total cost of the
effort.
Overall maintenance – total rework effort expended – was only 5% of the total cost. It
was
tracked in the individual CSCI WBS elements, though not explicitly specified.
This definition treats declarative (specification) design more sensitively than it does executable (body) design.
Although not a perfect definition, it provided a consistent and adequate measure.
Two components were responsible for the change in the definition of SLOC:
The SAS packages in SSV contained a network definition consisting of all the process
definitions, task definitions, socket definitions, and socket connections.
These packages contained a number of record definitions, custom-enumerated types, and
record and array field initializations in specification parts.
The source code for these elements consisted of more than 50,000 cariiage returns but
only a few hundred semicolons.
As the engineering effort involved with these packages was much more like the effort
associated with 50,000 SLOC, there was a need to change.
The second component, with similar rationale, was the same global message types.
These packages – numbering about 300 different record types – represented the majority
of data exchanged across SAS objects.
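To make the two counting conventions concrete, here is a minimal Python sketch contrasting a semicolon count with a carriage-return (physical line) count; the file name is hypothetical:

    # A minimal sketch of the two SLOC counting conventions discussed
    # above: semicolons (roughly, statements) versus carriage returns
    # (physical lines). The file name below is hypothetical.
    def sloc_counts(path):
        with open(path) as f:
            text = f.read()
        return {
            "semicolons": text.count(";"),
            "carriage_returns": text.count("\n"),
        }

    # For a declaration-heavy package such as the SAS network definition,
    # this could report a few hundred semicolons but over 50,000 lines:
    # print(sloc_counts("sas_network_definition.ada"))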
To allocate budgets properly and to compare the productivities of different categories, a method for normalizing them was devised.
The result was an extension of the COCOMO technique for incorporating reuse, called equivalent SLOC (ESLOC). ESLOC converts the standard COCOMO measure of SLOC into a normalized measure that is comparable on an effort-per-line basis. The need for this new measure arises in budget allocation and productivity analysis for mixtures of newly developed, reused, and tool-produced source code. For example, a 10,000-SLOC display component that is automatically produced from a tool by specifying 1,000 lines of display formatting script should not be allocated the same budget as a newly developed 10,000-SLOC component.
The following table defines the conversion of SLOC to ESLOC on CCPDS-R:
TABLE D-11. SLOC-to-ESLOC conversion factors
SLOC FORMAT | DESIGN (NEW = 40%) | IMPLEMENT (NEW = 20%) | TEST (NEW = 40%) | ESLOC
Commercial | 0% | 0% | 0% | 0%
New | 40% | 20% | 40% | 100%
Reused | 20% | 5% | 30% | 55%
Automated | 0% | 0% | 40% | 40%
Tool input | 30% | 10% | 10% | 50%
The rationale for these conversion factors included the following:
Commercial off-the-shelf components do not contribute to the ESLOC count. The effort to integrate these components scales up with the amount of newly developed interfacing software.
New software must be developed from scratch. It requires complete design, implementation, and test efforts, and therefore carries an ESLOC multiplier of 100% (a 1-to-1 conversion).
Reused components represent code previously developed for a different application that must be modified to suit the current application. Reused software was judged to require 50% of the design effort, 25% of the implementation effort, and 75% of the test effort. Normalized across the 40/20/40 allocation for new software, this results in a total of 55%. This conversion is a simple rule of thumb rather than a method to be applied to individual instances.
Automated components require a separate source notation (the tool input format) as input to a tool that automatically produces the resulting SLOC. Automated source code becomes part of the end product; hence it needs to be fully tested, while the design and implementation effort is set to zero.
If the automation tool itself is to be developed, its SLOC count should be included in the new category. The resulting conversion factor is a 40% SLOC-to-ESLOC ratio.
Tool input can take on many diverse forms. CCPDS-R had input files for the architecture definition, display definitions, and message validation. These higher level abstraction formats were converted using 75% of the design effort, 50% of the implementation effort, and 25% of the test effort. The resulting conversion factor is a 50% SLOC-to-ESLOC ratio.
The development of a few code production tools reduced the total ESLOC of the Common Subsystem by 78,000 lines, as indicated in the table below (a small conversion sketch follows the table):
TABLE D-12. Common Subsystem CSCI sizes in ESLOC
CSCI | DELIVERED SLOC | TOOL-PRODUCED | TOOL INPUTS | DEVELOPED TOOLS | SIZE (ESLOC)
NAS | 20,000 | - | - | - | 20,000
SSV | 160,000 | 140,000 | 20,000 | 15,000 | 101,000
DCO | 70,000 | 18,000 | 6,000 | 6,000 | 68,800
TAS | 10,000 | 4,000 | - | - | 7,600
CMP | 15,000 | - | - | - | 15,000
CCO | 80,000 | 40,000 | 12,000 | 3,000 | 65,000
Totals | 355,000 | 202,000 | 38,000 | 24,000 | 277,400
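As a study aid, the following minimal Python sketch applies the Table D-11 factors. The category split assumed here for SSV (new code = delivered minus tool-produced, with developed tools counted as new) reproduces the 101,000-ESLOC entry above:

    # A minimal sketch applying Table D-11's SLOC-to-ESLOC factors.
    FACTORS = {
        "commercial": 0.00,
        "new":        1.00,
        "reused":     0.55,
        "automated":  0.40,   # tool-produced code, fully tested
        "tool_input": 0.50,   # higher-level input scripts to generators
    }

    def esloc(sloc_by_category):
        """Convert a {category: SLOC} breakdown into equivalent SLOC."""
        return sum(FACTORS[cat] * sloc
                   for cat, sloc in sloc_by_category.items())

    ssv = {
        "new":        20_000,   # 160,000 delivered - 140,000 tool-produced
        "automated": 140_000,   # tool-produced source
        "tool_input": 20_000,   # generator input files
    }
    developed_tools = 15_000    # counted as newly developed code

    print(esloc(ssv) + developed_tools)   # 101000.0, matching Table D-12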
ESLOC was analyzed solely to ensure that the overall staffing and budget allocations, negotiated with each CSCI lead, were relatively fair. These ESLOC estimates were input to cost modeling analyses that incorporated the relative complexity of each CSCI and other COCOMO effort adjustment factors. This code counting process provided a useful perspective for discussing several of the engineering trade-offs being evaluated. After the first year, the SLOC counts were stable and well correlated with the schedule estimating analyses performed throughout the project life cycle.
CCPDS-R illustrates why SLOC is a problematic metric for measuring software size, and at the same time it is an example of a complex system in which SLOC metrics worked very effectively. This section on software size is a good example of the issues associated with transitioning to component-based development. Projects can and must deal with heterogeneous measurements of size, but there is no industry-accepted approach. Project managers therefore need to analyze such important metrics definitions carefully.
D.8.2 SUBSYSTEM PROCESS IMPROVEMENTS
Real process improvements should be evident in subsequent project performance. CCPDS-R is a perfect case study for illustrating this trend, as it comprised three separate subsystem projects. The Common Subsystem subsidized much of the groundwork for the PDS and STRATCOM subsystems, namely the process definition, the tools, and the reusable architecture primitives. With each successive subsystem, productivity and quality improved significantly. This is the expectation for a mature software process such as the one developed and evolved on CCPDS-R.
The CCPDS-R subsystems had consistent measures of human-generated SLOC and homogeneous processes, teams, and techniques, making comparison of productivities possible. Cost per SLOC was taken as the normalized unit of measure for comparing productivities, with the relative costs among subsystems being the most relevant. The PDS Subsystem was delivered at 40% of the cost per SLOC of the Common Subsystem, and the STRATCOM Subsystem at 33%. This is one of the real indicators of a level 3 or level 4 process.
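The normalization itself is simple arithmetic. The sketch below uses entirely hypothetical cost and size figures (not CCPDS-R actuals), chosen only so the output reproduces the ratios quoted above:

    # Hypothetical figures only, not CCPDS-R actuals: normalizing each
    # subsystem's cost per SLOC against the Common Subsystem.
    subsystems = {              # name -> (cost in $K, human-generated SLOC)
        "Common":   (30_000, 355_000),
        "PDS":      ( 4_100, 122_000),
        "STRATCOM": ( 9_200, 330_000),
    }

    base_cost, base_sloc = subsystems["Common"]
    base = base_cost / base_sloc
    for name, (cost, sloc) in subsystems.items():
        print(f"{name}: {(cost / sloc) / base:.0%} of Common's cost/SLOC")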
The following table summarizes the SCO traffic across all CSCIs at month 58:
TABLE D-13. CCPDS-R subsystem changes by CSCI
CSCI | TOTAL SCOs | OPEN SCOs | CLOSED SCOs | REJECTED SCOs | AVERAGE SCRAP (SLOC/SCO) | AVERAGE REWORK (HOURS/SCO)
Common Subsystem
NAS | 236 | 1 | 197 | 38 | 30 | 15
SSV | 1,200 | 16 | 1,004 | 180 | 24 | 16
DCO | 526 | 10 | 434 | 82 | 30 | 15
TAS | 255 | 0 | 217 | 38 | 40 | 11
CMP | 123 | 2 | 105 | 16 | 24 | 35
CCO | 435 | 1 | 406 | 28 | 64 | 22
PDS Subsystem
PSSV | 297 | 11 | 231 | 55 | 25 | 8
PDCO | 167 | 10 | 126 | 31 | 25 | 21
PCO | 73 | 0 | 72 | 1 | 20 | 10
STRATCOM Subsystem
SSSV | 531 | 30 | 401 | 100 | 18 | 10
SDCO | 339 | 11 | 286 | 42 | 16 | 14
STAS | 60 | 0 | 50 | 10 | 20 | 9
SMP | 327 | 17 | 299 | 10 | 30 | 9
SCO | 180 | 1 | 160 | 19 | 40 | 8
SCG | 61 | 6 | 51 | 4 | 85 | 27
Other
Support | 648 | 2 | 546 | 100 | Not tracked | Not tracked
Test | 376 | 1 | 356 | 19 | Not tracked | Not tracked
Operating system/vendor | 223 | 13 | 161 | 49 | Not tracked | Not tracked
Totals | 6,056 | 132 | 5,102 | 822 | 32 | 13
By the 58th month, the Common Subsystem was beyond its FQT and had processed a few SCOs in a maintenance mode to accommodate engineering change proposals. The PDS and STRATCOM subsystems were in their test phases. For completeness, the table provides entries for support, test, and operating system/vendor changes: support included code generation tools, configuration management tools, metrics tools, and standalone test drivers; test included software drivers used for requirements verification.
Table D-13 shows that the values of the modularity metric (average scrap per change) and the adaptability metric (average rework per change) were generally better in the subsequent subsystems than they were in the Common Subsystem. The only exception was the SCG CSCI, a special communications capability needed in the STRATCOM Subsystem; it had no counterpart in the other subsystems and was uniquely complex.
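A minimal sketch of how these two indicators can be computed from change records follows; the record format is an assumption for illustration, and the sample values are drawn from Table D-13's per-CSCI averages:

    # A minimal sketch of the two quality indicators discussed above:
    # modularity = average scrap per change (SLOC/SCO) and
    # adaptability = average rework per change (hours/SCO).
    from dataclasses import dataclass

    @dataclass
    class ChangeOrder:          # one closed software change order (SCO)
        csci: str
        scrap_sloc: int         # SLOC broken/changed by the fix
        rework_hours: float     # effort to analyze and resolve

    def modularity(scos):       # average SLOC per SCO; lower is better
        return sum(c.scrap_sloc for c in scos) / len(scos)

    def adaptability(scos):     # average hours per SCO; lower is better
        return sum(c.rework_hours for c in scos) / len(scos)

    closed = [ChangeOrder("SSV", 24, 16), ChangeOrder("DCO", 30, 15),
              ChangeOrder("CCO", 64, 22)]
    print(modularity(closed), adaptability(closed))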
CCPDS-R demonstrated the true indicator of a mature process: with each subsequent subsystem, performance, as measured by quality, productivity, and time to market, improved. CCPDS-R was subjected to a number of SEI software capability evaluations over its lifetime, and the project's process maturity contributed to a level 3 or higher assessment. These performance improvements were not due solely to a mature process, however. Stakeholder teamwork and project investments in architecture middleware and process automation were equally important to overall project success.
DCO was fairly average on all counts but accommodated substantial requirements volatility in the display interface without a contract amendment. The design of this CSCI and the performance of the team were better than the numbers would indicate.
TAS had low productivity despite being the simplest and most well-understood software. The main reason was that the plan for TAS resources was less ambitious than the plans for other teams. Another reason was that the TAS team was located off-site, with highly constrained development environment resources.
CMP had a high cost of change and low productivity, though not for any technical reason. To ensure technical integrity, changes to the inherent missile warning algorithms were closely scrutinized by many stakeholders. Coordinating this process resulted in high overhead in CMP productivity and change processing.
CCO had the worst quality metrics. This was due to a design that did not foresee a major message set change and that therefore resulted in broad and hard-to-resolve breakage. The CCO team was also the most difficult to transition, culturally, to the process, metrics, and demonstration approach used on CCPDS-R.
Overall, this level of productivity and quality was approximately double TRW's standard for previous command center software projects.
D.9 PEOPLE FACTORS
CCPDS-R used two unique approaches to managing its people:
1. The core team concept, which focused on leveraging the skills of a few experts across the entire team.
2. The attrition-avoiding strategy: TRW management instituted an award fee flow-down program as an incentive for people to remain on the project for the long term.
As a result, there was very little attrition of people across the Common Subsystem, and also during the PDS and STRATCOM subsystems, as they overlapped enough with the Common Subsystem.
D.9.1 CORE TEAM
The core team of the CCPDS-R software organization was established early in the concept definition phase to deal explicitly with the important 20% of the software engineering activities that carry a high return on investment. In particular, the core team, with fewer than 10 members, was responsible for the following:
1. Developing the highest leverage components, mostly within the NAS CSCI. These components resolved the difficult computer science issues such as real-time scheduling, interprocess communications, run-time configuration management, error processing, and distributed systems programming. Encapsulating these complex issues in a small number of high-leverage components resulted in the mainstream components becoming simpler and less dependent on experts.
2. Setting standards and procedures for design walkthroughs and software artifacts. The core team represented the frontline pioneers for most of the software activities, conducting each project workflow first or building the first version of most artifacts. The core team was thus intimately involved in setting the precedents for the standards of activities and for the formats and contents of artifacts.
3. Disseminating the culture throughout the software organization. The core team was a single, tight-knit team during the inception phase and most of the elaboration phase.
As the process and architecture stabilized, the team started to migrate, with several of its
members taking on technical leadership roles on the various development and assessment
teams.
During construction and transition, a few members of the core team maintained the
architecture integrity across the entire project.
These team and personnel transitions were the mechanism for maintaining a common
culture.
D.9.2 AWARD FEE FLOW-DOWN PLAN
TRW management and the government customer were concerned about recruiting and retaining a stable, quality software team for the CCPDS-R project. The project also needed to obtain and develop Ada experience, which was a scarce resource at the time of CCPDS-R's inception. TRW therefore proposed an innovative profit-sharing approach to enhance the project's ability to attract and retain a complementary team.
The basic premise of the CCPDS-R award fee flow-down plan was that employees would share in the profitability of the project. [Award fees are contract payments above the cost basis, tied to project performance against predefined criteria.] It was agreed that a portion of the award fee pool at each major milestone would be given directly to the employees. Individuals' relative contribution and longevity on the project were the criteria for distributing the pool.
The flow-down plan was designed to achieve the following objectives:
Reward the entire team for excellent performance
Reward different peer groups relative to their overall contribution
Minimize attrition of good people
The plan was complex, but its implementation was simple, and in the end it achieved its goal of minimizing attrition. One flaw in the plan was that the early award fees (at PDR and CDR) were far less substantial than the later award fees, so the teams responsible for the construction and transition phases received more than those working on the inception and elaboration phases.
The basic operational concept of the plan was as follows:
Management defined the various peer groups: systems engineering, software engineering, business administration, and administration.
Every 6 months, the people within each peer group ranked one another with respect to their contribution to the project. The manager of each peer group also ranked the team members and compiled the results into a global performance ranking of the peer group.
Each award fee was determined by the customer at certain major milestones, and half of each award fee pool was distributed to project employees.
The distribution algorithm was as follows (a sketch appears after this list):
o The general range of additional compensation was about 2% to 10% of each employee's salary per year.
o The distribution to each peer group was relative to the average salary of the group.
o The differences in employees' salaries within each group defined the relative differences in the expected contributions of employees toward overall project success.
o The distribution within a peer group had two parts: half of the total peer group pool was distributed equally among all members, and the other half was distributed to the top performers within the group as defined by the group's self-ranking. Management had some discretion in the amounts and ranges.
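A minimal Python sketch of this distribution scheme, with hypothetical names and amounts:

    # A minimal sketch of the award fee flow-down distribution: half of
    # a peer group's pool is spread equally, half goes to the top
    # performers identified by the peer ranking.
    def distribute(pool, members, top_performers):
        """Split a peer group's award-fee pool across its members."""
        equal_share = (pool / 2) / len(members)
        merit_share = (pool / 2) / len(top_performers)
        return {m: equal_share + (merit_share if m in top_performers else 0)
                for m in members}

    # Hypothetical peer group of five, two top performers, $50K pool:
    print(distribute(50_000, ["A", "B", "C", "D", "E"], ["B", "D"]))
    # A/C/E receive $5,000 each; B and D receive $5,000 + $12,500 each.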
The true impact of this award fee flow-down plan is hard to determine, but it made a difference in overall teamwork and in retaining the critical people. The peer ranking worked well in discriminating the top performers; except for a few surprises, the peer rankings matched management's perceptions closely. TRW shared a little less than 10% of its overall profit with its employees, a return on investment that would be considered high by all stakeholders.
D.10 CONCLUSIONS
TRW and the Air Force have documented the successes of architecture-first development on CCPDS-R. The project achieved twofold increases in productivity and quality along with on-budget, on-schedule deliveries of large mission-critical systems. The success of CCPDS-R was due to the balanced use of modern technologies, modern tools, and an iterative development process. The following table summarizes the dimensions of improvement incorporated into the CCPDS-R project:
TABLE D-15. CCPDS-R technology improvements
PARAMETER | MODERN SOFTWARE PROCESS | CCPDS-R APPROACH
Environment | Integrated tools | DEC/Rational/custom tools
Environment | Open systems | VAX/DEC-dependent
Environment | Hardware performance | Several VAX family upgrades
Environment | Automation | Custom-developed management system, metrics tools, code auditors
Size | Reuse, commercial components | Common architecture primitives, tools, and processes across all subsystems
Size | Object oriented | Message-based, object-oriented architecture
Size | Higher level languages | 100% Ada
Size | CASE tools | Custom automatic code generators for architecture, message input/output, and display format source code
Size | Distributed middleware | Early investment in NAS development for reuse across multiple subsystems
Process | Iterative development | Demonstrations, multiple builds, early delivery
Process | Process maturity models | Level 3 process before SEI CMM definition
Process | Architecture first | Executable architecture baseline at PDR
Process | Acquisition reform | Excellent customer-contractor-user teamwork; highly tailored 2167A for iterative development
Process | Training | Mostly on-the-job training and internal mentoring
The resulting efficiencies were largely attributable to a major reduction in software scrap and rework (less than 25%), enabled by an architecture-first focus, an iterative development process, an enlightened and open-minded customer, and the use of modern environments, languages, and tools. The Common Subsystem subsidized much of the groundwork for the PDS and STRATCOM subsystems, and this investment paid significant returns on the subsequent subsystems, in which productivity and quality improved. This is the economic expectation of a mature software process such as the one developed and evolved on CCPDS-R.