SPPM UNIT 2
UNIT-2
Software Project Management Renaissance
Introduction
• Conventional software management practices are sound in theory, but practice is still tied
to archaic technology and techniques.
• Conventional software economics provides a benchmark of performance for conventional
software management principles.
• The best thing about software is its flexibility: it can be programmed to do almost
anything.
• The worst thing about software is also its flexibility: the “almost anything” characteristic
has made it difficult to plan, monitor, and control software development.
Three important analyses of the state of the software engineering industry are:
• Software development is still highly unpredictable. Only about 10% of software projects
are delivered successfully within initial budget and schedule estimates.
• Management discipline is more of a discriminator in success or failure than are
technology advances.
• The level of software scrap and rework is indicative of an immature process.
• All three analyses reached the same general conclusion: The success rate for software
projects is very low. The three analyses provide a good introduction to the magnitude of
the software problem and the current norms for conventional software management
performance.
IN THEORY
Winston Royce's 1970 paper, "Managing the Development of Large Software Systems," provides
an insightful and concise summary of conventional software management. Its main points are:
1. There are two essential steps common to the development of computer programs: analysis
and coding.
2. In order to manage and control all of the intellectual freedom associated with software
development, one must introduce several other “overhead” steps, including system
requirements definition, software requirements definition, program design, and testing.
These steps supplement the analysis and coding steps. Figure 1-1 illustrates the resulting
project profile and the basic steps in developing a large-scale program.
3. The basic framework described in the waterfall model is risky and invites failure. The
testing phase, which occurs at the end of the development cycle, is the first event for which
timing, storage, input/output transfers, etc., are experienced as distinguished from
analyzed. The resulting design changes are likely to be so disruptive that the software
MRITS
requirements upon which the design is based are likely to be violated. Either the requirements
must be modified or a substantial design change is warranted.
• Five necessary improvements to the waterfall model (the risks may be eliminated by
making the following five improvements):
1. Program design comes first: The first step is to insert a preliminary program design
phase between the software requirements phase and the analysis phase. By this technique,
software failure due to continuous change in storage, timing, and data is avoided. The
designer then imposes the storage, timing, and operational constraints on the analysis in
such a way that their consequences are noticed. Resource insufficiencies and design
limitations are identified in the early stages, before final design, coding, and
testing. The following steps are required:
– Begin the design process with program designers, not analysts or programmers.
– Design, define, and allocate the data processing modes even at the risk of being
wrong. Allocate processing functions, design the database, allocate execution
time, define interfaces and processing modes with the operating system, describe
input and output processing, and define preliminary operating procedures.
– Write an overview document that is understandable, informative, and current so
that every worker on the project can gain an elemental understanding of the
system.
2. Document the design. The amount of documentation associated with the software programs
is very large because of the following reasons: (1) Each designer must communicate with
interfacing designers, managers, and possibly customers. (2) During early phases, the
documentation is the design. (3) The real monetary value of documentation is to support later
modifications by a separate test team, a separate maintenance team, and operations personnel
who are not software literate.
3. Do it twice. The computer program should be developed twice, and the second version,
which takes into account all the critical design operations, is the one finally delivered to the
customer for operational deployment. The first version of the program is built by a
special broad-competence team, responsible for spotting trouble spots in the design, followed by
their modeling and finally generating an error-free program.
4. Plan, control, and monitor testing. The test phase is the biggest consumer of project
resources, such as manpower, computer time, and management judgment. It carries the greatest
risk in terms of cost and schedule, and it occurs at the latest point in the schedule, when backup
alternatives are least available. Thus, most of the problems need to be resolved before the test
phase, which should concentrate on the following important operations: (1) Hire a team of test
specialists who were not involved in the original design. (2) Apply visual inspections to discover
obvious errors, such as jumps to wrong addresses or dropped minus signs. (3) Conduct a test of
every logic path. (4) Employ the final checkout on the target computer.
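The "test every logic path" step above can be sketched in code. This is a hypothetical illustration (the `classify` function is invented): a function with three branches has three logic paths, and the test suite supplies one case per path.

```python
# Hypothetical illustration of "conduct a test for every logic path": the
# invented `classify` function has three branches, so three logic paths,
# and the suite supplies one case per path.
def classify(value):
    """Return a label for the sign of `value`."""
    if value < 0:
        return "negative"   # path 1
    if value == 0:
        return "zero"       # path 2
    return "positive"       # path 3

# One test case per logic path, so every branch is exercised.
assert classify(-5) == "negative"
assert classify(0) == "zero"
assert classify(7) == "positive"
print("all logic paths covered")
```

For larger components, a coverage tool rather than manual enumeration is normally used to confirm that every path has been exercised.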
5. Involve the customer. The customer must be involved in a formal way, so that commitment
is secured at the initial stages, before final delivery. The customer’s perception, assessment,
and commitment can strengthen the development effort. Hence, an initial design step is followed
by a “preliminary software review”; “critical software design reviews” are held during design; and
a “final software acceptance review” is performed after testing.
1.1.2 IN PRACTICE (the following are the risks in the waterfall model)
• Some software projects still practice the conventional software management approach.
• It is useful to summarize the characteristics of the conventional process as it has typically
been applied, which is not necessarily as it was intended. Projects destined for trouble
frequently exhibit the following symptoms:
In practice, much of the effort goes into creating the documents, and only the simple things
are reviewed. Hence, most design reviews have low engineering value and high cost in terms
of schedule and effort.
• Most software cost models can be abstracted into a function of five basic
parameters: size, process, personnel, environment, and required quality.
• The size of the end product (in human-generated components), which is typically
quantified in terms of the number of source instructions or the number of function points
required to develop the required functionality.
• The process used to produce the end product, in particular the ability of the process to
avoid non-value-adding activities (rework, bureaucratic delays, communications
overhead).
• The capabilities of software engineering personnel, and particularly their experience with
the computer science issues and the applications domain issues of the project.
• The environment, which is made up of the tools and techniques available to support
efficient software development and to automate the process
• The required quality of the product, including its features, performance, reliability and
adaptability.
• The relationships among these parameters and the estimated cost can be written as
follows: Effort = (Personnel)(Environment)(Quality)(Size^Process)
• One important aspect of software economics (as represented within today's software cost
models) is that the relationship between effort and size exhibits a diseconomy of scale.
The diseconomy of scale of software development is a result of the process exponent
being greater than 1.0. Contrary to most manufacturing processes, the more software you
build, the more expensive it is per unit item.
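The effort relationship and the diseconomy of scale can be sketched numerically. Every coefficient below is an invented placeholder, not a calibrated model constant:

```python
# Illustrative sketch of Effort = (Personnel)(Environment)(Quality)(Size^Process).
# Every coefficient here is an invented placeholder; real models such as COCOMO
# calibrate these values from historical project data.
def effort(size, process=1.2, personnel=1.0, environment=1.0, quality=1.0):
    """Estimated effort for a project of `size` KSLOC."""
    return personnel * environment * quality * (size ** process)

# Diseconomy of scale: with a process exponent above 1.0, doubling the size
# more than doubles the effort, so the cost per unit of software rises.
for size in (100, 200):
    print(f"{size} KSLOC -> effort {effort(size):,.0f}, "
          f"unit cost {effort(size) / size:.2f}")
```

With the exponent set to exactly 1.0 the unit cost would be constant; it is the exponent greater than 1.0 that models the diseconomy of scale.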
• Figure 2-1 shows three generations of basic technology advancement in tools,
components, and processes. The required levels of quality and personnel are assumed to
be constant. The ordinate of the graph refers to software unit costs (pick your favorite:
per SLOC, per function point, per component) realized by an organization.
1. Conventional: 1960s and 1970s, craftsmanship. Organizations used custom tools, custom
processes, and virtually all custom components built in primitive languages. Project performance
was highly predictable in that cost, schedule, and quality objectives were almost always
underachieved.
2) Transition: 1980s and 1990s, software engineering. Organizations used more repeatable
processes and off-the-shelf tools, and about 70% of the components were still custom built,
with roughly 30% off-the-shelf.
3) Modern practices: 2000 and later, software production. This book's philosophy is rooted in
the use of managed and measured processes, integrated automation environments, and mostly
(70%) off-the-shelf components. Perhaps as few as 30% of the components need to be custom
built. Technologies for environment automation, size reduction, and process improvement are not
independent of one another. In each new era, the key is complementary growth in all
technologies. For example, the process advances could not be used successfully without new
component technologies and increased tool automation.
• There are several popular cost estimation models (such as COCOMO, CHECKPOINT,
ESTIMACS, KnowledgePlan, Price-S, ProQMS, SEER, SLIM, SOFTCOST, and
SPQR/20); COCOMO is one of the most open and well-documented cost estimation
models. The general accuracy of conventional cost models (such as COCOMO) has been
described as “within 20% of actual, 70% of the time.”
• Most real-world use of cost models is bottom-up (substantiating a target cost) rather than
top-down (estimating the “should” cost). Figure 2-3 illustrates the predominant practice:
the software project manager defines the target cost of the software, and then manipulates
the parameters and sizing until the target cost can be justified. The rationale for the target
cost may be to win a proposal, to solicit customer funding, to attain internal corporate
funding, or to achieve some other goal.
• The process described in Figure 2-3 is not all bad. In fact, it is absolutely necessary to
analyze the cost risks and understand the sensitivities and trade-offs objectively. It forces
the software project manager to examine the risks associated with achieving the target
costs and to discuss this information with other stakeholders.
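The target-cost practice in Figure 2-3 can be sketched as a small loop: the manager manipulates a sizing assumption (here, an assumed reuse percentage) until the model output meets the target. The cost model, its constants, and the `justify_target` helper are all invented for illustration:

```python
# Hypothetical sketch of the bottom-up practice in Figure 2-3: start from a
# target cost and manipulate a sizing assumption (here, the reuse percentage)
# until the cost model justifies the target. The model and all numbers are
# invented for illustration only.
def estimated_cost(new_sloc, cost_per_ksloc=100_000.0, exponent=1.1):
    """Toy cost model with a diseconomy-of-scale exponent."""
    return cost_per_ksloc * (new_sloc / 1000.0) ** exponent

def justify_target(total_sloc, target_cost):
    """Raise the assumed reuse percentage until the estimate meets the target."""
    for reuse_pct in range(0, 100, 5):
        new_sloc = total_sloc * (1 - reuse_pct / 100)
        cost = estimated_cost(new_sloc)
        if cost <= target_cost:
            return reuse_pct, cost
    return None

reuse_pct, cost = justify_target(total_sloc=50_000, target_cost=600_000)
print(f"assumed reuse: {reuse_pct}%, justified cost: ${cost:,.0f}")
```

The sketch also shows the danger of the practice: the "justified" estimate rests entirely on whatever reuse assumption was needed to hit the target, which is why the risks behind those assumptions must be discussed with stakeholders.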
• A good software cost estimate has the following attributes:
• It is conceived and supported by the project manager, architecture team, development
team, and test team accountable for performing the work.
• It is accepted by all stakeholders as ambitious but realizable.
• It is based on a well-defined software cost model with a credible basis.
• It is based on a database of relevant project experience that includes similar processes,
similar technologies, similar environments, similar quality requirements and similar
people.
• It is defined in enough detail so that its key risk areas are understood and the probability
of success is objectively assessed.
• Extrapolating from a good estimate, an ideal estimate would be derived from a mature
cost model with an experience base that reflects multiple similar projects done by the
same team with the same mature processes and tools.
***************************************************************************
INTRODUCTION
The five basic parameters of the software cost model (size, process, personnel, environment,
and quality) are given in priority order for most software domains. Table 2-1 lists some of
the technology developments, process improvement efforts, and management approaches
targeted at improving the economics of software development and integration.
Size — Abstraction and component-based development technologies: higher order languages
(C++, Ada 95, Java, Visual Basic, etc.); object-oriented analysis, design, and programming;
reuse; commercial components.
• The most significant way to improve affordability and return on investment (ROI) is
usually to produce a product that achieves the design goals with the minimum amount of
human-generated source material. Component-based development is introduced here as
the general term for reducing the "source" language size necessary to achieve a software
solution. Reuse, object oriented technology, automatic code production, and higher order
programming languages are all focused on achieving a given system with fewer lines of
human-specified source directives (statements). This size reduction is the primary
motivation behind improvements in higher order languages (such as C++, Ada 95, Java,
Visual Basic, and fourth-generation languages), automatic code generators (CASE tools,
visual modeling tools, GUI builders), reuse of commercial components (operating
systems, windowing environments, database management systems, middleware,
networks), and object-oriented technologies (Unified Modeling Language, visual
modeling tools, architecture frameworks). The reduction is defined in terms of human-
generated source material. In general, when size-reducing technologies are used, they
reduce the number of human-generated source lines.
2.1.1 LANGUAGES
• Universal function points (UFPs) are useful estimators for language-independent, early
life-cycle estimates. The basic units of function points are external user inputs, external
outputs, internal logical data groups, external data interfaces, and external inquiries.
SLOC metrics are useful estimators for software after a candidate solution is formulated
and an implementation language is known. Substantial data have been documented
relating SLOC to function points. Some of these results are shown in Table 2-2.
Table 2-2 (SLOC per function point):
Assembly       320
C              128
Fortran 77     105
Cobol 85        91
Ada 83          71
C++             56
Ada 95          55
Java            55
Visual Basic    35
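Assuming the ratios above, an early language-independent estimate in function points can be translated into rough SLOC figures per candidate language. The 200-function-point project size is a hypothetical example:

```python
# Rough sizing sketch using the SLOC-per-function-point ratios in Table 2-2.
# The 200-function-point project size is a hypothetical early estimate.
SLOC_PER_FP = {
    "Assembly": 320, "C": 128, "Fortran 77": 105, "Cobol 85": 91,
    "Ada 83": 71, "C++": 56, "Ada 95": 55, "Java": 55, "Visual Basic": 35,
}

function_points = 200
for language, ratio in SLOC_PER_FP.items():
    # The same functionality costs far fewer human-generated lines in a
    # higher order language than in assembly.
    print(f"{language:<13} ~{function_points * ratio:>7,} SLOC")
```

The spread (64,000 SLOC in assembly versus 7,000 in Visual Basic for the same 200 function points) is exactly the size-reduction argument the section makes for higher order languages.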
• There has been a widespread movement in the 1990s toward object-oriented technology.
The advantages of object-oriented methods include improvement in software productivity
and software quality. The fundamental impact of object-oriented technology is in
reducing the overall size of what needs to be developed.
• These are interesting examples of the interrelationships among the dimensions of
improving software economics.
2. The use of continuous integration creates opportunities to recognize risk early and
make incremental corrections without destabilizing the entire development effort.
2. The existence of a culture that is centered on results, encourages communication, and
yet is not afraid to fail.
2.1.3 REUSE
• Reusing existing components and building reusable components have been natural
software engineering activities since the earliest improvements in programming
languages. Software design methods have always dealt implicitly with reuse in order to
minimize development costs while achieving all the other required attributes of
performance, feature set, and quality. Try to treat reuse as a mundane part of achieving a
return on investment.
• The cost of developing a reusable component is not trivial. Figure 3-1 examines the
economic tradeoffs. The steep initial curve illustrates the economic obstacle to
developing reusable components.
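The trade-off in Figure 3-1 can be sketched as a break-even calculation. Every cost figure below is an assumption chosen for illustration (the reusable version is taken to cost 2.5 times a one-off implementation):

```python
# Hypothetical break-even sketch for the Figure 3-1 trade-off. All cost
# figures are assumptions: building the reusable version is taken to cost
# 2.5x a one-off implementation, and each later reuse costs a small
# integration fee instead of a full rebuild.
ONE_OFF_COST = 100.0       # cost units for a single-use implementation
REUSABLE_PREMIUM = 2.5     # assumed multiplier to make a component reusable
REUSE_COST_PER_USE = 10.0  # assumed cost to integrate the existing component

def total_cost(n_uses, reusable):
    """Cumulative cost of supplying the component to n_uses projects."""
    if reusable:
        return ONE_OFF_COST * REUSABLE_PREMIUM + REUSE_COST_PER_USE * (n_uses - 1)
    return ONE_OFF_COST * n_uses  # build from scratch for every project

# Find the number of uses at which the up-front premium pays for itself.
n = 1
while total_cost(n, reusable=True) > total_cost(n, reusable=False):
    n += 1
print(f"break-even at {n} uses")
```

The steep initial curve in Figure 3-1 corresponds to the premium term here: until enough projects reuse the component, the up-front investment is pure overhead.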
• Reuse is an important discipline that has an impact on the efficiency of all workflows and
the quality of most artifacts.
Custom development:
– Advantages: (1) complete change freedom; (2) smaller, often simpler implementations;
(3) often better performance; (4) control of development and enhancement.
– Disadvantages: (1) expensive, unpredictable development; (2) unpredictable availability
date; (3) undefined maintenance model; (4) often immature and fragile; (5) single-platform
dependency; (6) drain on expert resources.
Objectives: line-of-business profitability and competitiveness; project profitability, risk
management, and project budget, schedule, and quality; resource management, risk resolution,
and milestone budget, schedule, and quality.
• Teamwork is much more important than the sum of the individuals. With software teams,
a project manager needs to configure a balance of solid talent, with highly skilled people
in the leverage positions. Some maxims of team management include the following:
– A well-managed project can succeed with a nominal engineering team.
– A mismanaged project will almost never succeed, even with an expert team
of engineers.
– A well-architected system can be built by a nominal team of software builders.
– A poorly architected system will flounder even with an expert team of builders.
• Boehm's five staffing principles are:
– The principle of top talent: Use better and fewer people
– The principle of job matching: Fit the tasks to the skills and motivation of
the people available.
– The principle of career progression: An organization does best in the long
run by helping its people to self-actualize.
– The principle of team balance: Select people who will complement and
harmonize with one another
– The principle of phase-out: Keeping a misfit on the team doesn't benefit
anyone.
• Software project managers need many leadership qualities in order to enhance team
effectiveness. The following are some crucial attributes of successful software project
managers that deserve much more attention:
– Hiring skills: Few decisions are as important as hiring decisions. Placing the right
person in the right job seems obvious but is surprisingly hard to achieve.
– Customer-interface skill: Avoiding adversarial relationships among stakeholders
is a prerequisite for success.
– Decision-Making skill: The jillion books written about management have failed to
provide a clear definition of this attribute. We all know a good leader when we
run into one, and decision-making skill seems obvious despite its intangible
definition.
– Team-building skill: Teamwork requires that a manager establish trust, motivate
progress, exploit eccentric prima donnas, transition average people into top
performers, eliminate misfits, and consolidate diverse opinions into a team
direction.
– Selling skill: Successful project managers must sell all stakeholders (including
themselves) on decisions and priorities, sell candidates on job positions, sell
changes to the status quo in the face of resistance, and sell achievements against
objectives. In practice, selling requires continuous negotiation, compromise, and
empathy.
2.4 EXPLAIN ABOUT IMPROVING AUTOMATION THROUGH SOFTWARE
ENVIRONMENTS
• The tools and environment used in the software process generally have a linear effect on
the productivity of the process. Planning tools, requirements management tools, visual
modeling tools, compilers, editors, debuggers, quality assurance analysis tools, test tools,
and user interfaces provide crucial automation support for evolving the software
engineering artifacts. Above all, configuration management environments provide the
foundation for executing and instrumenting the process. At first order, the isolated impact of
tools and automation generally allows improvements of 20% to 40% in effort. However,
tools and environments must be viewed as the primary delivery vehicle for process
automation and improvement, so their impact can be much higher.
• Automation of the design process provides payback in quality, in the ability to estimate
costs and schedules, and in overall productivity, using a smaller team. Integrated toolsets
play an increasingly important role in incremental/iterative development by allowing the
designers to traverse quickly among development artifacts and keep them up to date.
• Forward engineering is the automated transformation of a more abstract representation into
a less abstract one, for example, the compilation and linking of source code into executable
code. Reverse engineering is the generation or modification of a more abstract representation
from an existing artifact.
• Economic improvements associated with tools and environments. It is common for
tool vendors to make relatively accurate individual assessments of life-cycle activities to
support claims about the potential economic impact of their tools. For example, it is easy
to find statements such as the following from companies in a particular tool market:
– Requirements analysis and evolution activities consume 40% of life-cycle costs.
– Software design activities have an impact on more than 50% of the resources.
– Coding and unit testing activities consume about 50% of software development
effort and schedule.
– Test activities can consume as much as 50% of a project's resources.
– Configuration control and change management are critical activities that can
consume as much as 25% of resources on a large-scale project.
– Documentation activities can consume more than 30% of project Engineering
resources.
– Project management, business administration, and progress assessment can
consume as much as 30% of project budgets.
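Treating each claim above as a share of life-cycle cost, the individual vendor claims together account for far more than 100%, which is one reason such isolated assessments must be interpreted cautiously. A quick check:

```python
# Sum of the individual vendor claims listed above, each expressed as a
# percentage of life-cycle cost. They cannot all be literally additive.
claims = {
    "requirements analysis and evolution": 40,
    "software design": 50,
    "coding and unit testing": 50,
    "test activities": 50,
    "configuration control and change management": 25,
    "documentation": 30,
    "management, administration, and assessment": 30,
}
total = sum(claims.values())
print(f"claimed total: {total}% of life-cycle cost")
```

Since the activities overlap and each vendor measures its own niche generously, a tool that "addresses 50% of costs" rarely delivers anywhere near a 50% saving.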
2.5 EXPLAIN ABOUT ACHIEVING REQUIRED QUALITY
• Software best practices are derived from the development process and technologies.
• Key practices that improve overall software quality include the following:
– Focusing on driving requirements and critical use cases early in the life cycle,
focusing on requirements completeness and traceability late in the life cycle, and
focusing throughout the life cycle on a balance between requirements evolution,
design evolution, and plan evolution.
– Using metrics and indicators to measure the progress and quality of an
architecture as it evolves from a high-level prototype into a fully compliant
product.
– Providing integrated life-cycle environments that support early and continuous
configuration control, change management, rigorous design methods, document
automation, and regression test automation.
– Using visual modeling and higher level languages that support architectural
control, abstraction, reliable programming, reuse, and self-documentation.
– Early and continuous insight into performance issues through demonstration-
based evaluations.
– Integration and Test: Serious performance problems were uncovered,
necessitating fundamental changes in the architecture. The underlying
infrastructure was usually the scapegoat, but the real culprit was immature use of
the infrastructure, immature architectural solutions, or poorly understood early
design trade-offs.
1. Make quality #1: Quality must be quantified and mechanisms put into place to
motivate its achievement.
2. High-quality software is possible: Techniques that have been demonstrated to
increase quality include involving the customer, prototyping, simplifying design,
conducting inspections, and hiring the best people.
3. Give products to customers early: No matter how hard you try to learn users' needs
during the requirements phase, the most effective way to determine real needs is to
give users a product and let them play with it.
4. Determine the problem before writing the requirements: When faced with what
they believe is a problem, most engineers rush to offer a solution. Before you try to
solve a problem, be sure to explore all the alternatives and don't be blinded by the
obvious solution.
5. Evaluate Design Alternatives: After the requirements are agreed upon, you must
examine a variety of architectures and algorithms. You certainly do not want to use an
architecture simply because it was used in the requirements specification.
6. Use an appropriate process model: Each project must select a process that makes
the most sense for that project on the basis of corporate culture, willingness to take
risks, application area, volatility of requirements, and the extent to which
requirements are well understood.
7. Use different languages for different phases: Our industry's eternal thirst for simple
solutions to complex problems has driven many to declare that the best development
method is one that uses the same notation throughout the life cycle.
INTRODUCTION
A characteristic of a successful software development process is the well-defined
separation between "research and development" activities and "production" activities.
Most unsuccessful projects exhibit one of the following characteristics:
• An overemphasis on research and development
• An overemphasis on production
Successful modern projects, and even successful projects developed under the
conventional process, tend to have a very well-defined project milestone at which
there is a noticeable transition from a research attitude to a production attitude.
Earlier phases focus on achieving functionality. Later phases revolve around
achieving a product that can be shipped to a customer, with explicit attention to
robustness, performance, and finish.
A modern software development process must be defined to support the following:
1. Evolution of the plans, requirements, and architecture, together with well
defined synchronization points
2. Risk management and objective measures of progress and quality
3. Evolution of system capabilities through demonstrations of increasing
functionality
TABLE 5-1: The two stages of the Life Cycle: Engineering and Production.
Management — engineering stage: planning; production stage: operations.
• The transition between engineering and production is a crucial event for the various
stakeholders. The production plan has been agreed upon, and there is a good enough
understanding of the problem and the solution that all stakeholders can make a firm
commitment to go ahead with production.
• The engineering stage is decomposed into two distinct phases, inception and elaboration, and
the production stage into construction and transition. These four phases of the life-cycle
process are loosely mapped to the conceptual framework of the spiral model as shown in
Figure 5-1.
Inception Phase
Primary Objectives
• Establishing the project's software scope and boundary conditions, including an
operational concept, acceptance criteria, and a clear understanding of what is and is not
intended to be in the product.
• Discriminating the critical use cases of the system and the primary scenarios of operation
that will drive the major design trade-offs.
• Demonstrating at least one candidate architecture against some of the primary scenarios.
• Estimating the cost and schedule for the entire project (including detailed estimates for
the elaboration phase).
• Estimating potential risks (sources of unpredictability)
Essential Activities
• Formulating the scope of the project. The information repository should be sufficient to
define the problem space and derive the acceptance criteria for the end product.
• Synthesizing the architecture: An information repository is created that is sufficient to
demonstrate the feasibility of at least one candidate architecture and an initial baseline of
make/buy decisions so that the cost, schedule, and resource estimates can be derived.
• Planning and preparing a business case. Alternatives for risk management, staffing,
iteration plans, and cost/schedule/profitability trade-offs are evaluated.
Construction Phase
• During the construction phase, all remaining components and application features are
integrated into the application, and all features are thoroughly tested. Newly developed
software is integrated where required. The construction phase represents a production
process, in which emphasis is placed on managing resources and controlling operations to
optimize costs, schedules and quality.
• Primary Objectives
• Minimizing development costs by optimizing resources and avoiding unnecessary scrap
and rework
• Achieving adequate quality as rapidly as practical
• Achieving useful versions (alpha, beta and other test releases) as rapidly as practical
• Essential Activities
• Resource management, control and process optimization
• Complete component development and testing against evaluation criteria.
• Assessment of product releases against acceptance criteria of the vision.
• Primary Evaluation Criteria
• Is this product baseline mature enough to be deployed in the user community? (Existing
defects are not obstacles to achieving the purpose of the next release.)
• Is this product baseline stable enough to be deployed in the user community? (Pending
changes are not obstacles to achieving the purpose of the next release.)
• Are the stakeholders ready for transition to the user community?
• Are actual resource expenditures versus planned expenditures acceptable?
Management Set: planning artifacts and operational artifacts.
training course, sales rollout kit), and the environment (hardware and software tools,
process automation & documentation).
• Management set artifacts are evaluated, assessed, and measured through a combination of
the following:
– Relevant stakeholder review.
– Analysis of changes between the current version of the artifact and previous
versions.
– Major milestone demonstrations of the balance among all artifacts and, in
particular, the accuracy of the business case and vision artifacts.
Requirements Set
• Requirements artifacts are evaluated, assessed, and measured through a combination of
the following:
– Analysis of consistency with the release specifications of the management set.
– Analysis of consistency between the vision and the requirements models.
– Mapping against the design, implementation, and deployment sets to evaluate the
consistency and completeness and the semantic balance between information in
the different sets.
– Analysis of changes between the current version of requirements artifacts and
previous versions (scrap, rework, and defect elimination trends).
– Subjective review of other dimensions of quality.
Design Set
• UML notation is used to engineer the design models for the solution. The design set
contains varying levels of abstraction that represent the components of the solution space
(their identities, attributes, static relationships, dynamic interactions). The design set is
evaluated, assessed and measured through a combination of the following:
– Analysis of the internal consistency and quality of the design model
– Analysis of consistency with the requirements models
– Translation into implementation and deployment sets and notations (for example,
traceability, source code generation, compilation, linking) to evaluate the
consistency and completeness and the semantic balance between information in
the sets.
– Analysis of changes between the current version of the design model and previous
versions (scrap, rework, and defect elimination trends).
– Subjective review of other dimensions of quality.
Implementation Set
• The implementation set includes source code (programming language notations) that
represents the tangible implementations of components (their form, interface, and
dependency relationships).
• Implementation sets are human-readable formats that are evaluated, assessed, and
measured through a combination of the following:
– Analysis of consistency with the design models.
– Translation into deployment set notations (for example, compilation and linking)
to evaluate the consistency and completeness among artifact sets.
– Assessment of component source or executable files against relevant evaluation
criteria through inspection, analysis, demonstration, or testing
– Execution of stand-alone component test cases that automatically compare
expected results with actual results.
– Analysis of changes between the current version of the implementation set and
previous versions (scrap, rework, and defect elimination trends).
– Subjective review of other dimensions of quality.
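A stand-alone component test of the kind described above can be as simple as a table of expected results compared automatically against actual results. The `parse_version` component here is a hypothetical example, not part of any real artifact set:

```python
# Minimal sketch of a stand-alone component test that automatically compares
# expected results with actual results. The `parse_version` component is a
# hypothetical example invented for illustration.
def parse_version(text):
    """Split a dotted version string into a tuple of integers."""
    return tuple(int(part) for part in text.split("."))

# Each (input, expected) pair is one automated comparison.
cases = [
    ("1.0", (1, 0)),
    ("2.10.3", (2, 10, 3)),
]
for text, expected in cases:
    actual = parse_version(text)
    assert actual == expected, f"{text}: expected {expected}, got {actual}"
print("all component test cases passed")
```

Because the comparison is automated, the same cases can be rerun after every change to the implementation set, supporting the change-trend analysis described above.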
Deployment Set
• The deployment set includes user deliverables and machine language notations,
executable software, and the build scripts, installation scripts, and executable
target-specific data necessary to use the product in its target environment.
• Deployment sets are evaluated, assessed, and measured through a combination of the
following:
– Testing against the usage scenarios and quality attributes defined in the
requirements set to evaluate the consistency and completeness and the semantic
balance between information in the two sets.
– Testing the partitioning, replication, and allocation strategies in mapping
components of the implementation set to physical resources of the deployment
system (platform type, number, network topology).
– Testing against the defined usage scenarios in the user manual, such as
installation, user-oriented dynamic reconfiguration, mainstream usage, and
anomaly management.
– Analysis of changes between the current version of the deployment set and
previous versions (defect elimination trends, performance changes).
– Subjective review of other dimensions of quality.
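The change analysis between versions of the deployment set (defect elimination trends) can be sketched as a simple release-to-release computation; the defect counts per release below are hypothetical illustration data.

```python
# A minimal sketch of trend analysis between versions of the deployment
# set: open-defect counts per release are hypothetical illustration data.
defects_by_release = {"1.0": 42, "1.1": 27, "1.2": 11}

def defect_elimination_trend(counts):
    """Return release-to-release change in open defects (negative = improving)."""
    releases = list(counts)
    return {f"{a}->{b}": counts[b] - counts[a]
            for a, b in zip(releases, releases[1:])}

print(defect_elimination_trend(defects_by_release))
```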
• Each artifact set is the predominant development focus of one phase of the life cycle; the
other sets take on check and balance roles. As illustrated in Figure 6-2, each phase has a
predominant focus:
• Requirements are the focus of the inception phase; design, the elaboration phase;
implementation, the construction phase; and deployment, the transition phase. The
management artifacts also evolve, but at a fairly constant level across the life cycle
• Most of today’s software development tools map closely to one of the five artifact sets.
1. Management: scheduling, workflow, defect tracking, change management,
documentation, spreadsheet resource management, and presentation tools.
2. Requirements: requirements management tools.
3. Design: visual modeling tools.
4. Implementation: compiler/debugger tools, code analysis tools, test coverage
analysis tools, and test management tools.
5. Deployment: test coverage and test automation tools, network management tools,
commercial components (Operating Systems, GUIs, RDBMS, networks,
middleware), and installation tools.
• Each state of development represents a certain amount of precision in the final system
description. Early in the life cycle, precision is low and the representation is generally
high. Eventually, the precision of representation is high and everything is specified in full
detail. Each phase of development focuses on a particular artifact set. At the end of each
phase, the overall system state will have progressed on all sets, as illustrated in Figure 6-
3.
• The inception phase focuses mainly on critical requirements usually with a secondary
focus on an initial deployment view. During the elaboration phase, there is much greater
depth in requirements, much more breadth in the design set, and further work on
implementation and deployment issues. The main focus of the construction phase is
design and implementation. The main focus of the transition phase is on achieving
consistency and completeness of the deployment set in the context of the other sets.
• The test artifacts must be developed concurrently with the product from inception
through deployment. Thus, testing is a full-life-cycle activity, not a late life-cycle
activity.
• The test artifacts are communicated, engineered, and developed within the same artifact
sets as the developed product.
• The test artifacts are implemented in programmable and repeatable formats (as software
programs).
• The test artifacts are documented in the same way that the product is documented.
• Developers of the test artifacts use the same tools, techniques, and training as the
software engineers developing the product.
• Test artifact subsets are highly project-specific; the following example clarifies the
relationship between test artifacts and the other artifact sets. Consider a project to
perform seismic data processing for the purpose of oil exploration. This system has three
fundamental subsystems: (1) a sensor-subsystem that captures raw seismic data in real
time and delivers these data to (2) a technical operations subsystem that converts raw data
into an organized database and manages queries to this database from (3) a display
subsystem that allows workstation operators to examine seismic data in human-readable
form. Such a system would result in the following test artifacts:
• Management Set: The release specifications and release descriptions capture the
objectives, evaluation criteria, and results of an intermediate milestone. These
artifacts are the test plans and test results negotiated among internal project teams.
The software change orders capture test results (defects, testability changes,
requirements ambiguities, enhancements) and the closure criteria associated with
making a discrete change to a baseline.
• Requirements Set: The system-level use cases capture the operational concept
for the system and the acceptance test case descriptions, including the expected
behavior of the system and its quality attributes. The entire requirements set is a
test artifact because it is the basis of all assessment activities across the life cycle.
• Design Set: A test model for nondeliverable components needed to test the
product baselines is captured in the design set. These components include such
design set artifacts as a seismic event simulation for creating realistic sensor data;
a “virtual operator” that can support unattended, after-hours test cases; specific
instrumentation suites for early demonstration of resource usage, transaction rates,
or response times; and use case test drivers and component stand-alone test
drivers.
• Implementation Set: Self-documenting source code representations for test
components and test drivers provide the equivalent of test procedures and test
scripts. These source files may also include human-readable data files
representing certain statically defined data sets that are explicit test source files.
Output files from test drivers provide the equivalent of test reports.
• Deployment Set: Executable versions of test components, test drivers, and data
files are provided.
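As a rough sketch of the seismic example's test artifacts, a use case test driver might feed simulated sensor events into the technical operations subsystem's database and check the query a display operator would issue. All class and method names below are illustrative assumptions, not part of the system described.

```python
# Hypothetical use case test driver for the seismic example: simulated
# sensor events go through the technical operations subsystem's database,
# and the display subsystem's query is checked against expected results.

class SeismicDatabase:
    """Stand-in for the technical operations subsystem's organized database."""
    def __init__(self):
        self._events = []

    def ingest(self, event):
        self._events.append(event)

    def query(self, min_magnitude):
        return [e for e in self._events if e["magnitude"] >= min_magnitude]

def simulated_sensor_events():
    """Seismic event simulation: realistic sensor data for unattended tests."""
    return [{"id": 1, "magnitude": 2.1}, {"id": 2, "magnitude": 4.7},
            {"id": 3, "magnitude": 5.3}]

def test_display_query_use_case():
    db = SeismicDatabase()
    for event in simulated_sensor_events():
        db.ingest(event)
    significant = db.query(min_magnitude=4.0)
    assert [e["id"] for e in significant] == [2, 3]

if __name__ == "__main__":
    test_display_query_use_case()
    print("use case test driver passed")
```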
• Stakeholders must have insight into project costs and how they are expended. The
structure of cost accountability is a serious project planning constraint.
Software Change Order Database
• Managing change is one of the fundamental primitives of an iterative development
process. With greater change freedom, a project can iterate more productively. This
flexibility increases the content, quality and number of iterations that a project can
achieve within a given schedule. Change freedom has been achieved in practice through
automation, and today’s iterative development environments carry the burden of change
management. Organizational processes that depend on manual change management
techniques have encountered major inefficiencies.
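A software change order database can be sketched as a set of structured records. The fields below follow the test results (defects, testability changes, requirements ambiguities, enhancements) and closure criteria mentioned earlier in this unit, but the exact record layout is an assumption.

```python
from dataclasses import dataclass
from enum import Enum

class ChangeCategory(Enum):
    DEFECT = "defect"
    TESTABILITY = "testability change"
    AMBIGUITY = "requirements ambiguity"
    ENHANCEMENT = "enhancement"

@dataclass
class SoftwareChangeOrder:
    """One record in a software change order database (illustrative layout)."""
    order_id: int
    baseline: str                  # baseline the change is made against
    category: ChangeCategory
    description: str
    closure_criteria: str          # what must hold for the change to close
    closed: bool = False

# A tiny in-memory "database" of change orders.
change_orders = [
    SoftwareChangeOrder(1, "release-1.0", ChangeCategory.DEFECT,
                        "crash on empty input", "regression test passes"),
    SoftwareChangeOrder(2, "release-1.0", ChangeCategory.ENHANCEMENT,
                        "add CSV export", "demo in next iteration", closed=True),
]

# Scrap/rework trend analysis becomes a simple query over the records.
open_defects = [c for c in change_orders
                if c.category is ChangeCategory.DEFECT and not c.closed]
print(f"{len(open_defects)} open defect(s) against release-1.0")
```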
Release Specifications
• The scope, plan, and objective evaluation criteria for each baseline release are derived
from the vision statement as well as many other sources (make/buy analyses, risk
management concerns, architectural considerations, shots in the dark, implementation
constraints, quality thresholds). These artifacts are intended to evolve along with the
process, achieving greater fidelity as the life cycle progresses and requirements
understanding matures. Figure 6-6 provides a default outline for a release specification.
I. Iteration Content
II. Measurable objectives
A. Evaluation criteria
B. Follow-through approach
III. Demonstration Plan
A. Schedule of activities
B. Team responsibilities
IV. Operational scenarios (use cases demonstrated)
A. Demonstration Procedures
B. Traceability to vision and business case.
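The Figure 6-6 outline can be captured as a structured record so that tooling could check and track a release specification; the content values below are hypothetical placeholders.

```python
# The release specification outline rendered as a structured record.
# All content values are hypothetical placeholders.
release_specification = {
    "iteration_content": "Initial sensor-data ingestion build",
    "measurable_objectives": {
        "evaluation_criteria": ["ingest 1,000 events/sec",
                                "zero data loss on restart"],
        "follow_through_approach": "failed criteria roll into next iteration",
    },
    "demonstration_plan": {
        "schedule_of_activities": ["dry run", "stakeholder demo"],
        "team_responsibilities": {"integration": "build team",
                                  "demo script": "test team"},
    },
    "operational_scenarios": {
        "demonstration_procedures": ["startup", "steady-state load"],
        "traceability": "vision section 2.1; business case section 3",
    },
}

# A trivial completeness check: every top-level outline entry is present.
required = {"iteration_content", "measurable_objectives",
            "demonstration_plan", "operational_scenarios"}
assert required <= release_specification.keys()
print("release specification outline is complete")
```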
Release Descriptions
• Release description documents describe the results of each release, including
performance against each of the evaluation criteria in the corresponding release
specification. Release baselines should be accompanied by a release description
document that describes the evaluation criteria for that configuration baseline and
provides substantiation (through demonstration, testing, inspection, or analysis) that each
criterion has been addressed in an acceptable manner. Figure 6-7 provides a default
outline for a release description.
I. Context
A. Release baseline content
B. Release metrics
II. Release notes
A. Release-specific constraints or limitations.
III. Assessment Results
A. Substantiation of passed evaluation criteria
B. Follow-up plans for failed evaluation criteria
C. Recommendations for next release
IV. Outstanding issues
A. Items
B. Post-mortem summary of lessons learned.
Status Assessments
• Status assessments provide periodic snapshots of project health and status, including the
software project manager’s risk assessment, quality indicators, and management
indicators. Typical status assessments should include a review of resources, personnel
staffing, financial data (cost and revenue), top 10 risks, technical progress (metrics
snapshots), major milestone plans and results, total project or product scope, and action
items.
Environment
• An important emphasis of a modern approach is to define the development and
maintenance environment as a first-class artifact of the process. A robust, integrated
development environment must support automation of the development process. This
environment should include requirements management, visual modeling, document
automation, host and target programming tools, automated regression testing,
continuous and integrated change management, and feature and defect tracking.
Deployment
• A deployment document can take many forms. Depending on the project, it could include
several document subsets for transitioning the product into operational status. In big
contractual efforts, these artifacts may include system operations manuals, software
installation manuals, plans and procedures for cutover (from a legacy system), site
surveys, and so forth. For
commercial software products, deployment artifacts may include marketing plans, sales
rollout kits, and training courses.
• Most of the engineering artifacts are captured in rigorous engineering notations such as
UML, programming languages, or executable machine codes. Three engineering artifacts
are explicitly intended for more general review, and they deserve further elaboration.
Vision document
• The vision document provides a complete vision for the software system under
development and supports the contract between the funding authority and the
development organization. A project vision is meant to be changeable as understanding
of the requirements, architecture, plans, and technology evolves. A good vision
document should change slowly. Figure 6-9 provides a default outline for a vision
document.
B. Desired freedoms (potential change scenarios).
Architecture Description
• The architecture description provides an organized view of the software architecture
under development. It is extracted largely from the design model and includes views of
the design, implementation, and deployment sets sufficient to understand how the
operational concept of the requirements set will be achieved. The breadth of the
architecture description will vary from project to project depending on many factors.
Figure 6-10 provides a default outline for an architecture description.
I. Architecture overview
A. Objectives
B. Constraints
C. Freedoms
II. Architecture views
A. Design view
B. Process view
C. Component view
D. Deployment view
III. Architectural interactions
A. Operational concept under primary scenarios
B. Operational concept under secondary scenarios
C. Operational concept under anomalous conditions
IV. Architecture performance
V. Rationale, trade-offs, and other substantiation
• People want to review information but don’t understand the language of the
artifact. Many interested reviewers of a particular artifact will resist having to learn the
engineering language in which the artifact is written. It is not uncommon to find people
(such as veteran software managers, veteran quality assurance specialists, or an auditing
authority from a regulatory agency) who react as follows: “I’m not going to learn UML,
but I want to review the design of this software, so give me a separate description such as
some flowcharts and text that I can understand.”
• People want to review the information but don’t have access to the tools. It is not
uncommon for the development organization to be fully tooled; it is extremely rare that
the other stakeholders have any capability to review the engineering artifacts on-line.
Consequently, organizations are forced to exchange paper documents. Standardized formats
(such as UML, spreadsheets, Visual Basic, C++ and Ada 95), visualization tools, and the
web are rapidly making it economically feasible for all stakeholders to exchange
information electronically.
• Human-readable engineering artifacts should use rigorous notations that are
complete, consistent, and used in a self-documenting manner. Properly spelled
English words should be used for all identifiers and descriptions. Acronyms and
abbreviations should be used only where they are well accepted jargon in the context of
the component’s usage. Readability should be emphasized and the use of proper English
words should be required in all engineering artifacts. This practice enables
understandable representations, browsable formats (paperless review), more rigorous
notations, and reduced error rates.
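The identifier guidance above can be illustrated by contrasting an abbreviation-laden function with a self-documenting one; both functions are made-up examples.

```python
# Hard to review: abbreviations force the reader to decode jargon.
def cmp_rr(rr, thr):
    return [r for r in rr if r > thr]

# Self-documenting: properly spelled English words as identifiers make the
# same logic reviewable without a separate description.
def filter_response_times_exceeding(response_times_ms, threshold_ms):
    """Return the response times that exceed the given threshold."""
    return [t for t in response_times_ms if t > threshold_ms]

print(filter_response_times_exceeding([120, 480, 95, 610], threshold_ms=400))
```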
• Useful documentation is self-defining: It is documentation that gets used.
• Paper is tangible; electronic artifacts are too easy to change. On-line and Web-based
artifacts can be changed easily and are viewed with more skepticism because of their
inherent volatility.
******************************************************************************
4. Explain about Model-Based Software Architecture
Achieving stable software architecture represents a significant project
milestone at which the critical make/buy decisions should have been resolved.
Architecture representations provide a basis for balancing the trade-offs
between the problem space (requirements and constraints) and the solution
space (the operational product).
The architecture and process encapsulate many of the important (high-
payoff or high-risk) communications among individuals, teams,
organizations and stakeholders.
Poor architectures and immature processes are often given as reasons for
project failures.
A mature process, an understanding of the primary requirements, and a
demonstrable architecture are important prerequisites for predictable
planning.
Architecture development and process definition are the intellectual steps
that map the problem to a solution without violating the constraints; they
require human innovation and cannot be automated.
• An architecture framework is defined in terms of views that are abstractions of the
UML models in the design set. The design model includes the full breadth and depth of
information. An architecture view is an abstraction of the design model; it contains only
the architecturally significant information. Most real-world systems require four views:
design, process, component and deployment. The purposes of these views are as follows:
– Design: Describes architecturally significant structures and functions of the
design model.
– Process: Describes concurrency and control thread relationship among the design,
component and deployment views.
– Component: Describes the structure of the implementation set.
– Deployment: Describes the structures of the deployment set.
Figure 7-1 summarizes the artifacts of the design set, including the architecture
views and the architecture description.
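The idea that an architecture view is an abstraction of the design model, containing only the architecturally significant information, can be sketched as a simple filter; the model elements and significance flags below are illustrative assumptions.

```python
# Sketch of "an architecture view is an abstraction of the design model":
# the view keeps only the architecturally significant elements. The model
# elements and the significance flag are illustrative assumptions.
design_model = [
    {"name": "SensorGateway",  "view": "process",   "significant": True},
    {"name": "RingBufferImpl", "view": "component", "significant": False},
    {"name": "QueryService",   "view": "design",    "significant": True},
    {"name": "LogFormatter",   "view": "design",    "significant": False},
]

def architecture_view(model, view_name):
    """Extract one architecture view: significant elements of the model only."""
    return [e["name"] for e in model
            if e["significant"] and e["view"] == view_name]

print(architecture_view(design_model, "design"))  # only QueryService survives
```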
• The requirements model addresses the behavior of the system as seen by its end users,
analysts, and testers. This view is modeled statically using use case and class diagrams
and dynamically using sequence, collaboration, statechart, and activity diagrams.
40
MRITS
• The use case view describes how the system’s critical (architecturally significant) use
cases are realized by elements of the design model. It is modeled statically using use case
diagrams and dynamically using any of the UML behavioral diagrams.
• The design view describes the architecturally significant elements of the design model.
This view, an abstraction of the design model, addresses the basic structure and
functionality of the solution. It is modeled statically using class and object diagrams and
dynamically using any of the UML behavioral diagrams.
• The process view addresses the run-time collaboration issues involved in executing the
architecture on a distributed deployment model, including the logical software network
topology (allocation to processes and threads of control), interprocess communication, and
state management. This view is modeled statically using deployment diagrams and
dynamically using any of the UML behavioral diagrams.
• The component view describes the architecturally significant elements of the
implementation set. This view, an abstraction of the design model, addresses the software
source code realization of the system from the perspective of the project’s integrators and
developers, especially with regard to releases and configuration management. It is
modeled statically using component diagrams and dynamically using any of the UML
behavioral diagrams.
• The deployment view addresses the executable realization of the system, including the
allocation of logical processes in the distribution view (the logical software topology) to
physical resources of the deployment network (the physical system topology). It is
modeled statically using deployment diagrams and dynamically using any of the UML
behavioral diagrams.