OOSE 1
Historical Aspects
• A NATO study group coined the phrase “software engineering” at a 1968 meeting in Garmisch,
Germany (a very pretty place).
• They believed the development of software should be an engineering discipline to solve
what has been termed a “software crisis.”
• The crisis continues.
Software Life Cycle Model:
A software life cycle model (also called a process model) is a descriptive and diagrammatic
representation of the software life cycle. A life cycle model represents all the activities required
to make a software product transit through its life cycle phases, and it also captures the order in
which these activities are to be undertaken. In other words, a life cycle model maps the different
activities performed on a software product from its inception to its retirement. Different life cycle
models may map the basic development activities to phases in different ways. Thus, no matter
which life cycle model is followed, the basic activities are included in all life cycle models,
though they may be carried out in different orders. During any life cycle phase, more than one
activity may also be carried out.
Software Maintenance:
Software Maintenance is the process of modifying a software product after it has been delivered
to the customer. The main purpose of software maintenance is to modify and update software
applications after delivery to correct faults and to improve performance.
1. Corrective maintenance:
Corrective maintenance of a software product may be essential to rectify bugs observed
while the system is in use.
2. Adaptive maintenance:
This includes modifications and updates needed when customers require the product to run on
new platforms or new operating systems, or when they need the product to interface with
new hardware or software.
3. Perfective maintenance:
A software product needs perfective maintenance to support new features that the users want or
to change different functionalities of the system according to customer demands.
4. Preventive maintenance:
This type of maintenance includes modifications and updates made to prevent future problems
in the software. It aims to address problems which are not significant at the moment but
may cause serious issues in the future.
Requirements:
1. A condition or capability needed by a user to solve a problem or achieve an objective.
2. A condition or capability that must be met or possessed by a system or system component
to satisfy a contract, standard, specification, or other formally imposed document.
3. A documented representation of a condition or capability as in 1 and 2.
A software requirement can be of 3 types:
Functional requirements
Non-functional requirements
Domain requirements
Functional Requirements: These are the requirements that the end user specifically
demands as basic facilities that the system should offer. All these functionalities need to be
necessarily incorporated into the system as a part of the contract. These are represented or
stated in the form of input to be given to the system, the operation performed and the output
expected. They are basically the requirements stated by the user which one can see directly in
the final product, unlike the non-functional requirements. For example, in a hospital
management system, a doctor should be able to retrieve the information of his patients. Each
high-level functional requirement may involve several interactions or dialogues between the
system and the outside world. In order to accurately describe the functional requirements, all
scenarios must be enumerated. There are many ways of expressing functional requirements,
e.g., natural language, a structured or formatted language with no rigorous syntax, or a formal
specification language with proper syntax.
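As a rough illustration of the input/operation/output form, the hospital management example above could be recorded in a structured representation. This is only a sketch; the FunctionalRequirement class and its field names are illustrative, not part of any standard notation.

```python
# A minimal sketch of recording a functional requirement in a structured
# input / operation / output form. Field names are illustrative only.
from dataclasses import dataclass

@dataclass
class FunctionalRequirement:
    req_id: str     # unique identifier, useful for traceability
    inputs: str     # input to be given to the system
    operation: str  # the operation the system performs
    output: str     # the output expected by the user

# The hospital management example from the text, restated in this form.
fr1 = FunctionalRequirement(
    req_id="FR-01",
    inputs="Doctor's credentials and a patient identifier",
    operation="Retrieve the patient's record from the hospital database",
    output="The patient's information is displayed to the doctor",
)
print(fr1)
```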
Non-functional requirements: These are basically the quality constraints that the system
must satisfy according to the project contract. The priority or extent to which these factors are
implemented varies from one project to another. They are also called non-behavioral
requirements. They basically deal with issues like:
Portability
Security
Maintainability
Reliability
Scalability
Performance
Reusability
Flexibility
NFRs are classified into the following types:
Interface constraints
Performance constraints: response time, security, storage space, etc.
Operating constraints
Life cycle constraints: maintainability, portability, etc.
Economic constraints
The process of specifying non-functional requirements requires the knowledge of the
functionality of the system, as well as the knowledge of the context within which the system
will operate.
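As a sketch of how one such constraint can be made concrete, the following turns a response-time requirement into a measurable check. The lookup_patient function and the two-second threshold are assumptions invented for illustration, not part of any real contract.

```python
# A sketch of making a performance constraint (response time) measurable.
# lookup_patient() is a hypothetical stand-in for the operation covered
# by the constraint; the threshold is an assumed contractual value.
import time

MAX_RESPONSE_SECONDS = 2.0  # assumed limit from the project contract

def lookup_patient(patient_id: int) -> dict:
    # placeholder for the real operation being measured
    return {"id": patient_id, "name": "example"}

start = time.perf_counter()
lookup_patient(42)
elapsed = time.perf_counter() - start

assert elapsed <= MAX_RESPONSE_SECONDS, (
    f"NFR violated: {elapsed:.3f}s exceeds the {MAX_RESPONSE_SECONDS}s limit"
)
print(f"Response time {elapsed:.6f}s is within the {MAX_RESPONSE_SECONDS}s limit")
```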
Domain requirements:
Domain requirements are the requirements which are characteristic of a particular category
or domain of projects. The basic functions that a system of a specific domain must necessarily
exhibit come under this category. For instance, in an academic software product that maintains
records of a school or college, the functionality of being able to access the list of faculty and the
list of students of each grade is a domain requirement. These requirements are therefore
identified from the domain model and are not user specific.
Feasibility Study
A feasibility study in software engineering is a study to evaluate the feasibility of a proposed
project or system. The feasibility study is one of the four important stages of the software
project management process. As the name suggests, it is a feasibility analysis, or a measure of
how beneficial the development of the software product will be for the organization from a
practical point of view. A feasibility study is carried out to analyze whether the software
product will be right in terms of development, implementation, contribution of the project to
the organization, and so on.
Types of Feasibility Study:
The feasibility study mainly concentrates on the five areas mentioned below. Among these, the
economic feasibility study is the most important part of the feasibility analysis, and the legal
feasibility study is the least considered.
1. Technical Feasibility –
In technical feasibility, the current resources (both hardware and software) along with the
required technology are analyzed/assessed in order to develop the project. This study reports
whether the right resources and technologies for project development exist. It also analyzes
the technical skills and capabilities of the technical team, whether the existing technology can
be used, and whether maintenance and upgrading of the chosen technology are easy.
2. Operational Feasibility –
In operational feasibility, the degree to which the proposed system will satisfy the
requirements is analyzed, along with how easy the product will be to operate and
maintain after deployment. Other operational aspects include determining the
usability of the product and whether the solution suggested by the software
development team is acceptable.
3. Economic Feasibility –
In an economic feasibility study, the cost and benefit of the project are analyzed.
A detailed analysis is carried out of what the cost of developing the project will
be, including all costs of final development such as the hardware and software
resources required, design and development cost, operational cost, and so on. It is
then analyzed whether the project will be financially beneficial for the
organization (a worked sketch of this comparison follows this list).
4. Legal Feasibility –
In a legal feasibility study, the project is analyzed from a legal point of view. This
includes analyzing barriers to the legal implementation of the project, data
protection acts or social media laws, project certificates, licenses, copyright, etc.
Overall, the legal feasibility study determines whether the proposed project
conforms to legal and ethical requirements.
5. Schedule Feasibility –
In a schedule feasibility study, the timelines/deadlines of the proposed project are
analyzed, including how long the teams will take to complete the final project.
This has a great impact on the organization, as the purpose of the project may
fail if it cannot be completed on time.
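As promised above, here is a worked sketch of the cost-benefit comparison at the heart of an economic feasibility study. All figures are invented for illustration.

```python
# A simple cost-benefit calculation for an economic feasibility study.
# Every figure below is made up purely for illustration.

development_cost = 50_000        # hardware, software, design and development
annual_operational_cost = 5_000  # running cost per year after delivery
annual_benefit = 22_000          # estimated yearly saving from the product
years = 5                        # evaluation horizon

total_cost = development_cost + annual_operational_cost * years
total_benefit = annual_benefit * years
net_benefit = total_benefit - total_cost

print(f"Total cost over {years} years:    {total_cost}")
print(f"Total benefit over {years} years: {total_benefit}")
print(f"Net benefit:                      {net_benefit}")
print("Economically feasible" if net_benefit > 0 else "Not economically feasible")
```

Here the net benefit is positive (110,000 in benefits against 75,000 in costs), so on these assumed figures the project would be judged economically feasible.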
Object-Oriented Analysis:
Object-Oriented Analysis (OOA) is the first technical activity performed as part
of object-oriented software engineering. OOA introduces new concepts to
investigate a problem. It is based on a set of basic principles, which are as
follows:
1. The information domain is modeled.
2. Behavior is represented.
3. Function is described.
4. Data, functional, and behavioral models are partitioned to uncover greater detail.
5. Early models represent the essence of the problem, while later ones provide
implementation details.
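As a rough sketch of principles 1 and 2 (modeling the information domain and representing behavior), the hospital example used earlier might begin like this. The class and attribute names are illustrative choices, not the output of any prescribed method.

```python
# An OOA sketch: the information domain is captured as a class with
# attributes (principle 1), and behavior is represented as an operation
# on that class (principle 2). All names are illustrative.

class Patient:
    def __init__(self, patient_id: int, name: str):
        self.patient_id = patient_id  # information domain: identifying data
        self.name = name
        self.visits = []              # information domain: visit history

    def record_visit(self, note: str) -> None:
        # behavior: how a Patient responds to a "record visit" event
        self.visits.append(note)

p = Patient(1, "A. Example")
p.record_visit("2024-01-05: routine check-up")
print(p.visits)
```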
Object-Oriented Design:
An analysis model created using object oriented analysis is transformed by
object oriented design into a design model that works as a plan for software
creation. OOD results in a design having several different levels of modularity,
i.e., the major system components are partitioned into subsystems (a system-level
“module”), and data and their manipulation operations are encapsulated into
objects (a modular form that is the building block of an OO system).
In addition, OOD must specify some data organization of attributes and a
procedural description of each operation. The design pyramid for object-oriented
systems has the following four layers: the subsystem layer, the class and object
layer, the message layer, and the responsibilities layer.
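As a small sketch of what this means in code: the class below gives its attributes a concrete data organization and attaches a procedural description (here, a docstring plus the method body) to each operation. The billing example is invented for illustration.

```python
# An OOD-flavored sketch: attributes get a concrete data organization,
# and each operation carries a procedural description. The billing
# example is invented for illustration.

class Account:
    """Encapsulates billing data together with the operations on it."""

    def __init__(self, owner: str):
        # data organization: an owner name plus a list of charge amounts
        self.owner = owner
        self._charges = []

    def add_charge(self, amount: float) -> None:
        """Validate the amount, then append it to the charge list."""
        if amount <= 0:
            raise ValueError("charge must be positive")
        self._charges.append(amount)

    def balance(self) -> float:
        """Sum all recorded charges."""
        return sum(self._charges)

acct = Account("ward-3")
acct.add_charge(120.0)
acct.add_charge(35.5)
print(acct.balance())  # 155.5
```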
Object-oriented paradigm:
Major breakthroughs were made between approximately 1975 and 1985, with the development of
the so-called structured or classical paradigm. The techniques constituting the classical paradigm include
structured systems analysis.
The object-oriented paradigm has many strengths:
1. The object-oriented paradigm supports information hiding, a mechanism for ensuring that
implementation details are local to an object. Consequently, if during maintenance implementation details
are changed within an object, information hiding ensures that no other parts of the product need to be
modified to ensure consistency. Accordingly, the object-oriented paradigm makes maintenance quicker
and easier, and the chance of introducing a regression fault (that is, a fault inadvertently introduced into
one part of a product as a consequence of making an apparently unrelated change to another part of the
product) is greatly reduced. (A code sketch following this list illustrates information hiding.)
2. In addition to maintenance, the object-oriented paradigm also makes development easier. In many
instances, an object has a physical counterpart.
For example, a bank account object in a bank product corresponds to an actual bank account in the
bank for which the product is being written. Modeling therefore plays a major role in the
object-oriented paradigm.
The close correspondence between the objects in a product and their counterparts in the real world should
lead to better quality software.
3. Well-designed objects are independent units. As stated at the beginning of this section, an object
encompasses both attributes and the operations performed on the attributes. If all the operations
performed on the attributes of an object are included in that object, then the object can be considered a
conceptually independent entity. This conceptual independence sometimes is termed encapsulation. But
there is an additional form of independence, physical independence. In a well-designed object,
information hiding ensures that implementation details are hidden from everything outside that object.
The only allowable form of communication is sending a message to the object to carry out a specific
operation. The way that the operation is carried out is entirely the responsibility of the object itself. For
this reason, object-oriented design sometimes is referred to as responsibility-driven design [Wirfs-Brock,
Wilkerson, and Wiener, 1990] or design by contract [Meyer, 1992].
Another view of responsibility-driven design is derived from an example in [Budd, 2002].
4. A product built using the classical paradigm is implemented as a set of modules, but conceptually it is
essentially a single unit. This is one reason why the classical paradigm has been less successful when
applied to larger products. In contrast, when the object-oriented paradigm is used correctly, the resulting
product consists of a number of smaller, largely independent units. The object-oriented paradigm reduces
the level of complexity of a software product and hence simplifies both development and maintenance.
5. The object-oriented paradigm promotes reuse; because objects are independent entities, they can be
utilized in future products. This reuse of objects reduces the time and cost of both development and
maintenance.
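The sketch promised in strength 1 follows. Client code communicates with the object only by sending messages (calling its operations), so the internal representation can change without any client code being touched. The BankAccount class is invented for illustration.

```python
# A sketch of information hiding and message passing. Clients use only
# deposit() and balance(); the internal representation (cents stored as
# an int) is an implementation detail hidden inside the object.

class BankAccount:
    def __init__(self):
        self._cents = 0  # hidden detail; may change without affecting clients

    def deposit(self, amount: float) -> None:
        self._cents += round(amount * 100)

    def balance(self) -> float:
        return self._cents / 100

# Client code "sends messages"; it never touches _cents directly, so
# switching the representation (say, to decimal.Decimal) would require
# no changes here.
acct = BankAccount()
acct.deposit(19.99)
print(acct.balance())  # 19.99
```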
However, the object-oriented paradigm is by no means a panacea for all ills:
1. Like all approaches to software production, the object-oriented paradigm has to be used correctly; it is
just as easy to misuse the object-oriented paradigm as any other paradigm.
2. When correctly applied, the object-oriented paradigm can solve some (but not all) of the problems of
the classical paradigm.
3. The object-oriented paradigm has some problems of its own.
4. The object-oriented paradigm is the best approach available today. However, like all technologies, it is
certain to be superseded by a superior technology in the future.
Ethical issues
Software products are developed and maintained by humans. If those individuals are hardworking,
intelligent, sensible, up to date, and above all, ethical, then the chances are good that the
software products they develop and maintain will be satisfactory.
Software engineers shall commit themselves to making the analysis, specification, design, development,
testing and maintenance of software a beneficial and respected profession. In accordance with their
commitment to the health, safety, and welfare of the public, software engineers shall adhere to the
following Eight Principles:
1. Public—Software engineers shall act consistently with the public interest.
2. Client and Employer—Software engineers shall act in a manner that is in the best interests of their
client and employer consistent with the public interest.
3. Product—Software engineers shall ensure that their products and related modifications meet the
highest professional standards possible.
4. Judgment—Software engineers shall maintain integrity and independence in their professional
judgment.
5. Management—Software engineering managers and leaders shall subscribe to and promote an ethical
approach to the management of software development and maintenance.
6. Profession—Software engineers shall advance the integrity and reputation of the profession consistent
with the public interest.
7. Colleagues—Software engineers shall be fair to and supportive of their colleagues.
8. Self—Software engineers shall participate in lifelong learning regarding the practice of their profession
and shall promote an ethical approach to the practice of the profession.
Software development in theory:
Risks and other aspects of iteration and incrementation:
Another way of looking at iteration and incrementation is that the project as a whole is
divided into smaller mini projects (or increments). Each mini project extends the requirements, analysis,
design, implementation, and testing artifacts. Finally, the resulting set of artifacts constitutes the
complete software product. In fact, each mini project consists of more than just extending the artifacts.
It is essential to check that each artifact is correct (the test work flow) and make any necessary
changes to the relevant artifacts. This process of checking and modifying, then rechecking and
modifying, and so on, is clearly iterative in nature. It continues until the members of the development
team are satisfied with all the artifacts of the current mini project (or increment). When that happens,
they proceed to the next increment. The iterative-and-incremental model has a number of strengths:
1. Multiple opportunities are offered for checking that the software product is correct. Every iteration
incorporates the test work flow, so every iteration is another chance to check all the artifacts developed
up to this point. The later faults are detected and corrected, the higher the cost, as shown in Figure
1.5. Unlike the waterfall model, each of the many iterations of the iterative-and-incremental model
offers a further opportunity to find faults and correct them, thereby saving money.
2. The robustness of the underlying architecture can be determined relatively early in the life cycle. The
architecture of a software product includes the various component artifacts and how they fit together.
An analogy is the architecture of a cathedral, which might be described as Romanesque, Gothic, or
Baroque, among other possibilities. Similarly, the architecture of a software product might be described
as object-oriented, pipes and filters (UNIX or Linux components), or client–server (with a central server
providing file storage for a network of client computers).
3. The iterative-and-incremental model enables us to mitigate risks early. Risks are invariably involved in
software development and maintenance. In the Winburg mini case study,
for example, the original image recognition algorithm was not fast enough; there is an ever-present risk
that a completed software product will not meet its time constraints. Developing a software product
incrementally enables us to mitigate such risks early in the life cycle. For example, suppose a new local
area network (LAN) is being developed and there is concern that the current network hardware is
inadequate for the new software product. Then, the first one or two iterations are directed toward
constructing those parts of the software that interface with the network hardware. If it turns out that,
contrary to the developers’ fears, the network has the necessary capability, the developers can proceed
with the project, confident that this risk has been mitigated. On the other hand, if the network indeed
cannot cope with the additional traffic that the new LAN generates, this is reported to the client early in
the life cycle, when only a small proportion of the budget has been spent. The client can now decide
whether to cancel the project, extend the capabilities of the existing network, buy a new and more
powerful network, or take some other action.
4. We always have a working version of the software.
Suppose a software product is developed using the idealized life-cycle model of Figure 2.1. Only at the
very end of the project is there a working version of the software product. In contrast, when the
iterative-and-incremental life-cycle model is used, at the end of each iteration, there is a working
version of part of the overall target software product. The client and the intended users can experiment
with that version and determine what changes are needed to ensure that the future complete
implementation meets their needs. These changes can be made to a subsequent increment, and the
client and users can then determine if further changes are needed. A variation on this is to deliver partial
versions of the software product, not only for experimentation but to smooth the introduction of the
new software product in the client organization. Change is almost always perceived as a threat. All too
frequently, users fear that the introduction of a new software product within the workplace will result in
them losing their jobs to a computer. However, introducing a software product gradually can have two
benefits. First, the understandable fear of being replaced by a computer is diminished. Second, it is
generally easier to learn the functionality of a complex software product if that functionality is
introduced stepwise over a period of months, rather than as a whole.
5. There is empirical evidence that
the iterative-and-incremental life-cycle works. The pie chart of Figure 1.1 shows the results of the report
from The Standish Group on projects completed in 2004 [Hayes, 2004]. In fact, this report (the so-called
CHAOS Report—see Just in Case You Wanted to Know Box 2.2) is produced every 2 years. Figure 2.7
shows the results for 1994 through 2004. The percentage of successful products increased steadily from
16 percent in 1994 to 34 percent in 2002, but then decreased to 29 percent in 2004. In both the 2002
[Softwaremag.com, 2004] and 2004 [Hayes, 2004] reports, one of the factors associated with the
successful projects was the use of an iterative process. (The reasons given for the decrease in the
percentage of successful projects in 2004 included more large projects than in 2002, use of the waterfall
model, lack of user involvement, and lack of support from senior executives [Hayes, 2004].)