Notes Subject Software Engineering
SESSION: 2016-17
Evaluation scheme:
Name of Subject: Software Engineering
Periods: L = 3, T = 1, P = 0
Sessional: CT = 30, TA = 20, Total = 50
End Semester Exam (ESE): 100
Total Marks: 150; Credit: 4
INDEX
Unit-1
Introduction to Software Engineering
1.1 Process and Project
1.2 Software Components
1.3 Software Characteristics
1.4 Software Crisis
1.5 Software Engineering Processes
1.6 Software Development Life Cycle (SDLC) Models
1.6.1 Classical Waterfall Model
1.6.2 Iterative waterfall model
1.6.3 Prototype Model
1.6.4 Evolutionary Development Models
1.6.5 Spiral Model
1.6.6 RAD Model
1.6.7 Iterative Enhancement Models.
Unit-2
Software Requirement Specifications
Unit-3
Software Design
3.1 Software Design
3.2 Software Design principles
3.3 Architectural design
3.4 Coupling and Cohesion Measures
3.5 Function Oriented Design
3.6 Object Oriented Design, Top Down and Bottom-Up Design
3.7 Software Measurement and Metrics
3.7.1 Halstead's Software Science
3.7.2 Function Point (FP)
Unit-4
Software Testing and Maintenance
4.1 Software Testing
4.2 Levels of Testing:
4.2.1 Unit Testing
4.2.2 Integration Testing
4.2.3 Acceptance Testing
4.3 Top-Down and Bottom-Up Testing
4.4 Functional Testing (Black Box Testing)
4.5 Structural Testing (White Box Testing)
4.6 Mutation Testing
4.7 Performance testing
4.8 Coding
4.8.1 Coding Standards
4.8.2 Coding Guidelines
Unit-5
Software Maintenance and project management
5.1 Software Maintenance
5.2 Software Re-Engineering
5.3 Software Reverse Engineering
5.4 Software Configuration management (CM)
5.4.1 Functions of SCM
5.4.2 SCM Terminology
5.4.3 SCM Activities
Unit-1: Introduction to Software Engineering
Software products may be
· Generic - developed to be sold to a range of different customers e.g. PC
software such as Excel or Word.
· Bespoke (custom) - developed for a single customer according to their
specification.
New software can be created by developing new programs, configuring generic software
systems or reusing existing software.
2) Software is not manufactured, but it is developed through the life cycle concept.
3) Reusability of components
4) Software is flexible and can be amended to meet new requirements
5) The cost of making a change is low if it is carried out early, but very high at later stages.
Program vs. Software
· Programmers have skills for programming but without the engineering
mindset about a process discipline
The Standish Group report states:
1. About US$250 billion is spent per year in the US on application development.
2. Of this, about US$140 billion is wasted because projects are abandoned or reworked, largely as a result of not following best practices and standards.
3. 10% of client/server applications are abandoned or restarted from scratch.
4. 20% of applications are significantly altered to avoid disaster.
5. 40% of applications are delivered significantly late.
6. Only 30% are successful.
Software Engineering
Software engineering is concerned with the theories, methods and tools for developing,
managing and evolving software products.
· “The systematic application of tools and techniques in the development of
computer-based applications.” (Sue Conger in The New Software Engineering)
· “Software Engineering is about designing and developing high-quality software.”
(Shari Lawrence Pfleeger in Software Engineering -- The Production of Quality
Software)
· A systematic approach to the analysis, design, implementation and maintenance of
software. (The Free On-Line Dictionary of Computing)
· The technological and managerial discipline concerned with systematic production
and maintenance of software products that are developed and modified on time and
within cost constraints (R. Fairley)
· A discipline that deals with the building of software systems which are so large that
they are built by a team or teams of engineers (Ghezzi, Jazayeri, Mandrioli)
Software Engineering vs. Software Programming
• Software engineering is about 50 years old, whereas traditional engineering as a whole is thousands of years old.
• Software engineering is often busy researching the unknown (e.g., deriving an algorithm) right in the middle of a project, whereas traditional engineering normally separates these activities: a project is supposed to apply research results in known or new clever ways to build the desired result.
• Software engineering has only recently started to codify and teach best practice in the form of design patterns, whereas some traditional engineering disciplines have thousands of years of best-practice experience handed down from generation to generation via a field's literature, standards, rules and regulations.
Activities undertaken during maintenance: - Maintenance of a typical software product requires much more effort than was necessary to develop the product itself. Many studies carried out in the past confirm this and indicate that the relative effort of development of a typical software product to its maintenance effort is roughly in the ratio 40:60. This phase continues for as long as the software is in use.
1.6.2 Iterative Waterfall Model
• The classical waterfall model assumes that no errors occur during any phase of development.
• The iterative waterfall model introduces feedback paths from each process phase to the previous phases.
• It is still preferable to detect errors in the same phase in which they occur.
• Reviews are conducted after each milestone.
• It is difficult to define all requirements at the beginning of the project.
• Model is not suitable for accommodating any change
• It does not scale up well to large projects
• Inflexible partitioning of the project into distinct stages makes it difficult to
respond to changing customer requirements.
• Therefore, this model is only appropriate when the requirements are well-
understood and changes will be fairly limited during the design process.
• Few business systems have stable requirements.
• The waterfall model is mostly used for large systems engineering projects where a
system is developed at several sites.
1. The new system requirements are defined in as much detail as possible. This
usually involves interviewing a number of users representing all the departments or
aspects of the existing system.
2. A preliminary design is created for the new system.
3. A first prototype of the new system is constructed from the preliminary design.
This is usually a scaled-down system, and represents an approximation of the
characteristics of the final product.
4. The users thoroughly evaluate the first prototype, noting its strengths and
weaknesses, what needs to be added, and what should be removed. The
developer collects and analyzes the remarks from the users.
5. The first prototype is modified, based on the comments supplied by the users, and a
second prototype of the new system is constructed.
6. The second prototype is evaluated in the same manner as was the first prototype.
7. The preceding steps are iterated as many times as necessary, until the users are
satisfied that the prototype represents the final product desired.
8. The final system is constructed, based on the final prototype.
9. The final system is thoroughly evaluated and tested. Routine maintenance is carried
out on a continuing basis to prevent large-scale failures and to minimize downtime.
A risk of prototyping is that customers may mistake the prototype for the working system: developers may take "shortcuts" and patch up the prototype instead of redesigning it, and customers may believe the system is almost done and only a few fixes are needed.
Advantage of Prototype model
• Suitable for large systems for which there is no manual process to define their requirements.
• User training to use the system.
• User services determination.
• System training.
• Quality of software is good.
• Requirements are not frozen.
Disadvantage of Prototype model
• It is difficult to find all the requirements of the software initially.
• It is very difficult to predict how the system will work after development.
[Figure: Evolutionary development — concurrent specification, development and validation activities transform an outline description into an initial version, intermediate versions, and a final version.]
Advantages:
· Deals constantly with changes
· Provides quickly an initial version of the system
· Involves all development teams
Disadvantages:
· Quick fixes may be involved
· “Invisible” process, not well-supported by documentation
· The system’s structure can be corrupted by continuous change
• Progressively more complete version of the software gets built with each iteration
around the spiral
1.6.6 Rapid Application Development (RAD) Model
RAD is proposed when requirements and solutions can be modularized as independent
system or software components, each of which can be developed by different teams. User
involvement is essential from the requirements phase to delivery of the product. The process
starts with a rapid prototype, which is given to the user for evaluation. User feedback
is obtained and the prototype is refined. The SRS and design document are prepared with the
association of the users. RAD becomes faster if the software engineer uses component
technology (CASE tools) such that the components are readily available for reuse. Since
the development is distributed among component-development teams, the teams work in
tandem and total development is completed in a short period (i.e., 60 to 90 days).
RAD Phases
• Requirements planning phase (a workshop utilizing structured discussion of
business problems)
• User description phase – automated tools capture information from users
• Construction phase – productivity tools, such as code generators, screen
generators, etc. inside a time-box. (“Do until done”)
• Cutover phase -- installation of the system, user acceptance testing and user
training
Advantage of RAD
· Dramatic time savings in the systems development effort
· Can save time, money and human effort
· Tighter fit between user requirements and system specifications
· Works especially well where speed of development is important
Disadvantage of RAD
· More speed and lower cost may lead to lower overall system quality
· Danger of misalignment of system developed via RAD with the business due to
missing information
· May have inconsistent internal designs within and across systems
· Possible violation of programming standards related to inconsistent naming
conventions and inconsistent documentation
Unit-2: Software Requirement Specifications
2.1 Software Requirement
1. A condition or capability needed by a user to solve a problem or achieve an
objective.
2. "A condition or a capability that must be met or possessed by a system ... to satisfy a
contract, standard, specification, or other formally imposed document."
• Functional requirements: - describe what the software has to do. They are often
called product features, and depend on the type of software, the expected users and the
type of system in which the software is used.
• Non-functional requirements: - are mostly quality requirements that stipulate
how well the software does what it has to do. These define system properties and
constraints, e.g. reliability, response time and storage requirements. Constraints include
I/O device capability, system representations, etc.
• User requirements:- Statements in natural language plus diagrams of the services
the system provides and its operational constraints.
• System requirements: - A structured document setting out detailed descriptions of
the system’s functions, services and operational constraints. Defines what should
be implemented so may be part of a contract between client and contractor.
Requirements engineering processes:- The processes used for RE vary widely depending
on the application domain, the people involved and the organisation developing the
requirements. However, there are a number of generic activities common to all processes:
· Requirements elicitation
· Requirements analysis
· Requirements documentation
· Requirements review
[Figure: Requirements engineering process — Requirement Elicitation → Requirement Analysis → Requirement Documentation → Requirement Review → SRS]
Ø Schedule feasibility: - Are the project's schedule assumptions realistic?
2.1.3 Analysis
Requirement analysis phase analyze, refine and scrutinize requirements to make
consistent & unambiguous requirements.
1. Draw the context diagram
The context diagram is a simple model that defines the boundaries and interface of the
proposed system.
2. Development of prototype
Prototype helps the client to visualize the proposed system and increase the understanding
of requirement. Prototype may help the parties to take final decision.
3. Model the requirement
This process usually consists of various graphical representations of the functions, data entities,
external entities and the relationships between them. This graphical view may help to find
incorrect, inconsistent or missing requirements. Such models include the data flow diagram,
entity relationship diagram, data dictionary and state transition diagram.
4. Finalize the requirement
After modeling the requirements, inconsistencies and ambiguities have been identified and
corrected, and the flow of data among various modules has been analyzed. The requirements
are now finalized, and the next step is to document them in the prescribed format.
2.1.4 Documentation
This is the way of representing requirements in a consistent format. The SRS serves many
purposes depending upon who is writing it.
• This causes a communication gap between the parties involved in the development
project. A basic purpose of software requirements specification is to bridge this
communication gap.
Characteristics of good SRS document:- Some of the identified desirable qualities of the
SRS documents are the following-
Concise- The SRS document should be concise and at the same time unambiguous,
consistent, and complete. An SRS is unambiguous if and only if every requirement stated
has one and only one interpretation.
Structured- The SRS document should be well-structured. A well-structured document is
easy to understand and modify. In practice, the SRS document undergoes several revisions
to cope with the customer requirements.
Black-box view- It should specify only what the system should do and refrain from stating
how to do it. This means that the SRS document should specify the external behaviour of the
system and not discuss implementation issues.
Conceptual integrity- The SRS document should exhibit conceptual integrity so that the
reader can easily understand the contents.
Verifiable- All requirements of the system as documented in the SRS document should be
verifiable. This means that it should be possible to determine whether or not requirements
have been met in an implementation.
2.3 Software Quality
• The degree to which a system, component, or process meets specified
requirements.
• The degree to which a system, component or process meets customer or user needs
or expectations.
2.3.1 McCall Software Quality Model
i. Product Operation
Factors which are related to the operation of a product are combined. The factors are:
• Correctness
• Efficiency
• Integrity
• Reliability
• Usability
These five factors are related to operational performance, convenience, ease of usage and
its correctness. These factors play a very significant role in building customer’s
satisfaction.
ii. Product Revision
The factors which are required for testing & maintenance are combined and are given
below:
• Maintainability
• Flexibility
• Testability
These factors pertain to the testing & maintainability of software. They give us idea about
ease of maintenance, flexibility and testing effort. Hence, they are combined under the
umbrella of product revision.
iii. Product Transition
We may have to transfer a product from one platform to another platform or from one
technology to another. The factors related to such a transfer are combined and
given below:
• Portability
• Reusability
• Interoperability
Validation:- is the process of evaluating software at the end of software development to
ensure compliance with the software requirements. Testing is a common method of validation.
Software V&V is a system engineering process employing rigorous methodologies for
evaluating the correctness and quality of the software product throughout the software life
cycle.
2.5 SQA Plans, Software Quality Frameworks
Quality plan structure
§ Product introduction;
§ Product plans;
§ Process descriptions;
§ Quality goals;
§ Risks and risk management.
Quality plans should be short, succinct documents. If they are too long, no-one will read
them.
• Continued surveillance
ISO 9000 vs. CMM:
· ISO certification is awarded by an international standards body and can be quoted as an official document, whereas an SEI CMM assessment is purely for internal use.
· ISO 9000 deals primarily with the manufacturing industry and the provisioning of services, whereas CMM was developed specifically for the software industry and therefore addresses software issues.
· ISO 9000 aims at roughly level 3 of CMM, whereas CMM goes beyond quality assurance and leads to TQM.
· ISO 9000 has customer focus as its primary aim and follows procedural controls, whereas CMM provides a list of Key Process Areas for proceeding from a lower CMM level to a higher one, providing gradual quality improvements.
Unit-3: Software Design
3.1 Software Design
• Design is the highly significant phase in the software development where the
designer plans “how” a software system should be produced in order to make it
functional, reliable and reasonably easy to understand, modify and maintain.
• The SRS tells us what a system does and becomes input to the design process, which tells us
"how" the software system works.
• Software design involves identifying the components of the software design, their inner
workings, and their interfaces from the SRS. The principal work product of this activity is
the software design document (SDD), which is also referred to as the software design
description.
• Software design deals with transforming the customer requirements, as described in
the SRS document, into a form (a set of documents) called software design
document that is suitable for implementation in a programming language.
Characteristics and objectives of a good software design
Good design is the key of successful product.
• Correctness: A good design should correctly implement all the functionalities
identified in the SRS document.
• Understandability: A good design is easily understandable.
• Efficiency: It should be efficient.
• Maintainability: It should be easily amenable to change.
or related functions. Cohesion is strong if all parts are needed for the functioning of the other
parts. An important design objective is to maximize module cohesion and minimize module
coupling.
Coupling
It is the measure of the degree of interdependence between modules. Coupling is high
between components if they depend heavily on one another (e.g., there is a lot of
communication between them).
Decomposition and modularization
Decomposition and modularization break large software into small independent units, usually
with the goal of placing different functionality or responsibility in different components.
Design Complexity
Complexity is another design criterion used in the process of decomposition and refinement.
A module should be simple enough to be regarded as a single unit for purposes of
verification and modification.
A measure of complexity for a given module is proportional to the number of other
modules calling this module (termed fan-in) and the number of modules called by the
given module (termed fan-out).
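The fan-in/fan-out measure above can be sketched directly from a call graph. The module names and call relationships below are hypothetical, invented purely for illustration:

```python
# Compute fan-in and fan-out for each module from a call graph.
# Keys are modules; values are the modules each one calls.
calls = {
    "main": ["read_input", "process", "report"],
    "process": ["validate", "compute"],
    "report": ["compute"],
    "read_input": [],
    "validate": [],
    "compute": [],
}

def fan_out(module):
    """Number of modules called by the given module."""
    return len(calls[module])

def fan_in(module):
    """Number of other modules that call the given module."""
    return sum(module in callees for callees in calls.values())

for m in calls:
    print(m, "fan-in:", fan_in(m), "fan-out:", fan_out(m))
```

A module like `compute` with high fan-in is widely reused; a module with high fan-out depends on many others and tends to be more complex to verify.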
Top-down approach (is also known as step-wise design) is essentially the breaking down
of a system to gain insight into its compositional sub-systems. In a top-down approach an
overview of the system is formulated, specifying but not detailing any first-level
subsystems. Each subsystem is then refined in yet greater detail, sometimes in many
additional subsystem levels, until the entire specification is reduced to base elements. A
top-down model is often specified with the assistance of "black boxes", these make it
easier to manipulate. However, black boxes may fail to elucidate elementary mechanisms
or be detailed enough to realistically validate the model.
Bottom-up approach is the piecing together of systems to give rise to grander systems,
thus making the original systems sub-systems of the emergent system. In a bottom-up
approach the individual base elements of the system are first specified in great detail.
These elements are then linked together to form larger subsystems, which then in turn are
linked, sometimes in many levels, until a complete top-level system is formed. This
strategy often resembles a "seed" model, whereby the beginnings are small but eventually
grow in complexity and completeness. However, "organic strategies" may result in a tangle
of elements and subsystems, developed in isolation and subject to local optimization as
opposed to meeting a global purpose.
• Alternative architectural styles or patterns are analyzed to derive the structure that
is best suited to customer requirements and quality attributes.
• Once an alternative has been selected, the architecture is elaborated using an
architectural design method.
Layered architectures
Data-centered architectures: - A data store (e.g., a file or database) resides at the center
of this architecture and is accessed frequently by other components that update, add, delete,
or otherwise modify data within the store. Client software accesses a central repository. In
some cases the data repository is passive. That is, client software accesses the data
independent of any changes to the data or the actions of other client software. A variation
on this approach transforms the repository into a “blackboard” that sends notifications to
client software when data of interest to the client change.
3.4 Coupling and cohesion Measures
Coupling: - It is the measure of the degree of interdependence between modules. Coupling
is high between components if they depend heavily on one another (e.g., there is a lot of
communication between them).
Types of Coupling:-
1. Data coupling: communication between modules is accomplished through well-defined
parameter lists consisting of data information items
2. Stamp coupling: Stamp coupling occurs between module A and B when complete data
structure is passed from one module to another.
3. Control coupling: a module controls the flow of control or the logic of another module.
This is accomplished by passing control information items as arguments in the argument
list.
4. Common coupling: modules share common or global data or file structures. Both
modules depend on the details of the common structure.
5. Content coupling: A module is allowed to access or modify the contents of another,
e.g. modify its local or private data items. This is the strongest form of coupling.
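To make the distinction concrete, here is a small hypothetical sketch contrasting data coupling (the loosest form listed above) with common coupling (a much stronger form); the payroll names are invented for illustration:

```python
# Data coupling: modules communicate only through a well-defined
# parameter list of data items.
def compute_gross_pay(hours_worked, hourly_rate):
    return hours_worked * hourly_rate

# Common coupling: modules share a global data structure, so each
# module depends on the details of that shared structure.
payroll_record = {"hours": 40, "rate": 15.0, "gross": 0.0}

def compute_gross_pay_common():
    # Reads and writes the shared global directly.
    payroll_record["gross"] = payroll_record["hours"] * payroll_record["rate"]

print(compute_gross_pay(40, 15.0))   # data-coupled call
compute_gross_pay_common()
print(payroll_record["gross"])       # common-coupled result
```

The data-coupled version can be tested and reused in isolation; the common-coupled version cannot be understood without knowing the layout of `payroll_record`.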
Cohesion :- It is a measure of the degree to which the elements of a module are
functionally related. Cohesion is weak if elements are bundled simply because they
perform similar or related functions. Cohesion is strong if all parts are needed for the
functioning of the other parts. An important design objective is to maximize module cohesion
and minimize module coupling.
Types of Cohesion:-
1. Functional cohesion: A and B are part of a single functional task, which is a very good
reason for them to be contained in the same procedure. It is achieved when the components
of a module cooperate in performing exactly one function, e.g., POLL_SENSORS,
GENERATE_ALARM, etc.
2. Sequential cohesion: Module A outputs some data which forms the input to B. This is
the reason for them to be contained in the same procedure.
3. Communicational cohesion: is achieved when software units or components of a
module sharing common information or a data structure are grouped in one module.
4. Procedural cohesion: is the form of cohesion obtained when software components are
grouped in a module to perform a series of functions following a certain procedure
specified by the application requirements.
5. Temporal cohesion: A module exhibits temporal cohesion when it contains tasks that are
related by the fact that all tasks must be executed in the same time-span. Examples are
functions required to be activated for a particular input event, or during the same state of
operation.
6. Logical cohesion: refers to modules designed using functions that are logically related,
such as input/output functions and communication-type functions (such as send and receive).
7. Coincidental cohesion: exists in modules that contain instructions that have little or no
relationship to one another.
be used in both the preliminary and detailed design phases. Using pseudocode, the
designer describes system characteristics using short, concise, English-language
phrases that are structured by key words such as If-Then-Else, While-Do, and End.
Example:-
COUNT = 0
STOCK = STOCK + QUANTITY
OR
READ THE DATA FROM SOURCE
WRITE THE DATA TO DESTINATION
Categories of Metrics
i. Product metrics: describe the characteristics of the product such as size,
complexity, design features, performance, efficiency, reliability, portability,
etc.
ii. Process metrics: describe the effectiveness and quality of the processes that
produce the software product. Examples are:
• effort required in the process
• time to produce the product
• effectiveness of defect removal during development
• number of defects found during testing
• maturity of the process
iii. Project metrics: describe the project characteristics and execution.
Examples are:-
• number of software developers
• staffing pattern over the life cycle of the software
• cost and schedule
• productivity
Volume
• The unit of measurement of volume is the common unit for size, "bits". It is the actual
size of a program if a uniform binary encoding of the vocabulary is used.
V = N × log2(η)
where N is the program length and η is the program vocabulary.
Program Level
• The value of L ranges between zero and one, with L = 1 representing a program
written at the highest possible level (i.e., with minimum size), where V* is the potential
volume:
L = V* / V
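A minimal sketch of these Halstead measures, computed from assumed operator/operand counts (the token counts and the potential volume below are illustrative values, not taken from a real program):

```python
import math

# Hypothetical counts for a small program:
eta1, eta2 = 10, 8   # number of unique operators and unique operands
N1, N2 = 40, 30      # total occurrences of operators and operands

eta = eta1 + eta2    # program vocabulary
N = N1 + N2          # program length

# Volume: V = N * log2(eta), measured in bits.
V = N * math.log2(eta)

# Program level: L = V* / V, where V* is the potential (minimal) volume.
V_star = 20.0        # assumed potential volume, for illustration only
L = V_star / V

print(f"vocabulary={eta}, length={N}, volume={V:.1f} bits, level={L:.3f}")
```

Note how L falls as V grows: a verbose program implementing the same algorithm has a larger volume and therefore a lower program level.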
Estimated Program Length
• Halstead's estimated program length is N̂ = η1 log2 η1 + η2 log2 η2, where η1 and η2
are the numbers of unique operators and unique operands respectively.
Function Point (FP)
• Advantages
– User's point of view: what the user requests and receives from the system
– Independent of technology, language, tools and methods
– Can be estimated from the SRS or design specification document
– Since it comes directly from first-phase documents, re-estimation on expansion or
modification is easy
• Disadvantages
– Difficult to estimate
– Experience-based/subjective
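The FP formula itself is not given in this excerpt; as a sketch, the commonly used computation multiplies an Unadjusted Function Point count (weighted counts of inputs, outputs, inquiries, and files) by a Complexity Adjustment Factor derived from 14 rated influence factors. The counts and ratings below are hypothetical:

```python
# Sketch of a Function Point computation using the standard
# average-complexity weights; the counts below are invented.
weights = {"EI": 4, "EO": 5, "EQ": 4, "ILF": 10, "EIF": 7}
counts = {"EI": 10, "EO": 7, "EQ": 5, "ILF": 4, "EIF": 2}

ufp = sum(counts[k] * weights[k] for k in weights)  # Unadjusted FP

# 14 general system characteristics, each rated 0..5 (assumed ratings).
ratings = [3] * 14
caf = 0.65 + 0.01 * sum(ratings)  # Complexity Adjustment Factor

fp = ufp * caf
print(f"UFP={ufp}, CAF={caf:.2f}, FP={fp:.1f}")
```

Because the inputs come from the requirements document rather than from code, this estimate can be redone whenever the requirements expand, which is exactly the advantage listed above.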
3.8 Coding styles- Coding guidelines provide only general suggestions regarding the
coding style to be followed.
1) Do not use a coding style that is too clever or too difficult to understand- Code
should be easy to understand. Clever coding can obscure meaning of the code and
hamper understanding. It also makes maintenance difficult.
2) Avoid obscure side effects- The side effects of a function call include modification
of parameters passed by reference, modification of global variables, and I/O
operations. An obscure side effect is one that is not obvious from a casual
examination of the code. Obscure side effects make it difficult to understand a
piece of code.
3) Do not use an identifier for multiple purposes- Programmers often use the
same identifier to denote several temporary entities. There are several things
wrong with this approach and hence it should be avoided. Some of the problems
caused by the use of variables for multiple purposes are as follows:
Ø Each variable should be given a descriptive name indicating its purpose.
This is not possible if an identifier is used for multiple purposes. Use of a
variable for multiple purposes can lead to confusion and make it difficult to
read and understand the code.
Ø Use of variables for multiple purposes usually makes future enhancements
more difficult.
4) The code should be well-documented- As a rule of thumb, there must be at least
one comment line on the average for every three source lines.
5) Do not use goto statements- Use of goto statements makes a program unstructured
and very difficult to understand.
Unit-4 Software Testing
Test case:
This is the triplet [I,S,O], where I is the data input to the system, S is the state of the
system at which the data is input, and O is the expected output of the system.
Test suite:
This is the set of all test cases with which a given software product is to be tested. The set
of test cases is called a test suite; hence any combination of test cases may form a test
suite.
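The [I, S, O] triplet above can be represented directly as a data structure; the system under test and the example test cases below are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    """A test case as the triplet [I, S, O]."""
    inputs: tuple     # I: data input to the system
    state: str        # S: state of the system when the input is applied
    expected: object  # O: expected output of the system

# A test suite is simply a set of such test cases.
test_suite = [
    TestCase(inputs=(2, 3), state="ready", expected=5),
    TestCase(inputs=(0, 0), state="ready", expected=0),
]

def add(a, b):  # hypothetical system under test
    return a + b

for tc in test_suite:
    assert add(*tc.inputs) == tc.expected
print("all test cases passed")
```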
Test data
Inputs which have been devised to test the system
Test cases
Inputs to test the system and the predicted outputs from these inputs if the system operates
according to its specification
4.2.1 Unit testing is undertaken after a module has been coded and successfully reviewed.
• Unit testing (or module testing) is the testing of different units (or modules) of a
system in isolation.
• In order to test a single module, a complete environment is needed to provide all
that is necessary for execution of the module. That is, besides the module under test
itself, the following steps are needed in order to be able to test the module.
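As a sketch of the point above, a module under test often needs a stub standing in for a module it calls, so that the unit can run in isolation. The names here are hypothetical:

```python
import unittest
from unittest import mock

# Module under test: computes a total using an external tax service
# that may not exist yet at unit-testing time.
def total_with_tax(amount, tax_service):
    return amount + tax_service(amount)

class TotalWithTaxTest(unittest.TestCase):
    def test_total_uses_tax_service(self):
        # Stub: replaces the real tax service so the module is
        # tested in isolation from its environment.
        stub_tax = mock.Mock(return_value=5.0)
        self.assertEqual(total_with_tax(100.0, stub_tax), 105.0)
        stub_tax.assert_called_once_with(100.0)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TotalWithTaxTest)
result = unittest.TextTestRunner().run(suite)
```

Here the stub plays the role of the missing environment: the unit's interface is exercised without depending on any other module's implementation.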
4.2.2 Integration Testing: Integration is the process by which components are
aggregated to create larger components. Integration testing is done to show that,
even though the components were individually satisfactory (after passing component
testing), their combination may be incorrect or inconsistent.
• The purpose of unit testing is to determine that each independent module is
correctly implemented. This gives little chance to determine that the interface
between modules is also correct, and for this reason integration testing must be
performed.
• Focuses on interaction of modules in a subsystem
• Unit tested modules combined to form subsystems
• Test cases to “exercise” the interaction of modules in different ways
Purpose of Integration Testing is to identify bugs of the following types: protocol-design
bugs, input and output format bugs, inadequate protection against corrupted data, wrong
subroutine call sequences, call-parameter bugs, and misunderstood entry or exit parameter
values. The entire system is viewed as a collection of subsystems (sets of classes)
determined during system and object design.
Goal: Test all interfaces between subsystems and the interaction of subsystems
The Integration testing strategy determines the order in which the subsystems are selected
for testing and integration.
4.2.3 System Testing
System tests are designed to validate a fully developed system to assure that it meets its
requirements. There are essentially three main kinds of system testing:
Alpha Testing. Alpha testing refers to the system testing carried out by the test team
within the developing organization.
Beta testing. Beta testing is the system testing performed by a select group of friendly
customers.
Acceptance Testing. Acceptance testing is the system testing performed by the customer
to determine whether he should accept the delivery of the system.
[Figure: Top-down testing — the testing sequence proceeds from Level 1 downward through the lower levels, using stubs in place of modules that are not yet integrated.]
Bottom-up testing
· each subsystem is tested separately and then the full system is tested
· Primary purpose of testing each subsystem is to test the interfaces among various
modules making up the subsystem. Both control and data interfaces are tested
· Integrate individual components in levels until the complete system is created
T e st
d ri v e r s
T e s t in g
Lev el N L e ve l N Le vel N Lev el N Lev el N
s eq u e n c e
T e st
d ri v e r s
Le ve l N – 1 Lev el N– 1 Lev el N– 1
• For example, programmers may improperly use < instead of <=, or conversely <=
for <. Boundary value analysis leads to selection of test cases at the boundaries of
the different equivalence classes.
• Suppose we have an input variable x with a range of 1–100; the boundary values are 1, 2, 99, and 100.
Suppose both inputs x and y are bounded by the intervals [a, b] and [c, d] respectively. For input x, we may design test cases with the values a and b, just above a, and just below b; for input y, we may use the values c and d, just above c, and just below d. These test cases have a higher chance of detecting an error.
• The basic idea of boundary value analysis is to use input values at their minimum, just above the minimum, at a nominal value, just below the maximum, and at the maximum
• Thus, for a program with n variables, boundary value analysis yields 4n + 1 test cases
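As a sketch, the 4n + 1 selection described above can be automated; the helper name and the example ranges are illustrative, not from the notes. Each variable in turn takes its min, min+1, max−1, and max values while the others stay at their nominal value, plus one all-nominal case.

```python
def boundary_value_cases(ranges):
    """Generate the 4n+1 boundary value test cases for n input
    variables, each given as an inclusive (min, max) range."""
    nominal = [(lo + hi) // 2 for lo, hi in ranges]
    cases = [tuple(nominal)]  # the single all-nominal case
    for i, (lo, hi) in enumerate(ranges):
        for v in (lo, lo + 1, hi - 1, hi):  # min, min+, max-, max
            case = list(nominal)
            case[i] = v
            cases.append(tuple(case))
    return cases

# Two variables: x in [1, 100] and y in [1, 50] -> 4*2 + 1 = 9 cases
print(len(boundary_value_cases([(1, 100), (1, 50)])))  # → 9
```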
– input > max is an invalid class
2. The semantic content of the specification is analyzed and transformed into a Boolean graph linking the causes and effects.
3. The graph is converted into a decision table.
4. The columns in the decision table are converted into test cases.
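Steps 3 and 4 can be sketched for a hypothetical two-cause specification; the causes, the effect rule, and the helper name below are assumptions for illustration only.

```python
from itertools import product

def decision_table(causes, effect):
    """Steps 3-4: enumerate every combination of cause truth values
    (one column per combination) and derive the effect for each;
    every column then becomes one test case."""
    return [(dict(zip(causes, combo)), effect(*combo))
            for combo in product([False, True], repeat=len(causes))]

# Hypothetical spec: effect "grant access" holds only when cause C1
# ("username valid") AND cause C2 ("password valid") are both true.
for column, granted in decision_table(["C1", "C2"], lambda c1, c2: c1 and c2):
    print(column, "->", granted)
```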
4.5 Structural Testing (White Box Testing)
corresponding to node j. A flow graph can easily be generated from the code of any program.
Example
• Flow graph representation is the first step of path testing
• The second step is to draw the DD graph from the flow graph
• The DD path graph (decision-to-decision path graph) concentrates on the decision nodes
• Nodes of the flow graph that form a sequence are combined into a single node
• Hence the DD graph is a directed graph in which nodes are sequences of statements and edges show the flow of control between nodes
• The DD path graph is used to find the independent paths
• Execute every independent path at least once
2. Cyclomatic Complexity
• It is also known as structural complexity because it gives an internal view of the code
• For more complicated programs it is not easy to determine the number of independent paths through the program
• McCabe’s cyclomatic complexity defines the number of linearly independent paths through a program, and is very simple to compute
• The metric provides a practical way of determining the maximum number of linearly independent paths in a program; though it does not directly identify those paths, it indicates approximately how many paths to look for
There are three different ways to compute the cyclomatic complexity.
Method 1: Given a control flow graph G of a program, the cyclomatic complexity V(G)
can be computed as:
V(G) = E – N + 2P
where E is the number of edges in the control flow graph, N is the number of nodes, and P is the number of connected components (P = 1 for a single program).
Method 2: An alternative way of computing the cyclomatic complexity of a program from
an inspection of its control flow graph is as follows:
V(G) = Total number of bounded areas + 1
In the program’s control flow graph G, any region enclosed by nodes and edges can be called a bounded area.
Method 3: The cyclomatic complexity of a program can also be easily computed from the number of decision statements in the program.
If N is the number of decision statements, then McCabe’s metric equals N + 1.
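Method 1 can be sketched as follows; the adjacency-list representation and the example graph (a while loop whose body contains an if-else, giving two decision nodes) are illustrative.

```python
def cyclomatic_complexity(graph, components=1):
    """Method 1: V(G) = E - N + 2P, with the control flow graph
    given as an adjacency list {node: [successor nodes]}."""
    n = len(graph)                           # N: number of nodes
    e = sum(len(s) for s in graph.values())  # E: number of directed edges
    return e - n + 2 * components            # P: connected components

# Illustrative flow graph: a while loop (node 2) whose body holds an
# if-else (node 3) -- two decision nodes in total.
g = {
    1: [2],      # entry
    2: [3, 6],   # while condition (decision)
    3: [4, 5],   # if condition (decision)
    4: [2],      # then-branch, back to the loop test
    5: [2],      # else-branch, back to the loop test
    6: [],       # exit
}
print(cyclomatic_complexity(g))  # → 3, matching Method 3: 2 decisions + 1
```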
Difference between functional and structural testing
data are capable of detecting the change between the original program and the
mutated program
• A major disadvantage of the mutation-based testing approach is that it is
computationally very expensive, since a large number of possible mutants can be
generated
• Mutation testing should be used in conjunction with a testing tool that runs all the test cases automatically
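A minimal sketch of a mutant, using the < versus <= confusion mentioned earlier under boundary value analysis; both functions below are hypothetical examples, not from the notes.

```python
# Original: True when x lies strictly inside the open interval
def in_range(x, low, high):
    return low < x < high

# Mutant: the first "<" replaced by "<=" (a typical mutation operator)
def in_range_mutant(x, low, high):
    return low <= x < high

# A boundary-value input kills this mutant: the two versions disagree
# at x == low, so test data including that point detects the change.
print(in_range(5, 5, 10), in_range_mutant(5, 5, 10))  # → False True
```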
4.7 Performance testing
• Performance testing is carried out to check whether the system meets the non-functional requirements identified in the SRS document.
• There are several types of performance testing; nine of them are listed below.
• The types of performance testing to be carried out on a system depend on the
different non-functional requirements of the system documented in the SRS
document.
• All performance tests can be considered as black-box tests.
• Stress testing
• Volume testing
• Configuration testing
• Compatibility testing
• Regression testing
• Recovery testing
• Maintenance testing
• Documentation testing
• Usability testing
4.8 Coding
• Coding is undertaken once the design phase is complete and the design document has been successfully reviewed
• In the coding phase, every module identified and specified in the design document is independently coded and unit tested
• Good software development organizations normally require their programmers to
adhere to some well-defined and standard style of coding called coding standards.
• Most software development organizations formulate their own coding standards
that suit them most, and require their engineers to follow these standards
rigorously.
Good software development organizations usually develop their own coding standards and
guidelines depending on what best suits their organization and the type of products they
develop.
• Representative coding standards
• Representative coding guidelines
4.8.1 Coding Standards
• Programmers spend more time reading code than writing code
• They read their own code as well as other programmers code
• Readability is enhanced if some coding conventions are followed by all
• Coding standards provide these guidelines for programmers
• Standards generally cover naming, file organization, and statements/declarations
• Naming conventions
4.8.2 Coding Guidelines
§ Package names should be in lower case (mypackage, edu.iitk.maths)
§ Type names should be nouns and start with uppercase (Day, DateOfBirth, …)
§ Variable names should be nouns in lowercase; variables with large scope should have long names; loop iterators should be i, j, k, …
§ Constant names should be all caps
§ Method names should be verbs starting with lower case (e.g., getValue())
§ The prefix is should be used for boolean methods (e.g., isEmpty())
Unit-5 Software Maintenance and project management
Various types of maintenance
Corrective: Corrective maintenance of a software product is necessary to rectify the bugs
observed while the system is in use.
Adaptive: A software product might need maintenance when the customers need the
product to run on new platforms, on new operating systems, or when they need the product
to interface with new hardware or software.
Perfective: A software product needs maintenance to support the new features that users
want it to support, to change different functionalities of the system according to customer
demands, or to enhance the performance of the system.
5.2 Software Re-Engineering
Software re-engineering is concerned with taking existing legacy systems and re-implementing them to make them more maintainable. The critical distinction between re-engineering and new software development is the starting point for the development, as shown in the figure.
• Redocumentation: - Redocumentation is the recreation of a semantically
equivalent representation within the same relative abstraction level.
• Design recovery: - Design recovery entails identifying and extracting meaningful
higher level abstractions beyond those obtained directly from examination of the
source code. This may be achieved from a combination of code, existing design
documentation, personal experience, and knowledge of the problem and application
domains.
5.4 Software Configuration management (CM)
Configuration management (CM) is the process of controlling and documenting change to a developing system. As the size of an effort increases, so does the necessity of implementing effective CM. Software configuration management (SCM) is a set of activities designed to control change by identifying the work products that are likely to change and establishing relationships among them. The process by which software development and maintenance are controlled is called configuration management. Configuration management differs between the development and maintenance phases of the life cycle due to their different environments.
Configuration Management Activities: - The activities are divided into four broad
categories.
1. The identification of the components and changes
2. The control of the way by which the changes are made
3. Auditing the changes
4. Status accounting: recording and documenting all the activities that have taken place
5.4.1 Functions of SCM
• Identification -identifies those items whose configuration needs to be controlled,
usually consisting of hardware, software, and documentation.
• Change Control - establishes procedures for proposing or requesting changes,
evaluating those changes for desirability, obtaining authorization for changes,
publishing and tracking changes, and implementing changes. This function also
identifies the people and organizations who have authority to make changes at
various levels.
• Status Accounting -is the documentation function of CM. Its primary purpose is to
maintain formal records of established configurations and make regular reports of
configuration status. These records should accurately describe the product, and are
used to verify the configuration of the system for testing, delivery, and other
activities.
• Auditing -Effective CM requires regular evaluation of the configuration. This is
done through the auditing function, where the physical and functional
configurations are compared to the documented configuration. The purpose of
auditing is to maintain the integrity of the baseline and release configurations for all
controlled products
5.4.2 SCM Terminology
1. Version Control
• A version control tool is the first stage towards being able to manage multiple
versions.
• Once it is in place, a detailed record of every version of the software must be kept.
This comprises the-
Ø Name of each source code component, including the variations and
revisions
Ø The versions of the various compilers and linkers used
Ø The name of the software staff who constructed the component
Ø The date and the time at which it was constructed
2. Change control process
The change control process comes into effect when the software and associated documentation are delivered to configuration management. A change request form, as shown in the figure, should record the recommendations regarding the change.
3. Software documentation
It is the written record of the facts about a software system recorded with the intent
to convey purpose, content and clarity.
Two types of documentation:
User documentation
– To produce better quality software at low cost
Benefits of CASE
• Cost savings through all development phases; CASE tools are estimated to reduce effort by 30% to 40%
• Use of CASE tools leads to considerable improvement in quality
• CASE tools help to produce high-quality and consistent documents
• Use of a CASE environment has an impact on the working style of a company, and makes it conscious of a structured and orderly approach
CASE Environment
[Figure: CASE environment — a central repository surrounded by tools: coding support activity, consistency and completeness analysis, project management facility, documentation generation, prototyping, structured analysis, and configuration management facilities.]
Basic COCOMO coefficients
Organic Mode
· developed in a familiar, stable environment,
· similar to the previously developed projects
· relatively small and requires little innovation
Semidetached Mode
· intermediate between Organic and Embedded
Embedded Mode
· tight, inflexible constraints and interface requirements
· The product requires great innovation
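A sketch of the Basic COCOMO computation for the three modes above. The coefficient values used here are the standard textbook ones (Boehm, 1981), stated as an assumption since the notes' own coefficient table is not reproduced.

```python
# Standard Basic COCOMO coefficients (Boehm, 1981): (a, b, c, d) per mode
COEFFS = {
    "organic":      (2.4, 1.05, 2.5, 0.38),
    "semidetached": (3.0, 1.12, 2.5, 0.35),
    "embedded":     (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc, mode):
    """Effort = a * KLOC^b person-months; Duration = c * Effort^d months."""
    a, b, c, d = COEFFS[mode]
    effort = a * kloc ** b
    duration = c * effort ** d
    return effort, duration

effort, months = basic_cocomo(32, "organic")
print(f"{effort:.1f} PM over {months:.1f} months")  # → 91.3 PM over 13.9 months
```

Note how the embedded mode's larger exponent makes effort grow fastest with size, reflecting its tight constraints and need for innovation.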
5.8 Software Risk
A software risk is a future uncertain event with a probability of occurrence and a potential for loss. Risk identification and management are the main concerns in every software project. Effective analysis of software risks helps in effective planning and assignment of work.
Categories of risks
· Schedule Risk
· Operational risk
· Technical risk
Risk management is the identification, assessment, and prioritization of risks followed by
coordinated and economical application of resources to minimize, monitor, and control the
probability and/or impact of unfortunate events.
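The prioritization step can be sketched by ranking risks on exposure (probability of occurrence times potential loss); the risk entries and figures below are purely illustrative.

```python
# Each risk: (name, probability of occurrence, potential loss);
# the entries and figures are purely illustrative.
risks = [
    ("Schedule slip",   0.40, 50000),
    ("Key staff leave", 0.10, 120000),
    ("Interface bug",   0.25, 20000),
]

# Risk exposure = probability x loss; prioritize highest exposure first
by_exposure = sorted(risks, key=lambda r: r[1] * r[2], reverse=True)
for name, p, loss in by_exposure:
    print(f"{name}: exposure = {p * loss:.0f}")
```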
The risk management process describes the steps you need to take to identify, monitor, and control risk. Within the risk process, a risk is defined as any future event that may prevent you from meeting your team goals. A risk process allows you to identify each risk, quantify its impact, and take action now to prevent it from occurring and to reduce its impact should it eventuate.
This Risk Process helps you:
• Identify critical and non-critical risks
• Document each risk in depth by completing Risk Forms
• Log all risks and notify management of their severity
• Take action to reduce the likelihood of risks occurring
• Reduce the impact on your business, should a risk eventuate
• Risk Planning - Risk planning is developing and documenting organized, comprehensive, and interactive strategies and methods for identifying risks. It is also used for performing risk assessments to establish risk-handling priorities, developing risk-handling plans, monitoring the status of risk-handling actions, and determining and obtaining the resources to implement the risk management strategies. Risk planning is used in the development and implementation of required training and in communicating risk information up and down the project stakeholder organization.
• Risk monitoring and control is the process of identifying and analyzing new risks, keeping track of them, and forming contingency plans in case they arise. It ensures that the resources the company puts aside for a project are being used properly. Risk monitoring and control is important because it helps ensure that the project stays on track.