All Chapter Notes
It became clear that individual approaches to program development did not scale
up to large and complex systems. The term was proposed in 1968.
Fig 1.1: The waterfall model (phases from requirements definition through operation and maintenance)
Because of the cascade from one phase to the next, this approach is known as the waterfall
model. We should plan and schedule all of the process activities before starting
work on them. The principal stages of the waterfall model are listed below:
Advantages
Reflects a systematic way of organizing the software process
Useful for larger systems engineering projects
Disadvantages
I. Requirement specification:
same as of water fall model
II. Component analysis:
A search is made for components to implement the requirement
specification.
Usually there is no exact match.
Components which are discovered may only provide some of the
functionality required.
III. Requirement modification:
The requirements are analyzed using the information about the
components that have been discovered.
They are then modified to reflect the available components.
IV. System design with reuse:
A framework for the system is designed, or an existing framework is
reused.
Some new software is designed if reusable components are not
available.
V. Development and integration
Software that cannot be externally procured is developed,
and the components and COTS (commercial off-the-shelf) systems are
integrated to create the new system.
There are three types of software component that may be reused:
a. Web services that are developed according to service standards and which
are available for remote invocation.
b. Collections of objects that are developed as a package to be
integrated with a component framework such as .NET or J2EE.
c. Stand-alone software systems that are configured for use in a
particular environment.
Advantages
Disadvantages
May lead to a system that does not meet the real needs of
the users.
Some control over the system evolution is lost, as new
versions of the reusable components are not under the control
of the organization using them.
1.4.4 Spiral model
Fig: The spiral model (each loop passes through risk analysis and prototyping, from Prototype 1 through coding, integration testing and acceptance testing)
The waterfall model has dominated software development for many years, but
iteration of processes is catching on. There are now a number of well-established
iterative development process models that can be classified according to the
levels at which iteration is applied. Iteration can improve validation and verification
by allowing earlier quality feedback. Moreover, there seems to be a close
relationship between teamwork and iteration. Altogether, from a SPI (software
process improvement) point of view, changing to an iterative development process
model could very well raise your professional standards in software development.
Parts of the process are repeated as system requirements evolve.
System design and implementation work must be reworked to implement the
changed requirements.
It is an alternative approach to software development.
Start by making a system that can do a little, then extend it to do more.
This minimizes the risk of building the wrong product, e.g. building a table instead of a
chair.
Several development processes use iteration at a high level, a low level, or both.
A. software specification:
Software specification or requirements engineering is the process of
understanding and defining what services are required from the system and
identifying the constraints on the system's development and operation.
Fig: The requirements engineering process (feasibility study, requirements elicitation and analysis, requirements specification, requirements validation; outputs include the feasibility report, system models, user and system requirements, and the requirements documentation)
Feasibility study:
o An estimate is made of whether the user needs may be satisfied using
current software and hardware technologies.
o The study also considers whether the proposed system is cost-effective
from a business point of view.
o It should be quick and cheap.
o It should provide information to decide whether or not to go ahead with a
more detailed analysis.
Requirement elicitation and analysis:
o Derivation of the system requirements by observing the existing
system, discussion with potential users and procurers, and task analysis.
o Involves the development of one or more system models and prototypes.
o Helps us to understand the system to be specified.
Requirement specification
o The activity of translating the information gathered during the analysis
activity into a document that defines a set of requirements.
o The two types of requirements are: i) user requirements ii) system
requirements.
Requirement validation
o Checks the realism, consistency and completeness of requirements.
o Errors in the requirements are discovered and the requirements modified to
correct these problems.
B. Software design and implementation
Fig: A general model of the design process (design inputs, design activities such as architectural, interface, component and database design, and design outputs)
Architectural design
o Identification of the overall structure of the system.
o Identification of the relationships between the principal components.
Interface design
o Define the interfaces between system components.
Component design
o Here we take each system component and design how it will operate.
o It may be the list of changes to be made to a reusable component or
a detailed design model.
Database design
o Design the system data structures and how they are to be
represented in a database.
C. System validation
Software validation and verification is intended to show both that a
system meets its specification and that it meets the expectations of the system customer.
The figure below shows the testing phases of a plan-driven software process.
CASE tools are programs that are used to support software engineering
processes. These tools include design editors, data dictionaries,
compilers, debuggers, system building tools, etc.
CASE tools provide process support by automating some process activities
and by providing information about the software that is being developed.
They assist in the development and maintenance of software.
Developed in the 1970s to speed up the software development process.
They allow rapid development of software to cope with the increasing speed of
market demand.
Classification of CASE tools
a. Business system planning
Information engineering tools
Process modeling and management tools
b. Project management
Project planning tools
Risk analysis tools
Project management tools
Requirement tracing tools
c. Programming tools
Integrating and testing tools
Client /server tools
d. Maintenance tools
Requirement engineering tools
Specific examples:
With Class: object-oriented design and code generation
Oracle Designer/2000: integrated CASE environment
Functional requirements
Functional requirements specify the product's capabilities, or things that the product must do for
its users. They relate to the actions that the product must carry out in order to satisfy the
fundamental reasons for its existence. Functional requirements must fully describe the actions
that the intended product can perform. They describe the relationship between the input and
output of the system.
Non-functional requirements
Non-functional requirements define system properties such as reliability, performance,
security, response time and storage requirements and constraints like Input output device
capability, system representations.
Non-functional requirements are often more critical than functional requirements. A system user can
usually find ways to work around a system function that doesn't really meet their needs, but if
the non-functional requirements are not met, the system may be useless.
They describe various quality factors, or attributes, which affect the functionality's
effectiveness.
Functional                                   Nonfunctional
Product features                             Product properties
Describe the work that is done               Describe the character of the work
Describe the actions with which the          Describe the experience of the user
work is concerned                            while doing the work
Characterized by verbs                       Characterized by adjectives
1. Requirements Discovery
The main problem with requirement validation is that the requirements change continuously
during requirements elicitation.
Requirements validation techniques:
Requirement reviews: The requirements are analyzed systematically by a team of
reviewers who check for errors and inconsistencies.
Prototyping: An executable model of the system in question is used to check the validity.
Test-case generation: Requirements should be testable. If a test for a requirement is difficult or
impossible to design, this usually means that the requirements will be difficult to implement
and should be reconsidered.
Models are used during the requirements engineering process to help to derive the
requirements for a system, during the design process to describe the system to the engineers
implementing it, and after implementation to document the system's structure
and operation. The most important aspect of a system model is that it leaves out detail.
A model is an abstraction of the system being studied rather than an alternative representation of
that system. You may develop different models to represent the system from different
perspectives. For example:
1. An external perspective, where you model the context or environment of the system.
2. A structural perspective, where you model the organization of a system or the structure
of the data that is processed by the system.
3. A behavioral perspective, where you model the dynamic behavior of the system and
how it responds to events.
The UML has many diagram types and so supports the creation of many different types of
system model. However, a survey in 2007 showed that users of the UML thought that five
diagram types could represent the essentials of a system:
1. Activity diagrams, which show the activities involved in a process or in data processing.
2. Use case diagrams, which show the interactions between a system and its environment.
3. Sequence diagrams, which show interactions between actors and the system and
between system components.
4. Class diagrams, which show the object classes in the system and the associations
between these classes.
5. State diagrams, which show how the system reacts to internal and external events.
2.1 CONTEXT MODEL
Context models are used to illustrate the boundaries of a system. Social and
organizational concerns may affect the decision on where to position system
boundaries.
At an early stage in the requirements elicitation and analysis process, one should decide
on the boundaries of the system.
This involves working with the stakeholders to distinguish what is the system and what is
its environment.
One makes this decision early in the process to limit the system cost and the time needed
for analysis.
In some cases the boundary between a system and its environment is relatively clear,
e.g. where an automated system replaces an existing manual or computerized system,
the environment of the new system is usually the same as that of the existing system.
Fig: The context of an ATM system (security system, maintenance system, etc.)
These models may be used separately or together, depending on the type of
system that is being developed.
Most business systems are primarily driven by data.
They are controlled by the data inputs to the system, with relatively little external event
processing.
Data-flow model
It is a system model that shows a functional perspective, where each transformation
represents a single function or process.
DFMs are used to show how data flows through a sequence of processing steps.
E.g.: a processing step could be to filter duplicate records in a customer database.
The data is transformed at each step before moving on to the next step.
The processing steps or transformations represent software processes or functions when data-
flow diagrams are used to document a software design.
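The duplicate-record filtering example above can be sketched as a small data-flow pipeline. This is an illustrative sketch only; the step names and record format are invented, not part of the notes:

```python
# A data-flow model as a sequence of processing steps: each step is a
# function that transforms the data before passing it on to the next step.

def read_records(raw):
    """Input step: split raw text into customer records."""
    return [line.strip() for line in raw.splitlines() if line.strip()]

def filter_duplicates(records):
    """Transformation step: drop duplicate records, keeping first occurrence."""
    seen, unique = set(), []
    for r in records:
        if r not in seen:
            seen.add(r)
            unique.append(r)
    return unique

def store(records):
    """Output step: here we simply return the cleaned list."""
    return records

def pipeline(raw):
    # Data flows through each transformation in sequence.
    return store(filter_duplicates(read_records(raw)))

print(pipeline("alice\nbob\nalice\ncarol"))  # ['alice', 'bob', 'carol']
```

Each function corresponds to one bubble in a data-flow diagram, and the function composition in `pipeline` corresponds to the arrows between them.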
These model the behavior of the system in response to external and internal events.
They show the system’s responses to stimuli so are often used for modeling real-time
systems.
State machine models show system states and the events that cause transitions from one state to another, for example in a microwave oven:
Stimulus       Description
Half power     The user has pressed the half-power button
Full power     The user has pressed the full-power button
Timer          The user has pressed one of the timer buttons
Number         The user has pressed a numeric key
Door open      The oven door switch is not closed
Door closed    The oven door switch is closed
Start          The user has pressed the start button
Cancel         The user has pressed the cancel button
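The stimuli above can drive a simple state machine. The sketch below uses a transition table; the state names and transitions are a simplified assumption based on the oven description, not the full model:

```python
# Minimal state machine for the microwave oven: a transition table maps
# (state, stimulus) pairs to the next state; unknown events are ignored.

TRANSITIONS = {
    ("waiting", "full_power"): "full_power",
    ("waiting", "half_power"): "half_power",
    ("full_power", "timer"): "set_time",
    ("half_power", "timer"): "set_time",
    ("set_time", "door_closed"): "enabled",
    ("enabled", "start"): "operation",
    ("operation", "cancel"): "waiting",
}

def run(stimuli, state="waiting"):
    for s in stimuli:
        # Stay in the current state if the stimulus is not valid there.
        state = TRANSITIONS.get((state, s), state)
    return state

print(run(["full_power", "timer", "door_closed", "start"]))  # operation
```

The transition table makes the model's key property explicit: the response to a stimulus depends on the current state, not just the stimulus itself.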
It is an entity-relation-attribute model which sets out the entities in the system, the
relationships between these entities and the entity attributes
This model is widely used in database design. It can readily be implemented using
relational databases.
It has no specific notation provided in the UML but objects and associations can be
used.
An object class is an abstraction over a set of objects with common attributes and the
services (operations) provided by each object.
More abstract entities are more difficult to model using this approach
• Inheritance models;
• Aggregation models;
• Interaction models.
Inheritance model
Organise the domain object classes into a hierarchy.
Classes at the top of the hierarchy reflect the common features of all classes.
Object classes inherit their attributes and services from one or more super-
classes. These may then be specialised as necessary.
Class hierarchy design can be a difficult process if duplication in different
branches is to be avoided.
Object Model and the UML
The UML is a standard representation devised by the developers of widely used object-
oriented analysis and design methods.
Notation
• Object classes are rectangles with the name at the top, attributes in the middle
section and operations in the bottom section;
Object Aggregation
An aggregation model shows how classes that are collections are composed of other
classes.
Aggregation models are similar to the part-of relationship in semantic data models.
Fig: Object aggregation
2.4 Structured Methods
Structured methods incorporate system modelling as an inherent part of the method.
Methods define a set of models, a process for deriving these models and rules and
guidelines that should apply to the models.
These models display the organization of a system in terms of the components that
make up that system and their relationships.
They may be static models, which show the structure of the system design or dynamic
models, which show the organization of the system when it is executing.
-Sunil Lama
During the architectural design process, system architects have to make a number
of structural decisions that profoundly affect the system and its development
process. Based on their knowledge and experience, they have to consider the
following fundamental questions about the system:
1. Is there a generic application architecture that can act as a template for the
system that is being designed?
2. What strategy will be used to control the operation of the components in
the system?
3. What architectural organization is best for delivering the non-functional
requirements of the system?
4. How will the architectural design be evaluated?
5. How should the architecture of the system be documented?
Although each software system is unique, systems in the same application domain
often have similar architectures that reflect the fundamental concepts of the
domain. For embedded systems and systems designed for personal computers,
there is usually only a single processor and you will not have to design a
distributed architecture for the system. However, most large systems are now
distributed systems in which the system software is distributed across many
different computers. The choice of distribution architecture is a key decision that
affects the performance and reliability of the system.
Advantages:
1. Efficient way to share large amounts of data.
2. Sub-systems need not be concerned with how data is produced.
3. Centralized management, e.g. backup, security, etc.
4. The sharing model is published as the repository schema.
Disadvantages
1. Sub-systems must agree on a repository data model; this is inevitably a
compromise.
2. Data evolution is difficult and expensive.
3. No scope for sub-system-specific management policies.
4. Difficult to distribute efficiently.
Control models
These are concerned with the control flow between sub-systems, and are distinct from
the system decomposition model.
Centralized control
1. One sub-system has overall responsibility for control and starts and
stops other sub-systems.
Event-based control
1. Each sub-system can respond to externally generated events from
other sub-systems or the system’s environment.
Event-driven systems:
Driven by externally generated events, where the timing of the event is outside
the control of the sub-systems which process the event.
Two principal event-driven models:
1. Broadcast models: An event is broadcast to all sub-systems. Any sub-
system which can handle the event may do so.
2. Interrupt- driven models: Used in real-time systems where interrupts are
detected by an interrupt handler and passed to some other component for
processing.
Other event-driven models include spreadsheets and production systems.
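The broadcast model described above can be sketched in a few lines. The `Broadcaster` class and the two handlers are invented for illustration; they only stand in for real sub-systems:

```python
# Broadcast model sketch: an event is sent to every registered sub-system;
# any sub-system that can handle the event does so, the rest ignore it.

class Broadcaster:
    def __init__(self):
        self.subsystems = []

    def register(self, handler):
        """A sub-system registers interest by supplying a handler function."""
        self.subsystems.append(handler)

    def broadcast(self, event):
        results = []
        for handle in self.subsystems:
            r = handle(event)   # each sub-system decides whether to act
            if r is not None:
                results.append(r)
        return results

bus = Broadcaster()
bus.register(lambda e: "logged " + e)                      # handles everything
bus.register(lambda e: "alarm!" if e == "fire" else None)  # handles 'fire' only
print(bus.broadcast("fire"))  # ['logged fire', 'alarm!']
```

Note that the broadcaster does not know which sub-system, if any, will handle an event; that decision is local to each sub-system, which is the defining property of the broadcast model.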
Fig: The OSI reference model (application, presentation, session, transport, network, data link and physical layers on each side of a communication channel)
This architectural model is used where a set of services is provided by
servers and a set of clients uses these services.
Clients should know about the servers, but the servers need not know about
the clients in a client-server architecture.
The mapping of processors to processes is not necessarily 1:1 in a client-
server architecture.
Clients and servers are logical processes in a client-server architecture.
Fig: Client-server architecture (S1, S2, S3 are servers; C1, C2, ... C7 are the clients)
Fig: Distributed object architecture (objects O1 to O5, providing services S(O1) to S(O5), communicate through a software bus)
Features:
There is no distinction between clients and servers.
Each distributed entity is an object that provides services to other objects and
receives services from other objects.
Object communication is through a middleware system called an object
request broker, or software bus.
However, distributed object architectures are more complex to design than
client-server architectures.
Advantages:
It allows the system designer to delay decisions on where and how services
should be provided.
It is a very open system architecture that allows new resources to be added as
required.
The system is flexible and scalable.
Definition
A real-time system is a software system where the correct functioning of the
system depends on the results produced by the system and the time at which these
results are produced.
Stimulus-response systems:
Given a stimulus, the system must produce a response within a specified time.
Periodic stimuli occur at predictable time intervals.
Aperiodic stimuli occur at unpredictable time intervals.
General Model of Real Time System
Fig: A general model of a real-time system (sensors 1 to 4 feed a real-time control system, which drives actuators 1 to 3; each sensor/actuator has an associated stimulus-response process)
Partition the requirements:
Design algorithms to process each class of stimulus and response. These must
meet the given timing requirements.
Design a scheduling system which will ensure that processes are started in
time to meet their deadlines.
Integrate using a real-time executive or operating system.
Timing constraints
May require extensive simulation and experiment to ensure that they are
met by the system.
May mean that certain design strategies, such as object-oriented design,
cannot be used because of the additional overhead involved.
May mean that low-level programming language features have to be used
for performance reasons.
Fig: State machine model of a microwave oven (states include Waiting with 'do: display time', Half power with 'do: set power = 300', Full power, Set time, Enabled with 'do: display', Operation with 'do: operate', and Disabled with 'do: display waiting'; transitions are triggered by the Full power, Half power, Timer, Number, Door open, Door closed, Start, Cancel and System fault stimuli)
Fig: Components of a real-time executive (real-time clock, interrupt handler, scheduler, resource manager and dispatcher, working with the process resource requirements, processes awaiting resources, released resources, available resource list, ready list and processor list to choose the executing process)
Sensor
Movement detectors, window sensors, door sensors
50 window sensors, 50 door sensors, 200 movement sensors
Voltage drop sensor
Action
When an intruder is detected, police are called automatically
Lights are switched on in rooms with active sensors
An audible alarm is switched on
The system switches automatically to backup power when a voltage drop is
detected
Stimuli to be processed:
Power failure: generated aperiodically by a circuit monitor. When
received, the system must switch to backup power within 50 ms.
Intruder alarm: a stimulus generated by a system sensor. The response is to call
the police and switch on the building lights and the audible alarm.
Timing constraints
Stimulus/response Timing requirement
Fig: A producer process passing sensor data to a consumer process
Concept Reuse
When we reuse a program, we have to follow the design decisions made by the original
developer of the program.
This may limit the opportunities for reuse.
However, a more abstract form of reuse is concept reuse, where a particular approach is
described in an implementation-independent way and an implementation is then
developed.
Approaches to concept reuse are:
1. Design Patterns
2. Generative Programming
Pattern element
Name
A meaningful pattern identifier
Problem Description
Solution
Not a concrete design, but a template for a design solution that can be instantiated in different
ways.
Consequences
The results and trade-offs of applying the pattern.
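As a concrete instance of the pattern elements above, here is a minimal sketch of the well-known Observer pattern. It is used purely as a generic illustration; the class names are not taken from these notes:

```python
# Observer pattern sketch: a Subject notifies registered observers of state
# changes without knowing anything about them beyond their update() method.

class Subject:
    def __init__(self):
        self._observers = []

    def attach(self, observer):
        """Register an observer to be notified of changes."""
        self._observers.append(observer)

    def notify(self, data):
        """Broadcast new state to every attached observer."""
        for obs in self._observers:
            obs.update(data)

class Display:
    """One possible observer: remembers the last value it was shown."""
    def __init__(self):
        self.last = None

    def update(self, data):
        self.last = data

subject, display = Subject(), Display()
subject.attach(display)
subject.notify(42)
print(display.last)  # 42
```

This shows the pattern elements in miniature: the name (Observer), the problem (keeping dependents consistent with a subject), the solution template (the attach/notify structure), and a consequence (the subject stays decoupled from concrete observer classes).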
Framework classes
Applications with generic functionality that can be adapted and configured for use in a
specific context.
Adaptation involves:
1. Component and system configuration
2. Adding new components to the system
3. Selecting from a library of existing components
4. Modifying components to meet new requirements
E.g. ERP (Enterprise Resource Planning) systems
Fig: The architecture of an ERP system (a configuration planning tool configures the generic ERP system through a configuration database and the system database)
Advantages of reuse are lower costs, faster software development and lower risks.
Design patterns are high-level abstractions that document successful design solutions.
Program generators are also concerned with software reuse- the reusable concepts are
embedded in a generator system.
Application frameworks are collections of concrete and abstract objects that are
designed for reuse through specialisation.
Problems of COTS product reuse include lack of control over functionality, performance and
evolution, and problems with interoperation.
ERP systems are created by configuring a generic system with information about a
customer’s business.
Software product lines are related applications developed around a common core of
shared functionality.
6. Component Based Software Engineering
- Amir & Anuj
Component characteristics
Component Model
i. Platform services:
These enable components to communicate and interoperate in a distributed
environment.
ii. Support services:
These are common services that are likely to be required by many different
components.
• CBSE is intended to provide an effective and efficient means for reuse, by building
systems out of existing parts or components. Figuratively, the idea is comparable to
that of Lego bricks, which can be used to develop a number of different 'applications'
from a standard set of components. This will not only reduce the effort needed for
system development but will also have a positive impact on the costs involved. Although
this sounds easy, the general idea is not new but hasn't been successfully realized yet.
Effective reuse was/is also one of the major promises of OO. However, classes and
objects are often too specific and fine-grained to be effectively reused (e.g., single
operations instead of applications). The idea of CBSE is to raise the level of
abstraction of reusable entities. Thus they can be seen as standalone entities, or in other
words: one man's component can be another man's system. Although this sounds
easy, its practical application requires knowledge (provided services, side effects,
providers, market, etc.), careful planning (risks, impacts on the system, etc.) and
methodological support applying sound engineering practices. Otherwise CBSE is
likely to fail.
• Apart from the reuse, CBSE is based on sound software engineering principles:
CBSE process
• This involves:
There are software processes that support component-based software engineering:
1. Development for reuse: concerned with developing components that will be reused in
other applications.
2. Development with reuse: the process of developing new applications using existing
components and services.
Supporting Process:
1. Component acquisition
2. Component management
3. Component certification
1. Sequential composition
We can create a new component from two existing components by calling the existing
components in sequence.
2. Hierarchical composition
This type of composition occurs when one component calls directly on the services
provided by another component.
3. Additive composition
This occurs when two or more components are put together to create a new
component, which combines their functionality.
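Sequential composition, the first form above, can be sketched as follows. The two "components" are ordinary functions invented for illustration; real components would expose richer interfaces:

```python
# Sequential composition: a new component is built by calling two existing
# components in sequence, feeding the output of A into B.

def component_a(text):
    """Existing component: normalize input text."""
    return text.strip().lower()

def component_b(text):
    """Existing component: split text into tokens."""
    return text.split()

def composed(text):
    # The composed component exposes one operation built from A then B.
    return component_b(component_a(text))

print(composed("  Hello World  "))  # ['hello', 'world']
```

The composed component reuses both existing components unchanged; only the thin wrapper that sequences them is new.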
When composing reusable components that have not been written for a specific application,
one may need to write adaptors or 'glue code' to reconcile the different component interfaces.
i. Parameter incompatibility
ii. Operation incompatibility
iii. Operation incompleteness
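Glue code reconciling a parameter incompatibility might look like the sketch below. Both the reused component and the adaptor are invented for illustration, and the distance calculation is deliberately simplified:

```python
# Adaptor ('glue code') example: the reused component expects four separate
# floats, but the application represents a point as a dict.

def reused_distance(lat1, lon1, lat2, lon2):
    """Existing component with a fixed parameter list (simplified maths)."""
    return abs(lat1 - lat2) + abs(lon1 - lon2)

def distance_adaptor(p1, p2):
    """Adaptor: unpacks the application's dicts into the component's
    parameter list, without modifying the reused component itself."""
    return reused_distance(p1["lat"], p1["lon"], p2["lat"], p2["lon"])

a = {"lat": 10.0, "lon": 20.0}
b = {"lat": 13.0, "lon": 24.0}
print(distance_adaptor(a, b))  # 7.0
```

The key point is that the reused component stays untouched; all the impedance matching lives in the adaptor.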
CHAPTER 7: VERIFICATION AND VALIDATION
(3hrs,12 marks) -Anuja Ghising
Syllabus
7.1 Planning Verification and validation
7.2 software inspections
7.3 Verification and formal methods
7.4 Critical system verification and validation
Verification
It is an act of reviewing, inspecting, testing, checking, auditing, or
otherwise establishing and documenting whether items, processes, services
or documents conform to specified requirements.
It can also be defined as the process of evaluating a system or component to
determine whether the products of a given development phase satisfy the conditions
imposed at the start of the phase.
Validation
Validation is the process of evaluating a system or component during or at the end of
the development process to determine whether it satisfies specified requirements.
Validation is, therefore, 'end-to-end' verification. Validation occurs through the
utilization of various testing approaches.
Fig: Development artifacts subject to verification (requirements specification, system specification, system design, detailed design)
Fig: The inspection process (overview, individual preparation, inspection meeting, rework)
Inspection procedure
1. First of all, a system overview is presented to the inspection team.
2. Secondly, the required code and documents are distributed to the
inspection team in advance.
3. Then, the inspection takes place and errors are discovered. A pre-inspection
meeting may or may not be required.
3. Reader:
The reader guides the inspection team through the review items during
the inspection meetings.
4. Author:
The author is the person who has produced the items under inspection.
The author is present to answer questions about the items under
inspection, and is responsible for all rework. A person may have one
or more of the roles above. In the interests of objectivity, no person
may share the author role with another role.
Arguments for FM
Producing a mathematical specification requires a detailed analysis of the
requirements, and this is likely to uncover errors.
They can detect implementation errors before testing, when the program is
analyzed alongside the specification.
Arguments against FM
1. Requires specialized notations that are not understood by domain
experts.
2. It is very expensive to develop a specification and even more
expensive to show that a program meets that specification.
Fig: Reliability measurement (identify operational profiles, prepare test data sets, apply tests to system, compute observed reliability)
Parnas et al. (1990) suggest five types of review for safety-critical systems:
1. Review for correct intended function.
2. Review for maintainable, understandable structure.
3. Review to verify that the algorithm and data structure design are
consistent with the specified behavior.
4. Review the consistency of the code and the algorithm and the data
structure design.
5. Review the adequacy of the system test cases.
Security assessment
There are four complementary approaches for security checking:
1. Experience-based validation
2. Tool-based validation
3. Tiger team
4. Formal verification
PAST QUESTIONS
b) Release testing:
In this type of testing, a separate testing team tests a complete version of the
system before it is released to users. System testing by the development team
should focus on discovering bugs in the system, whereas the aim of release testing
is to check that the system meets the requirements of the system stakeholders.
Release testing is usually a black-box testing process, where tests are derived
from the system specification. The system is treated as a black box whose
behavior can only be determined by studying its inputs and the related outputs.
Another name for this is 'functional testing', so called because the tester is only
concerned with functionality and not the implementation of the software.
Test case design involves designing the test cases (inputs and outputs) used to test
the system. The goal of test case design is to create a set of tests that are effective
in validation and defect testing.
Design approaches:
Requirement-based testing:
Used as a validation testing technique, where we consider each requirement
and derive a set of tests for that requirement.
Partition Testing:
It is a software testing technique that divides the input data of a software unit
into partitions of equivalent data from which test cases can be derived. This
technique tries to define test cases that uncover classes of errors, thereby
reducing the total number of test cases that must be developed.
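Partition testing can be illustrated with a small sketch. The age-validation function and its valid range are invented purely as an example of partitioning an input domain:

```python
# Equivalence partitioning: inputs to an (invented) age validator fall into
# three partitions - below range, in range, above range. One representative
# per partition, plus the boundaries, stands in for exhaustive testing.

def valid_age(age):
    """Hypothetical unit under test: accepts ages 18 to 65 inclusive."""
    return 18 <= age <= 65

cases = {
    5: False,    # partition: below range
    18: True,    # boundary: lower edge
    40: True,    # partition: in range
    65: True,    # boundary: upper edge
    90: False,   # partition: above range
}

for age, expected in cases.items():
    assert valid_age(age) == expected
print("all partition tests passed")
```

Five test cases cover the whole (infinite) input domain because every other input is equivalent to one of the chosen representatives.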
Structural Testing(White-Box Testing):
It is a method of testing software that tests internal structures or workings of
an application, as opposed to its functionality (i.e. black-box testing). In
white-box testing an internal perspective of the system, as well as
programming skills, are used to design test cases. The tester chooses inputs
to exercise paths through the code and determine the appropriate outputs.
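A minimal white-box sketch follows; the function under test is invented, and the point is only how tests are chosen from knowledge of the code's internal branches:

```python
# White-box (structural) testing sketch: inputs are chosen so that every
# branch of the (invented) function is executed at least once.

def classify(n):
    if n % 2 == 0:      # branch 1
        return "even"
    else:               # branch 2
        return "odd"

# Tests derived from the code structure: one input per branch.
assert classify(4) == "even"   # exercises the 'if' path
assert classify(7) == "odd"    # exercises the 'else' path
print("both paths covered")
```

Contrast this with black-box testing: here the tester inspects the code to see that two branches exist, and picks inputs specifically to drive execution down each one.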
Fig: Structural testing (tests are derived from knowledge of the component's code, and test data is derived from the tests)
In software testing, test automation is the use of special software (separate from the
software being tested) to control the execution of tests and the comparison of
actual outcomes with predicted outcomes. Test automation can automate
repetitive but necessary tasks in a formalized testing process already in place, or
add additional testing that would be difficult to perform manually. It reduces
testing costs by supporting the test process with a range of software tools. Systems
such as JUnit support the automatic execution of tests. There are two general
approaches to test automation.
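Automated test execution of the kind JUnit provides can be sketched with Python's built-in unittest framework. The `add` function is a trivial invented unit under test:

```python
# Test automation sketch in the spirit of JUnit, using Python's built-in
# unittest framework: tests are declared once, then discovered and executed
# automatically, with actual outcomes compared to predicted ones.

import unittest

def add(a, b):
    """Trivial unit under test (illustrative only)."""
    return a + b

class TestAdd(unittest.TestCase):
    def test_positive(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative(self):
        self.assertEqual(add(-2, -3), -5)

# The loader discovers every test_* method; the runner executes them all
# and records the results without manual intervention.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestAdd)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.testsRun)  # 2
```

Once written, the suite can be rerun after every change at near-zero cost, which is what makes regression testing economical.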
Software productivity is the ratio of the amount of software produced to the
labor and expense of producing it. There are two measures of software
productivity:
I. Function-related measures:
Productivity is expressed in terms of the amount of useful functionality
produced in some given time. Function points in a program are computed by
measuring program features:
A. External inputs and outputs
B. User interactions
C. External interfaces
D. Files used by the system
II. Size:
Lines of code delivered
Can also measure the number of delivered object code instructions or the number
of pages of system documentation
Useful for programming in FORTRAN, Assembly or COBOL
The more expressive the programming language, the lower the apparent
productivity
Example: A system might be coded in 5000 lines of assembly
code, with a development time of 28 weeks across the various phases.
Then productivity = (5000/28) * 4
= 714 lines/month
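The arithmetic in the example can be checked directly (the 4 is the assumed number of weeks per month used in the notes):

```python
# Productivity calculation from the example: 5000 lines of assembly code
# developed over 28 weeks, converted to lines per month (4 weeks/month).

def productivity_lines_per_month(loc, weeks, weeks_per_month=4):
    return loc / weeks * weeks_per_month

p = productivity_lines_per_month(5000, 28)
print(round(p))  # 714
```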
Fig: Estimation uncertainty (estimates may range from roughly 4x too high to 0.25x too low at the feasibility stage, narrowing as the project moves through requirements, design and code to delivery)
This is an empirical model that was derived by collecting data from a large number
of software projects. This is a well-documented and non-proprietary estimation
model.
The time required is independent of the number of people working on the project.
The staff required cannot be computed by dividing the development effort by the
required schedule, because the number of people working on a project varies
depending on the phase of the project.
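As a concrete sketch of such an empirical model, the basic COCOMO formulas can be used; the coefficients below are Boehm's published organic-mode values, an assumption since the notes do not fix a project class. Note how the average staffing level falls out of the effort and schedule estimates rather than being chosen freely.

```python
# Basic COCOMO coefficients for an organic-mode project
# (Boehm's published values: a=2.4, b=1.05, c=2.5, d=0.38).
A, B, C, D = 2.4, 1.05, 2.5, 0.38

def cocomo_basic(kloc):
    """Return (effort in person-months, schedule in months, avg staff)."""
    effort = A * kloc ** B          # effort in person-months
    schedule = C * effort ** D      # development time in months
    staff = effort / schedule       # average staffing level
    return effort, schedule, staff

effort, schedule, staff = cocomo_basic(32)
print(f"{effort:.1f} PM over {schedule:.1f} months, ~{staff:.1f} people")
```

For a 32 KLOC organic project this gives roughly 91 person-months over about 14 months, i.e. an average of 6 to 7 people.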
IOE QUESTIONS:
Regression testing can be used not only for testing the correctness of a program,
but often also for tracking the quality of its output. For instance, in the design of
a compiler, regression testing could track the code size, simulation time and
compilation time of the test suite cases.
What are the basic principles of software testing? List the characteristics of
testability of software. List out possible errors of black-box testing.
The basic principles of software testing are:
1) Testing an application exhaustively is impossible
Example:
Assume that we have been given an application which produces bank
statements that are sent to the customers. It is impossible for us to test each
and every statement that is generated for each customer. The only practical
way is to identify a suitable sample and test that.
2) Testing is context dependent: the rigor of testing depends on the risk of
the application.
Example:
An application built to be used inside an aircraft requires rigorous testing
and is subject to high quality standards. But an application built for storing
addresses on a personal computer need not be tested as rigorously as the
previous application.
3) Testing software is done to find defects, not to prove that the software is
error free.
Example:
A software product may have no reported defects (all identified defects are
fixed) but may still fail in the production environment.
Example (illustrating that defects tend to cluster in a few modules):
The module was prepared by a new programmer
The complexity of that particular module is very high, etc.
6) Performing the same kind of testing again and again does not identify
new defects.
Executing the same set of test cases repeatedly will not reveal further
defects present in the software.
Possible errors of black-box testing:
o Only a small number of possible inputs can be tested and many program
paths will be left untested
o Tests can be redundant if the software designer/developer has already run
the same test case
Testing is one of the very important core parts of software development and
implementation. Comment on this statement and explain various testing
techniques.
Software testing is the process of exercising a program with the specific intent
of finding errors prior to delivery to the end user. Testing is part of the broader
process of software verification and validation. Testing results in higher quality
software, more satisfied users, lower maintenance costs, and more accurate and
reliable results. Testing costs 1/3 to 1/2 of the total cost of the software
development process. Hence testing is a very important core part of software
development and implementation.
Various testing techniques are:
A. SYSTEM TESTING:
System testing fully exercises the computer-based system to verify that the
system elements have been properly integrated and perform their allocated
functions. An independent testing team is responsible for system testing. The
tests are based on the system specification.
- Integration testing:
In this type of testing the test team has access to the system code. The
system is tested as components are integrated.
The purpose of integration testing is to verify functional, performance, and
reliability requirements placed on major design items. These "design items",
i.e. assemblages (or groups of units), are exercised through their interfaces
using Black box testing, success and error cases being simulated via
appropriate parameter and data inputs. Simulated usage of shared data areas
and inter-process communication is tested and individual subsystems are
exercised through their input interface. Test cases are constructed to test that
all components within assemblages interact correctly, for example across
procedure calls or process activations, and this is done after testing
individual modules, i.e. unit testing. The overall idea is a "building block"
approach, in which verified assemblages are added to a verified base which
is then used to support the integration testing of further assemblages.
Some different types of integration testing are big bang, top-down, and
bottom-up.
Big Bang
In this approach, all or most of the developed modules are coupled together
to form a complete software system or major part of the system and then
used for integration testing. The Big Bang method is very effective for
saving time in the integration testing process. However, if the test cases and
their results are not recorded properly, the entire integration process will be
more complicated and may prevent the testing team from achieving the goal
of integration testing.
Top-down and Bottom-up
Bottom Up Testing is an approach to integrated testing where the lowest
level components are tested first, then used to facilitate the testing of higher
level components. The process is repeated until the component at the top of
the hierarchy is tested.
All the bottom or low-level modules, procedures or functions are integrated
and then tested. After the integration testing of lower level integrated
modules, the next level of modules will be formed and can be used for
integration testing. This approach is helpful only when all or most of the
modules of the same development level are ready. This method also helps to
determine the levels of software developed and makes it easier to report
testing progress in the form of a percentage.
Top Down Testing is an approach to integrated testing where the top
integrated modules are tested and the branch of the module is tested step by
step until the end of the related module.
The main advantage of the bottom-up approach is that bugs are more easily
located. With top-down, it is easier to find a missing branch link.
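The stub idea behind top-down integration can be sketched as follows; the modules and the 25% tax rate are hypothetical, chosen only to show a stub later being replaced by the real component.

```python
# Hypothetical two-level system: a top-level billing module that
# depends on a lower-level tax-calculation module.

def calculate_tax_stub(amount):
    """Stub standing in for the not-yet-integrated tax module.
    It returns a fixed, predictable value so the top-level module
    can be exercised on its own (top-down integration)."""
    return 10.0

def net_amount(amount, calculate_tax=calculate_tax_stub):
    """Top-level module under test; the dependency is injected so
    either the stub or the real module can be plugged in."""
    return amount - calculate_tax(amount)

# Top-down step: test the top-level module against the stub.
assert net_amount(100.0) == 90.0

# Later integration step: replace the stub with the real module
# (hypothetical 25% tax) and re-run the same tests.
def calculate_tax_real(amount):
    return amount * 0.25

assert net_amount(100.0, calculate_tax_real) == 75.0
```

Bottom-up integration is the mirror image: the real tax module would be tested first under a simple driver, and the billing module integrated on top of it afterwards.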
- Release testing:
In this type of testing a separate testing team tests a complete version of the
system before it is released to users. System testing by the development team
should focus on discovering bugs in the system, while the aim of release
testing is to check that the system meets the requirements of the system
stakeholders. Release testing is usually a black-box testing process where
tests are derived from the system specification. The system is treated as a
black box whose behavior can only be determined by studying its inputs and
the related outputs. Another name for this is 'functional testing', so
called because the tester is only concerned with functionality and not the
implementation of the software.
B.COMPONENT TESTING:
Fig: Component testing and system testing.
- The support and overall business will not realize the benefit of the solution as
rapidly.
1. Assumptions
2. Factors
3. Mathematical function
Quality Model: The quality model established in the first part of the standard, ISO
9126-1, classifies software quality into a structured set of characteristics and sub-
characteristics. These are also considered as non-functional requirements metrics.
These are:
1-Functionality - A set of attributes that bear on the existence of a set of functions
and their specified properties. The functions are those that satisfy stated or implied
needs.
Suitability
Accuracy
Interoperability – the capability of different programs to exchange data
via a common set of exchange formats, to read and write the same file
formats, and to use the same protocols
Compliance – the degree to which the software adheres to application-
related standards, conventions, and legal regulations.
Security – preventing unauthorized access to the software.
2- Reliability - A set of attributes that bear on the capability of software to
maintain its level of performance under stated conditions for a stated period of
time.
Maturity
Recoverability - the capability to re-establish the level of performance
and recover the data directly affected in the case of a failure
Fault Tolerance - the property that enables a system (often computer-
based) to continue operating properly in the event of the failure of (or
one or more faults within) some of its components.
3- Usability - A set of attributes that bear on the effort needed for use, and on the
individual assessment of such use, by a stated or implied set of users.
Learnability
Understandability
Operability - ability to keep a system in a functioning and operating
condition.
4- Efficiency - A set of attributes that bear on the relationship between the level of
performance of the software and the amount of resources used, under stated
conditions.
Time behavior
Resource behavior
5- Maintainability - A set of attributes that bear on the effort needed to make
specified modifications.
Stability
Changeability
Testability
6- Portability - A set of attributes that bear on the ability of software to be
transferred from one environment to another.
Installability
Replaceability
Adaptability
9.11 CMM
Fig: Configuration management activities. Change proposals feed into change
management, version management, system building and release management.
Change management is concerned with keeping track of these changes and ensuring that they
are implemented in the most cost-effective way.
Change Management Process
Request change by completing a change request form
Analyze change request
if change is valid then
    Assess how change might be implemented
    Assess change cost
    Submit request to change control board
    if change is accepted then
        repeat
            make changes to software
            submit changed software for quality approval
        until software quality is adequate
        create new system version
    else
        reject change request
else
    reject change request
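The process above can be sketched as a small function; the request dictionary and the two callbacks standing in for the change control board and the quality check are illustrative assumptions, not part of the notes.

```python
# Minimal sketch of the change management process; all names are
# illustrative assumptions.

def process_change_request(request, board_accepts, quality_ok):
    """Walk one change request through the process.
    `board_accepts` and `quality_ok` stand in for the change control
    board's decision and the quality-approval check."""
    if not request.get("valid", False):
        return "rejected"
    # Assess implementation and cost, then submit to the board.
    if not board_accepts(request):
        return "rejected"
    # Make changes and resubmit until quality is adequate.
    attempts = 0
    while not quality_ok(request, attempts):
        attempts += 1
    return "new system version created"

# Example: board accepts, quality adequate after one rework cycle.
result = process_change_request(
    {"valid": True, "description": "fix report layout"},
    board_accepts=lambda r: True,
    quality_ok=lambda r, attempts: attempts >= 1,
)
print(result)  # new system version created
```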
Versions/variants/releases
• Version An instance of a system which is functionally distinct in some way from other
system instances.
• Variant An instance of a system which is functionally identical but non-functionally
distinct from other instances of a system.
• Release An instance of a system which is distributed to users outside of the development
team.
Version identification
• Procedures for version identification should define an unambiguous way of identifying
component versions.
• There are three basic techniques for component identification
a) Version numbering
b) Attribute-based identification
c) Change-oriented identification
a) Version numbering
• A simple naming scheme uses a linear derivation, e.g. V1, V1.1, V1.2, V2.1, V2.2, etc.
• The actual derivation structure is a tree or a network rather than a sequence.
• A hierarchical naming scheme leads to fewer errors in version identification.
b) Attribute-based identification
• Attributes can be associated with a version with the combination of attributes identifying
that version
Examples of attributes are Date, Creator, Programming Language, Customer, Status etc.
• This is more flexible than an explicit naming scheme for version retrieval; however, it
can cause problems with uniqueness: the set of attributes has to be chosen so that all
versions can be uniquely identified.
• In practice, a version also needs an associated name for easy reference.
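Attribute-based retrieval can be sketched as filtering on a combination of attributes; the version records and attribute names below are illustrative, not from the notes.

```python
# Sketch of attribute-based version identification: each version is
# identified by a combination of attributes (illustrative data).

versions = [
    {"name": "V1.1", "date": "2004-01-03", "language": "Java",
     "customer": "ACME", "status": "released"},
    {"name": "V1.2", "date": "2004-02-10", "language": "Java",
     "customer": "ACME", "status": "in-test"},
]

def retrieve(**attributes):
    """Return versions whose attributes match all the given values."""
    return [v for v in versions
            if all(v.get(k) == val for k, val in attributes.items())]

matches = retrieve(customer="ACME", status="released")
print([v["name"] for v in matches])  # ['V1.1']
```

As the note above says, each record still carries a plain name for easy reference alongside its identifying attributes.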
c) Change-oriented identification
• Integrates versions and the changes made to create these versions.
• Used for systems rather than components.
• Each proposed change has a change set that describes changes made to implement that
change.
• Change sets are applied in sequence so that, in principle, a version of the system that
incorporates an arbitrary set of changes may be created.
Release management
• Release management must incorporate changes forced on the system by errors
discovered by users and by hardware changes.
• It must also incorporate new system functionality.
• Release planning is concerned with when to issue a system version as a release.
Release problems
• Customers may not want a new release of the system because they have no need
for the new facilities.
• Release management should not assume that all previous releases have been accepted.
• All files required for a release should be re-created when a new release is installed.
CM workbenches
1. Open workbenches
Tools for each stage in the CM process are integrated through organizational procedures
and scripts. Gives flexibility in tool selection.
2. Integrated workbenches
Provide whole-process, integrated support for configuration management. More tightly
integrated tools so easier to use. However, the cost is less flexibility in the tools used.
3. Change management tools
Change management is a procedural process so it can be modeled and integrated with a
version management system.
Change management tools
Form editor to support processing the change request forms
Workflow system to define who does what and to automate information transfer
Change database that manages change proposals and is linked to a VM system.
Change reporting system that generates management reports on the status of change
requests.
4. Version management tools
Version management tools store and manage the versions of system components in
a database.
● The configuration database should record information about changes and change requests.
● CASE tools may be stand-alone tools or may be integrated systems which provide
integrated support for version management, system building and change management.
Explain software requirement specification (SRS). What are the characteristics of a good
software requirement specification document? (068 Baisakh)
Explain the importance of requirement engineering. List out requirement elicitation techniques.
What are the problems in formation of requirements? (067 Asadh)