Lecture Notes
on
Software Engineering
B.Tech CSE, 6th Sem
Prepared by Dr.AparnaRajesh A
Aryan Institute of Engineering and Technology-CSE
MODULE-1
Software products are software systems delivered to the customer together with the
documentation that describes how to install and use the system. In certain cases, software
products may be part of system products, where hardware as well as software is delivered to a
customer. Software products are produced with the help of the software process: the software
process is the way in which we produce software.
Software Crisis is a term used in computer science for the difficulty of writing useful and
efficient computer programs in the required time. The software crisis arose from using the same
workforce, the same methods, and the same tools even as software demand, software complexity,
and software challenges increased rapidly. With the increase in software complexity, many
software problems arose because the existing methods were insufficient.
If we use the same workforce, methods, and tools after a fast increase in software demand,
complexity, and challenges, then issues arise such as software budget problems, efficiency
problems, quality problems, and management and delivery problems. This condition is called a
Software Crisis. Its symptoms included:
• The cost of owning and maintaining software was as expensive as developing the software.
• Projects frequently ran over schedule.
• Software was very inefficient.
• The quality of the software was low.
• Software often did not meet user requirements.
• The average software project overshot its schedule by half.
• Some software was never delivered at all.
• Resource utilization was non-optimal.
• Software was challenging to alter, debug, and enhance.
• Growing software complexity made changes even harder.
Decomposition:
Decomposition is the process of breaking a problem down, for example breaking functions down
into smaller parts. It is another important principle of software engineering for handling problem
complexity. This principle is used extensively by several software engineering techniques to
contain the exponential growth of perceived problem complexity, and it is popularly known as the
divide-and-conquer principle.
Functional Decomposition:
Functional decomposition is the term engineers use for a set of steps in which they break down the
overall function of a device, system, or process into its smaller parts.
Steps for the Functional Decomposition:
1. Find the most general function
2. Find the closest sub-functions
3. Find the next levels of sub-functions
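To make these steps concrete, here is a minimal Python sketch (all function names and data are invented for illustration) that decomposes a hypothetical "process exam result" function: the most general function is expressed in terms of its closest sub-functions, each of which could in turn be decomposed further.

```python
# Hypothetical functional decomposition of "process a student's exam result".

def collect_marks(student_id: str) -> list[float]:
    """Sub-function: gather the raw marks for a student (stub data here)."""
    return [78.0, 65.5, 82.0]

def compute_total(marks: list[float]) -> float:
    """Sub-function: aggregate the marks."""
    return sum(marks)

def assign_grade(total: float, max_total: float) -> str:
    """Sub-function: map a percentage to a grade."""
    percent = 100.0 * total / max_total
    return "A" if percent >= 80 else "B" if percent >= 60 else "C"

def process_result(student_id: str) -> str:
    """Most general function, written purely in terms of its sub-functions."""
    marks = collect_marks(student_id)
    total = compute_total(marks)
    return assign_grade(total, max_total=300.0)

print(process_result("S101"))  # prints "B" for the stub data above
```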
SDLC Phases:
1. Planning:
Planning is the initial stage. This phase deals with things like the cost of developing the product,
capacity planning around team members, the project schedule, and resource allocation. It can
either be the plan for a new idea or a study of the current system with improvement as the
objective. The planning stage also incorporates project plans, cost assessments, and procurement
requirements.
2. Analysis:
The analysis phase is the most important phase of the software development life cycle since it sets
the requirements for what to build. In this phase, it is vital to understand the client’s requirements
and make sure everyone is on board with the same understanding.
3. Design:
In this phase, the system and software design is prepared from the requirement specifications.
System design helps in specifying hardware and system requirements and also helps in defining
the overall system architecture. The system design specifications serve as input for the next phase
of the model.
4. Implementation:
After receiving the system design documents, the work is divided into modules, and actual
front-end and back-end coding starts. Since the code is produced in this phase, it is the main focus
for the developer. Implementation is the longest phase of the software development life cycle (SDLC).
5. Testing
After the code is developed, it is tested against the requirements to make sure that the product
actually solves the needs addressed and gathered during the requirements phase. During the
testing phase, all types of functional testing, such as unit testing, integration testing, system
testing, and acceptance testing, are done, and non-functional testing is carried out as well.
6. Maintenance: When the customers start using the developed system, the actual problems
come up and need to be solved from time to time. This process, in which the developed product
is taken care of, is known as maintenance. It is the last stage, but the work does not end here: the
product is checked continually to ensure it works properly, without any bugs or defects.
Process Models
A software process model is an abstraction of the software development process. The models
specify the stages and order of a process. So, think of this as a representation of the order of
activities of the process and the sequence in which they are performed.
The goal of a software process model is to provide guidance for controlling and coordinating the
tasks to achieve the end product and objectives as effectively as possible.
There are many kinds of process models for meeting different requirements. We refer to these
as SDLC models (Software Development Life Cycle models). The most popular and important
SDLC models are as follows:
• Waterfall model
• V model
• Incremental model
• RAD model
• Agile model
• Iterative model
• Prototype model
• Spiral model
When choosing among these models, several factors should be considered:
1. Project requirements
Before you choose a model, take some time to go through the project requirements and clarify
them alongside your organization’s or team’s expectations. Will the user need to specify
requirements in detail after each iterative session? Will the requirements change during the
development process?
2. Project size
Consider the size of the project you will be working on. Larger projects mean bigger teams, so
you’ll need more extensive and elaborate project management plans.
3. Project complexity
Complex projects may not have clear requirements. The requirements may change often, and the
cost of delay is high. Ask yourself if the project requires constant monitoring or feedback from the
client.
4. Cost of delay
Is the project highly time-bound with a huge cost of delay, or are the timelines flexible?
5. Customer involvement
Do you need to consult the customers during the process? Does the user need to participate in all
phases?
6. Familiarity with technology
This involves the developers’ knowledge and experience with the project domain, software tools,
language, and methods needed for development.
7. Project resources
This involves the amount and availability of funds, staff, and other resources.
Waterfall Model:
The waterfall model and its derivatives were extremely popular in the 1970s, and the model is still
heavily used across many development projects. It is possibly the most obvious and intuitive way in
which software can be developed through a team effort.
The waterfall model is the oldest paradigm for software engineering. The original waterfall model
was proposed by Winston Royce. We can think of the waterfall model as a generic model that has
been extended in many ways, catering to specific software development situations, to realize all the
other software life cycle models.
Iterative Model
The iterative development model develops a system by building small portions of all the features.
This helps to meet the initial scope quickly and release it for feedback.
In the iterative model, you start off by implementing a small set of software requirements. These
are then enhanced iteratively in the evolving versions until the system is completed. This process
model starts with part of the software, which is then implemented and reviewed to identify further
requirements.
Like the incremental model, the iterative model allows you to see the results at the early stages of
development. This makes it easy to identify and fix any functional or design flaws. It also makes
it easier to manage risk and change requirements.
The deadline and budget may change throughout the development process, especially for large
complex projects. The iterative model is a good choice for large software that can be easily broken
down into modules.
Prototyping Model:
The Prototyping Model is one of the most popularly used Software Development Life Cycle
Models (SDLC models). This model is used when the customers do not know the exact project
requirements beforehand. In this model, a prototype of the end product is first developed, tested,
and refined as per customer feedback repeatedly till a final acceptable prototype is achieved which
forms the basis for developing the final product.
In this process model, the system is partially implemented before or during the analysis phase
thereby giving the customers an opportunity to see the product early in the life cycle. The process
starts by interviewing the customers and developing the incomplete high-level paper model. This
document is used to build the initial prototype supporting only the basic functionality as desired
by the customer. Once the customer figures out the problems, the prototype is further refined to
eliminate them. The process continues until the user approves the prototype and finds the working
model to be satisfactory.
There are four types of Prototyping Models, which are described below.
1. Rapid Throwaway Prototyping
This technique offers a useful method of exploring ideas and getting customer feedback for each
of them. In this method, a developed prototype need not necessarily become part of the ultimately
accepted product. Customer feedback helps in preventing unnecessary design faults, and hence
the final prototype developed is of better quality.
2. Evolutionary Prototyping
In this method, the prototype developed initially is incrementally refined on the basis of customer
feedback till it finally gets accepted. In comparison to Rapid Throwaway Prototyping, it offers a
better approach that saves time as well as effort. This is because developing a prototype from
scratch for every iteration of the process can sometimes be very frustrating for the developers.
3. Incremental Prototyping
In this type of incremental prototyping, the final expected product is broken into different small
pieces of prototypes and developed individually. In the end, when all individual pieces are properly
developed, then the different prototypes are collectively merged into a single final product in their
predefined order. It’s a very efficient approach that reduces the complexity of the development
process, where the goal is divided into sub-parts and each sub-part is developed individually. The
time interval between the project’s beginning and final delivery is substantially reduced because
all parts of the system are prototyped and tested simultaneously. Of course, there is the
possibility that the pieces just do not fit together due to some incompleteness in the development
phase; this can only be fixed by careful and complete planning of the entire system before
prototyping starts.
4. Extreme Prototyping
This method is mainly used for web development. It consists of three sequential, independent
phases:
1. In the first phase, a basic prototype with all the existing static pages is presented in HTML
format.
2. In the second phase, functional screens are made with a simulated data process using a
prototype services layer.
3. In the final phase, all the services are implemented and associated with the final
prototype.
This Extreme Prototyping method makes project cycles and delivery fast and robust, and it keeps
the entire development team focused on product deliveries rather than on discovering all possible
needs and specifications up front.
Disadvantages of the Prototyping Model:
• The prototype may give a false sense of completion, leading to the premature release of the
product.
• The prototype may not consider technical feasibility and scalability issues that can arise during
the final product development.
• The prototype may be developed using different tools and technologies, leading to additional
training and maintenance costs.
• The prototype may not reflect the actual business requirements of the customer, leading to
dissatisfaction with the final product.
Evolutionary Model
The evolutionary model is a combination of the iterative and incremental models of the software
development life cycle. Rather than delivering the system in one big-bang release, it is delivered
incrementally over time. Some initial requirements and architecture envisioning need to be done.
It is better suited to software products whose feature sets are redefined during development
because of user feedback and other factors.
What is the Evolutionary Model?
The Evolutionary development model divides the development cycle into smaller, incremental
waterfall models in which users can get access to the product at the end of each cycle.
1. Feedback is provided by the users on the product for the planning stage of the next cycle
and the development team responds, often by changing the product, plan, or process.
2. Therefore, the software product evolves with time.
3. All the models have the disadvantage that the duration of time from the start of the project
to the delivery time of a solution is very high.
4. The evolutionary model solves this problem with a different approach.
5. The evolutionary model suggests breaking down work into smaller chunks, prioritizing
them, and then delivering those chunks to the customer one by one.
6. The number of chunks is large, and so is the number of deliveries made to the customer.
7. The main advantage is that the customer's confidence increases, since the customer
constantly receives quantifiable goods or services from the beginning of the project with
which to verify and validate the requirements.
8. The model allows for changing requirements, and all work is broken down into
maintainable work chunks.
When to use the Evolutionary Model:
1. It is used in large projects where modules for incremental implementation can easily be
identified. The evolutionary model is commonly used when the customer wants to start
using the core features instead of waiting for the full software.
2. The evolutionary model is also used in object-oriented software development, because the
system can easily be partitioned into units in terms of objects.
Drawback:
1. Sometimes it is hard to divide the problem into several versions that would be acceptable to
the customer and that can be incrementally implemented and delivered.
Spiral Model:
The Spiral Model is one of the most important Software Development Life Cycle models and
provides support for risk handling. This section discusses the Spiral Model in detail.
The Spiral Model is often used for complex and large software development projects, as it allows
for a more flexible and adaptable approach to software development. It is also well suited to
projects with significant uncertainty or high levels of risk. The radius of the spiral at any point
represents the expenses (cost) of the project so far, and the angular dimension represents the
progress made so far in the current phase.
Each phase of the Spiral Model is divided into four quadrants. The
functions of these four quadrants are discussed below:
1. Objectives determination and identification of alternative solutions: Requirements are gathered
from the customers and the objectives are identified, elaborated, and analyzed at the start of
every phase. Then alternative solutions possible for the phase are proposed in this quadrant.
2. Identify and resolve Risks: During the second quadrant, all the possible solutions are
evaluated to select the best possible solution. Then the risks associated with that solution are
identified and the risks are resolved using the best possible strategy. At the end of this
quadrant, the Prototype is built for the best possible solution.
3. Develop the next version of the Product: During the third quadrant, the identified
features are developed and verified through testing. At the end of the third quadrant, the next
version of the software is available.
4. Review and plan for the next Phase: In the fourth quadrant, the Customers evaluate the
so-far developed version of the software. In the end, planning for the next phase is started.
4. Customer Satisfaction: Customers can see the development of the product at an early
phase of the software development and thus become habituated to the system by using it
before completion of the total product.
5. Iterative and Incremental Approach: The Spiral Model provides an iterative and
incremental approach to software development, allowing for flexibility and adaptability in
response to changing requirements or unexpected events.
6. Emphasis on Risk Management: The Spiral Model places a strong emphasis on risk
management, which helps to minimize the impact of uncertainty and risk on the software
development process.
7. Improved Communication: The Spiral Model provides for regular evaluations and
reviews, which can improve communication between the customer and the development
team.
8. Improved Quality: The Spiral Model allows for multiple iterations of the software
development process, which can result in improved software quality and reliability.
RAD model
The Rapid Application Development Model was first proposed by IBM in the 1980s. The RAD
model is a type of incremental process model in which there is an extremely short development
cycle. When the requirements are fully understood and the component-based construction
approach is adopted then the RAD model is used. Various phases in RAD are Requirements
Gathering, Analysis and Planning, Design, Build or Construction, and finally Deployment.
The critical feature of this model is the use of powerful development tools and techniques. A
software project can be implemented using this model if the project can be broken down into
small modules, where each module can be assigned independently to a separate team. These
modules can finally be combined to form the final product. Development of each module
involves the same basic steps as in the waterfall model, i.e., analyzing, designing, coding, and
then testing. Another striking feature of this model is its short time frame: the delivery
time-box is generally 60-90 days.
Multiple teams work in parallel on developing the software system using the RAD model.
The use of powerful development tools such as Java, C++, Visual Basic, and XML is also an
integral part of such projects. This model consists of four basic phases.
Advantages:
• The use of reusable components helps to reduce the cycle time of the project.
• Feedback from the customer is available at the initial stages.
• Reduced costs as fewer developers are required.
• The use of powerful development tools results in better quality products in comparatively
shorter time spans.
• The progress and development of the project can be measured through the various stages.
• It is easier to accommodate changing requirements due to the short iteration time spans.
• Productivity may be quickly boosted with a lower number of employees.
Disadvantages:
• The use of powerful and efficient tools requires highly skilled professionals.
• The absence of reusable components can lead to the failure of the project.
• The team leader must work closely with the developers and customers to close the project
on time.
• The systems which cannot be modularized suitably cannot use this model.
• Customer involvement is required throughout the life cycle.
• It is not meant for small-scale projects as in such cases, the cost of using automated tools
and techniques may exceed the entire budget of the project.
• Not every application is suitable for RAD.
Applications:
1. This model should be used for a system with known requirements and requiring a short
development time.
2. It is also suitable for projects where requirements can be modularized and reusable
components are also available for development.
3. The model can also be used when already existing system components can be used in
developing a new system with minimum changes.
4. This model can only be used if the teams consist of domain experts. This is because relevant
knowledge and the ability to use powerful techniques are a necessity.
5. The model should be chosen when the budget permits the use of automated tools and
techniques required.
AGILE MODEL
The meaning of Agile is swift or versatile. "Agile process model" refers to a software development
approach based on iterative development. Agile methods break tasks into smaller iterations, or
parts, and do not directly involve long-term planning. The project scope and requirements are laid
down at the beginning of the development process, and plans regarding the number of iterations,
and the duration and scope of each iteration, are clearly defined in advance.
Each iteration is considered as a short time "frame" in the Agile process model, which typically
lasts from one to four weeks. The division of the entire project into smaller parts helps to minimize
the project risk and to reduce the overall project delivery time requirements. Each iteration
involves a team working through a full software development life cycle including planning,
requirements analysis, design, coding, and testing before a working product is demonstrated to the
client.
1. Requirements gathering: In this phase, you must define the requirements. You should
explain business opportunities and plan the time and effort needed to build the project.
Based on this information, you can evaluate technical and economic feasibility.
2. Design the requirements: When you have identified the project, work with stakeholders
to define requirements. You can use user flow diagrams or high-level UML diagrams
to show how new features will work and how they will apply to your existing system.
3. Construction/ iteration: When the team defines the requirements, the work begins.
Designers and developers start working on their project, which aims to deploy a working
product. The product will undergo various stages of improvement, so it includes simple,
minimal functionality.
4. Testing: In this phase, the Quality Assurance team examines the product's performance
and looks for bugs.
5. Deployment: In this phase, the team issues a product for the user's work environment.
6. Feedback: After releasing the product, the last step is feedback. In this, the team receives
feedback about the product and works through the feedback.
Principles of Agile:
1. The highest priority is to satisfy the customer through early and continuous delivery of
valuable software.
2. It welcomes changing requirements, even late in development.
3. Deliver working software frequently, from a couple of weeks to a couple of months, with a
preference for the shortest timescale.
4. Build projects around motivated individuals. Give them the environment and the support
they need and trust them to get the job done.
5. Working software is the primary measure of progress.
6. Simplicity, the art of maximizing the amount of work not done, is essential.
7. The most efficient and effective method of conveying information to and within a
development team is face-to-face conversation.
8. Gauge your progress by the amount of work that has been finished.
9. Never give up on excellence.
10. Take advantage of change to gain a competitive edge.
• Step 1: In the first step, concept, and business opportunities in each possible project are
identified and the amount of time and work needed to complete the project is estimated.
Based on their technical and financial viability, projects can then be prioritized and
determined which ones are worthwhile pursuing.
• Step 2: In the second phase, known as inception, the customer is consulted regarding the
initial requirements, team members are selected, and funding is secured. Additionally, a
schedule outlining each team’s responsibilities and the precise time at which each sprint’s
work is expected to be finished should be developed.
• Step 3: Teams begin building functional software in the third step, iteration/construction,
based on requirements and ongoing feedback. Iterations, also known as single development
cycles, are the foundation of the Agile software development cycle. Popular Agile
methodologies include:
o Scrum
o Crystal
o Dynamic Systems Development Method (DSDM)
o Feature Driven Development (FDD)
o Lean Software Development
o eXtreme Programming (XP)
Scrum
SCRUM is an agile development process focused primarily on ways to manage tasks in team-
based development conditions.
o Scrum Master: The Scrum Master sets up the team, arranges the meetings, and removes
obstacles from the process.
o Product Owner: The Product Owner creates the product backlog, prioritizes the backlog,
and is responsible for the delivery of functionality at each iteration.
o Scrum Team: The team manages and organizes its own work to complete each sprint
or cycle.
eXtreme Programming (XP)
This type of methodology is used when customers' demands or requirements are constantly
changing, or when they are not sure about the system's performance.
MODULE-2
Requirement gathering is a crucial phase in the software development life cycle where
information about the desired features, functionalities, and constraints of a software system is
collected. Effective requirement gathering is essential for understanding the needs and
expectations of stakeholders, guiding the development process, and delivering a product that meets
user requirements. Here are key steps and considerations in the requirement gathering process:
1. **Identify Stakeholders:**
- Identify and involve all relevant stakeholders, including end-users, customers, project
managers, developers, testers, and other individuals or groups who have a vested interest in the
software.
4. **Organize Workshops:**
- Conduct workshops or group sessions to bring together various stakeholders for collaborative
discussions. Workshops can facilitate communication and help uncover different perspectives and
requirements.
8. **Document Requirements:**
- Document requirements in a clear and structured manner. This documentation may include
functional requirements, non-functional requirements, use cases, user stories, and any other
relevant information.
9. **Prioritize Requirements:**
- Work with stakeholders to prioritize requirements based on their importance and urgency. This
helps in managing scope and focusing on critical features.
13. **Traceability:**
- Establish traceability between requirements and other project artifacts, such as design
documents and test cases. This helps ensure that every requirement is accounted for throughout
the development process (an illustrative traceability matrix is shown after this list).
14. **Communication:**
- Maintain open and effective communication channels with stakeholders. Regularly update
them on the progress of requirement gathering and seek their feedback.
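For illustration only, a simple requirements traceability matrix might look like the following; the requirement IDs, artifact references, and test-case names here are invented examples, not part of any standard:

| Requirement ID | Requirement Summary | Design Artifact | Test Case(s) |
|----------------|---------------------|-----------------|--------------|
| REQ-001 | User login | Design Doc, Sec. 3.1 | TC-101, TC-102 |
| REQ-002 | Generate result report | Design Doc, Sec. 4.2 | TC-210 |
| REQ-003 | Nightly data backup | Design Doc, Sec. 5.3 | TC-305 |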
Effective requirement gathering lays the foundation for a successful software development project
by providing a clear understanding of what needs to be built and guiding subsequent phases of the
development life cycle. It is a collaborative and dynamic process that requires ongoing
communication and collaboration among all stakeholders.
Requirements Analysis
Requirement analysis is a significant and essential activity that follows elicitation. We analyze,
refine, and scrutinize the gathered requirements to make them consistent and unambiguous. This
activity reviews all requirements and may provide a graphical view of the entire system. After the
completion of the analysis, the understandability of the project is expected to improve
significantly. Here, we may also use interaction with the customer to clarify points of confusion
and to understand which requirements are more important than others.
(i) Draw the context diagram: The context diagram is a simple model that defines the boundaries
and interfaces of the proposed system with the external world. It identifies the entities outside the
proposed system that interact with the system. A context diagram can be drawn, for example, for
a student result management system.
(ii) Development of a Prototype (optional): One effective way to find out what the customer
wants is to construct a prototype, something that looks and preferably acts as part of the system
they say they want.
We can use their feedback to continuously modify the prototype until the customer is satisfied.
Hence, the prototype helps the client to visualize the proposed system and increases
understanding of the requirements. When developers and users are not sure about some of the
elements, a prototype may help both parties to reach a final decision.
Some projects are developed for the general market. In such cases, the prototype should be shown
to some representative sample of the population of potential purchasers. Even though a person who
tries out a prototype may not buy the final system, their feedback may allow us to make the
product more attractive to others.
The prototype should be built quickly and at a relatively low cost. Hence it will always have
limitations and would not be acceptable in the final system. This is an optional activity.
(iii) Model the requirements: This process usually consists of various graphical representations
of the functions, data entities, external entities, and the relationships between them. The graphical
view may help to find incorrect, inconsistent, missing, and superfluous requirements. Such models
include the Data Flow diagram, Entity-Relationship diagram, Data Dictionaries, State-transition
diagrams, etc.
(iv) Finalize the requirements: After modeling the requirements, we will have a better
understanding of the system behavior. The inconsistencies and ambiguities have been identified
and corrected. The flow of data amongst various modules has been analyzed. Elicitation and
analysis activities have provided better insight into the system. Now we finalize the analyzed
requirements, and the next step is to document these requirements in a prescribed format.
Functional Requirements
Functional requirements describe what the system should do. They typically cover the following
aspects:
1. **User Interfaces:** Describes how users interact with the system, including details about
menus, screens, buttons, and other interface elements.
2. **Data Handling:** Specifies how the system will manage and manipulate data, including data
input, storage, retrieval, and processing.
3. **Processing Logic:** Defines the algorithms and logic that the system must follow to perform
specific functions or operations.
4. **System Behavior:** Describes the expected behavior of the system under different conditions
and scenarios.
5. **Business Rules:** Outlines the rules and regulations that the system must adhere to, often
derived from the business or operational processes it supports.
6. **Security Requirements:** Specifies the security features and measures the system must
implement to protect data and ensure authorized access.
9. **External Interfaces:** Describes how the software will interact with other systems, services,
or external components.
10. **Error Handling:** Outlines how the system should respond to errors or exceptional
situations, including error messages and recovery mechanisms.
11. **Reporting:** Specifies the types of reports the system should generate and the information
they should include.
12. **Audit Trail:** Describes the system's ability to record and track user activities for auditing
purposes.
13. **Documentation:** Includes requirements for user manuals, technical documentation, and
any other documentation needed for system understanding and maintenance.
14. **Legal and Compliance Requirements:** Outlines any legal or regulatory requirements that
the system must comply with.
15. **Testing Requirements:** Describes the conditions and criteria for testing the software to
ensure that it meets the specified functional requirements.
16. **Usability:** Specifies the characteristics that contribute to the system's ease of use,
including user feedback, help features, and accessibility.
Functional requirements are crucial for both developers and stakeholders, as they provide a clear
roadmap for the development process and serve as a basis for validating the successful
implementation of the software.
Non-Functional Requirements
Non-functional requirements describe how well the system should perform rather than what it
should do. They typically cover the following aspects:
1. **Performance:** Describes how the system performs in terms of speed, response time,
throughput, and efficiency. Examples include maximum response time for user interactions,
system scalability, and the ability to handle a specific number of concurrent users.
2. **Reliability:** Specifies the system's ability to perform its functions consistently and reliably
under various conditions. This includes measures such as system uptime, availability, and fault
tolerance.
3. **Availability:** Defines the percentage of time the system should be operational and
accessible to users. Availability requirements often include factors such as planned downtime for
maintenance.
4. **Scalability:** Describes the system's capability to handle increased load or user demands by
expanding resources (e.g., adding more servers) without degrading performance.
5. **Security:** Outlines the measures and mechanisms to protect the system from unauthorized
access, data breaches, and other security threats. This includes authentication, authorization,
encryption, and audit trails.
6. **Usability:** Specifies characteristics related to the user interface and overall user experience.
This may include factors like ease of use, accessibility, and user satisfaction.
7. **Maintainability:** Describes the ease with which the software can be maintained, updated,
and enhanced over time. This includes aspects like code readability, modularity, and
documentation.
9. **Compatibility:** Specifies the compatibility of the software with other systems, software, or
technologies, ensuring seamless integration.
10. **Interoperability:** Describes how well the system can interact with other systems, often
focusing on communication protocols and data exchange formats.
11. **Reliability:** Outlines the system's ability to perform its functions consistently and without
errors over time.
12. **Compliance:** Ensures that the system adheres to legal, regulatory, and industry-specific
standards. This may include privacy regulations, data protection laws, or industry-specific
guidelines.
14. **Backup and Recovery:** Specifies the procedures and requirements for data backup,
recovery, and disaster recovery to ensure data integrity and availability.
Non-functional requirements are critical for shaping the overall performance and characteristics
of the software system and play a significant role in its success and acceptance by users and
stakeholders.
There may be factors, such as standards or hardware limitations, that have an impact on the design
of the system. An SRS should identify and specify all such constraints.
Non-Functional Attributes
In this section, the non-functional attributes required by the software system for better
performance are explained. Examples include security, portability, reliability, reusability,
application compatibility, data integrity, and scalability.
Preliminary Schedule and Budget
In this section, the initial version of the project plan and the budget are explained, including the
overall time duration and the overall cost required for development of the project.
Uses of SRS document
• The development team requires it for developing the product according to the need.
• Test plans are generated by the testing group based on the described external behavior.
• Maintenance and support staff need it to understand what the software product is supposed to
do.
• Project managers base their plans and estimates of schedule, effort, and resources on it.
• Customers rely on it to know what product they can expect.
• It acts as a contract between developer and customer.
• It serves documentation purposes.
Decision Table
1. Requirement Description:
• Identify the specific requirements or features in the system that involve decision-making
logic.
2. Condition and Action Definition:
• Clearly define the conditions that influence the decision and the corresponding actions
that should be taken based on those conditions.
3. Decision Table Representation:
• Create a decision table to represent the various combinations of conditions and actions.
• Use columns for each condition and action, and rows for each unique combination.
4. Integration into SRS:
• Integrate the decision table into the SRS document, typically in the section related to the
specific requirement or feature it addresses.
• Provide context and explanations as needed to ensure that readers understand the purpose
and interpretation of the decision table.
5. Example:
• For instance, if the SRS specifies a requirement related to user authentication, you might
include a decision table that outlines conditions such as "Correct Username," "Correct
Password," and actions like "Grant Access" or "Deny Access."
### Requirement: User Authentication

| Rule | Correct Username | Correct Password | Action       |
|------|------------------|------------------|--------------|
| 1    | Yes              | Yes              | Grant Access |
| 2    | Yes              | No               | Deny Access  |
| 3    | No               | Any              | Deny Access  |

#### Explanation:
- When both the correct username and password are provided, access is granted.
- If the username is correct but the password is incorrect, access is denied.
- If the username is incorrect, access is denied.
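As a sketch of how such a decision table maps directly onto code, the following hypothetical Python function (the names are invented for illustration) implements the three rules above, with one test case per rule:

```python
def authenticate(correct_username: bool, correct_password: bool) -> str:
    """Implements the user-authentication decision table.

    Rule 1: correct username and correct password -> grant access.
    Rule 2: correct username, incorrect password  -> deny access.
    Rule 3: incorrect username (password ignored) -> deny access.
    """
    if correct_username and correct_password:
        return "Grant Access"
    return "Deny Access"

# Each decision-table rule becomes one test case:
assert authenticate(True, True) == "Grant Access"   # Rule 1
assert authenticate(True, False) == "Deny Access"   # Rule 2
assert authenticate(False, True) == "Deny Access"   # Rule 3 (password irrelevant)
```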
Decision Tree
In a Software Requirements Specification (SRS) document, decision trees are not typically used
in their graphical form, as they are more commonly associated with algorithmic or data analysis
processes. However, decision logic and conditions that lead to different outcomes can certainly
be expressed and documented within the SRS.
1. **Requirement Description:**
- Clearly define the specific requirement or feature that involves decision-making.
3. **Decision Logic:**
- Present the decision logic in a structured and textual manner.
- Use if-else statements, bullet points, or any other format that clearly outlines the conditions
and corresponding actions.
4. **Example:**
- If the SRS addresses a requirement related to user authentication, you might express the
decision logic like this:
```markdown
### Requirement: User Authentication
1. If Correct Username and Correct Password: Grant Access.
2. If Correct Username and Incorrect Password: Deny Access.
3. If Incorrect Username: Deny Access.
```
5. **Pseudo-Code:**
- In some cases, especially for complex decision logic, pseudo-code might be included to
provide a more algorithmic representation.
```markdown
### Requirement: Complex Decision Logic

#### Pseudo-Code:

    if (Condition A is true) {
        // Action A
        performActionA();
    } else if (Condition B is true) {
        // Action B
        performActionB();
    } else {
        // Default Action
        performDefaultAction();
    }
```
In this way, while not using a graphical decision tree, you can clearly represent decision logic
within the SRS using text, pseudo-code, or any other format that enhances understanding. The
goal is to communicate how the system should behave under different conditions.
• Describes the content and qualities of a good software requirements specification (SRS)
• Presents several sample SRS outlines
• Establish the basis for agreement between the customers and the suppliers on what the
software product is to do
• Reduce the development effort
o Early requirements reduce later redesign, recoding, and retesting
• Provide a basis for realistic estimates of costs and schedules
• Provide a basis for validation and verification
• Facilitate transfer of the software product to new users or new machines
• Serve as a basis for enhancement requests
• Goals of SRS
o Functionality, interfaces, performance, qualities, design constraints
• Environment of the SRS
o Where does it fit in the overall project hierarchy
• Characteristics of a good SRS
o Generalization of the characteristics to the document
• Evolution of the SRS
o Implies a change management process
• Prototyping
o Helps elicit software requirements and reach closure on the SRS
• Including design and project requirements in the SRS
o Focus on external behavior and the product, not the design and the production
process
1. Introduction
2. General description of the software product
3. Specific requirements (detailed)
4. Additional information such as appendixes and index, if necessary
SRS: 1. Introduction
1.1. Purpose
1.2. Scope
1.3. Definitions, acronyms, and abbreviations
1.4. References
1.5. Overview
• Describe and justify technical skills and capabilities of each user class
2.4. Constraints
• Detail all inputs and outputs (complement, not duplicate, information presented in section
2)
• Examples: GUI screens, file formats
3.2 Functions
• Include detailed specifications of each use case, including collaboration and other
diagrams useful for this purpose
• Include the static and the dynamic numerical requirements placed on the software or on
human interaction with the software as a whole.
• Specify design constraints that can be imposed by other standards, hardware limitations,
etc.
• Report format
• Data naming
• Accounting & Auditing procedures
Structured Analysis and Structured Design (SA/SD) is a diagrammatic notation that is designed
to help people understand the system. The basic goal of SA/SD is to improve quality and reduce
the risk of system failure. It establishes concrete management specifications and documentation.
It focuses on the solidity, pliability, and maintainability of the system.
Structured Analysis and Structured Design (SA/SD) is a software development method that was
popular in the 1970s and 1980s. The method is based on the principle of structured programming,
which emphasizes the importance of breaking down a software system into smaller, more
manageable components.
In SA/SD, the software development process is divided into two phases: Structured Analysis and
Structured Design. During the Structured Analysis phase, the problem to be solved is analyzed and
the requirements are gathered. The Structured Design phase involves designing the system to meet
the requirements that were gathered in the Structured Analysis phase.
SA/SD provides a series of techniques for designing and developing software systems in a
structured and systematic way. Here are some key concepts of SA/SD:
1. Functional Decomposition: SA/SD uses functional decomposition to break down a complex
system into smaller, more manageable subsystems. This technique involves identifying the
main functions of the system and breaking them down into smaller functions that can be
implemented independently.
2. Data Flow Diagrams (DFDs): SA/SD uses DFDs to model the flow of data through the
system. DFDs are graphical representations of the system that show how data moves between
the system’s various components.
3. Data Dictionary: A data dictionary is a central repository that contains descriptions of all the
data elements used in the system. It provides a clear and consistent definition of data
elements, making it easier to understand how the system works.
4. Structured Design: SA/SD uses structured design techniques to develop the system’s
architecture and components. It involves identifying the major components of the system,
designing the interfaces between them, and specifying the data structures and algorithms that
will be used to implement the system.
5. Modular Programming: SA/SD uses modular programming techniques to break down the
system’s code into smaller, more manageable modules. This makes it easier to develop, test,
and maintain the system.
Some advantages of SA/SD include its emphasis on structured design and documentation, which
can help improve the clarity and maintainability of the system. However, SA/SD has some
disadvantages, including its rigidity and inflexibility, which can make it difficult to adapt to
changing business requirements or technological trends. Additionally, SA/SD may not be well-
suited for complex, dynamic systems, which may require more agile development
methodologies.
1. Requirements gathering: The first step in the SA/SD process is to gather requirements from
stakeholders, including users, customers, and business partners.
2. Structured Analysis: During the Structured Analysis phase, the requirements are analyzed to
identify the major components of the system, the relationships between those components,
and the data flows within the system.
3. Data Modeling: During this phase, a data model is created to represent the data used in the
system and the relationships between data elements.
4. Process Modeling: During this phase, the processes within the system are modeled using
flowcharts and data flow diagrams.
5. Input/Output Design: During this phase, the inputs and outputs of the system are designed,
including the user interface and reports.
6. Structured Design: During the Structured Design phase, the system is designed to meet the
requirements gathered in the Structured Analysis phase. This may include selecting
appropriate hardware and software platforms, designing databases, and defining data
structures.
7. Implementation and Testing: Once the design is complete, the system is implemented and
tested.
SA/SD has been largely replaced by more modern software development methodologies, but its
principles of structured analysis and design continue to influence current software development
practices. The method is known for its focus on breaking down complex systems into smaller
components, which makes it easier to understand and manage the system as a whole.
Basically, the approach of SA/SD is based on the Data Flow Diagram. SA/SD is easy to
understand, and it focuses on a well-defined system boundary, whereas the JSD approach is more
complex and does not have any graphical representation. SA/SD combined is known as SAD, and
it mainly focuses on the following three points:
System
Process
Technology
SA/SD involves 2 phases:
Analysis Phase: It uses Data Flow Diagram, Data Dictionary, State Transition diagram and ER
diagram.
Design Phase: It uses Structure Chart and Pseudo Code.
1. Analysis Phase:
Analysis Phase involves data flow diagram, data dictionary, state transition diagram, and entity-
relationship diagram.
Data Flow Diagram:
In the data flow diagram, the model describes how the data flows through the system. We can
incorporate the Boolean operators "and" and "or" to link data flows when more than one data flow
may be input to or output from a process.
For example, if we have to choose between two paths of a process, we can add an "or" operator,
and if two data flows are both necessary for a process, we can add an "and" operator. The input of
the process "check-order" needs both the credit information and the order information, whereas
the output of the process would be either a cash-order or a good-credit-order.
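Purely as an illustration of that "check-order" process, the following Python sketch shows the "and" of the two input flows and the "or" of the two output flows; the branching rule itself is an assumed example, not taken from the text:

```python
def check_order(order_info: dict, credit_info: dict) -> str:
    """DFD process "check-order": both input flows are required ("and"),
    and exactly one of the two output flows is produced ("or")."""
    # Assumed rule: a good credit rating yields a good-credit-order;
    # otherwise the order is processed as a cash-order.
    if credit_info.get("rating") == "good":
        return "good-credit-order"
    return "cash-order"

print(check_order({"item": "book"}, {"rating": "good"}))  # good-credit-order
```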
Data Dictionary:
The content that is not described in the DFD is described in the data dictionary. It defines the data
store and relevant meaning. A physical data dictionary for data elements that flow between
processes, between entities, and between processes and entities may be included. This would also
include descriptions of data elements that flow external to the data stores.
A logical data dictionary may also be included for each such data element. All system names,
whether they are names of entities, types, relations, attributes, or services, should be entered in the
dictionary.
ER Diagram:
The ER diagram specifies the relationships between data stores. It is basically used in database
design and describes the relationships between different entities.
2. Design Phase:
Design Phase involves structure chart and pseudocode.
Structure Chart:
It is created from the data flow diagram. The structure chart specifies how the DFD's processes are
grouped into tasks and allocated to the CPU. The structure chart does not show the working and internal
structure of the processes or modules and does not show the relationship between data or data
flows. Similar to other SASD tools, it is time and cost-independent and there is no error-checking
technique associated with this tool. The modules of a structured chart are arranged arbitrarily and
any process from a DFD can be chosen as the central transform depending on the analysts’ own
perception. The structured chart is difficult to amend, verify, maintain, and check for completeness
and consistency.
Pseudo Code: Pseudo code comes close to the actual implementation of the system. It is an
informal way of describing a program that doesn't require any specific programming language or
technology.
Advantages of Structured Analysis and Structured Design (SA/SD):
Clarity and Simplicity: The SA/SD method emphasizes breaking down complex systems into
smaller, more manageable components, which makes the system easier to understand and manage.
Better Communication: The SA/SD method provides a common language and framework for
communicating the design of a system, which can improve communication between stakeholders
and help ensure that the system meets their needs and expectations.
Improved maintainability: The SA/SD method provides a clear, organized structure for a system,
which can make it easier to maintain and update the system over time.
Better Testability: The SA/SD method provides a clear definition of the inputs and outputs of a
system, which makes it easier to test the system and ensure that it meets its requirements.
Disadvantages of Structured Analysis and Structured Design (SA/SD):
Time-Consuming: The SA/SD method can be time-consuming, especially for large and complex
systems, as it requires a significant amount of documentation and analysis.
Inflexibility: Once a system has been designed using the SA/SD method, it can be difficult to make
changes to the design, as the process is highly structured and documentation-intensive.
Limited Iteration: The SA/SD method is not well-suited for iterative development, as it is designed
to be completed in a single pass.
System design involves creating both a High-Level Design (HLD), which is like a roadmap
showing the overall plan, and a Low-Level Design (LLD), which is a detailed guide for
programmers on how to build each part. It ensures a well-organized and smoothly functioning
project. High-Level Design and Low-Level Design are the two main aspects of System Design.
What is High Level Design(HLD)?
High-level design, or HLD, refers to the overall system: a design that consists of a description of
the system architecture. It is a generic system design that includes:
System architecture
Database design
Brief description of systems, services, platforms, and relationships among modules.
A diagram representing each design aspect is included in the HLD (which is based on business
requirements and anticipated results).
It contains a description of hardware and software interfaces, as well as user interfaces.
It is also known as macro-level/system design.
It is created by the solution architect.
The workflow of the user’s typical process is detailed in the HLD, along with performance
specifications.
What is Low Level Design(LLD)?
LLD, or Low-Level Design, is a phase in the software development process where detailed system
components and their interactions are specified.
It gives a detailed description of each and every module, meaning it includes the actual logic for
every system component and goes deep into each module's specification.
It is also known as micro level/detailed design.
It is created by designers and developers.
It involves converting the high-level design into a more detailed blueprint, addressing specific
algorithms, data structures, and interfaces.
LLD serves as a guide for developers during coding, ensuring the accurate and efficient
implementation of the system’s functionality.
Conclusion
High-Level Design documents are like big-picture plans that help project managers and architects
understand how a system will work, while Low-Level Design documents are more detailed and
are made for programmers.
They show exactly how to write the code and make the different parts of the system fit together.
Both documents are important for different people involved in making and maintaining the
software.
Creating a High-Level Design is like making a big plan for the software, and it helps find problems
early, so the quality of the software can be better assured.
On the other hand, when Low-Level Design is well-documented, it makes it easier for others to
check the code and ensure its quality during the actual writing of the software.
Coupling and Cohesion – Software Engineering
Introduction: The purpose of the Design phase in the Software Development Life Cycle is to
produce a solution to a problem given in the SRS(Software Requirement Specification) document.
The output of the design phase is a Software Design Document (SDD).
Coupling and Cohesion are two key concepts in software engineering that are used to measure the
quality of a software system’s design.
Coupling refers to the degree of interdependence between software modules. High coupling means
that modules are closely connected and changes in one module may affect other modules. Low
coupling means that modules are independent and changes in one module have little impact on
other modules.
Cohesion refers to the degree to which elements within a module work together to fulfill a single,
well-defined purpose. High cohesion means that elements are closely related and focused on a
single purpose, while low cohesion means that elements are loosely related and serve multiple
purposes.
Both coupling and cohesion are important factors in determining the maintainability, scalability,
and reliability of a software system. High coupling and low cohesion can make a system difficult
to change and test, while low coupling and high cohesion make a system easier to maintain and
improve.
Basically, design is a two-part iterative process. The first part is Conceptual Design which tells the
customer what the system will do. Second is Technical Design which allows the system builders
to understand the actual hardware and software needed to solve a customer’s problem.
Modularization: Modularization is the process of dividing a software system into multiple
independent modules where each module works independently. There are many advantages of
Modularization in software engineering. Some of these are given below:
Easy to understand the system.
System maintenance is easy.
A module can be reused as many times as required; there is no need to write it again and again.
Coupling: Coupling is the measure of the degree of interdependence between modules. Good software will have low coupling.
Types of Coupling:
Data Coupling: If the dependency between the modules is based on the fact that they communicate
by passing only data, then the modules are said to be data coupled. In data coupling, the
components are independent of each other and communicate through data. Module
communications don’t contain tramp data. Example-customer billing system.
Stamp Coupling: In stamp coupling, the complete data structure is passed from one module to another; therefore, it involves tramp data. It may be necessary due to efficiency factors; this choice is made by the insightful designer, not the lazy programmer.
Control Coupling: If the modules communicate by passing control information, then they are said to be control coupled. It can be bad if the parameters indicate completely different behavior, and good if the parameters allow factoring and reuse of functionality. Example: a sort function that takes a comparison function as an argument (a short sketch contrasting data and control coupling follows this list of coupling types).
External Coupling: In external coupling, the modules depend on other modules, external to the
software being developed or to a particular type of hardware. Ex- protocol, external file, device
format, etc.
Common Coupling: The modules have shared data such as global data structures. The changes in
global data mean tracing back to all modules which access that data to evaluate the effect of the
change. So it has got disadvantages like difficulty in reusing modules, reduced ability to control
data accesses, and reduced maintainability.
Content Coupling: In a content coupling, one module can modify the data of another module, or
control flow is passed from one module to the other module. This is the worst form of coupling
and should be avoided.
Temporal Coupling: Temporal coupling occurs when two modules depend on the timing or order
of events, such as one module needing to execute before another. This type of coupling can result
in design issues and difficulties in testing and maintenance.
Sequential Coupling: Sequential coupling occurs when the output of one module is used as the
input of another module, creating a chain or sequence of dependencies. This type of coupling can
be difficult to maintain and modify.
Communicational Coupling: Communicational coupling occurs when two or more modules share
a common communication mechanism, such as a shared message queue or database. This type of
coupling can lead to performance issues and difficulty in debugging.
Functional Coupling: Functional coupling occurs when two modules depend on each other’s
functionality, such as one module calling a function from another module. This type of coupling
can result in tightly-coupled code that is difficult to modify and maintain.
Data-Structured Coupling: Data-structured coupling occurs when two or more modules share a
common data structure, such as a database table or data file. This type of coupling can lead to
difficulty in maintaining the integrity of the data structure and can result in performance issues.
Interaction Coupling: Interaction coupling occurs due to the methods of a class invoking methods
of other classes. Like with functions, the worst form of coupling here is if methods directly access
internal parts of other methods. Coupling is lowest if methods communicate directly through
parameters.
Component Coupling: Component coupling refers to the interaction between two classes where a class has variables of the other class. Three clear situations exist as to how this can happen: a class C can be component coupled with another class C1 if C has an instance variable of type C1, if C has a method whose parameter is of type C1, or if C has a method with a local variable of type C1. It should be clear that whenever there is component coupling, there is likely to be interaction coupling.
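To make the contrast concrete, here is a minimal Python sketch (the function names are illustrative, not from any specific system): compute_bill is data coupled with its caller because only the needed data is passed, while print_report is control coupled because a flag from the caller switches its behavior.

# Data coupling: the module receives only the data it needs.
def compute_bill(units_consumed, rate_per_unit):
    return units_consumed * rate_per_unit

# Control coupling: a flag passed by the caller controls behavior.
def print_report(bill_amount, detailed):
    if detailed:
        print("Detailed bill: amount due =", bill_amount)
    else:
        print("Amount due:", bill_amount)

print_report(compute_bill(120, 5.5), detailed=False)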
Cohesion: Cohesion is a measure of the degree to which the elements of the module are
functionally related. It is the degree to which all elements directed towards performing a single
task are contained in the component. Basically, cohesion is the internal glue that keeps the module
together. A good software design will have high cohesion.
Types of Cohesion:
Functional Cohesion: Every essential element for a single computation is contained in the
component. A functional cohesion performs the task and functions. It is an ideal situation.
Sequential Cohesion: An element outputs some data that becomes the input for other element, i.e.,
data flow between the parts. It occurs naturally in functional programming languages.
Communicational Cohesion: Two elements operate on the same input data or contribute towards
the same output data. Example- update record in the database and send it to the printer.
Procedural Cohesion: Elements of procedural cohesion ensure the order of execution. Actions are
still weakly connected and unlikely to be reusable. Ex- calculate student GPA, print student record,
calculate cumulative GPA, print cumulative GPA.
Temporal Cohesion: The elements are related by the timing involved: in a module with temporal cohesion, all the tasks must be executed in the same time span. This kind of module often contains the code for initializing all the parts of the system. Lots of different activities occur, all at the same time.
Logical Cohesion: The elements are logically related and not functionally. Ex- A component reads
inputs from tape, disk, and network. All the code for these functions is in the same component.
Operations are related, but the functions are significantly different.
Coincidental Cohesion: The elements are unrelated; they have no conceptual relationship other than their location in the source code. It is accidental and the worst form of cohesion (a short sketch contrasting functional and coincidental cohesion follows this list of cohesion types). Example: printing the next line and reversing the characters of a string in a single component.
Procedural Cohesion: This type of cohesion occurs when elements or tasks are grouped together
in a module based on their sequence of execution, such as a module that performs a set of related
procedures in a specific order. Procedural cohesion can be found in structured programming
languages.
Communicational Cohesion: Communicational cohesion occurs when elements or tasks are
grouped together in a module based on their interactions with each other, such as a module that
handles all interactions with a specific external system or module. This type of cohesion can be
found in object-oriented programming languages.
Temporal Cohesion: Temporal cohesion occurs when elements or tasks are grouped together in a
module based on their timing or frequency of execution, such as a module that handles all periodic
or scheduled tasks in a system. Temporal cohesion is commonly used in real-time and embedded
systems.
Informational Cohesion: Informational cohesion occurs when elements or tasks are grouped
together in a module based on their relationship to a specific data structure or object, such as a
module that operates on a specific data type or object. Informational cohesion is commonly used
in object-oriented programming.
Functional Cohesion: This type of cohesion occurs when all elements or tasks in a module
contribute to a single well-defined function or purpose, and there is little or no coupling between
the elements. Functional cohesion is considered the most desirable type of cohesion as it leads to
more maintainable and reusable code.
Layer Cohesion: Layer cohesion occurs when elements or tasks in a module are grouped together
based on their level of abstraction or responsibility, such as a module that handles only low-level
hardware interactions or a module that handles only high-level business logic. Layer cohesion is
commonly used in large-scale software systems to organize code into manageable layers.
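As promised above, here is a minimal Python sketch (with hypothetical functions) contrasting the two extremes: compute_gpa is functionally cohesive because every statement serves one computation, while misc_utilities is coincidentally cohesive because its statements have no conceptual relationship.

# Functional cohesion: every line contributes to one computation.
def compute_gpa(grades):
    return sum(grades) / len(grades) if grades else 0.0

# Coincidental cohesion: unrelated tasks bundled into one component.
def misc_utilities(line, text):
    print(line)        # print the next line
    return text[::-1]  # reverse the characters of a string

print(compute_gpa([8, 9, 7]))
print(misc_utilities("hello", "abc"))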
Advantages of low coupling:
Improved maintainability: Low coupling reduces the impact of changes in one module on other
modules, making it easier to modify or replace individual components without affecting the entire
system.
Enhanced modularity: Low coupling allows modules to be developed and tested in isolation,
improving the modularity and reusability of code.
Better scalability: Low coupling facilitates the addition of new modules and the removal of
existing ones, making it easier to scale the system as needed.
Advantages of high cohesion:
Improved readability and understandability: High cohesion results in clear, focused modules with
a single, well-defined purpose, making it easier for developers to understand the code and make
changes.
Better error isolation: High cohesion reduces the likelihood that a change in one part of a module will affect other parts, making it easier to isolate and fix errors.
Improved reliability: High cohesion leads to modules that are less prone to errors and that function more consistently, leading to an overall improvement in the reliability of the system.
Disadvantages of high coupling:
Increased complexity: High coupling increases the interdependence between modules, making the
system more complex and difficult to understand.
Reduced flexibility: High coupling makes it more difficult to modify or replace individual
components without affecting the entire system.
Decreased modularity: High coupling makes it more difficult to develop and test modules in
isolation, reducing the modularity and reusability of code.
Disadvantages of low cohesion:
Increased code duplication: Low cohesion can lead to the duplication of code, as elements that
belong together are split into separate modules.
Reduced functionality: Low cohesion can result in modules that lack a clear purpose and contain
elements that don’t belong together, reducing their functionality and making them harder to
maintain.
Difficulty in understanding the module: Low cohesion can make it harder for developers to
understand the purpose and behavior of a module, leading to errors and a lack of clarity.
Example 1:
Consider an example: we have all played with Lego blocks to build different structures. Here we have several components, i.e., blocks, which we integrate to build the structure that we want.
Similarly, we can break complex code into different components, and by using or integrating those components we can create a new program to develop new software.
Example 2:
Consider that if we have to divide an automobile into several subsystems, then the components or subsystems would be: engine, brakes, wheels, chassis, etc.
Here you can observe that all the subsystems are as independent of each other as possible.
And these components can be integrated to design a new automobile.
Why Modularity?
To understand the importance of modularity, consider that we have a monolithic software that
has a large program including a single module. Now if we ask any software engineer to
understand this large program, then it is not that easy for him to do so.
There will be a lot of local variables, global variables their span of reference, several control
paths etc. This will increase the complexity of the large program making it hard to digest. As a
solution to this, the large program must be divided into several components or modules as it will
become easy to understand the modules.
As we said, effort and development cost are reduced as the number of modules increases. But as the number of modules increases, the cost required to integrate the several modules also increases.
So, you must be careful while modularizing the software. The software should neither be left un-modularized nor be over-modularized.
Now, with modularity, when you try to develop software using modules that are independent of each other and have very few references to each other, you have to be conscious at all the stages of software development, such as:
1. Architectural Design
In the architectural design phase, the large-scale structure of the software is determined. You have to be very careful while creating modularity at this phase, as you have to define the entire logical structure of the software.
2. Components Design
If you have created modularity in the architectural design of the software it becomes easy to
figure out and design the individual components. The modular components have a well-defined
purpose and they have a few connections with the other components.
3. Debugging
If the components are designed using modularity then it becomes easy to track them down. You
can easily debug which component is responsible for the error.
Since the components have few connections to other components of the software, correcting a single component will not have an adverse effect on the other components.
4. Testing
Once the components are integrated to develop software it becomes almost impossible to test the
entire software at once. Testing one component at a time is much easier.
5. Maintenance
Maintenance is the process of fixing or enhancing the system so that it performs according to users' needs. Here also modularity plays a vital role, as making changes to one module must not affect another connected module in the system.
6. Independent Development
Software is never developed by one person. There is a team of people who develop the software
in terms of modules and components. Each person in a team is assigned to develop an individual component; that is why they also have to take care that the interfaces between the components are few and that all of them are clear.
7. Damage Control
If the connection between the components of the system is few and clear then the error in one
component will not spread damage to the other components of the system.
8. Software Reuse
Good modularity lets you reuse components of earlier software. To be reusable, components must be independent and must expose clear, well-documented interfaces.
Classification of Components
Though this is a general classification for any software, it provides a guide to the developer to create modularity straight away from the architectural design of the software.
Benefits of Modularity
1. Modularity lets the development of software be divided into several components that can be implemented simultaneously by a team of developers. This minimizes the time required to develop the software.
2. Modularity makes the components of the software reusable.
3. As modularity breaks a large complex program into components, it improves manageability, since it is easy to develop, test, and maintain the small components.
4. It is also easy to debug and trace the error in modular programs.
So, this is all about modularity in software engineering. We have seen the importance of modularity and how it can be used to develop efficient software. We have also learned that the terms cohesion and coupling play an important role in creating good modularity, and we have ended by discussing the benefits of modularity.
The design process for software systems often has two levels. At the first level, the focus is on
deciding which modules are needed for the system based on SRS (Software Requirement
Specification) and how the modules should be interconnected.
Function Oriented Design is an approach to software design where the design is decomposed
into a set of interacting units where each unit has a clearly defined function.
Generic Procedure
Start with a high-level description of what the software/program does. Refine each part of the
description by specifying in greater detail the functionality of each part. These points lead to a
Top-Down Structure.
7. Output module: Modules that take information from their superordinate and pass it on to their subordinates.
8. Transform module: Modules that exist solely for the sake of transforming data into some
other form.
9. Coordinate module: Modules whose primary concern is managing the flow of data to and
from different subordinates.
10. A structure chart is a nice representation for a design that uses functional abstraction.
DFD is the abbreviation for Data Flow Diagram. The flow of data through a system or a process is represented by a DFD. It also gives insight into the inputs and outputs of each entity and the process itself. A DFD has no control flow: no loops or decision rules are present. Specific operations, depending on the type of data, can be explained by a flowchart. A DFD is a graphical tool, useful for communicating with users, managers, and other personnel, and is useful for analyzing existing as well as proposed systems.
It should be pointed out that a DFD is not a flowchart. In drawing the DFD, the designer has to
specify the major transforms in the path of the data flowing from the input to the output. DFDs
can be hierarchically organized, which helps in progressively partitioning and analyzing large
systems.
It provides an overview of:
• What data the system processes.
• What transformations are performed.
• What data are stored.
• What results are produced, etc.
Data Flow Diagram can be represented in several ways. The DFD belongs to structured-
analysis modeling tools. Data Flow diagrams are very popular because they help us to visualize
the major steps and data involved in software-system processes.
Characteristics of DFD
• DFDs are commonly used during problem analysis.
• DFDs are quite general and are not limited to problem analysis for software requirements
specification.
• DFDs are very useful in understanding a system and can be effectively used during
analysis.
• It views a system as a function that transforms the inputs into desired outputs.
• The DFD aims to capture the transformations that take place within a system to the input
data so that eventually the output data is produced.
• The processes are shown by named circles and data flows are represented by named arrows
entering or leaving the bubbles.
• A rectangle represents a source or sink and is a net originator or consumer of data. A source or sink is typically outside the main system of study.
Components of DFD
A DFD is built from four kinds of components: processes (the named circles or bubbles), data flows (the named arrows), data stores (which hold data at rest), and external entities (the rectangles that act as sources or sinks of data).
Object-Oriented Design
In the object-oriented design method, the system is viewed as a collection of objects (i.e., entities).
The state is distributed among the objects, and each object handles its own state data. For example, in a Library Automation Software, each library representative may be a separate object with its own data and functions that operate on that data. The tasks defined for one object cannot refer to or change the data of other objects. Objects have their own internal data, which represents their state. Similar objects form a class; in other words, each object is a member of some class. Classes may inherit features from a superclass.
1. Objects: All entities involved in the solution design are known as objects. For example, persons, banks, companies, and users are considered objects. Every entity has some attributes associated with it and some methods to perform on those attributes.
2. Classes: A class is a generalized description of an object. An object is an instance of a class. A class defines all the attributes that an object can have and the methods that represent the functionality of the object.
3. Messages: Objects communicate by message passing. Messages consist of the identity of the target object, the name of the requested operation, and any other information needed to perform the function. Messages are often implemented as procedure or function calls.
4. Abstraction: In object-oriented design, complexity is handled using abstraction. Abstraction is the removal of the irrelevant and the amplification of the essentials.
5. Encapsulation: Encapsulation is also called an information hiding concept. The data and
operations are linked to a single unit. Encapsulation not only bundles essential information
of an object together but also restricts access to the data and methods from the outside
world.
6. Inheritance: OOD allows similar classes to be organized hierarchically, where the lower classes or subclasses can import, implement, and reuse allowed variables and functions from their immediate superclasses. This property of OOD is called inheritance. It makes it easier to define a specific class and to create generalized classes from specific ones.
7. Polymorphism: OOD languages provide a mechanism whereby methods performing similar tasks but varying in arguments can be assigned the same name. This is known as polymorphism, which allows a single interface to perform functions for different types. Depending upon how the service is invoked, the respective portion of the code gets executed (a minimal sketch of these concepts follows below).
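As a minimal sketch of these concepts in Python (the class names and fields are hypothetical): _balance is encapsulated inside Account, SavingsAccount inherits from Account, and both classes respond to the same describe() message polymorphically.

class Account:
    def __init__(self, owner, balance):
        self.owner = owner
        self._balance = balance      # encapsulated state

    def deposit(self, amount):       # operations bundled with the data
        self._balance += amount

    def describe(self):
        return self.owner + "'s account, balance " + str(self._balance)

class SavingsAccount(Account):       # inheritance from Account
    def __init__(self, owner, balance, rate):
        super().__init__(owner, balance)
        self.rate = rate

    def describe(self):              # polymorphism: same message, new behavior
        base = super().describe()
        return base + " at " + str(self.rate) + "% interest"

for acct in (Account("Asha", 100), SavingsAccount("Ravi", 200, 4)):
    print(acct.describe())           # the respective portion of code runs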
Text-Based User Interface: This method relies primarily on the keyboard. A typical example of
this is UNIX.
Advantages
o Many customization options, which are easier to apply.
o Typically capable of more powerful tasks.
Disadvantages
o Relies heavily on recall rather than recognition.
o Navigation is often more difficult.
Graphical User Interface (GUI): A GUI relies much more heavily on the mouse. A typical example of this type of interface is any version of the Windows operating system.
GUI Characteristics
Characteristic – Description
Icons – Icons represent different types of information. On some systems, icons represent files; on others, icons describe processes.
Menus – Commands are selected from a menu rather than typed in a command language.
Pointing – A pointing device such as a mouse is used for selecting choices from a menu or indicating items of interest in a window.
Graphics – Graphics elements can be mixed with text on the same display.
Advantages
o Less expert knowledge is required to use it.
o Easier to navigate; users can look through folders quickly in a guess-and-check manner.
o The user may switch quickly from one task to another and can interact with several different applications.
Disadvantages
o Typically offers fewer options.
o Usually less customizable; it is not easy to use one button for many different variations.
UI Design Principles
Structure: The design should organize the user interface purposefully, in meaningful and useful ways based on precise, consistent models that are apparent and recognizable to users, putting related things together and separating unrelated things, differentiating dissimilar things, and making similar things resemble one another. The structure principle is concerned with overall user interface architecture.
Simplicity: The design should make simple, common tasks easy, communicating clearly and directly in the user's language, and providing good shortcuts that are meaningfully related to longer procedures.
Visibility: The design should make all required options and materials for a given function visible
without distracting the user with extraneous or redundant data.
Feedback: The design should keep users informed of actions or interpretation, changes of state or
condition, and bugs or exceptions that are relevant and of interest to the user through clear, concise,
and unambiguous language familiar to users.
Tolerance: The design should be flexible and tolerant, decreasing the cost of errors and misuse by
allowing undoing and redoing while also preventing bugs wherever possible by tolerating varied
inputs and sequences and by interpreting all reasonable actions.
A command-line interface (CLI) is a text-based user interface in which the user types input commands at the keyboard; the commands invoked at the command prompt are then run by the computer.
How do CLIs work?
Once a computer system is running, its CLI opens on a blank screen with a command prompt and
commands can be entered.
Types of CLI commands include the following:
• system commands that are encoded as part of the operating system interface;
• executable programs that, when successfully invoked, run text-based or graphical applications;
and
• batch programs (or batch files or shell scripts) which are text files listing a sequence of
commands. When successfully invoked, a batch program runs its commands which may
include both system commands and executable programs.
CLI is more than a simple command/response system, as most have additional features that make
one preferable to another. Some features include the following:
• Scripting capability enables users to write programs that can be run on the system from the
command line.
• Command pipes enable users to direct the output of one program to be the input for another program ("piping" the flow of data); a small sketch of this idea follows the list.
• System variables can be set at the command line, or the values of those variables displayed.
• Command history features enable the user to recall previous commands issued. Some save
command history for the session (like PowerShell), others can be configured to store session
history for longer (like bash).
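As an illustration of command pipes, the following Python sketch (assuming a Unix-like system where the ls and sort commands exist) connects the output of one program to the input of another, the equivalent of typing ls | sort at the prompt.

import subprocess

# Pipe the output of "ls" into "sort", mirroring: ls | sort
ls = subprocess.Popen(["ls"], stdout=subprocess.PIPE)
result = subprocess.run(["sort"], stdin=ls.stdout,
                        capture_output=True, text=True)
ls.stdout.close()   # let "ls" see a closed pipe if "sort" exits early
print(result.stdout)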
Menu-Driven Interface
A menu-driven interface is a type of user interface where users interact with a program or system
through a series of menus. These menus present options or commands that the user can select,
typically through the use of a pointer, keyboard, or touchscreen, simplifying the interaction with
the system.
Benefits of menu-driven interface
Menu-driven interfaces come with several benefits:
• Intuitive Navigation: Menus logically categorize and group similar functions together, making
it easier for users to find what they need.
• Reduced Errors: By limiting user choices to valid options, the chances of errors are reduced.
• Efficiency: Menus often provide shortcuts to frequently used functions, enhancing user
efficiency.
• Accessibility: They can be more accessible for users with certain disabilities because they don’t
rely on memorizing specific commands or sequences.
• Consistency: They provide a consistent structure and operation across different parts of an
application or system, improving the user experience.
• Flexibility: They are adaptable to different input methods (mouse, touch, keyboard), making
them suitable for a variety of devices and contexts.
• User-friendly: They are typically easy to understand and use, even for less tech-savvy users, as
they offer a visual representation of options and commands.
How to create menu-driven interface
Creating a menu-driven interface involves a multi-step process (a minimal code sketch follows the steps below). Here's an outline:
1. Identify User Needs: Understand the needs and requirements of your users, the tasks
they need to perform, and the context of use. This is usually achieved through methods
such as user interviews, surveys, and usage data analysis.
2. Design the Menu Structure: Define the hierarchy of the menus based on the identified
user tasks. Group similar functions together. Consider the depth and breadth of the menu
structure – it should be easy to navigate, not too deep (many levels) or too broad (many
options on one level).
3. Design the Menu Layout: Design the visual representation of the menu. This might be
dropdown menus, sidebars, toolbars, etc. The layout should be consistent across the
application.
4. Implement the Menu: Using a programming language or a software tool, implement the
menu in your application. This often involves coding the behavior of the menu, including
handling user interactions.
5. Test and Iterate: Perform usability testing to verify that the menu works as intended and
is easy to use. Use the feedback to refine and improve the menu.
6. Document: Document the design and implementation details of the menu interface for
future reference and updates.
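Here is the minimal code sketch promised above, in Python; the menu labels and actions are hypothetical. Note how limiting input to the listed keys reduces user errors, one of the benefits discussed earlier.

def show_balance():
    print("Balance: 100.00")

def make_deposit():
    print("Deposit recorded.")

# Map each valid menu choice to a label and an action.
MENU = {
    "1": ("Show balance", show_balance),
    "2": ("Make a deposit", make_deposit),
}

while True:
    for key, (label, _) in MENU.items():
        print(key + ". " + label)
    print("q. Quit")
    choice = input("Select an option: ")
    if choice == "q":
        break
    if choice in MENU:
        MENU[choice][1]()   # run the selected action
    else:
        print("Invalid option, please try again.")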
ICONIC INTERFACE
A user interface that displays graphic elements to represent menu options. Also called an "iconic interface" or "widget-based interface," the term is often used to contrast a GUI with a command-line interface.
MODULE-3
Software testing techniques are the ways employed to test the application under test against
the functional or non-functional requirements gathered from business. Each testing technique
helps to find a specific type of defect. For example, techniques that find structural defects might not be able to find defects in the end-to-end business flow. Hence,
multiple testing techniques are applied in a testing project to conclude it with acceptable
quality. Software testing techniques are methods used to design and execute tests to evaluate
software applications. The following are common testing techniques:
1. Manual testing – Involves manual inspection and testing of the software by a human tester.
2. Automated testing – Involves using software tools to automate the testing process.
3. Functional testing – Tests the functional requirements of the software to ensure they are
met.
4. Non-functional testing – Tests non-functional requirements such as performance, security,
and usability.
5. Unit testing – Tests individual units or components of the software to ensure they are
functioning as intended.
6. Integration testing – Tests the integration of different components of the software to
ensure they work together as a system.
7. System testing – Tests the complete software system to ensure it meets the specified
requirements.
8. Acceptance testing – Tests the software to ensure it meets the customer’s or end-user’s
expectations.
9. Regression testing – Tests the software after changes or modifications have been made to
ensure the changes have not introduced new defects.
10. Performance testing – Tests the software to determine its performance characteristics such
as speed, scalability, and stability.
11. Security testing – Tests the software to identify vulnerabilities and ensure it meets security
requirements.
12. Exploratory testing – A type of testing where the tester actively explores the software to
find defects, without following a specific test plan.
13. Boundary value testing – Tests the software at the boundaries of input values to identify
any defects.
14. Usability testing – Tests the software to evaluate its user-friendliness and ease of use.
15. User acceptance testing (UAT) – Tests the software to determine if it meets the end-user’s
needs and expectations.
Principles of Testing
1. All the tests should meet the customer's requirements.
2. To keep testing unbiased, it should be performed by a third party.
3. Exhaustive testing is not possible; we need an optimal amount of testing based on the risk assessment of the application.
4. All the tests to be conducted should be planned before being implemented.
5. Testing follows the Pareto rule (80/20 rule), which states that 80% of errors come from 20% of program components.
6. Start testing with small parts and extend to large parts.
Benefits of Testing Techniques
1. Improves software quality and reliability – By using different testing techniques, software developers can identify and fix defects early in the development process, reducing the risk of failure or unexpected behaviour in the final product.
2. Enhances user experience – Techniques like usability testing can help to identify usability
issues and improve the overall user experience.
3. Increases confidence – By testing the software, developers, and stakeholders can have
confidence that the software meets the requirements and works as intended.
4. Facilitates maintenance – By identifying and fixing defects early, testing makes it easier to
maintain and update the software.
5. Reduces costs – Finding and fixing defects early in the development process is less expensive
than fixing them later in the life cycle.
Code review is a process of software quality assurance that concerns primarily the code base.
A peer or a senior developer, called a reviewer, reads parts of the source code to give a second
opinion on it. The key purpose is to optimize the code in the later stages and to prevent unstable code from going into use. It also creates a spirit of collective ownership over the project's progress and keeps the team involved in planning the later phases of development.
In case the code lines cover more than one domain, a minimum of 2 experts are required to
review it. The reviewers help to:
• enhance code quality,
• figure out logic problems,
• identify bugs,
• uncover edge cases.
The process touches upon 4 major areas:
• Code,
• Formatting consistency with overall solution design,
• Documentation quality,
• The compliance of coding standards with project requirements.
What Are the Benefits of Code Review?
According to Stripe research conducted with Harris Poll, developers spend over 4 hours a week
on average fixing bad code. That constitutes about 300B USD in lost productivity every year.
So, we are going to disclose what are the benefits of code review for the development company.
1. Ensuring consistency in design and implementation
Every specialist has their own background and a unique style of programming. Thus, the
collaboration of multiple developers in big projects can be challenging. Code review helps all
experts working on the project standardize the source code and adhere to certain coding
practices.
It is also helpful for future developers in building new features without wasting time on code
studies, especially when we are talking about open-source projects with multiple contributors.
2. Discovering bugs earlier
With source code review, developers get the chance to spot and fix the problem before the users
ever see it. Moreover, by moving this process earlier in the development cycle, the specialists
can start fixing without waiting until the end of a lifecycle, when more effort is needed to
remember the reasoning, solutions, and code itself.
3. Verification for the developed and required features
Each project has well-defined requirements and scope of work, and several developers working
on the project can create various features accordingly. It’s vital to assure that none of them
misinterpreted a requirement or crafted a useless feature. It’s exactly what code review helps to
achieve while also ensuring all the critical features were created as defined in the specification
and requirements.
4. Sharing knowledge
Code review practices encourage not only collaboration between the experts and exchanging
feedback, but also sharing of ideas, skills, and knowledge of the latest technologies. Thus,
junior team members can learn new approaches, techniques, and solutions, upgrading their
knowledge.
5. Enhancing security
Team members check the source code for vulnerabilities and warn developers about the threats.
So, code reviews help to create high-level safety, especially when security experts are involved.
6. Better documentation creation
Code reviews help create better documentation so that the developers can easily add new
features to the solution in the future or upgrade the existing ones.
9. Automate
There are things to check manually, but there are others that can be verified with automated tools. Such tools can scan the entire codebase in less than a minute, spot defects, and offer solutions right away.
Software documentation
is a written piece of text that is often accompanied by a software program. This makes the life
of all the members associated with the project easier. It may contain anything from API
documentation, build notes, or just help content. Documentation is a very critical process in software development and an integral part of any software development method. Software practitioners are typically concerned with the value, degree of usage, and quality of the documentation during development and throughout its maintenance. For instance, studies motivated by the requirements of NovAtel Inc., a world-leading company developing software in support of global navigation satellite systems, and based on the results of earlier systematic mapping studies, aim at a better understanding of the usage and the quality of various technical documents throughout software development and maintenance. For example, before the development of any software product, the requirements are documented in what is called a Software Requirement Specification (SRS). Requirement gathering is considered a stage of the Software Development Life Cycle (SDLC).
Another example is a user manual that a user refers to for installing, using, and maintaining the software application/product.
Purpose of Documentation:
• Due to the growing importance of software requirements, the process of determining them needs to be effective in order to achieve the desired results. The determination of requirements is often governed by regulations and guidelines that are central to attaining a given goal.
• Software requirements are expected to change because of ever-changing technology. The knowledge obtained during development has to be revised as user needs and the operating environment change; such transformation is inevitable.
• Furthermore, documented requirements support verification and the testing process, in conjunction with prototyping, meetings, focus groups, and observations.
• For a software engineer, reliable documentation is typically a must. The presence of documentation helps keep track of all aspects of an application and improves the quality of the product; it is the main focus of development, maintenance, and knowledge transfer to other developers. Productive documentation makes information easily accessible, provides a limited number of user entry points, helps new users learn quickly, simplifies the product, and helps cut costs.
• For a programmer, reliable documentation is always a must; its presence keeps track of all aspects of an application and helps in keeping the software updated.
7. Review documentation
The documentation consists of many web pages collectively holding a large chunk of information that serves a sole purpose: to educate and spread knowledge to anyone trying to understand or implement the software. While working with a lot of information, it is important to take feedback from senior architects and make any necessary changes, aligning the documentation with its sole purpose depending on the type of documentation.
Advantages of software documentation
• The presence of documentation helps in keeping the track of all aspects of an application
and also improves the quality of the software product.
• The main focus is based on the development, maintenance, and knowledge transfer to other
developers.
• Helps development teams during development.
• Helps end-users in using the product.
• Improves overall quality of software product
• It cuts down duplicative work.
• It makes code easier to understand.
• It helps in establishing internal coordination in work.
Disadvantages of software documentation
• Documenting code is time-consuming.
• The software development process often takes place under time pressure, due to which
many times the documentation updates don’t match the updated code.
• The documentation has no influence on the performance of an application.
• Documenting is not so fun, it’s sometimes boring to a certain extent.
The agile methodology encourages engineering teams to always concentrate on delivering value to their customers. This should be kept in mind when producing software documentation: good documentation should be provided, whether it is a software specification document for programmers and testers or a software manual for end users.
Types of Testing
There are basically 10 types of Testing.
• Unit Testing
• Integration Testing
• System Testing
• Functional Testing
• Acceptance Testing
• Smoke Testing
• Regression Testing
• Performance Testing
• Security Testing
• User Acceptance Testing
Unit Testing
Unit testing is a method of testing individual units or components of a software application. It is
typically done by developers and is used to ensure that the individual units of the software are
working as intended. Unit tests are usually automated and are designed to test specific parts of
the code, such as a particular function or method. Unit testing is done at the lowest level of
the software development process, where individual units of code are tested in isolation.
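A minimal sketch using Python's built-in unittest framework (the successor of PyUnit, which is listed among the testing tools later in these notes); the add function under test is hypothetical.

import unittest

def add(a, b):      # the unit under test
    return a + b

class TestAdd(unittest.TestCase):
    def test_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_zero(self):
        self.assertEqual(add(0, 0), 0)

if __name__ == "__main__":
    unittest.main()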
3. Gray Box Testing: This technique is used in executing the relevant test cases, test
methods, and test functions, and analyzing the code performance for the modules.
Advantages of Unit Testing: Some of the advantages of Unit Testing are listed below.
• It helps to identify bugs early in the development process before they become more
difficult and expensive to fix.
• It helps to ensure that changes to the code do not introduce new bugs.
• It makes the code more modular and easier to understand and maintain.
• It helps to improve the overall quality and reliability of the software.
Black-box testing is a type of software testing in which the tester is not concerned with the
internal knowledge or implementation details of the software but rather focuses on validating
the functionality based on the provided specifications or requirements.
It can be converted into a decision table. Each column of the table corresponds to a rule, which will become a test case for testing, so there will be 4 test cases.
5. Requirement-based testing – It includes validating the requirements given in the SRS of a
software system.
6. Compatibility testing – The test case results depend not only on the product but also on the infrastructure used for delivering functionality. When the infrastructure parameters are changed, the software
is still expected to work properly. Some parameters that generally affect the compatibility of
software are:
1. Processor type (e.g., Pentium 3, Pentium 4) and the number of processors.
2. Architecture and characteristics of machine (32-bit or 64-bit).
3. Back-end components such as database servers.
4. Operating System (Windows, Linux, etc).
7. Scalability: Black box testing can be scaled up or down depending on the size and
complexity of the application being tested.
8. Limited knowledge of application: Testers performing black box testing have limited
knowledge of the application being tested, which helps to ensure that testing is more
representative of how the end users will interact with the application.
Advantages of Black Box Testing:
• The tester does not need deep functional knowledge or programming skills to implement Black Box Testing.
• It is efficient for implementing tests in larger systems.
• Tests are executed from the user’s or client’s point of view.
• Test cases are easily reproducible.
• It is used in finding the ambiguity and contradictions in the functional specifications.
Disadvantages of Black Box Testing:
• There is a possibility of repeating the same tests while implementing the testing process.
• Without clear functional specifications, test cases are difficult to implement.
• It is difficult to execute the test cases because of complex inputs at different stages of testing.
• Sometimes, the reason for the test failure cannot be detected.
• Some programs in the application are not tested.
• It does not reveal the errors in the control structure.
• Working with a large sample space of inputs can be exhaustive and consumes a lot of time.
Branch Coverage:
In this technique, test cases are designed so that each branch from all decision points is traversed
at least once. In a flowchart, all edges must be traversed at least once.
3. Condition Coverage
In this technique, all individual conditions must be covered as shown in the following example:
• READ X, Y
• IF(X == 0 || Y == 0)
• PRINT ‘0’
• #TC1 – X = 0, Y = 55
• #TC2 – X = 5, Y = 0
4. Multiple Condition Coverage
In this technique, all possible combinations of the outcomes of the conditions are tested at least once (a runnable sketch follows the test cases below). Let's consider the following example:
• READ X, Y
• IF(X == 0 || Y == 0)
• PRINT ‘0’
• #TC1: X = 0, Y = 0
• #TC2: X = 0, Y = 5
• #TC3: X = 55, Y = 0
• #TC4: X = 55, Y = 5
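Translated into a runnable Python sketch (prints_zero is a hypothetical stand-in for the pseudocode above), the four multiple-condition test cases can be checked directly:

def prints_zero(x, y):      # models IF(X == 0 || Y == 0) PRINT '0'
    return x == 0 or y == 0

# TC1-TC4 cover all four combinations of the two conditions.
cases = [(0, 0, True), (0, 5, True), (55, 0, True), (55, 5, False)]
for x, y, expected in cases:
    assert prints_zero(x, y) == expected
print("all multiple-condition cases pass")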
5. Basis Path Testing
In this technique, control flow graphs are made from code or flowchart and then Cyclomatic
complexity is calculated which defines the number of independent paths so that the minimal
number of test cases can be designed for each independent path. Steps:
• Make the corresponding control flow graph
• Calculate the cyclomatic complexity
• Find the independent paths
• Design test cases corresponding to each independent path
• V(G) = P + 1, where P is the number of predicate nodes in the flow graph
• V(G) = E – N + 2, where E is the number of edges and N is the total number of nodes
• V(G) = Number of non-overlapping regions in the graph
• #P1: 1 – 2 – 4 – 7 – 8
• #P2: 1 – 2 – 3 – 5 – 7 – 8
• #P3: 1 – 2 – 3 – 6 – 7 – 8
• #P4: 1 – 2 – 4 – 7 – 1 – . . . – 7 – 8
6. Loop Testing
Loops are widely used and are fundamental to many algorithms; hence, their testing is very important. Errors often occur at the beginnings and ends of loops.
• Simple loops: For simple loops of size n, test cases are designed that:
1. Skip the loop entirely
2. Only one pass through the loop
3. 2 passes
4. m passes, where m < n
5. n−1, n, and n+1 passes (see the sketch after this list)
• Nested loops: For nested loops, all the loops are set to their minimum count, and we start
from the innermost loop. Simple loop tests are conducted for the innermost loop and this is
worked outwards till all the loops have been tested.
• Concatenated loops: Independent loops, one after another. Simple loop tests are applied
for each. If they’re not independent, treat them like nesting.
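As promised, a small Python sketch of simple-loop testing (sum_first is a hypothetical loop under test, over n = 5 items); the n+1 case deliberately steps past the end to expose boundary errors.

def sum_first(items, count):   # the loop under test runs 'count' times
    total = 0
    for i in range(count):
        total += items[i]
    return total

data = [1, 2, 3, 4, 5]         # n = 5
n = len(data)
# Skip the loop, one pass, two passes, m < n passes, n-1 and n passes.
for passes in (0, 1, 2, 3, n - 1, n):
    print(passes, "passes ->", sum_first(data, passes))
# n+1 passes should fail, exposing the boundary error the test looks for.
try:
    sum_first(data, n + 1)
except IndexError:
    print("n+1 passes raises IndexError, as the boundary test expects")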
White box testing is performed in two steps:
1. The tester should understand the code well.
2. The tester should write some code for test cases and execute them.
Tools required for White box testing:
• PyUnit
• Sqlmap
• Nmap
• Parasoft Jtest
• Nunit
• VeraUnit
• CppUnit
• Bugzilla
• Fiddler
• JSUnit.net
• OpenGrok
• Wireshark
• HP Fortify
• CSUnit
Features of White box Testing
1. Code coverage analysis: White box testing helps to analyze the code coverage of an
application, which helps to identify the areas of the code that are not being tested.
2. Access to the source code: White box testing requires access to the application’s source
code, which makes it possible to test individual functions, methods, and modules.
3. Knowledge of programming languages: Testers performing white box testing must have
knowledge of programming languages like Java, C++, Python, and PHP to understand the
code structure and write tests.
4. Identifying logical errors: White box testing helps to identify logical errors in the code,
such as infinite loops or incorrect conditional statements.
5. Integration testing: White box testing is useful for integration testing, as it allows testers
to verify that the different components of an application are working together as expected.
6. Unit testing: White box testing is also used for unit testing, which involves testing
individual units of code to ensure that they are working correctly.
7. Optimization of code: White box testing can help to optimize the code by identifying any
performance issues, redundant code, or other areas that can be improved.
8. Security testing: White box testing can also be used for security testing, as it allows testers
to identify any vulnerabilities in the application’s code.
9. Verification of Design: It verifies that the software’s internal design is implemented in
accordance with the designated design documents.
10. Check for Accurate Code: It verifies that the code operates in accordance with the
guidelines and specifications.
11. Identifying Coding Mistakes: It finds and fixes programming flaws in your code, including syntactic and logical errors.
12. Path Examination: It ensures that each possible path of code execution is explored and that various iterations of the code are tested.
13. Determining Dead Code: It finds and removes any code that isn't used when the program runs normally (dead code).
Advantages of Whitebox Testing
1. Thorough Testing: White box testing is thorough as the entire code and structures are tested.
2. Code Optimization: It results in the optimization of code removing errors and helps in
removing extra lines of code.
3. Early Detection of Defects: It can start at an earlier stage as it doesn’t require any
interface as in the case of black box testing.
4. Integration with SDLC: White box testing can be easily started in Software Development
Life Cycle.
5. Detection of Complex Defects: Testers can identify defects that cannot be detected
through other testing techniques.
6. Comprehensive Test Cases: Testers can create more comprehensive and effective test
cases that cover all code paths.
7. Testers can ensure that the code meets coding standards and is optimized for performance.
Disadvantages of White box Testing
1. Programming Knowledge and Source Code Access: Testers need to have programming
knowledge and access to the source code to perform tests.
2. Overemphasis on Internal Workings: Testers may focus too much on the internal
workings of the software and may miss external issues.
3. Bias in Testing: Testers may have a biased view of the software since they are familiar
with its internal workings.
4. Test Case Overhead: Redesigning code and rewriting code needs test cases to be written
again.
5. Dependency on Tester Expertise: Testers are required to have in-depth knowledge of the
code and programming language as opposed to black-box testing.
6. Inability to Detect Missing Functionalities: Missing functionalities cannot be detected as
the code that exists is tested.
7. Increased Production Errors: High chances of errors in production.
Cyclomatic Complexity
The cyclomatic complexity of a code section is the quantitative measure of the number
of linearly independent paths in it. It is a software metric used to indicate the complexity of a
program. It is computed using the control flow graph of the program: the nodes in the graph indicate the smallest group of commands of a program, and a directed edge connects two nodes if the second command might immediately follow the first.
For example, if the source code contains no control flow statement then its cyclomatic
complexity will be 1, and the source code contains a single path in it. Similarly, if the source
code contains one if condition then cyclomatic complexity will be 2 because there will be two
paths one for true and the other for false.
Mathematically, for a structured program, the control flow graph contains a directed edge joining two basic blocks of the program if control may pass from the first to the second. The cyclomatic complexity is defined as
M = E – N + 2P
where
E = the number of edges in the control flow graph
N = the number of nodes in the control flow graph
P = the number of connected components
In the case of a single method or subroutine, P is equal to 1, so the formula reduces to M = E – N + 2.
How to Calculate Cyclomatic Complexity?
Steps that should be followed in calculating cyclomatic complexity and designing test cases are:
• Construct the control flow graph, with nodes and edges, from the code.
• Identify the independent paths.
• Calculate the cyclomatic complexity.
The cyclomatic complexity calculated for the above code is derived from its control flow graph. The graph shows seven shapes (nodes) and seven lines (edges); hence the cyclomatic complexity is 7 – 7 + 2 = 2.
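The same arithmetic can be wrapped in a small Python helper, shown here as an illustrative sketch:

def cyclomatic_complexity(edges, nodes, components=1):
    # V(G) = E - N + 2P for a control flow graph.
    return edges - nodes + 2 * components

print(cyclomatic_complexity(edges=7, nodes=7))   # the example above: 7 - 7 + 2 = 2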
Use of Cyclomatic Complexity
• It determines the independent path executions, which is very helpful for developers and testers.
• It can ensure that every path has been tested at least once.
• It thus helps to focus more on the uncovered paths.
Mutation Testing
Mutation Testing is a type of software testing performed to design new software tests and to evaluate the quality of already existing software tests. Mutation testing involves modifying a program in small ways. It helps the tester develop effective tests and locate weaknesses in the test data used for the program.
Mutation testing can be applied to design models, specifications, databases, tests, and XML. It is a structural testing technique, which uses the structure of the code to guide the testing process. It can be described as the process of rewriting the source code in small ways in order to remove the redundancies in the source code.
Objective of Mutation Testing:
The objective of mutation testing is:
• To identify pieces of code that are not tested properly.
• To identify hidden defects that can’t be detected using other testing methods.
• To discover new kinds of errors or bugs.
• To calculate the mutation score.
• To study error propagation and state infection in the program.
• To assess the quality of the test cases.
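Since the objectives above mention the mutation score, the conventional formula is worth stating. A mutant that produces an output different from the original program on some test case is said to be killed, and
Mutation Score = (Number of Killed Mutants / Total Number of Mutants) × 100
A test suite is called mutation-adequate if its mutation score is 100%.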
Types of Mutation Testing:
Mutation testing is basically of 3 types:
1. Value Mutations:
In this type of testing, the values are changed to detect errors in the program. Basically, a small
value is changed to a larger value, or a larger value is changed to a smaller value. In this testing,
constants are typically changed.
Example:
Initial Code:
int mod = 1000000007;
int c = (a + b) % mod;
Changed Code:
int mod = 1007;
int c = (a + b) % mod;
2. Decision Mutations:
In decision mutations, logical or arithmetic operators are changed to detect errors in the
program (for example, < is replaced by >).
Example:
Initial Code:
if(a < b)
c = 10;
else
c = 20;
Changed Code:
if(a > b)
c = 10;
else
c = 20;
3. Statement Mutations:
In statement mutations, a statement is deleted or replaced by some other statement.
Example:
Initial Code:
if(a < b)
c = 10;
else
c = 20;
Changed Code:
if(a < b)
d = 10;
else
d = 20;
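To see how a test "kills" the statement mutant above, here is a small hedged C sketch (the harness and names are illustrative, not from the notes). The mutant assigns d instead of c, so any test that checks the value of c distinguishes it from the original:

#include <stdio.h>

/* Original code under test. */
int original(int a, int b) {
    int c;
    if (a < b)
        c = 10;
    else
        c = 20;
    return c;
}

/* The statement mutant from the example above: it assigns d instead of c,
   so c never receives the intended value (modeled here with a default of 0). */
int mutant(int a, int b) {
    int c = 0, d;
    if (a < b)
        d = 10;
    else
        d = 20;
    (void)d;   /* d is computed but never used, mirroring the mutation */
    return c;
}

int main(void) {
    /* Test case (1, 2) expects 10. The mutant returns 0 instead, so the
       outputs differ and the mutant is killed by this test. */
    printf("original=%d mutant=%d\n", original(1, 2), mutant(1, 2));
    return 0;
}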
Tools used for Mutation Testing:
• Judy
• Jester
• Jumble
• PIT
• MuClipse.
Advantages of Mutation Testing:
• It brings a good level of error detection in the program.
• It discovers ambiguities in the source code.
• It finds and helps resolve loopholes in the program.
• It helps the testers to write or automate better test cases.
• It leads to more reliable and efficient source code.
Disadvantages of Mutation Testing:
• It is highly costly and time-consuming.
• It is not applicable for black-box testing.
• Some mutations are complex, and hence it is difficult to implement or run them against
various test cases.
• The team members who are performing the tests should have good programming
knowledge.
• Selection of the correct automation tool is important to test the programs.
DEBUGGING
What is Debugging?
Debugging is the process of finding and resolving defects or problems within a computer
program that prevent the correct operation of computer software or a system.
Need for debugging
Once errors are identified in program code, it is necessary to first establish the precise
program statements responsible for the errors and then to fix them.
Challenges in Debugging
There are several problems encountered while performing debugging. These are the following:
1. Debugging is usually done by the individual who developed the software, and it is
difficult for that person to acknowledge that an error was made.
2. Debugging is typically performed under a tremendous amount of pressure to fix the
reported error as quickly as possible.
3. It can be difficult to accurately reproduce input conditions.
4. Compared to other software development activities, relatively little research, literature,
and formal preparation exist on the process of debugging.
Debugging Approaches
The following are a number of approaches popularly adopted by programmers for debugging.
1. Brute Force Method
This is the most common technique of debugging, but it is the least efficient method. In this
approach, the program is loaded with print statements to print intermediate values, with the
hope that some of the printed values will help to identify the statement in error. This approach
becomes more systematic with the use of a symbolic debugger (also known as a source code
debugger), because the values of different variables can be easily checked, and breakpoints and
watch-points can be easily set to inspect the values of variables effortlessly.
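A minimal sketch of the brute-force approach (the function and values are purely illustrative): print statements around the suspect computation expose the offending input.

#include <stdio.h>

/* The suspect computation is instrumented with print statements so the
   intermediate values reveal where the fault lies (here, count == 0). */
int average(int sum, int count) {
    printf("DEBUG: sum=%d, count=%d\n", sum, count);  /* trace the inputs */
    int avg = sum / count;      /* fails when count is 0 */
    printf("DEBUG: avg=%d\n", avg);                   /* trace the result */
    return avg;
}

int main(void) {
    average(30, 3);       /* prints sensible values                       */
    /* average(30, 0); */ /* uncommenting shows count=0 just before the
                             divide-by-zero, pinpointing the faulty call  */
    return 0;
}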
2. Backtracking
This is also a fairly common approach. In this approach, starting from the statement at which
an error symptom has been observed, the source code is traced backward until the error is
discovered. Unfortunately, as the number of source lines to be traced back increases, the
number of potential backward paths increases and may become unmanageably large, thus
limiting the use of this approach.
3. Cause Elimination Method
In this approach, a list of causes that could possibly have contributed to the error symptom is
developed, and tests are conducted to eliminate each cause. A related technique of identifying
the error from the error symptom is software fault tree analysis.
4. Program Slicing
This technique is similar to backtracking. Here the search space is reduced by defining slices.
A slice of a program for a particular variable at a particular statement is the set of source lines
preceding this statement that can influence the value of that variable.
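A minimal illustration of a slice (the variables are my own, not from the notes): for the value of z at the final print statement, only the lines that can influence z belong to the slice.

#include <stdio.h>

/* Slicing criterion: the value of z at the final printf. The slice
   consists only of the lines that can influence z (marked "in slice"). */
int main(void) {
    int x = 3;             /* in slice: x feeds z       */
    int y = 10;            /* not in slice              */
    int z = x * x;         /* in slice: defines z       */
    y = y + 1;             /* not in slice              */
    printf("z = %d\n", z); /* the statement of interest */
    return 0;
}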
Debugging Guidelines
Debugging is commonly carried out by programmers based on their ingenuity. The
following are some general guidelines for effective debugging:
1. Many times, debugging requires a thorough understanding of the program design. Trying
to debug based on a partial understanding of the system design and implementation may
require an excessive amount of effort, even for straightforward issues.
2. Debugging may sometimes even require a full redesign of the system. In such cases, a
common mistake that novice programmers often make is attempting to fix not the error
but its symptoms.
3. One should watch out for the possibility that an error correction may introduce new
errors. Therefore, after every round of error-fixing, regression testing should be carried out.
Integration Testing
Integration testing is the process of testing the interface between two software units or modules.
It focuses on determining the correctness of the interface. The purpose of integration testing is
to expose faults in the interaction between integrated units. Once all the modules have been unit-
tested, integration testing is performed.
Integration testing is a software testing technique that focuses on verifying the interactions and
data exchange between different components or modules of a software application. The goal of
integration testing is to identify any problems or bugs that arise when different components are
combined and interact with each other. Integration testing is typically performed after unit testing
and before system testing. It helps to identify and resolve integration issues early in the
development cycle, reducing the risk of more severe and costly problems later on.
Integration testing can be done module by module, following a proper sequence. If you do not
want to miss any integration scenario, a proper sequence must be followed. The major focus of
integration testing is exposing the defects that arise at the time of interaction between the
integrated units.
Integration test approaches – There are four types of integration testing approaches. Those
approaches are the following:
1. Big-Bang Integration Testing – It is the simplest integration testing approach, where all the
modules are combined and the functionality is verified after the completion of individual module
testing. In simple words, all the modules of the system are simply put together and tested. This
approach is practicable only for very small systems. If an error is found during the integration
testing, it is very difficult to localize, as the error may potentially belong to any of the modules
being integrated. So, errors reported during big-bang integration testing are very expensive to fix.
Advantages:
1. It is convenient for small systems.
2. Simple and straightforward approach.
3. Can be completed quickly.
4. Does not require a lot of planning or coordination.
5. May be suitable for small systems or projects with a low degree of interdependence
between components.
Disadvantages:
1. There will be quite a lot of delay because you would have to wait for all the modules to be
integrated.
2. High-risk critical modules are not isolated and tested on priority since all modules are
tested at once.
3. Not Good for long projects.
4. High risk of integration problems that are difficult to identify and diagnose.
5. This can result in long and complex debugging and troubleshooting efforts.
6. This can lead to system downtime and increased development costs.
7. May not provide enough visibility into the interactions and data exchange between
components.
8. This can result in a lack of confidence in the system’s stability and reliability.
9. This can lead to decreased efficiency and productivity.
10. This may result in a lack of confidence in the development team.
11. This can lead to system failure and decreased user satisfaction.
2. Bottom-Up Integration Testing – In bottom-up testing, each module at the lower levels is
tested with the higher-level modules until all modules have been tested. The primary purpose of
this integration testing is that each subsystem tests the interfaces among the various modules
making up the subsystem. This integration testing uses test drivers to drive and pass appropriate
data to the lower-level modules.
Advantages:
• In bottom-up testing, no stubs are required.
• A principal advantage of this integration testing is that several disjoint subsystems can be
tested simultaneously.
• It is easy to create the test conditions.
• Best for applications that use a bottom-up design approach.
• It is easy to observe the test results.
Disadvantages:
• Driver modules must be produced.
• Test complexity increases when the system is made up of a large number of small
subsystems.
• Until the top-level modules are developed, no working model of the system can be
demonstrated.
3. Top-Down Integration Testing – In top-down integration testing, testing takes place from
top to bottom, and stubs are used to simulate the behaviour of the lower-level modules that are
not yet integrated. First, the high-level modules are tested, then the low-level modules, and
finally the low-level modules are integrated with the high-level ones to ensure the system is
working as intended.
Advantages:
• Separately debugged module.
• Few or no drivers needed.
• It is more stable and accurate at the aggregate level.
• Easier isolation of interface errors.
• In this, design defects can be found in the early stages.
Disadvantages:
• Needs many stubs.
• Modules at lower levels are tested inadequately.
• It is difficult to observe the test output.
• Stub design is difficult.
4. Mixed Integration Testing – Mixed integration testing is also called sandwiched
integration testing. It follows a combination of the top-down and bottom-up testing
approaches. In the top-down approach, testing can start only after the top-level modules have
been coded and unit tested. In the bottom-up approach, testing can start only after the bottom-
level modules are ready. The sandwich or mixed approach overcomes this shortcoming of the
top-down and bottom-up approaches. It is also called hybrid integration testing. Both stubs
and drivers are used in mixed integration testing (a small sketch of a stub and a driver follows
the lists below).
Advantages:
• Mixed approach is useful for very large projects having several sub projects.
• This Sandwich approach overcomes this shortcoming of the top-down and bottom-up
approaches.
• Parallel test can be performed in top and bottom layer tests.
Disadvantages:
• Mixed integration testing has a very high cost, because one part follows a top-down
approach while another part follows a bottom-up approach.
• This integration testing is not suitable for smaller systems with huge interdependence
between the modules.
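As a small illustrative sketch of the stubs and drivers mentioned above (all names and values are hypothetical, not from the notes): a stub returns a canned value in place of an unfinished lower-level module, while a driver calls a finished lower-level module directly because its real caller does not exist yet.

#include <stdio.h>

/* Stub (top-down testing): stands in for an unfinished lower-level
   module by returning a canned value. */
double get_tax_rate_stub(const char *region) {
    (void)region;          /* the real lookup is not implemented yet    */
    return 0.08;           /* fixed response, enough to test the caller */
}

/* Lower-level module that is already implemented and unit-tested. */
double compute_total(double price, double rate) {
    return price * (1.0 + rate);
}

/* Driver (bottom-up testing): exercises the lower-level module directly. */
int main(void) {
    double total = compute_total(100.0, get_tax_rate_stub("KA"));
    printf("total = %.2f (expected 108.00)\n", total);
    return 0;
}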
Applications:
1. Identify the components: Identify the individual components of your application that need
to be integrated. This could include the frontend, backend, database, and any third-party
services.
2. Create a test plan: Develop a test plan that outlines the scenarios and test cases that need
to be executed to validate the integration points between the different components. This
could include testing data flow, communication protocols, and error handling.
3. Set up test environment: Set up a test environment that mirrors the production
environment as closely as possible. This will help ensure that the results of your integration
tests are accurate and reliable.
4. Execute the tests: Execute the tests outlined in your test plan, starting with the most
critical and complex scenarios. Be sure to log any defects or issues that you encounter
during testing.
5. Analyze the results: Analyze the results of your integration tests to identify any defects or
issues that need to be addressed. This may involve working with developers to fix bugs or
make changes to the application architecture.
6. Repeat testing: Once defects have been fixed, repeat the integration testing process to
ensure that the changes have been successful and that the application still works as
expected.
System Testing
System testing is a type of software testing that evaluates the overall functionality and
performance of a complete and fully integrated software solution. It tests if the system meets the
specified requirements and if it is suitable for delivery to the end-users. This type of testing is
performed after the integration testing and before the acceptance testing.
System Testing is a type of software testing that is performed on a complete integrated system
to evaluate the compliance of the system with the corresponding requirements. In system testing,
integration testing passed components are taken as input. The goal of integration testing is to
detect any irregularity between the units that are integrated together. System testing detects
defects within both the integrated units and the whole system. The result of system testing is the
observed behavior of a component or a system when it is tested. System Testing is carried out
on the whole system in the context of either system requirement specifications or functional
requirement specifications or in the context of both. System testing tests the design and behavior
of the system and also the expectations of the customer. It is performed to test the system beyond
the bounds mentioned in the software requirements specification (SRS). System Testing is
basically performed by a testing team that is independent of the development team, which helps
to test the quality of the system impartially. It includes both functional and non-functional
testing. System Testing is a black-box testing technique. System Testing is performed after the
integration testing and before the acceptance testing.
System Testing Process: System Testing is performed in the following steps:
• Test Environment Setup: Create testing environment for the better quality testing.
• Create Test Case: Generate test case for the testing process.
• Create Test Data: Generate the data that is to be tested.
• Execute Test Case: After the generation of the test case and the test data, test cases are
executed.
• Defect Reporting: Defects detected in the system are reported.
• Regression Testing: It is carried out to test the side effects of the testing process.
• Log Defects: Detected defects are logged and then fixed.
• Retest: If a test is not successful, the test is performed again after the defect has been fixed.
Regression Testing
Regression testing is the process of testing the modified parts of the code, and the parts that
might get affected by the modifications, to ensure that no new errors have been introduced into
the software after the modifications have been made. Regression means the return of something;
in the software field, it refers to the return of a bug.
When to do regression testing?
• When a new functionality is added to the system and the code has been modified to absorb
and integrate that functionality with the existing code.
• When some defect has been identified in the software and the code is debugged to fix it.
• When the code is modified to optimize its working.
Process of Regression Testing:
Firstly, whenever we make some changes to the source code for any reason (such as adding
new functionality or optimization), the program may fail against the previously designed test
suite when executed. After a failure, the source code is debugged in order to identify the bugs
in the program. After the bugs in the source code are identified, appropriate modifications are
made. Then appropriate test cases are selected from the already existing test suite, covering
all the modified and affected parts of the source code. New test cases can be added if
required. In the end, regression testing is performed using the selected test cases.
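A minimal sketch of an automated regression check (the function under test is hypothetical, not from the notes): previously recorded expected outputs are re-asserted after every modification, so a change that breaks old behaviour fires an assert.

#include <assert.h>
#include <stdio.h>

/* Function under test: a recently modified code path. */
int discounted_price(int price) {
    return price - price / 10;   /* 10% discount, integer arithmetic */
}

int main(void) {
    /* Expected outputs recorded from the existing test suite; if a
       modification breaks old behaviour, an assert fires. */
    assert(discounted_price(100) == 90);
    assert(discounted_price(50) == 45);
    printf("all regression tests passed\n");
    return 0;
}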
Advantages of Regression Testing:
• It ensures that no new bugs have been introduced after adding new functionalities to the
system.
• Most of the test cases used in regression testing are selected from the existing test suite,
and their expected outputs are already known. Hence, regression testing can be easily
automated using tools.
• It helps to maintain the quality of the source code.
Disadvantages of Regression Testing:
• It can be time and resource consuming if automated tools are not used.
• It is required even after very small changes in the code.
Software Reliability
Software reliability is defined as the probability that a software system fulfills its assigned
task in a given environment for a predefined number of input cases, assuming that the hardware
and the input are free of error.
For example, large next-generation aircraft will have over 1 million source lines of software on-
board; next-generation air traffic control systems will contain between one and two million lines;
the upcoming International Space Station will have over two million lines on-board and over 10
million lines of ground support software; several significant life-critical defense systems will have
over 5 million source lines of software. While the complexity of software is inversely associated
with software reliability, it is directly related to other vital factors in software quality, especially
functionality, capability, etc.
The differences between hardware and software reliability can be summarized as follows:
• Hardware faults are mostly physical faults, whereas software faults are design faults, which
are harder to visualize, classify, detect, and correct.
• Hardware components generally fail due to wear and tear, whereas software components
fail due to bugs.
• In hardware, design faults may also exist, but physical faults generally dominate. In
software, there is no strict counterpart of the hardware manufacturing process (unless the
simple action of uploading software modules into place counts); therefore, the quality of
the software does not change once it is uploaded into storage and starts running.
• Hardware exhibits the classic bathtub failure curve, whose three phases are the burn-in
phase, the useful-life phase, and the end-of-life phase. Software reliability does not show
the same features: projecting software reliability on the same axes yields a different curve.
[The corresponding figures, showing the hardware bathtub curve and a possible software
failure-rate curve, are omitted here.]
There are two significant differences between the hardware and software curves:
One difference is that, in the last phase, software does not have an increasing failure rate as
hardware does. In this phase, the software is approaching obsolescence, and there is no
motivation for any upgrades or changes to the software, so the failure rate does not change.
The second difference is that, in the useful-life phase, software will experience a sharp increase
in failure rate each time an upgrade is made. The failure rate levels off gradually, partly because
the defects found after the upgrade are fixed.
The upgrades described above signify feature upgrades, not upgrades for reliability. For feature
upgrades, the complexity of the software is likely to increase, since the functionality of the
software is enhanced. Even bug fixes may be a reason for more software failures, if the fix
induces other defects in the software. For reliability upgrades, a drop in the software failure rate
is likely, if the objective of the upgrade is enhancing software reliability, such as a redesign or
reimplementation of some modules using better engineering approaches, such as the clean-
room method.
Some of the distinct features of software compared to hardware are listed below:
Wear-out: Software does not have an energy-related wear-out phase. Bugs can arise without
warning.
Time dependency and life cycle: Software reliability is not a function of operational time.
Environmental factors: These do not affect software reliability, except that they may affect program inputs.
Reliability prediction: Software reliability cannot be predicted from any physical basis since it
depends entirely on human factors in design.
Redundancy: It cannot improve Software reliability if identical software elements are used.
Failure rate motivators: Failure rates are generally not predictable from analyses of the separate statements.
Built with standard components: Well-understood and extensively tested standard elements help
improve maintainability and reliability. But in the software industry, we have not observed
this trend. Code reuse has been around for some time, but only to a minimal extent. There are no
standard elements for software, except for some standardized logic structures.
Reliability metrics are used to quantitatively express the reliability of the software product. The
choice of which metric to use depends upon the type of system to which it applies and the
requirements of the application domain.
Measuring software reliability is a difficult problem, because we do not have a good understanding
of the nature of software. It is difficult to find a suitable method to measure software reliability and
most of the aspects connected to it. Even software estimates have no uniform definition. If we
cannot measure reliability directly, something can be measured that reflects the features related
to reliability.
The current methods of software reliability measurement can be divided into four categories:
1. Product Metrics
Product metrics are those which are used to build the artifacts, i.e., requirement specification
documents, system design documents, etc. These metrics help in assessing whether the product is
good enough, through records of attributes like usability, reliability, maintainability, and
portability. These measurements are taken from the actual body of the source code.
2. Project Management Metrics
Project metrics define project characteristics and execution. If the project is managed properly
by the programmer, this helps us to achieve better products. A relationship exists between the
development process and the ability to complete projects on time and within the desired quality
objectives. Costs increase when developers use inadequate methods. Higher reliability can be
achieved by using a better development process, risk management process, and configuration
management process.
3. Process Metrics
Process metrics quantify useful attributes of the software development process and its
environment. They tell whether the process is functioning optimally, as they report on
characteristics like cycle time and rework time. The goal of a process metric is to do the right
job the first time through the process. The quality of the product is a direct function of the
process, so process metrics can be used to estimate, monitor, and improve the reliability and
quality of software. Process metrics describe the effectiveness and quality of the processes that
produce the software product.
4. Fault and Failure Metrics
A fault is a defect in a program which appears when the programmer makes an error and causes
a failure when the program is executed under particular conditions. These metrics are used to
determine the failure-free execution of software.
Reliability Metrics
Reliability metrics are used to quantitatively express the reliability of the software product. The
choice of which metric to use depends upon the type of system to which it applies and the
requirements of the application domain.
Some reliability metrics which can be used to quantify the reliability of the software product are
as follows:
1. Mean Time To Failure (MTTF)
MTTF is described as the time interval between two successive failures. An MTTF of 200
means that one failure can be expected every 200 time units. The time units are entirely dependent
on the system, and they can even be stated in terms of the number of transactions. MTTF is
suitable for systems with long transactions.
For example, it is suitable for computer-aided design systems, where a designer will work on a
design for several hours, as well as for word-processor systems.
To measure MTTF, we can record the failure data for n failures. Let the failures occur at the
time instants t1, t2, ..., tn. MTTF is then the average of the intervals between successive failures.
2. Mean Time To Repair (MTTR)
Once a failure occurs, some time is required to fix the error. MTTR measures the average time it
takes to track down the errors causing the failure and to fix them.
3. Mean Time Between Failures (MTBF)
We can combine the MTTF and MTTR metrics to get the MTBF metric:
MTBF = MTTF + MTTR
Thus, an MTBF of 300 denotes that once a failure occurs, the next failure is expected to appear
only after 300 hours. In this metric, the time measurements are real time, not the execution time
as in MTTF.
4. Rate of Occurrence of Failure (ROCOF)
ROCOF is the number of failures appearing in a unit time interval, i.e., the number of unexpected
events over a specific time of operation. It is the frequency with which unexpected behaviour is
likely to occur. A ROCOF of 0.02 means that two failures are likely to occur in each 100
operational time units. It is also called the failure intensity metric.
5. Probability of Failure On Demand (POFOD)
POFOD is described as the probability that the system will fail when a service is requested. It is
the number of system failures given a number of system inputs. A POFOD of 0.1 means that one
out of ten service requests may fail. POFOD is an essential measure for safety-critical systems,
and it is relevant for protection systems where services are demanded occasionally.
6. Availability (AVAIL)
Availability is the probability that the system is available for use at a given time. It takes into
account the repair time and the restart time for the system. An availability of 0.995 means that in
every 1000 time units, the system is likely to be available for 995 of them. It is the percentage of
time that a system is available for use, taking into account planned and unplanned downtime. If
a system is down an average of four hours out of every 100 hours of operation, its availability is 96%.
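The metrics above can be combined numerically. The following C sketch uses illustrative values (not from the notes) and the standard steady-state formulas MTBF = MTTF + MTTR and availability = MTTF / (MTTF + MTTR):

#include <stdio.h>

int main(void) {
    /* Illustrative values, not from the notes. */
    double mttf = 200.0;    /* mean time to failure (time units) */
    double mttr = 4.0;      /* mean time to repair  (time units) */

    double mtbf  = mttf + mttr;           /* mean time between failures */
    double avail = mttf / (mttf + mttr);  /* steady-state availability  */
    double rocof = 1.0 / mtbf;            /* failures per time unit     */

    printf("MTBF  = %.1f\n", mtbf);       /* 204.0        */
    printf("AVAIL = %.4f\n", avail);      /* about 0.9804 */
    printf("ROCOF = %.4f\n", rocof);      /* about 0.0049 */
    return 0;
}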
Requirements Reliability Metrics
Requirements denote what features the software must include. They specify the functionality
that must be contained in the software. The requirements must be written such that there is no
misconception between the developer and the client. The requirements must have a valid
structure to avoid the loss of valuable data.
The requirements should be thorough and described in a detailed manner, so that the design
stage is simple; the requirements should not contain inadequate data. Requirements reliability
metrics evaluate the above-said quality factors of the requirements document.
Design and Code Reliability Metrics
The quality factors that exist in design and coding are complexity, size, and modularity.
Complex modules are tough to understand, and there is a high probability of bugs occurring in
them. Reliability will decrease if modules have a combination of high complexity and large size,
or high complexity and small size. These metrics are also applicable to object-oriented code, but
additional metrics are required there to evaluate the quality.
Testing Reliability Metrics
Testing reliability metrics use two approaches to evaluate reliability. First, they ensure that the
system is equipped with the functions that are specified in the requirements; because of this, the
bugs due to lack of functionality are reduced. The second approach is evaluating the code,
finding the bugs, and fixing them. To ensure that the system includes the functionality specified,
test plans are written that include multiple test cases. Each test case is based on one system state
and tests some functions that are based on an associated set of requirements. The goal of an
effective verification program is to ensure that every element is tested, the implication being
that, if the system passes the tests, the requirements' functionality is contained in the delivered
system.
Software fault tolerance is the ability of software to detect and recover from a fault that is
happening or has already happened, in either the software or the hardware of the system in which
the software is running, so as to provide service in accordance with the specification.
Software fault tolerance is a necessary component to construct the next generation of highly
available and reliable computing systems from embedded systems to data warehouse systems.
To adequately understand software fault tolerance, it is important to understand the nature of the
problem that software fault tolerance is supposed to solve.
Software faults are all design faults. Software manufacturing, the reproduction of software, is
considered to be perfect. The fact that the source of the problem is solely design faults makes
software very different from almost any other system in which fault tolerance is a desired property.
1. Recovery Block
The recovery block method is a simple technique developed by Randell. The recovery block
operates with an adjudicator, which confirms the results of various implementations of the same
algorithm. In a system with recovery blocks, the system view is broken down into fault-recoverable
blocks.
The entire system is constructed of these fault-tolerant blocks. Each block contains at least a
primary, a secondary, and exceptional-case code, along with an adjudicator. The adjudicator is the
component which determines the correctness of the various blocks to try.
The adjudicator should be kept somewhat simple in order to maintain execution speed and aid in
correctness. Upon first entering a unit, the adjudicator first executes the primary alternate. (There
may be N alternates in a unit which the adjudicator may try.) If the adjudicator determines that the
primary alternate failed, it then tries to roll back the state of the system and tries the secondary
alternate.
If the adjudicator does not accept the results of any of the alternates, it then invokes the exception
handler, which then indicates the fact that the software could not perform the requested operation.
The recovery block technique increases the pressure on the specification to be precise enough
to create multiple alternatives that are functionally equivalent. This problem is further
discussed in the context of the N-version software method.
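A minimal C sketch of a recovery block, under the assumption of a simple acceptance test as the adjudicator (all names and the deliberately faulty primary are illustrative): the primary fails the acceptance test, so the state is discarded and the secondary alternate is tried.

#include <stdio.h>

/* Adjudicator: a simple acceptance test for a square-root result. */
int acceptance_test(double x, double r) {
    double err = r * r - x;
    if (err < 0) err = -err;
    return r >= 0.0 && err < 1e-6;   /* accept if r*r is close to x */
}

double primary_sqrt(double x)  { (void)x; return -1.0; }  /* faulty primary */

double secondary_sqrt(double x) {            /* fallback: Newton's method */
    double g = x > 1.0 ? x / 2.0 : 1.0;
    for (int i = 0; i < 40; i++)
        g = (g + x / g) / 2.0;
    return g;
}

double recovery_block(double x) {
    double r = primary_sqrt(x);
    if (acceptance_test(x, r)) return r;   /* primary accepted         */
    r = secondary_sqrt(x);                 /* roll back, try secondary */
    if (acceptance_test(x, r)) return r;
    printf("exception: no alternate gave an acceptable result\n");
    return -1.0;
}

int main(void) {
    printf("sqrt(9) via recovery block = %f\n", recovery_block(9.0));
    return 0;
}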
2. N-Version Software
The N-version software method attempts to parallel the traditional hardware fault tolerance
concept of N-way redundant hardware. In an N-version software system, every module is
implemented in up to N different versions. Each variant accomplishes the same function, but
hopefully in a different way. Each version then submits its answer to a voter or decider, which
determines the correct answer and returns it as the result of the module.
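A minimal C sketch of the N-version idea for N = 3 (the versions and the design fault below are invented for illustration): a majority voter masks the fault in one version.

#include <stdio.h>

/* Three independently developed versions of "square x" (hypothetical). */
int version_a(int x) { return x * x; }
int version_b(int x) { return x * x; }
int version_c(int x) { return x + x; }   /* contains a design fault */

/* Decider: returns the majority answer of three results. */
int vote3(int r1, int r2, int r3) {
    if (r1 == r2 || r1 == r3) return r1;
    if (r2 == r3) return r2;
    return r1;   /* no majority: a real system would raise an exception */
}

int main(void) {
    int x = 5;
    int result = vote3(version_a(x), version_b(x), version_c(x));
    printf("voted result for x = %d: %d\n", x, result);   /* prints 25 */
    return 0;
}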
This system can hopefully overcome the design faults present in most software by relying upon
the design diversity concept. An essential distinction in N-version software is the fact that the
system could include multiple types of hardware using numerous versions of the software.
N-version software can successfully tolerate faults only if the required design diversity is
achieved. The dependence on appropriate specifications in N-version software (and recovery
blocks) cannot be stressed enough.
The differences between the recovery block technique and the N-version technique are not too
numerous, but they are essential. In traditional recovery blocks, each alternative would be executed
serially until an acceptable solution is found as determined by the adjudicator. The recovery block
method has been extended to contain concurrent execution of the various alternatives.
The N-version techniques have always been designed to be implemented using N-way hardware
concurrently. In a serial retry system, the cost in time of trying multiple methods may be too
expensive, especially for a real-time system. Conversely, concurrent systems need the expense of
N-way hardware and a communications network to connect them.
The recovery block technique requires that each module have a specific adjudicator; in the N-
version method, a single decider may be used. The recovery block technique, assuming that the
programmer can create a sufficiently simple adjudicator, will create a system which is difficult
to drive into an incorrect state.
A software reliability model specifies the form of a random process that describes the behavior of
software failures with respect to time.
Software reliability models have appeared as people try to understand the features of how and
why software fails, and attempt to quantify software reliability.
Over 200 models have been established since the early 1970s, but how to quantify software
reliability remains mostly unsolved.
There is no individual model that can be used in all situations. No model is complete or even
representative.
Most software reliability models contain the following parts:
o Assumptions
o Factors
o A mathematical function that relates reliability to the factors; the mathematical function
is generally higher-order exponential or logarithmic.
Software Reliability Modeling Techniques
There are two broad kinds of modeling techniques: prediction modeling and estimation modeling.
Both kinds are based on observing and accumulating failure data and analyzing it with statistical
inference. They differ as follows:
• Data reference: Prediction models use historical information; estimation models use data
from the current software development effort.
• When used in the development cycle: Prediction is usually made before the development
or test phases, and can be used as early as the concept phase; estimation is usually made
later in the life cycle (after some data have been collected) and is not typically used in the
concept or development phases.
Reliability Models
A reliability growth model is a mathematical model of software reliability, which predicts how
software reliability should improve over time as errors are discovered and repaired. These models
help the manager decide how much effort should be devoted to testing. The objective of the
project manager is to test and debug the system until the required level of reliability is reached.
Software reliability growth modeling is a process used to predict and manage the improvement
of software reliability over time. It involves statistical techniques to analyze historical data on
software failures and defects to make projections about future reliability.
1. Data Collection: The first step in software reliability growth modeling is collecting data
on software failures and defects. This data typically includes information such as the number of
reported failures, the time between failures, and the severity of each failure.
2. Model Selection: Once the data is collected, the next step is to select an appropriate
reliability growth model. Several types of models are used in software reliability growth
modeling, including the following (a small worked sketch of the Goel-Okumoto model appears
at the end of this section):
• Non-homogeneous Poisson Process (NHPP): This model assumes that failures occur
according to a Poisson process, but the failure intensity changes over time.
• Goel-Okumoto Model: This is one of the earliest and most widely used reliability growth
models. It is an NHPP model in which the failure intensity decays exponentially over time
as faults are detected and removed.
• Logarithmic Model: This model assumes that the number of remaining defects decreases
logarithmically over time.
• Rayleigh Model: This model assumes that software reliability growth follows a Rayleigh
distribution, which is commonly used in reliability engineering to model the time to
failure of systems.
• Weibull Model: This model assumes that software reliability growth follows a Weibull
distribution, a flexible distribution widely used in reliability engineering.
3. Parameter Estimation: Once a model is selected, the next step is to estimate its parameters
using the collected data. This involves fitting the model to the data to find the parameter values
that best describe the observed failure behavior.
4. Model Validation: After the parameters are estimated, the fitted model is checked against
the observed failure data to confirm that it adequately describes the actual failure behavior.
5. Prediction and Analysis: Once the model is validated, it can be used to make predictions
about future reliability based on the current state of the software and the observed failure
behavior. This can help software developers and managers make informed decisions about when
to release the software and how to allocate resources for testing and debugging.
Overall, software reliability growth modeling is a valuable tool for understanding and managing
the reliability of software systems. By analyzing historical failure data and making predictions
about future reliability, organizations can improve the quality and reliability of their software
products.
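As a small worked example of one of the models named in step 2 above, here is a sketch of the Goel-Okumoto mean value function m(t) = a(1 - e^(-bt)); the parameter values are hypothetical stand-ins for what step 3 (parameter estimation) would produce.

#include <math.h>
#include <stdio.h>

/* Goel-Okumoto mean value function: the expected cumulative number of
   failures observed by time t is m(t) = a * (1 - exp(-b * t)), where a
   is the expected total number of faults and b is the detection rate. */
double go_mean_failures(double a, double b, double t) {
    return a * (1.0 - exp(-b * t));
}

int main(void) {
    double a = 120.0, b = 0.05;   /* hypothetical fitted parameters */
    for (int t = 0; t <= 100; t += 25)
        printf("t = %3d  expected cumulative failures = %6.1f\n",
               t, go_mean_failures(a, b, (double)t));
    return 0;
}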
Software Quality Management System contains the methods that are used by the authorities to
develop products having the desired quality.
Managerial Structure
Quality System is responsible for managing the structure as a whole. Every Organization has a
managerial structure.
Individual Responsibilities
Each individual present in the organization must have some responsibilities that should be
reviewed by the top management and each individual present in the system must take this
seriously.
Quality System Activities
The activities that each quality system must perform are:
1. Auditing of projects.
2. Review of the quality system.
3. Development of methods and guidelines.
Evolution of Quality Management System
Quality systems have evolved over the past several years. The evolution of a Quality
Management System is a four-step process.
1. The main task of quality control is to detect defective products, and it also helps in finding
the causes that lead to the defects. It also helps in the correction of bugs.
2. Quality Assurance helps an organization in making good quality products. It also helps in
improving the quality of the products by passing them through quality checks.
3. Total Quality Management (TQM) checks and ensures that all the procedures are
continuously improved regularly through process measurements.
The Capability Maturity Model (CMM) is a procedure used to develop and refine an
organization's software development process.
The model defines a five-level evolutionary path of increasingly organized and consistently
more mature processes.
CMM was developed and is promoted by the Software Engineering Institute (SEI), a research
and development center sponsored by the U.S. Department of Defense (DOD).
Methods of SEI CMM
Capability Evaluation: Capability evaluation provides a way to assess the software process
capability of an organization. The results of a capability evaluation indicate the likely contractor
performance if the contractor is awarded a contract. Therefore, the results of the software process
capability assessment can be used to select a contractor.
SEI CMM categorized software development industries into the following five maturity levels.
The various levels of SEI CMM have been designed so that it is easy for an organization to
slowly build its quality system, starting from scratch.
Level 1: Initial
Ad hoc activities characterize a software development organization at this level. Very few or no
processes are defined and followed. Since software production processes are not defined,
different engineers follow their own processes, and as a result, development efforts become
chaotic. Therefore, it is also called the chaotic level.
Level 2: Repeatable
At this level, the fundamental project management practices like tracking cost and schedule are
established. Size and cost estimation methods, like function point analysis, COCOMO, etc. are
used.
Level 3: Defined
At this level, the processes for both management and development activities are defined and
documented. There is a common organization-wide understanding of activities, roles, and
responsibilities. Although the processes are defined, the process and product qualities are not
measured. ISO 9000 aims at achieving this level.
Level 4: Managed
At this level, the focus is on software metrics. Two kinds of metrics are collected.
Product metrics measure the features of the product being developed, such as its size, reliability,
time complexity, understandability, etc.
Process metrics track the effectiveness of the process being used, such as the average defect
correction time, productivity, the average number of defects found per hour of inspection, the
average number of failures detected during testing per LOC, etc. The software process and
product quality are measured, and quantitative quality requirements for the product are met.
Various tools like Pareto charts, fishbone diagrams, etc. are used to measure the product and
process quality. The process metrics are used to check whether a project performed
satisfactorily. Thus, the outcome of process measurements is used to evaluate project
performance rather than to improve the process.
Level 5: Optimizing
At this level, process and product metrics are collected. Process and product measurement data
are analyzed for continuous process improvement.
Except for SEI CMM level 1, each maturity level is characterized by several Key Process Areas
(KPAs) that identify the areas an organization should focus on to improve its software process to
the next level. The focus of each level and the corresponding key process areas are shown in the
figure.
SEI CMM provides a series of key areas on which to focus to take an organization from one level
of maturity to the next. Thus, it provides a way for gradual quality improvement over various
stages. Each stage has been carefully designed such that one stage enhances the capability
already built up.
Software Maintenance refers to the process of modifying and updating a software system
after it has been delivered to the customer. It is a critical part of the software development life
cycle (SDLC) and is necessary to ensure that the software continues to meet the needs of the
users over time. This article focuses on discussing Software Maintenance in detail.
What is Software Maintenance?
Software maintenance is a continuous process that occurs throughout the entire life cycle of the
software system.
• The goal of software maintenance is to keep the software system working correctly,
efficiently, and securely, and to ensure that it continues to meet the needs of the users.
• This can include fixing bugs, adding new features, improving performance, or updating the
software to work with new hardware or software systems.
• It is also important to consider the cost and effort required for software maintenance when
planning and developing a software system.
• It is important to have a well-defined maintenance process in place, which includes testing
and validation, version control, and communication with stakeholders.
• It’s important to note that software maintenance can be costly and complex, especially for
large and complex systems. Therefore, the cost and effort of maintenance should be taken
into account during the planning and development phases of a software project.
• It’s also important to have a clear and well-defined maintenance plan that includes regular
maintenance activities, such as testing, backup, and bug fixing.
Several Key Aspects of Software Maintenance
1. Bug Fixing: The process of finding and fixing errors and problems in the software.
2. Enhancements: The process of adding new features or improving existing features to meet
the evolving needs of the users.
3. Performance Optimization: The process of improving the speed, efficiency, and
reliability of the software.
4. Porting and Migration: The process of adapting the software to run on new hardware or
software platforms.
5. Re-Engineering: The process of improving the design and architecture of the software to
make it more maintainable and scalable.
6. Documentation: The process of creating, updating, and maintaining the documentation for
the software, including user manuals, technical specifications, and design documents.
Several Types of Software Maintenance
1. Corrective Maintenance: This involves fixing errors and bugs in the software system.
2. Patching: It is an emergency fix implemented mainly due to pressure from management.
Patching is done for corrective maintenance but it gives rise to unforeseen future errors due
to lack of proper impact analysis.
3. Adaptive Maintenance: This involves modifying the software system to adapt it to
changes in the environment, such as changes in hardware or software, government policies,
and business rules.
4. Perfective Maintenance: This involves improving functionality, performance, and
reliability, and restructuring the software system to improve changeability.
5. Preventive Maintenance: This involves taking measures to prevent future problems, such
as optimization, updating documentation, reviewing and testing the system, and
implementing preventive measures such as backups.
Maintenance can be categorized into proactive and reactive types. Proactive maintenance
involves taking preventive measures to avoid problems from occurring, while reactive
maintenance involves addressing problems that have already occurred.
Maintenance can be performed by different stakeholders, including the original development
team, an in-house maintenance team, or a third-party maintenance provider. Maintenance
activities can be planned or unplanned. Planned activities include regular maintenance tasks that
are scheduled in advance, such as updates and backups. Unplanned activities are reactive and
are triggered by unexpected events, such as system crashes or security breaches. Software
maintenance can involve modifying the software code, as well as its documentation, user
manuals, and training materials. This ensures that the software is up-to-date and continues to
meet the needs of its users.
Software maintenance can also involve upgrading the software to a new version or platform.
This can be necessary to keep up with changes in technology and to ensure that the software
remains compatible with other systems. The success of software maintenance depends on
effective communication with stakeholders, including users, developers, and management.
Regular updates and reports can help to keep stakeholders informed and involved in the
maintenance process.
Software maintenance is also an important part of the Software Development Life Cycle
(SDLC). The main focus of software maintenance is to update the software application and make
all the modifications needed to improve its performance. Software is a model that runs on the
basis of the real world, so whenever the real world changes, the corresponding changes are
required in the software wherever possible.
Challenges in Software Maintenance:
• Older software programs, which were designed to work on slow machines with less
memory and storage capacity, cannot hold their own against newly arriving, more capable
software on modern hardware.
• Changes are frequently left undocumented, which may cause more conflicts in the future.
• As technology advances, it becomes costly to maintain old software.
• Often, changes that are made can easily harm the original structure of the software,
making it difficult to carry out any subsequent changes.
• There is a lack of code comments.
• Lack of documentation: Poorly documented systems can make it difficult to understand
how the system works, making it difficult to identify and fix problems.
• Legacy code: Maintaining older systems with outdated technologies can be difficult, as it
may require specialized knowledge and skills.
• Complexity: Large and complex systems can be difficult to understand and modify,
making it difficult to identify and fix problems.
• Changing requirements: As user requirements change over time, the software system may
need to be modified to meet these new requirements, which can be difficult and time-
consuming.
• Interoperability issues: Systems that need to work with other systems or software can be
difficult to maintain, as changes to one system can affect the other systems.
• Lack of test coverage: Systems that have not been thoroughly tested can be difficult to
maintain as it can be hard to identify and fix problems without knowing how the system
behaves in different scenarios.
• Lack of personnel: A lack of personnel with the necessary skills and knowledge to
maintain the system can make it difficult to keep the system up-to-date and running
smoothly.
• High-Cost: The cost of maintenance can be high, especially for large and complex
systems, which can be difficult to budget for and manage.
Reverse Engineering
Reverse Engineering is the process of extracting knowledge or design information from anything
man-made and reproducing it based on the extracted information. It is also called back
engineering. The main objective of reverse engineering is to check out how the system works.
There are many reasons to perform reverse engineering. Reverse engineering is used to learn
how a thing works. Also, reverse engineering can be used to recreate the object with some
enhancements.
Software Reverse Engineering
Software Reverse Engineering is the process of recovering the design and the requirements
specification of a product from an analysis of its code. Reverse engineering is becoming
important, since several existing software products lack proper documentation, are highly
unstructured, or have a structure that has degraded through a series of maintenance efforts.
Disadvantages of Software Maintenance:
• Schedule disruptions: Maintenance can cause disruptions to the normal schedule and
operations of the software, leading to potential downtime and inconvenience.
• Complexity: Maintaining and updating complex software systems can be challenging,
requiring specialized knowledge and expertise.
• Risk of introducing new bugs: The process of fixing bugs or adding new features can
introduce new bugs or problems, making it important to thoroughly test the software after
maintenance.
• User resistance: Users may resist changes or updates to the software, leading to decreased
satisfaction and adoption.
• Compatibility issues: Maintenance can sometimes cause compatibility issues with other
software or hardware, leading to potential integration problems.
• Lack of documentation: Poor documentation or lack of documentation can make software
maintenance more difficult and time-consuming, leading to potential errors or delays.
• Technical debt: Over time, software maintenance can lead to technical debt, where the
cost of maintaining and updating the software becomes increasingly higher than the cost of
developing a new system.
• Skill gaps: Maintaining software systems may require specialized skills or expertise that
may not be available within the organization, leading to potential outsourcing or increased
costs.
• Inadequate testing: Inadequate testing or incomplete testing after maintenance can lead to
errors, bugs, and potential security vulnerabilities.
• End-of-life: Eventually, software systems may reach their end-of-life, making maintenance
and updates no longer feasible or cost-effective. This can lead to the need for a complete
system replacement, which can be costly and time-consuming.
Client/Server Software Engineering
In client/server (C/S) systems, several categories of servers are commonly distinguished:
• File servers (client requests selected records from a file, server transmits records to client
over the network)
• Database servers (client sends SQL requests to server, server processes the request and
returns the results to the client over the network)
• Transaction servers (client sends requests that invokes remote procedures on the server
side, sever executes procedures invoked and returns the results to the client)
• Groupware servers (server provides set of applications that enable communication among
clients using text, images, bulletin boards, video, etc.)
Options for distributing application functions between client and server include:
• Distributed presentation - database and application logic remain on the server, client
software is used to reformat server data into GUI format
• Remote presentation - similar to distributed presentation, primary database and application
logic remain on the server, data sent by the server is used by the client to prepare the user
presentation
• Distributed logic - client is assigned all user presentation tasks associated with data entry
and formulating server queries, server is assigned data management tasks and updates
information based on user actions
• Remote data management - applications on server side create new data sources,
applications on client side process the new data returned by the server
• Distributed databases - data is spread across multiple clients and servers, requiring clients
to support data management as well as application and GUI components
• Fat server - most software functions for C/S system are allocated to the server
• Thin clients - network computer approach relegating all application processing to a fat
server
Middleware mechanisms used for client/server communication include:
• Pipes (permit messaging between different machines running different operating systems)
• Remote procedure calls (permit process running on one machine to invoke execution of
process residing on another machine)
• Client/server SQL interaction (SQL requests passed from client to server DBMS, this
mechanism is limited to RDBMS)
Common object request broker component standards include:
• CORBA (ORB)
• COM (Microsoft)
• JavaBeans (Sun)
Design considerations for C/S systems include:
• Data and architectural design - dominates the design process, to be able to effectively use
the capabilities of the RDBMS or OODBMS
• Event-driven paradigm - when used, behavioral modeling should be conducted and the control-oriented aspects of the behavioral model should be translated into the design model
• Interface design - elevated in importance, since the user interaction/presentation
component implements all functions typically associated with a GUI
• Object-oriented point of view - often chosen, since an object structure is provided by events initiated in the GUI and their event handlers within the client-based software
• Best described as a communicating-processes style of architecture whose goal is to achieve easy scalability when adding an arbitrary number of clients
• Since modern C/S systems tend to be component-based, an object request broker (ORB)
architecture is used for implementation
• Object adapters or wrappers provide services that facilitate communication among client and server components
Steps for designing C/S components (a hypothetical sketch follows these steps):
1. For each elementary business process, identify the files created, updated, referenced, or deleted.
2. Use the files from step 1 as the basis for defining components or objects.
3. For each component, retrieve the business rules and other business object information that
has been established for the relevant file.
4. Determine which rules are relevant to the process and decompose the rules down to the
method level.
5. As required, define any additional components that are needed to implement the methods.
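A hypothetical illustration of steps 2-5: suppose step 1 identified an ORDER file for a "place order" business process. The component below is derived from that file, and two invented business rules are decomposed to the method level (all names and rules here are made up for illustration).

from dataclasses import dataclass, field

@dataclass
class Order:                          # component derived from the ORDER file
    order_id: str
    items: list = field(default_factory=list)
    status: str = "OPEN"

    # Rule decomposed to method level: an order needs at least one item.
    def validate(self):
        if not self.items:
            raise ValueError("an order must contain at least one item")

    # Rule: only validated, open orders may be submitted.
    def submit(self):
        self.validate()
        if self.status != "OPEN":
            raise ValueError("only open orders can be submitted")
        self.status = "SUBMITTED"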
Testing C/S software:
• Begins with testing in the small and then proceeds to integration testing using the non-incremental (big bang) approach
Service-Oriented Architecture
Service-Oriented Architecture (SOA) is a stage in the evolution of application development and integration. It defines a way to make software components reusable through well-defined service interfaces. Formally, SOA is an architectural approach in which applications make use of services available in the network. In this architecture, services are combined to form applications through network calls over the internet. It uses common communication standards to speed up and streamline service integration in applications. Each service in SOA is a complete business function in itself. Services are published in such a way that developers can easily assemble their applications from them. Note that SOA is different from a microservice architecture.
• SOA allows users to combine a large number of facilities from existing services to form
applications.
• SOA encompasses a set of design principles that structure system development and provide
means for integrating components into a coherent and decentralized system.
• SOA-based computing packages functionalities into a set of interoperable services, which
can be integrated into different software systems belonging to separate business domains.
Services might aggregate information and data retrieved from other services or create workflows of services to satisfy the request of a given service consumer. This practice is known as service orchestration. Another important interaction pattern is service choreography, which is the coordinated interaction of services without a single point of control.
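The difference between the two patterns can be sketched in a few lines of Python. The two service functions below stand in for network calls to real services; all names are invented. The orchestrator is the single point of control; in choreography, the services would instead react to each other's events with no such coordinator.

def inventory_service(item):              # hypothetical service
    return {"item": item, "in_stock": True}

def shipping_service(item, address):      # hypothetical service
    return {"item": item, "address": address, "eta_days": 3}

def place_order(item, address):
    # Orchestrator: one workflow, one point of control.
    stock = inventory_service(item)
    if not stock["in_stock"]:
        return {"status": "REJECTED"}
    shipment = shipping_service(item, address)
    return {"status": "ACCEPTED", "eta_days": shipment["eta_days"]}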
Components of SOA:
The main components of SOA are the service provider, the service consumer, and the service registry: the provider publishes its services to the registry, and the consumer discovers and invokes them from there.
Advantages of SOA:
• Service reusability: In SOA, applications are made from existing services. Thus, services
can be reused to make many applications.
• Easy maintenance: As services are independent of each other they can be updated and
modified easily without affecting other services.
• Platform independent: SOA allows making a complex application by combining services
picked from different sources, independent of the platform.
• Availability: SOA services are easily available to anyone on request.
• Reliability: SOA applications are more reliable because it is easier to debug small services than large monolithic code bases.
• Scalability: Services can run on different servers within an environment; this increases scalability.
Disadvantages of SOA:
• High overhead: A validation of input parameters is performed whenever services interact; this decreases performance because it increases load and response time.
• High investment: A huge initial investment is required for SOA.
• Complex service management: When services interact, they exchange messages to perform tasks. The number of messages may run into the millions, and handling such a large volume of messages becomes a cumbersome task.
Practical applications of SOA: SOA is in use all around us, whether or not it is explicitly mentioned.
1. SOA infrastructure is used by many armies and air forces to deploy situational awareness systems.
2. SOA is used to improve healthcare delivery.
3. Many mobile apps and games rely on built-in device functions. For example, an app that needs location data uses the device's built-in GPS service. This is SOA in mobile solutions.
4. SOA helps museums maintain a virtualized storage pool for their information and content.
Software as a Service (SaaS): SaaS is used to deliver many kinds of applications, for example:
Business Services - SaaS providers offer various business services to help start up a business. SaaS business services include ERP (Enterprise Resource Planning), CRM (Customer Relationship Management), billing, and sales.
Social Networks - Since social networking sites are used by the general public, social networking service providers use SaaS to handle the general public's information conveniently.
Mail Services - To handle the unpredictable number of users and the load on e-mail services, many e-mail providers offer their services using SaaS.
Advantages of SaaS:
SaaS pricing is based on a monthly or annual subscription fee, so it allows organizations to access business functionality at a low cost, which is less than that of licensed applications.
Unlike traditional software, which is sold under a license with an up-front cost (and often an optional ongoing support fee), SaaS providers generally price their applications using a subscription fee, most commonly a monthly or annual fee.
3. One to Many
SaaS services are offered in a one-to-many model, meaning a single instance of the application is shared by multiple users (a minimal multi-tenant sketch follows).
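A minimal sketch of the one-to-many idea in Python: one code path serves every customer (tenant), and only the data partition differs per request. The tenant names and records below are invented.

TENANT_DATA = {
    "acme":   {"invoices": [101, 102]},
    "globex": {"invoices": [501]},
}

def list_invoices(tenant_id):
    # Same application instance and logic for all tenants;
    # each request is scoped to that tenant's own data.
    tenant = TENANT_DATA.get(tenant_id)
    if tenant is None:
        raise KeyError("unknown tenant: " + tenant_id)
    return tenant["invoices"]

# list_invoices("acme")   -> [101, 102]
# list_invoices("globex") -> [501]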
4. Less hardware required
The software is hosted remotely, so organizations do not need to invest in additional hardware.
5. Low maintenance
Software as a service removes the need for installation, set-up, and daily maintenance for organizations. The initial set-up cost for SaaS is typically less than that of enterprise software. SaaS vendors price their applications based on usage parameters, such as the number of users of the application, so usage is easy to monitor and updates are applied automatically.
All users have the same version of the software and typically access it through a web browser. SaaS reduces IT support costs by outsourcing hardware and software maintenance and support to the cloud provider.
6. Multidevice support
SaaS services can be accessed from any device such as desktops, laptops, tablets, phones, and thin
clients.
7. API Integration
SaaS services easily integrate with other software or services through standard APIs (see the sketch below).
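A sketch of such an integration using only Python's standard library. The base URL, endpoint path, and bearer-token header are placeholders; a real provider's API documentation would define the actual names and authentication scheme.

import json
import urllib.request

def fetch_customers(base_url, api_token):
    req = urllib.request.Request(
        base_url + "/v1/customers",                        # hypothetical endpoint
        headers={"Authorization": "Bearer " + api_token},  # hypothetical auth
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)                             # parsed JSON payload

# customers = fetch_customers("https://api.example-saas.com", "TOKEN")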
8. No client-side installation
SaaS services are accessed directly from the service provider over an internet connection, so no client-side software installation is required.
Disadvantages of SaaS:
1) Security
Because data is stored in the cloud, security may be an issue for some users; cloud deployment is not inherently more secure than in-house deployment.
2) Latency issue
Since data and applications are stored in the cloud at a variable distance from the end-user, there is a possibility of greater latency when interacting with the application compared to local deployment. Therefore, the SaaS model is not suitable for applications that demand response times in milliseconds.