Se Rpac Ketan
Assignment 1
Stages of SDLC ::
The software development life cycle includes 5 stages that help throughout the development process. These stages allow development teams to ensure the software is of high quality and efficient, meeting the particular requirements of the clients.
People involved, such as customers and developers, review the guide and recommend necessary changes. It is like a group of engineers analyzing a SaaS product and saying, “Maybe we should simplify the chat function.” It is a super essential stage of the software development life cycle; any failure here will result in either cost overruns or total project shutdown.
4. Testing ::
Testing plays a crucial role in the software development life cycle. It is like you followed the correct
recipe (plan and guide stages) to bake (development stage) a pie (software). Now, you want to make
sure that the pie is delicious before serving it to your guests (customers), so you take a bite. That bite
is testing to ensure you have baked a yummy pie.
In the testing stage, you look for deficiencies and flaws in your software and fix those problems until the solution meets the original requirements. The development team uses automated and manual testing to find bugs in the software. The testing phase should also run side-by-side with the development stage.
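As a small illustration of the automated part of testing, here is a minimal unit-test sketch in Python; calculate_fine() is a hypothetical helper used only for this example, not part of any real project.

# Minimal automated-test sketch using Python's unittest module.
import unittest


def calculate_fine(days_overdue, rate_per_day=2):
    # Hypothetical helper: a fine is charged only for overdue days.
    return max(0, days_overdue) * rate_per_day


class TestCalculateFine(unittest.TestCase):
    def test_no_fine_when_returned_on_time(self):
        self.assertEqual(calculate_fine(0), 0)

    def test_fine_grows_with_overdue_days(self):
        self.assertEqual(calculate_fine(5), 10)


if __name__ == "__main__":
    unittest.main()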
Benefits of SDLC ::
Well-organized development process - When you follow a structured software development life cycle, it offers
an organized framework for development. It segments the entire development process into different stages,
allowing you to track progress hassle-free.
Enhanced quality assurance - Since the stages are predefined, you can easily test and validate the process at
every phase. It helps in determining and correcting issues quickly in the development cycle. All the quality
assurance practices help you achieve high-quality software solutions with fewer defects and bugs.
Improved collaboration - The software development life cycle enables effective collaboration and
communication among the stakeholders, development teams, and other departments involved in the process.
Each stage has well-defined responsibilities, ensuring everyone stays on the same page. Moreover, SDLC also
involves regular meetings and transparent documentation for better coordination, increasing the chances of successful outcomes.
Better resource management - The software development life cycle helps in planning and the efficient use of resources like time, budget, and human capital. You can estimate the required resources at each stage and plan your steps accordingly to avoid underutilization or overutilization of resources, allowing you to achieve cost-effectiveness.
Adaptation and flexibility - Software development life cycle frameworks like Agile allow you to amend the
solution to ensure the product is always updated. SDLC permits iterative and incremental development based
on users’ and stakeholders’ feedback. This allows your software to respond to changes rapidly and ensures
your product meets contemporary user expectations and needs.
Pros of SDLC ::
Cost-efficiency
Better planning
Organized documentation
Early risk assessment and mitigation
More satisfied customers
Cons of SDLC ::
It can be time-consuming for large projects
Inadequate planning may lead to high upfront costs
A traditional SDLC model may be rigid and opposed to change
Assignment - 2
1.1 Purpose - The purpose of the project is to maintain the details of books and library members of different libraries. The main purpose of this project is to provide an easy circulation system between clients and the libraries, to issue books using a single library card, to search for and reserve any book from the different available libraries, and to maintain details about the user (fines, address, phone number).
The proposed Library Management System will keep track of the current book details at any point of time. Book issue and book return will update the current book details automatically, so that the user always gets the updated book details.
1.2 Scope - The manually maintained library system is converted into an Android-based application so that users can see the details of the available books and the maximum borrowing limit from their computers and also through their phones.
The ILM System provides information such as details of the books, insertion of new books, deletion of lost books, limits on issuing books, and fines for keeping a book more than one month from the issue date.
1.3 Overview – This system provides an easy solution. Implementation of the Library Management System starts with entering and updating master records like book details and library information. Any further transaction, such as a book issue or book return, will automatically update the current book records.
1.4 Additional information – This system will work together with the library computer. It will not be
operated independently. Various computers in the library might be networked together.
2. General description ::
With the increase in the number of readers, better management of the library system is required. The Library Management System focuses on improving the management of libraries in a city or town. “What if you could check whether a book is available in the library through your phone?”, “What if, instead of having different library cards for different libraries, you could just have one?”, or “What if you could reserve or issue a book from your phone while sitting at home?” The Integrated Library Management System provides the ease of issuing, renewing, or reserving a book from a library within your town through your phone. The Integrated Library Management System is developed on the Android platform and basically focuses on issuing, renewing and reserving books.
3. Functional requirements ::
Register -
Description: First the user will have to register/sign up. There are two different types of users.
The library manager/head: The manager has to provide details about the name of the library, address, phone number, and email id.
Regular person/student: The user has to provide details about his/her name, address, phone number, and email id.
Sign up -
Input: Details about the user as mentioned in the description.
Output: Confirmation of registration status and a membership number and password will be generated and
mailed to the user.
Processing: All details will be checked; if any errors are found, an error message is displayed, otherwise a membership number and password are generated, as illustrated in the sketch below.
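A rough sketch of this processing step is shown below, assuming hypothetical field names and a simple email check; the real system would also persist the record and mail the credentials to the user.

# Sketch of sign-up processing: validate the details, then generate credentials.
import re
import secrets


def process_signup(details):
    # Returns (membership_number, password) or raises ValueError on bad input.
    required = ("name", "address", "phone", "email")
    missing = [field for field in required if not details.get(field)]
    if missing:
        raise ValueError("Missing fields: " + ", ".join(missing))
    if not re.match(r"[^@\s]+@[^@\s]+\.[^@\s]+$", details["email"]):
        raise ValueError("Invalid email id")
    membership_number = "LIB" + secrets.token_hex(4).upper()
    password = secrets.token_urlsafe(8)
    return membership_number, password


print(process_signup({"name": "Asha", "address": "Pune", "phone": "9876543210",
                      "email": "asha@example.com"}))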
Login -
Input: Enter the membership number and password provided.
Output : User will be able to use the features of the software.
1. Usability Requirement - The system shall allow users to access it from their phones using an Android application. The system uses an Android application as its interface. Since all users are familiar with the general usage of mobile apps, no special training is required. The system is user friendly, which makes it easy to use.
2. Availability Requirement - The system shall be available to users 24 hours a day, 7 days a week, 365 days a year.
3. Efficiency Requirement - Mean Time to Repair (MTTR) - Even if the system fails, it shall be recovered within an hour or less.
4. Accuracy - The system should accurately provide real-time information, taking into consideration various concurrency issues. The system shall provide 100% access reliability.
5. Interface requirements ::
The user interface must be highly intuitive or interactive because there will not be limited assistance for the
user who is operating the College system.
5.1 Hardware interface:
5.1.1 Student enrollment number
5.1.2 Student identity card for verification
5.2 Software interface:
5.2.1 Any Windows operating system.
5.2.2 PHP must be installed.
5.2.3 For database handling, MySQL must be installed.
6. Performance requirements ::
Login/Registration will not take more than 10 seconds.
Any financial transactions will not take more than 15 seconds.
Assignment - 3
UML diagrams
-----------------------------------------------------------------------------
UML Diagram for Library Management System
UML, which stands for Unified Modeling Language, is a way to visually represent the architecture, design,
and implementation of complex software systems.
1. Use case diagram - Use cases are a methodology used to understand how a user interacts with a product or
system. They are used to identify, clarify, and organize system requirements. Use cases can be written or
diagrammed, and are used in many phases of software development.
2. Activity diagram - An activity diagram is basically a flowchart to represent the flow from one activity to another. An activity can be described as an operation of the system. The control flow is drawn from one operation to another. This flow can be sequential, branched, or concurrent.
3. Sequence diagram - A Sequence Diagram is a type of interaction diagram because it describes how and
in what order a group of objects works together. These diagrams are used by software developers and business
professionals to understand requirements for a new system or to document an existing process.
4. State diagram - A state diagram (also known as a state machine or state chart diagram) is an illustration of
all the possible behavioral states a software system component may exhibit and the various state changes it's
predicted to undergo over the course of its operations.
5. Class diagram - In software engineering, a class diagram in the Unified Modeling Language (UML) is a
type of static structure diagram that describes the structure of a system by showing the system's classes, their
attributes, operations (or methods), and the relationships among objects.
DFD (Data Flow Diagram) - Also known as a DFD, data flow diagrams are used to graphically represent the flow of data in a business information system. A DFD describes the processes that are involved in a system to transfer data from the input to the file storage and report generation.
Data flow diagrams can be divided into logical and physical.
The logical data flow diagram describes the flow of data through a system to perform certain functionality of a business.
The physical data flow diagram describes the implementation of the logical data flow.
A data dictionary in software engineering means a file or a set of files that includes a database’s metadata (holding records about other objects in the database), such as data ownership, relationships of the data to other objects, and some other data.
Level 0 ::
Level 1 ::
Level 2 ::
Assignment – 4
Also take into consideration the following cost drivers with their ratings:
Storage constraints (Low)
Experience in developing similar software (High)
Programming capabilities of the developers (High)
Application of software engineering methods (High)
Use of software tools (High)
(All other cost drivers have nominal rating).
Objectives:-
To get a rough idea about the cost.
To get an early-stage design estimate.
To get rough order-of-magnitude estimates of software cost.
To get an idea about planning and resource allocation.
To get a specific schedule for the project so that work can proceed accordingly and the project can be completed in time.
Reference:-
URL:- http://www.softstarsystems.com/overview.htm
http://www.mhhe.com/engcs/compsci/pressman/information/olc/COCOMO.html
Pre-requisite:-
The project that uses the COCOMO model should be small.
The development environment should be known.
A similar type of project should already be present or should have been made, i.e. historical information is required.
No requirement should demand innovation, because the scope for innovation is very limited.
Theory concept:-
BASIC COCOMO:-
COCOMO stands for Constructive Cost Model.
The model is an open model. As it is an open model, every single detail is provided. Details include all the assumptions, definitions, equations, etc.
COCOMO is an algorithmic cost model.
It is based on historical information. It is inspired by past projects and applications that have already been developed.
It is easy to use COCOMO in small projects.
COCOMO is a size-based model.
The concept of cost drivers is present in the COCOMO model.
Cost drivers are those critical features which drive the cost, i.e., which affect the cost of the project.
The cost drivers may vary the cost of building a project.
There are three modes of COCOMO model. They are as follows:
1) Organic mode
2) Semidetached mode
3) Embedded mode
1) Organic mode:-
In this mode the development projects are less complicated and involve small, experienced teams.
The project is developed in a familiar environment.
As the team is small, there is more communication among the group members.
Coordination among the members is also better.
As the project is small, the scope for innovation is very limited.
EFFORT (E) = 2.4 (KDSI)^1.05, where E is in person-months
SCHEDULE (D) = 2.5 (E)^0.38, where D is in months
2) Semidetached mode:-
In size it lies between organic mode and embedded mode.
It consists of experts as well as freshers.
Consequently, the experience of the team will be average, i.e. there will be a mixture of persons with respect to experience.
This also means that team members will have experience and knowledge about some aspects of the system under development but will not be fully informed about all of them, with the possibility of only average domain skills.
EFFORT (E) = 3.0 (KDSI)^1.12, where E is in person-months
SCHEDULE (D) = 2.5 (E)^0.35, where D is in months
3) Embedded mode:-
In size the project is very large.
The team is very large.
The team members are highly skilled.
EFFORT (E) = 3.6 (KDSI)^1.20, where E is in person-months
SCHEDULE (D) = 2.5 (E)^0.32, where D is in months
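To make the three sets of equations concrete, here is a minimal Python sketch of Basic COCOMO; the 32 KDSI project size is only an illustrative figure, not taken from the assignment.

# Basic COCOMO effort and schedule for the three modes described above.
MODES = {
    "organic":      {"a": 2.4, "b": 1.05, "c": 2.5, "d": 0.38},
    "semidetached": {"a": 3.0, "b": 1.12, "c": 2.5, "d": 0.35},
    "embedded":     {"a": 3.6, "b": 1.20, "c": 2.5, "d": 0.32},
}


def basic_cocomo(kdsi, mode):
    # Returns (effort in person-months, schedule in months).
    p = MODES[mode]
    effort = p["a"] * kdsi ** p["b"]
    schedule = p["c"] * effort ** p["d"]
    return effort, schedule


for mode in MODES:
    effort, months = basic_cocomo(32, mode)   # 32 KDSI is an example size
    print(f"{mode:13s} effort = {effort:6.1f} PM, schedule = {months:4.1f} months")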
INTERMEDIATE COCOMO:-
This model considers 15 cost drivers.
The estimation error in this model is less, while the accuracy is more.
These 15 attributes are rated on a six-point scale.
The product of the effort multipliers of all 15 attributes gives the Effort Adjustment Factor (EAF), as sketched below.
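As a rough illustration of the EAF calculation for the cost drivers rated in this assignment, the sketch below multiplies the corresponding effort multipliers. The numeric values are the commonly cited Boehm (1981) figures and should be verified against the table used in class; the low-rated storage-constraints driver is treated as nominal here, since the published table defines STOR only from nominal upwards.

# Intermediate COCOMO: Effort Adjustment Factor (EAF) from effort multipliers.
# Multiplier values are assumptions taken from the commonly cited Boehm table.
EFFORT_MULTIPLIERS = {
    "STOR (storage constraints, treated as nominal)": 1.00,
    "AEXP (experience in developing similar software, high)": 0.91,
    "PCAP (programming capabilities of the developers, high)": 0.86,
    "MODP (application of software engineering methods, high)": 0.91,
    "TOOL (use of software tools, high)": 0.91,
    # all remaining cost drivers are nominal, i.e. multiplier 1.00
}

eaf = 1.0
for multiplier in EFFORT_MULTIPLIERS.values():
    eaf *= multiplier

kdsi = 32                                  # illustrative size only
effort = 3.0 * kdsi ** 1.12 * eaf          # semidetached equation, as an example
print(f"EAF = {eaf:.3f}, adjusted effort = {effort:.1f} person-months")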
COCOMO II:-
COCOMO II covers the following areas:-
Application Composition Model
Early Design Stage Model
Post Architecture Stage Model
Application Composition Model:- This is used during prototyping or for rough early estimation of the project.
Early Design Stage Model:- The model is used during Basic Architecture i.e. Design.
Post Architecture Stage Model:- This model is used during construction of software.
The COCOMO II application composition model uses object points. Estimation models (using FP
and KLOC) are also available as part of COCOMO II.
Object points are an indirect software measure computed using counts of screens, reports and 3GL components, as sketched below.
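A rough sketch of how the Application Composition Model turns object-point counts into an effort figure is given below; the weights and productivity rates follow the commonly cited Boehm/Pressman tables and should be treated as assumptions to confirm against your reference.

# COCOMO II Application Composition Model sketch (object points).
SCREEN_WEIGHTS = {"simple": 1, "medium": 2, "difficult": 3}
REPORT_WEIGHTS = {"simple": 2, "medium": 5, "difficult": 8}
COMPONENT_WEIGHT = 10                       # weight of each 3GL component
PROD = {"very low": 4, "low": 7, "nominal": 13, "high": 25, "very high": 50}


def app_composition_effort(screens, reports, components, reuse_pct, experience):
    # screens/reports map complexity -> count; returns effort in person-months.
    op = sum(SCREEN_WEIGHTS[c] * n for c, n in screens.items())
    op += sum(REPORT_WEIGHTS[c] * n for c, n in reports.items())
    op += COMPONENT_WEIGHT * components
    nop = op * (100 - reuse_pct) / 100      # new object points after reuse
    return nop / PROD[experience]


# Illustrative counts only, not taken from the assignment:
print(app_composition_effort({"simple": 4, "medium": 2}, {"medium": 3}, 1, 20, "high"))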
Sample output:-
Assignment – 5
External Input –
The first Transactional Function allows a user to maintain Internal Logical Files (ILFs) through the ability to
add, change and delete the data. For example, a pilot can add, change and delete navigational information
prior to and during the mission. In this case the pilot is utilizing a transaction referred to as an External Input
(EI). An External Input gives the user the capability to maintain the data in ILFs by adding, changing and deleting their contents.
External Output –
The next Transactional Function gives the user the ability to produce outputs. For example a pilot has the
ability to separately display ground speed, true air speed and calibrated air speed. The results displayed are
derived using data that is maintained and data that is referenced. In function point terminology the resulting
display is called an External Output (EO).
External Inquiries –
The final capability provided to users through a computerized system addresses the requirement to select and display specific data from files. To accomplish this, the user inputs selection information that is used to retrieve data that meets the specified criteria. In this situation there is no manipulation of the data. It is a direct
retrieval of information contained on the files. For example if a pilot displays terrain clearance data that was
previously set, the resulting output is the direct retrieval of stored information. These transactions are
referred to as External Inquiries (EQ).
Drawing a table for FP estimation: The counts for each level of complexity for each type of component can be entered into a table such as the following one. Each count is multiplied by the numerical rating shown to determine the rated value. The rated values on each row are summed across the table, giving a total value for each type of component. These totals are then summed down the table to arrive at the Total Number of Unadjusted Function Points, as the sketch below illustrates.
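A minimal sketch of the Unadjusted Function Point calculation is shown below, using the standard IFPUG complexity weights; the component counts are purely illustrative.

# Unadjusted Function Point (UFP) count from component counts and weights.
WEIGHTS = {
    "EI":  (3, 4, 6),    # External Inputs:        low, average, high
    "EO":  (4, 5, 7),    # External Outputs
    "EQ":  (3, 4, 6),    # External Inquiries
    "ILF": (7, 10, 15),  # Internal Logical Files
    "EIF": (5, 7, 10),   # External Interface Files
}


def unadjusted_fp(counts):
    # counts maps component type -> (low, average, high) occurrence counts.
    total = 0
    for component, (low, avg, high) in counts.items():
        w_low, w_avg, w_high = WEIGHTS[component]
        total += low * w_low + avg * w_avg + high * w_high
    return total


# Illustrative counts only:
print(unadjusted_fp({"EI": (3, 2, 0), "EO": (2, 1, 1), "EQ": (2, 0, 0),
                     "ILF": (1, 1, 0), "EIF": (0, 1, 0)}))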
http://groups.engin.umd.umich.edu/CIS/course.des/cis525/js/f00/artan/functionpoints.htm
Assignment – 6
-----------------------------------------------------------------------------
1. Guarantees that all independent execution paths are exercised at least once;
2. Guarantees that both the true and false sides of all logical decisions are exercised;
3. Executes all loops at their boundary values and within their operational bounds.
Sketch out and design the control flow diagram and apply cyclomatic complexity to the given code. Identify the number of independent paths required for testing.
OBJECTIVES:-
To find out the number of linearly independent paths, the number of nodes, and the total number of edges.
To find out the logical complexity of the software.
To ensure that each and every component can be tested at least once.
To find out information about the risk using cyclomatic complexity.
PRE-REQUISITE:-
We require the software definition to find out the cyclomatic complexity of that software.
THEORY CONCEPT:-
Cyclomatic Complexity:-
Cyclomatic complexity is a software metric that provides a measurement of the logical complexity of a program.
It is used to know how many paths to look for.
When used in the context of the basis path testing method, the value computed for cyclomatic
complexity defines the number of independent paths in the basis set of a program and provides us
with an upper bound for the number of tests that must be conducted to ensure that all statements have
been executed at least once.
Cyclomatic complexity is computed in one of the three ways:
The number of regions corresponds to the cyclomatic complexity.
Cyclomatic complexity, V(G), for a flow graph G, is defined as
V(G) = E - N + 2
where E is the number of edges and N is the number of nodes.
It is also defined as
V(G) = P + 1
where P is the number of predicate (decision) nodes in the flow graph, as the sketch below illustrates.
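The sketch below applies both formulas to a small illustrative control-flow graph (an if/else followed by a loop), not to the specific code given in the assignment.

# Cyclomatic complexity of a small, made-up control-flow graph.
edges = [
    (1, 2), (1, 3),      # decision node 1 branches to 2 (true) and 3 (false)
    (2, 4), (3, 4),      # both branches rejoin at node 4 (loop test)
    (4, 5), (5, 4),      # node 5 is the loop body, looping back to node 4
    (4, 6),              # loop exit to node 6
]
nodes = {n for edge in edges for n in edge}

E, N = len(edges), len(nodes)
predicate_nodes = 2      # node 1 (if) and node 4 (loop test)

print("V(G) = E - N + 2 =", E - N + 2)    # 7 - 6 + 2 = 3
print("V(G) = P + 1     =", predicate_nodes + 1)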
SAMPLE OUTPUT:-
Cyclomatic Complexity:-
As we already know, there are three ways to find the cyclomatic complexity.
(1) The number of regions corresponds to the cyclomatic complexity, i.e. V(G) = 4.
Assignment – 7
Testing
-----------------------------------------------------------------------------
Design and prioritize the test cases, using test case templates, for the project that is to be pursued as part of the Web/Internet Technology subject.
Objectives:-
To find the maximum number of test cases for the project.
To assure the quality of the project.
To check whether the project is working as expected.
To know the risks in the project.
Pre-requisite:-
We require the project that is to be tested.
ID 1
Priority 1
Title Add customer
Pre-Conditions Sign in with sales authorization (i.e. user id and password)
Test Steps Select the client module.
Enter the customer information.
Click “Add”.
Expected Results Message appears in the program’s status bar. The message
reads ‘New Customer added’.
Actual Result Pass / Fail
The example above contains the main headings that a test case needs for most cases. However, there are many
more headings which could be useful and you can find these below.
You probably won’t need all these fields in every situation, so start with a few fields and add others to the test
case template later as the need arises. You can get started initially using just the fields in the example above.
ID (identification):-
The ID field makes it easier to cross-reference test cases, both with one another and from defect reports.
The ID obviously has to be unique, i.e.: there can never be more than one test case with the same ID
number.
The most common approach is to use a continuous sequence, so that test cases are identified as 1, 2, 3,
and so on. You don’t have to prefix the ID with a code (like UM01 for the first test case in the user
module), but some people do use prefixes to make it easier to see which part of the system a test case
belongs to.
If you have access to a good test management tool, you won’t need to use prefixes, since the tool will
automatically assign an ID number to each test case.
Some people put the test case number in the test case title, for example to make it easier to sort the test cases. However, it’s better to keep the ID as a separate field and use the tool’s sorting functions, because it can get pretty difficult to keep the numbering consistent and sequential as the number of test cases grows.
Title:-
The title should provide a concise, revealing description of the test case, such as “Add customer”.
The title is important because it’s often the first or only thing you see when you are scanning a list of
test cases for example.
You should gather the test cases in a test management tool, or in a document
(often referred to as the test specification). Clear titles are key to helping testers
quickly find the right test cases.
Pre-conditions:-
In the pre-conditions heading, you should explain any activities that the tester needs to carry out
before he/she can execute the test steps. They may need to add test data, perform other functions,
execute other test cases, or navigate to a particular part of the system.
The pre-conditions field isn’t relevant to every test case, so you may want to include it in the
template but not make it mandatory. If you don’t describe pre-conditions accurately, the testers may
not be able to conduct the test. If you have several cases that all have the same pre-conditions, you
should move the pre-conditions to a test run or test specification instead, to avoid writing the same
instructions repeatedly.
Test Steps:-
The Test Steps section gives the tester a numbered list of the steps to perform in the system, which
makes it easier to understand the test case.
It is recommended to have 3-8 test steps per test case, although sometimes you might need a smaller
number of test cases with a larger number of test steps. Too many steps make it difficult for developers
and testers to reproduce the steps in the event that a bug report is filed against the test case.
Expected Results:-
The tester needs to know the expected result in order to assess whether the test case is successful.
The optimal level of detail in this field varies from situation to situation.
One of the most common problems is the wording of the expected outcome, especially if the
description is so vague that it’s not possible to tell whether the test case has succeeded. Ensure that the
wording is clear and concise.
Output:-
ID 1
Priority 1
Title Login Successful
Pre-Conditions Sign in with given user id and password
Test Steps Select the login button.
Enter the login information.
Click “login”
Expected Results Page opens for appropriate user
Actual Result
ID 2
Priority 2
Title Login Unsuccessful
Pre-Conditions Sign in with wrong user id and password
Test Steps Select the login button.
Enter the login information.
Click “login”
Expected Results Error message appears and the user is not logged in
Actual Result
ID 3
Priority 3
Title Patient Registration Successful
Pre-Conditions Home page of hospital should be visited
Test Steps Select the patient registration form.
Enter the form details.
Click “submit”.
Expected Results Message appears “successfully registered”
Actual Result
ID 4
Priority 5
Title Patient Registration Unsuccessful
Pre-Conditions Home page of hospital should be visited
Test Steps Select the patient registration form.
Enter the wrong form details.
Click “submit”.
Expected Results Message appears “unsuccessful registration please
enter correct details”
Actual Result
ID 5
Priority 4
Title Doctor Registration Successful
Pre-Conditions Login as employee of hospital.
Test Steps Select the doctor registration form.
Enter the form details.
Click “submit”.
Expected Results Message appears “successfully registered”
Actual Result
ID 6
Priority 6
Title Doctor Registration Unsuccessful
Pre-Conditions Login as employee of hospital.
Test Steps Select the doctor registration form.
Enter the wrong form details.
Click “submit”.
Expected Results Message appears “unsuccessful registration please
enter correct details”
Actual Result
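Manual cases like the login cases above can also be automated. Below is a rough unittest sketch of test cases 1 and 2 from the output, where login() is a hypothetical stand-in for the real application call (for example a Selenium step or an API request).

# Automated sketch of the "Login Successful" / "Login Unsuccessful" test cases.
import unittest

VALID_USERS = {"member01": "secret123"}   # illustrative test data only


def login(user_id, password):
    # Hypothetical system under test: returns True when the credentials match.
    return VALID_USERS.get(user_id) == password


class LoginTests(unittest.TestCase):
    def test_login_successful(self):       # corresponds to test case ID 1
        self.assertTrue(login("member01", "secret123"))

    def test_login_unsuccessful(self):     # corresponds to test case ID 2
        self.assertFalse(login("member01", "wrong-password"))


if __name__ == "__main__":
    unittest.main()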
Assignment – 8
Study of any two open source tools in DevOps for Infrastructure Automation, Configuration Management, Deployment Automation, Performance Management, Log Management, and Monitoring (Behat, Watir, Chef, Supergiant, SaltStack, Docker, Hudson, etc.).
-----------------------------------------------------------------------------
DOCKER
What is Docker?
Docker is an open source software platform used to create, deploy and manage virtualized application
containers on a common operating system (OS), with an ecosystem of allied tools. Docker container
technology debuted in 2013. At that time, Docker Inc. was formed to support a commercial edition of
container management software and be the principal sponsor of an open source version. Mirantis
acquired the Docker Enterprise business in November 2019.
Docker gives software developers a faster and more efficient way to build and test containerized
portions of an overall software application. This lets developers in a team concurrently build multiple
pieces of software. Each container contains all elements needed to build a software component and
ensure it's built, tested and deployed smoothly. Docker enables portability for when these packaged
containers are moved to different servers or environments.
Continuously deploying software. Docker technology and strong DevOps practices make it possible to
deploy containerized applications in a few seconds, unlike traditional bulky, monolithic applications
that take much longer. Updates or changes made to an application's code are implemented and
deployed quickly when using containers that are part of a larger continuous integration/continuous delivery pipeline.
Building a microservice-based architecture. When a microservice-based architecture is more
advantageous than a traditional, monolithic application, Docker is ideal for the process of building out this architecture. Developers build and deploy multiple microservices, each inside its own container. Then they integrate them to assemble a full software application.
Migrating legacy applications to a containerized infrastructure. A development team wanting to
modernize a preexisting legacy software application can use Docker to shift the app to a containerized
infrastructure.
Enabling hybrid cloud and multi-cloud applications. Docker containers operate the same way whether
deployed on premises or using cloud computing technology. Therefore, Docker lets applications be
easily moved to various cloud vendors' production and testing environments. A Docker app that uses
multiple cloud offerings can be considered hybrid cloud or multi-cloud.
Other components and tools in the docker architecture include the following:
Docker Hub. This software-as-a-service tool lets users publish and share container-based applications
through a common library. The service has more than 100,000 publicly available applications as well
as public and private container registries.
Trusted Registry. This is a repository that's similar to Docker Hub but with an extra layer of control
and ownership over container image storage and distribution.
Docker Swarm. This is part of the Docker Engine that supports cluster load balancing for Docker.
Multiple Docker host resources are pooled together in Swarm to act as one, which lets users quickly
scale container deployments to multiple hosts.
Universal Control Plane. This is a web-based, unified cluster and application management interface.
Compose. This tool is used to configure multi-container application services, view container statuses, stream log output and run single-instance processes.
Content Trust. This security tool is used to verify the integrity of remote Docker registries, through
user signatures and image tags.
In recent years, Docker was supplanted by Kubernetes for container orchestration. However, most Kubernetes
offerings actually run Docker behind the scenes.
Docker security :-
A historically persistent issue with containers, and Docker by extension, is security. Despite excellent logical isolation, containers still share the host's operating system. An attack or flaw in the
underlying OS can potentially compromise all the containers running on top of the OS. Vulnerabilities
can involve access and authorization, container images and network traffic among containers. Docker
images may retain root access to the host by default, although this is often carried over from third-party
vendors' packages.
Docker has regularly added security enhancements to the Docker platform, such as image scanning,
secure node introduction, cryptographic node identity, cluster segmentation and secure secret
distribution. Docker secrets management also exists in Kubernetes as well as CISOfy Lynis, D2iQ and
HashiCorp Vault. Various container security scanning tools have emerged from Aqua Security, SUSE's
NeuVector and others.
Some organizations run containers within a VM, although containers don't require virtual machines.
This doesn't solve the shared-resource problem vector, but it does mitigate the potential impact of a
security flaw.
Another alternative is to use lower-profile or "micro" VMs, which don't require the same overhead as
a typical VM. Examples include Amazon Firecracker, gVisor and Kata Containers. Above all, the most
common and recommended step to ensure container security is to not expose container hosts to the
internet and only use container images from known sources.
Security was also the main selling point for Docker alternatives, particularly CoreOS' rkt, pronounced
rocket. However, Docker has made strides to improve its security options while, at the same time,
momentum for those container alternatives has faded.
Behat:-
Behat is a test framework for behaviour driven development, written in the PHP programming
language. It is free, open source and is hosted on GitHub. It supports developers by providing
continuous communication, deliberate discovery and test automation.
The ways to develop, deploy, test and rebuild software applications have changed a lot over the last
decade. Traditional software development processes have been enhanced by high-performance
DevOps practices.
DevOps is considered as a software development method by some, while others relate it to the tools
and technologies for continuous delivery and configuration management.
In general, DevOps is an initiative or movement by which the phases of software development, as well
as the involvement of professionals, are integrated with real- time collaboration and communication
with version control.
One of the key goals of using DevOps is to strengthen the link between the software development
process (Dev) and the operations of information technology (Ops) so that the complexities and
timelines of the overall system development life cycle can be reduced.
Installation of the Behat framework :- According to the official documentation, to work with the
Behat framework, the minimum version of PHP is 5.3.3.
$ php behat.phar -V
Here, the .phar file refers to the PHP archive. It is a package format, so that the bundle of files and
libraries can be created and distributed.
If a user wants to search for tutorials on ‘Blockchain Programming’ on the Internet, the sequence would be:
1. The Internet should be connected
2. Open the Web browser
3. Open a search engine
4. Enter the search string or keyword ‘Blockchain Programming’
5. View results