DevSecOps Lecture Day1 Presentation
2
3
4 DevOps is a combination of two words: software Development and Operations. It allows a single team to handle the entire application lifecycle, from development to testing, deployment, and operations. DevOps helps you reduce the disconnect between software developers, quality assurance (QA) engineers, and system administrators.
5
6 DevOps promotes collaboration between the Development and Operations teams to deploy code to production faster, in an automated and repeatable way.
7
8 Now, DevSecOps is a strategy for incorporating security practices into the DevOps process. It fosters and encourages collaboration between security staff and release engineers based on the 'Security as Code' philosophy. Given the ever-increasing vulnerabilities in software, DevSecOps has grown in popularity and significance.
9
10 The name stands for "Development, Security, and Operations." DevSecOps is a continuous process that integrates security into your product pipeline, embedding it throughout the DevOps methodology.
11
12 It is crucial for software development teams to evaluate potential threats and weaknesses. Security professionals must tackle problems as they surface, until a proper fix can be implemented. This incremental methodology guarantees that security flaws are highlighted early.
13
14 Traditionally, security is considered one of the last things during the development phase. If security is kept for the end of the development pipeline, then when security problems surface near release you will find yourself back at the beginning of a lengthy development process.
15
16 Thus, security people must also be involved early in the development process to improve total software security from beginning to end. You must consider infrastructure and vulnerability scanning from the start. By adopting this process, companies can better meet compliance requirements and ensure client and end-user satisfaction.
17
18 ------------------------------------------------------------------------------------------
19 Now we will look at what CI/CD pipelines actually are. Okay, so before moving towards the CI/CD pipeline, let's take a typical example. Let's say I am creating a Java-based web application. What I'll need is an Eclipse IDE, and I'll code the Java application in it.
20 I'll write the JSPs or the servlets, whatever is based on the product requirements. I'll write all my code there.
21
22 And I'll install a local Tomcat server on my machine, and in Eclipse I'll just say: hey, run this as a server on this particular Tomcat server. Eclipse will go ahead and host the web application on localhost, and in the browser we can go to that localhost and the mentioned port number and access the whole application we have designed.
23
24 So now let's talk about how these things typically happen in an enterprise application. The first thing is: creating the project in Eclipse as a single user, the way we just did, doesn't happen in enterprises. There are lots of developers, a whole team working on a project. Let's say you have a code repository; all your developers have access to that code, and each developer is responsible for writing, let's say, some kind of module. When all the modules are completed, the branches are merged: the code written by the different developers is merged, and a final build is made out of that source code. When I say a build is made, I actually mean that the code is compiled.
25
26 So let's say that when I was coding on my single machine there was some kind of dependency that I used. I imported some library for my usage; for example, let's say I downloaded the HTTP client library to make requests. I would usually download the jar file, install it in my Eclipse project, configure the build path, and then I was ready to use that particular library. But now, whenever code is committed to the main branch, the source code repository does not know which particular jar files you have installed in your local Eclipse project.
27
28 So what is actually done is this: you declare all your dependencies somewhere, saying "my project uses these particular libraries," and then you use a dependency manager. For example, on Node.js you use npm, for Java you use Maven, and on Python you can use pip. With these tools you pull in all the dependencies required to successfully compile, or build, your project.
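To make the idea concrete, here is a small sketch of how dependencies are declared and resolved in each of the ecosystems mentioned above; the manifest contents and version number are illustrative, not from the lecture:

```shell
# Each ecosystem keeps dependencies in a manifest file and resolves them
# with its dependency manager (names below are illustrative examples):
#   Java / Maven   : pom.xml          -> mvn package
#   Node.js / npm  : package.json     -> npm install
#   Python / pip   : requirements.txt -> pip install -r requirements.txt

# A minimal requirements.txt for the HTTP-client example above:
cat > requirements.txt <<'EOF'
requests>=2.28
EOF
cat requirements.txt
```

Once the manifest is committed alongside the source code, any build machine can fetch the same libraries, so the repository no longer depends on what happens to be installed in one developer's IDE.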
29
30 Okay, so that covered both the case where a single person is coding and the case where things are going on in a whole team, with every developer working on the code. Now, if we get a little bit into the nitty-gritty details: let's say all your developers have committed their code, the final code is merged, and your main branch is updated. You have already set things up so that whenever there is a commit on the main branch, it automatically checks out the source and makes a build out of it, that is, an executable file from that particular source code.
31
32 Once the executable has been made, it is ready to test, and if the tests pass it needs to be deployed to the server. This is the kind of automation that CI/CD provides you. The first part is called continuous integration (CI): whenever your code is committed, you can set up hooks, perform pre-build and post-build automation tasks, make a final executable, and use that executable to deploy to the server. If you do the whole process in an automated fashion, the first stages (commit, build, test) make up the CI pipeline, and the later stages make up the CD part: continuous integration and continuous deployment, or continuous delivery.
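The stages just described can be sketched as a plain shell script, with each stage stubbed out as a function; the tool names in the comments are only examples, not a prescribed setup:

```shell
#!/bin/sh
# A minimal sketch of a CI/CD pipeline: each stage is a stub that just
# announces what a real pipeline would do at that point.
set -e

checkout()  { echo "stage: checkout - fetch the latest commit from main"; }
build()     { echo "stage: build    - compile the source into an executable (e.g. mvn package)"; }
run_tests() { echo "stage: test     - run the automated test suite"; }
deploy()    { echo "stage: deploy   - push the executable to the server"; }

# CI part: runs automatically on every commit to the main branch
checkout
build
run_tests

# CD part: continuous delivery would insert a manual approval before this
# step; continuous deployment runs it automatically
deploy
```

In a real setup these functions are replaced by the build server's job definitions (for example Jenkins stages), but the ordering and the CI/CD split are exactly as shown.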
33
34 Now, the only simple difference between continuous delivery and continuous deployment is this: if you take your final executable and require a manual intervention to approve the build, that comes under continuous delivery. If you automate things in such a fashion that no manual intervention is required to approve the build, then it is called continuous deployment.
35
36 So the CI/CD pipeline makes sure that the developers and the product team can concentrate on writing quality code, while the operations team has already built the kind of pipeline where, whenever a build is needed, they can easily make one and deploy it to the server using the CI/CD pipeline.
37
38 ------------------------------------------------------------------------------------------
39
40 Now we will look at the different DevOps tools that are usually in practice in the enterprise, and where they are actually used in the different stages of the SDLC process. Then we will move on to learn how we can find opportunities to integrate security checks into the different stages of the pipeline. Then we'll have a demo of how a basic pipeline works: a basic build, a basic CI/CD pipeline. And then we will get to know the aim, the final goal, of this training program.
41
42 Okay, so here is the list of the many tools that come under DevOps, and it's really not surprising to be intimidated by so many tools when we don't even know how most of them actually work. But, being security professionals, I think it's not necessary to learn or master all these tools; one should, however, have a fair understanding of when these different tools come into practice and how we can find opportunities to integrate security within them. So if we talk about the different stages in the SDLC, at the top you can see collaboration, then the build stage, then test, deploy, and finally running the web application or product.
43
44 Collaboration:
45 ------------------
46 Okay, so what does collaboration actually mean here? There is a set of tools in DevOps that helps a team communicate with each other, share knowledge, and better plan the product, the process, and the various milestones around it. All of these things come under the collaboration stage. If we subdivide it, this can be further divided into application lifecycle management. Here we have tools like JIRA, which is a bug-tracking tool. JIRA is basically used to create tickets for various tasks, and those tickets are used to track the progress of each separate task. It's basically a project management tool that lets you track your bugs, monitor your progress, and get some kinds of metrics. Similarly, Trello is one of the free tools that you can use for project planning and task management.
47
48 Then when we come to communication and ChatOps, there are things like Slack channels; we can have Microsoft Teams to communicate between teams; and for knowledge-sharing purposes we can have Confluence or GitHub Pages. All these tool choices depend on where you are working. If you are working on a big project, with a large budget to implement many of the paid tools, then surely the team will go with those. If you are working on a small-budget project and have just started adopting agile methodologies, you can simply choose some of the open-source tools that achieve the same kinds of tasks. So it's all about collaboration: how the team is going to interact, among each of the different resources within the team.
49
50 Build
51 -------
52 Now the second one is the build stage. In the build stage, let's say all the developers are working: how are they actually going to contribute to the same project while sitting in different locations, maybe in different offices or different geographical areas? Here SCM comes to the rescue. SCM refers to source control management. It acts as a repository where your final code is stored; your whole team can work on that particular repository and contribute each of their changes to it. Once everyone has committed and collaborated on the code, a final build can be made out of it. So the SCM part here represents source control management, and one can use GitHub, GitLab, Bitbucket; there are lots of options for the same.
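The branch-and-merge workflow described above can be demonstrated locally with git; the repository, file, and branch names here are illustrative:

```shell
# A small local demonstration of the team workflow: each developer writes
# a "module" on a branch, then merges it back into the main branch.
set -e
rm -rf demo-repo
git init -q demo-repo
cd demo-repo
git config user.email "dev@example.com"
git config user.name "Dev"

echo "core module" > core.txt
git add core.txt
git commit -qm "initial commit on the main branch"

# A developer writes their module on a separate branch
git checkout -qb feature/login
echo "login module" > login.txt
git add login.txt
git commit -qm "add login module"

# When the module is complete, the branch is merged back
git checkout -q -
git merge -q feature/login

ls    # both modules are now present on the main branch
cd ..
```

With a hosted SCM such as GitLab the merge happens through a merge request instead of a local `git merge`, but the underlying branch model is the same.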
53
54 Now, once we have the code repository in place, we need some kind of automation tool that actually takes the source code and makes an executable out of it, and that executable can be used to deploy to, let's say, your UAT servers, production servers, or anywhere else. This part comes under CI, that is, continuous integration, and there are various tools that give you the ability to write different build automation scripts based upon the requirements of your product.
55
56 Now, for our case, in the SCM section we will be using GitLab, and for the continuous integration part we will be using the Jenkins server. As I said, it doesn't really depend upon the tools we are using; it depends upon the approach we are following to integrate the various pieces. Okay, so CI refers to the tasks related to build automation, and then comes the build part, which simply refers to collecting all the dependencies. If you are using Java and, let's say, you have used thousands of third-party libraries in your project, then instead of going one by one and copy-pasting all the jar files into production, you can simply use Maven. Maven will take on the heavy-duty task of fetching all the libraries and providing them to your build, so that your code doesn't produce any errors and runs successfully. Similarly, database management can also be covered in the build stage: how the database is going to be managed throughout the product's lifecycle.
57
58 Test
59 -------
60 Now, once our build cycle is complete, we come to the test stage, and it depends on what kind of product you are building. Let's say you have a web application: you can have unit tests, functional tests based on Selenium, and, for security purposes, the Gauntlt framework integrated into your test environment. For performance testing you can use, let's say, JMeter. So the test stage basically covers most of the tasks that the QA engineering team does: the functional test cases, the UI test cases, and any test cases that run after the build has been made from the source code.
61
62 Deploy
63 -------
64 Once all the tests have passed and there are no failures, the executable is finally deployed. These are some of the things that come under deployment: Chef, Ansible, and Puppet are all configuration management frameworks; Vagrant is used to automate VirtualBox tasks; and there is also Terraform. All these tools help in actually deploying the product in an automated fashion. Once you have the final build ready and the executable that needs to be deployed, you also need to store it in some kind of artifact repository where you have a record of all the versions that were in your production; that can be Docker Hub, a Docker registry, and so on. Now, when you have decided how to deploy the product, it comes to the production side, and there you need a cloud: one of the cloud providers, let's say AWS, GCP, or Azure. Then you use some kind of orchestration platform, let's say Kubernetes or Docker Swarm, to actually deploy your web application into production, making scalability and management tasks easy.
65
66 Monitor
67 -------
68 So now, once your application is deployed to production and running, what you need is monitoring of the application. These are the different tools in DevOps that help monitor the different servers, the applications, the health states of the different servers, and so on. So this is an overview of how things work under DevOps and how the DevOps culture helps the production teams, the product, and the developers to actually build and deploy their product in a very easy and automated fashion.
69
70 Okay, so if we talk about what needs to be understood as a security professional: when we talk about DevSecOps, we need to think about which particular step needs a security check, and then we need to find out how that security check can be integrated. Now, there can be a number of tools that perform the same function, and it absolutely depends upon the criteria, or the product we are designing, what kind of tools we want to use. If it is an organization where we can't use open-source tools, then we definitely need to go with purchased, commercial licenses. But for the purpose of this training we will be using open-source tools for most of the tasks.
71
72 The other thing is, we need to find out where the opportunity actually lies to automate things and make them simple. For example, many security tools produce false positives, and that is something that needs manual analysis. You can't rely on tools alone and just say, "Okay, fail the build if any critical vulnerability is found." Some kind of manual intervention is required to analyze the false positives, especially at the ground-zero stage. And when I say ground zero, I mean the initial stage, when we are just starting to adopt DevSecOps methodologies in our lifecycle.
73
74 Another thing is that metrics are important. There is no point in integrating many security checks if, when I ask you "Okay, how many critical vulnerabilities have you found in the last six months?", you are not able to answer that question; then it doesn't make sense to spend so much on the DevSecOps process. You should have some kind of measurement, some kind of metric, set up that actually monitors progress: how many vulnerabilities are being found through this automated process, how many have been remediated, and what the next steps will look like.
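As a toy illustration of the kind of metric described above, the counts can be tallied straight from a scanner export; the CSV format and the numbers below are made up for the example:

```shell
# Tally findings by severity and status from a (hypothetical) scanner export.
cat > findings.csv <<'EOF'
id,severity,status
1,critical,open
2,high,remediated
3,critical,remediated
4,medium,open
EOF

echo "critical found:   $(grep -c ',critical,' findings.csv)"
echo "total remediated: $(grep -c ',remediated$' findings.csv)"
```

In practice such numbers would come from the pipeline's scan reports and be tracked over time, so you can answer exactly the six-month question posed above.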
75
76 Then the last point is: even if you integrate many security checks into your pipeline, that definitely does not mean they can replace testing. If you ask someone, "How can you make sure that your product is vulnerability-free?", the easy answer is, "We just send our product for SAST, DAST, and a pentest when it's completed." The thing to understand here is that security is not something you apply at the end just for evaluation purposes; it is something that should be built into the product itself, and this training session is dedicated to understanding this particular concept: how we can build security into our secure software development lifecycle process.
77
78 ==========================================================================================
79
80 Today we want to discuss Docker. It is important to know the concept of Docker when we are discussing DevOps.
81
82 Now, before we actually go ahead and explain what exactly Docker is, let's quickly understand the evolution of application deployment.
83 So, in the past, let's say a company had three applications that they wanted to deploy. They provisioned one physical server. Remember, this is before the evolution of virtual machines.
84
85 So let's say that they deploy all of the applications (click, click, click) on this physical server. Now, these applications could also have dependencies. Let's say two of the applications have the same set of dependencies (click, click): these could be modules, or libraries pertaining to the OS itself. And maybe the other application has a different set of dependencies (click).
86
87 Now all three applications are running on this physical server. Let's say you make a change to App2. As part of that change, you are also changing one of its dependencies.
88
89 Now, since App1 has the same set of dependencies, the change you made for App2 has caused an issue with the App1 application. This was a very common scenario.
90
91 Making changes to one application on a machine hosting a set of applications was always an issue, because you never knew what effect a change to one application would have on the other applications running on the same physical server.
92 -----------------------------------------------------------------------------------
93
94 We then came to the evolution of something known as virtual machines. On one physical box you could create isolated environments: different virtual machines (click).
95
96 Now, the benefit of this is that each virtual machine could run a different operating system. On the underlying physical server you could install hypervisor-based tools; maybe you could install a tool such as VMware (click), and on this you could host a set of virtual machines.
97
98 These virtual machines (click) do a good job of utilizing the underlying resources of the physical server: they make good use of the available CPU, the available RAM, and the internal or external storage of the physical server.
99
100 You can then deploy each application separately (click), on its own virtual machine. Even if you had to make a change to one application, it would not break the other applications running on the same physical box, because each of them would be running in a separate virtual machine. So this was the next evolution: virtual machines that could distribute, or isolate, your application deployments.
101
102 -------------------------------------------------------------------------------
103 hypervisor - is software that creates and runs virtual machines (VMs)
104
105 And now we have the next generation of application deployment, which is container deployment. Again you have your physical box; on it you again install your hypervisor, your virtualization tool set, and on this you can again host your virtual machines.
106
107 Now, on the virtual machines themselves you can deploy your application within, or as, a container. Your application will be embedded inside a container, and this container can run on a virtual machine.
108
109 Now, what are the core benefits of this? Well, firstly, the container is basically, you know, a package of your OS (operating system), any system libraries your application requires to run, and then the app itself.
110
111 Now, you can run many containers on a virtual machine. So instead of having separate virtual machines for each application, you can have your applications packaged as containers, and all of these containers can run on one virtual machine. Instead of one application per virtual machine, you just run the containers on a single virtual machine.
112
113 If you look at the size of the container itself, you might wonder: what's the difference between running the application as a container and running it in its own virtual machine? Since the container itself contains an OS, wouldn't the container size be big?
114
115 Well... no!
116
117 If we look at the size of the OS in a container, it can be just a few megabytes. The OS that is part of your container is a kind of lightweight OS: just the bare essentials of an operating system required to run your application. So when you compare the size of a container with the size of a full OS, it's much less.
118
119 Apart from that, we have the advantage of running multiple containers on a single virtual machine. And the other benefit of containerizing your applications is that you can actually move these containers to different virtual machines.
120
121 Now, how can you accomplish this? How do containers run on a virtual machine? Well, you have to install (click) something else on the virtual machine, and that's known as a container tool set.
122
123 Now, the most popular container tool set is Docker (click). You have to install the Docker Engine on the virtual machines, and this Docker Engine is then responsible for running your containers.
124
125 ==========================================================================================
126
127 Docker is an open platform for developing, shipping, and running applications. Docker
enables you to separate your applications from your infrastructure so you can deliver
software quickly. With Docker, you can manage your infrastructure in the same ways you
manage your applications. By taking advantage of Docker’s methodologies for shipping,
testing, and deploying code quickly, you can significantly reduce the delay between
writing code and running it in production.
128
129 Docker provides the ability to package and run an application in a loosely isolated
environment called a container. The isolation and security allow you to run many
containers simultaneously on a given host. Containers are lightweight and contain
everything needed to run the application, so you do not need to rely on what is
currently installed on the host. You can easily share containers while you work, and be
sure that everyone you share with gets the same container that works in the same way.
130
131 Docker has a client-server architecture. Let us understand this in a very easy way. In Docker, the command-line interface is the client, and we have the Docker host, or the Docker daemon, which holds all the containers and images. The Docker server receives instructions from the Docker client in the form of commands or REST API requests, and all the components of the Docker client and server together form the Docker Engine. So the Docker daemon, or server, receives commands from the Docker client through the REST API or the command-line interface, and the Docker client and the Docker daemon can be present on the same platform or on different machines.
132
133 A Docker registry stores Docker images. Docker Hub is a public registry that anyone can
use, and Docker is configured to look for images on Docker Hub by default. You can even
run your own private registry.
134
135 When you use the docker pull or docker run commands, the required images are pulled
from your configured registry. When you use the docker push command, your image is
pushed to your configured registry.
136
137 -----------------------------------------------------------------------------
138
139 Let's look at the general workflow of Docker. In the Docker workflow, a developer defines the application and all its dependencies and requirements in a file called a Dockerfile. This Dockerfile can be used to create Docker images. A Docker image holds the application along with its requirements and dependencies, and when you run a Docker image you get Docker containers.
140
141
142 So Docker containers are the runtime instances of a Docker image, and these images can also be stored in an online cloud repository called Docker Hub. If you go to Docker Hub, you will find a lot of publicly available images, and you can store your own Docker images as well.
143
144 You can also store your Docker image in your own registry or version control system. These images can be pulled to create containers in any environment: you can create a Docker container in a test environment or any other environment, and you can be sure that the application will run in the same way inside Docker containers.
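A minimal sketch of this workflow, assuming a hypothetical app and the alpine base image; the build and run commands are shown as comments because they need a running Docker engine:

```shell
# Write a Dockerfile that defines the application and its dependencies.
cat > Dockerfile <<'EOF'
# A small base OS layer (just a few megabytes, as discussed above)
FROM alpine:3.19
# Dependencies would be installed here, for example:
# RUN apk add --no-cache python3
# The application itself
COPY app.sh /app.sh
CMD ["/bin/sh", "/app.sh"]
EOF

# The (hypothetical) application:
echo 'echo "hello from a container"' > app.sh

# Build an image from the Dockerfile, then run a container from the image:
#   docker build -t myapp:1.0 .
#   docker run --rm myapp:1.0
cat Dockerfile
```

The same image built here could be pushed to Docker Hub or a private registry and pulled into any other environment, which is exactly why the application behaves the same everywhere.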
145 ---------------------------------------------------------------------------
146 Next we will do a very interesting thing. We will go to Fresco Play and do some hands-on with basic Docker commands.
147
148 These basic commands are very frequently used Docker commands. So let's get started, and let me open Fresco Play.
149
150
151 So the very first command is ----> docker version
152 This command gives us information about the Docker client and the Docker server: we get the version of each. This is going to be a very handy command whenever you want to look at your client and server versions.
153
154 The other command is -----> docker --help
155 Now, this is going to be a very useful command in your Docker journey, because you can use it to get information about any other command.
156
157 Now let us come to Docker images. The very first command is ----> docker images
158 This gives us the list of all the images we have; of course, we do not have anything yet, so we use the next command to pull an image.
159
160 docker pull ubuntu
161 Here I say docker pull, and with it I can get an image of any application. Let's say I want to get an Ubuntu image. You can see it will use the latest tag by default. Let us just wait for this download to complete. And yes, it is done now, so let me just clear the screen.
162
163 If I now say ------> docker images
164 you can see that it now shows us ubuntu.
165
166 Okay, let me clear the terminal. The other command is ------> docker rmi
167 I will run this command to remove the image: docker rmi <image_id>
168 Now if I say docker images, I don't have any images.
169
170 Okay, let us now come to containers. For containers, the very first command is -----> docker ps
171 And I can say ----> docker ps --help to look at how this command works. It is used to list the containers.
172 Spawn the hello-world Container
173 docker run hello-world
174
175 When I run this, it will say it is unable to find the image locally, start pulling it from the library on Docker Hub, and then start the container. So after downloading and extracting, you can see the output.
176
177 If you say ---------> docker container ls
178 We are not able to see any containers here, because by default it shows only running containers. Our hello-world container has already finished its work and stopped.
179
180 To view stopped containers as well, pass the -a argument --------> docker container ls -a
181 The same can be done using the shorthand command -----------> docker ps -a
182
183 Remove Container
184 To remove a container we can use the docker container rm <containerName_OR_containerId> command. For example, if the container name is abc, you can remove it using docker container rm abc
185
186 Now let's run the nginx image in the background.
187 docker run --name nginxservice -d nginx
188 The -d flag runs the container detached, in the background; an ID will be generated, and this ID will be tagged to the container.
189
190 docker container ls
191 docker stop <container_id>
192 docker rm nginxservice
193
194 docker run --name nginxservice -p 80:80 -d nginx
Here the -p 80:80 flag maps port 80 on the host to port 80 inside the container, so the nginx welcome page is reachable from the host.
195
196 Let's start multiple nginx servers on different host ports (both map to port 80 inside their containers):
197
198 docker run --name nginxservice1 -p 8085:80 -d nginx
199 docker run --name nginxservice2 -p 8086:80 -d nginx
200
201 docker stop $(docker ps -a -q)
docker ps -a -q prints the IDs of all containers; passing them to docker stop stops everything at once.
202
203 docker container prune
docker container prune then removes all stopped containers in one go.
204
205
206 Docker is a platform that turns out to be a perfect fit for the DevOps ecosystem. It was developed for software companies struggling to keep pace with changing technology, business, and customer requirements. The benefits Docker offers to the DevOps environment have made it an irreplaceable tool in the toolchain.
207
208 The reason Docker is so good for DevOps lies in its benefits and use cases for containerizing applications, which support development and quick releases. DevOps is primarily used to overcome 'Dev' and 'Ops' problems, and Docker seems to solve most of them, the main one being that it works on any machine. Thus, it allows all the teams to collaborate and work effectively and efficiently.
209 Docker allows you to create consistent, reproducible development, staging, and production environments, thereby giving you seamless control over all changes.
210
211
212
213