Experiment 10 CCL


Experiment 10

Name: Riya Singh    Roll No.: 59

Aim
Study and implement containerization using Docker.
Theory
1. What is containerization?
Containerization is the packaging together of software code with all of its necessary
components, such as libraries, frameworks, and other dependencies, so that they are isolated in
their own "container."

This is so that the software or application within the container can be moved and run
consistently in any environment and on any infrastructure, independent of that
environment or infrastructure’s operating system. The container acts as a kind of bubble
or a computing environment surrounding the application and keeping it independent of its
surroundings. It’s basically a fully functional and portable computing environment.

Containers are an alternative to developing an application for a single platform or operating
system, which made moving the application difficult because the code might not be compatible
with the new environment. That incompatibility could result in bugs, errors, and glitches that
needed fixing (meaning more time, less productivity, and a lot of frustration).

By packaging up an application in a container that can be moved across platforms and
infrastructures, that application can be used wherever you move it because it has everything it
needs to run successfully within it.

2. Difference between containers and VMs

1. A virtual machine (VM) is a piece of software that lets you install other software inside it,
so you control it virtually rather than installing the software directly on the computer. A
container, in contrast, is software that allows different functionalities of an application to run
independently.
2. Applications running on different VMs on the same system can use different operating
systems, while applications running in a container environment share a single OS.
3. A VM virtualizes the entire computer system, while a container virtualizes the operating
system only.
4. VMs are very large in size, while containers are very light, typically a few megabytes.
5. A VM takes minutes to start because of its large size, while a container starts in a few
seconds.
6. VMs use a lot of system memory, while containers require very little memory.
7. VMs are more secure, while containers are less secure.
8. VMs are useful when we need all of the OS's resources to run various applications, while
containers are useful when we want to maximize the number of running applications on
minimal servers.
9. Examples of VMs: KVM, Xen, VMware. Examples of containers: RancherOS, PhotonOS,
and containers by Docker.

3. Security in containers
The process of securing containers is continuous. It should be integrated into your development
process, automated to reduce the number of manual touch points, and extended into the
maintenance and operation of the underlying infrastructure. This means protecting your build
pipeline, container images, and the runtime host, platform, and application layers. Implementing
security as part of the continuous delivery life cycle means your business will mitigate risk and
reduce vulnerabilities across an ever-growing attack surface.
When securing containers, the main concerns are:
● The security of the container host
● Container network traffic
● The security of your application within the container
● Malicious behavior within your application
● Securing your container management stack
● The foundation layers of your application
● The integrity of the build pipeline
The goal of cybersecurity is to ensure that whatever you build continuously works as intended,
and only as intended.
Related names
Get to know some of the names businesses are using for container needs: Docker®,
Kubernetes®, Amazon Web Services™ (AWS), and Microsoft®.
Securing Docker
Before you start securing your containers, you need to know the key players in the space.
Docker, a leader in the containerization market, provides a container platform to build, manage,
and secure applications. Docker enables customers to deploy both traditional applications and the
latest microservices anywhere. As with any other container platform, you need to ensure you
have proper protection.
Securing Kubernetes
Kubernetes is the next big name to get to know. Kubernetes provides a portable, extensible,
open-source platform for managing containerized workloads and services. While Kubernetes
offers security features, you need a dedicated security solution to keep you secure, as there has
been an increase in attacks on Kubernetes clusters.
Amazon Web Services and container security
Next up, we have Amazon Web Services (AWS). AWS understands the need for containers to
empower developers to deliver applications faster and more consistently. That is why it offers
Amazon Elastic Container Service (Amazon ECS), a scalable, high-performance container
orchestration service that supports Docker containers. It removes the dependency on managing
your own virtual machines and container environment and allows you to run and scale
containerized applications on AWS with ease. However, as with the other key players above,
you need security to gain the full benefits of this service.
Securing Microsoft Azure Container Instances
Last, but not least, we have Microsoft® Azure™ Container Instances (ACI). This solution
empowers developers to deploy containers on the Microsoft® Azure™ public cloud without the
need to run or manage the underlying infrastructure. You can simply spin up a new container
using the Microsoft® Azure™ portal, and Microsoft then automatically provisions and scales
the underlying compute resources. Azure Container Instances allows for great speed and agility,
but it needs to be secured to properly reap all of the benefits.
Now that you know the major players, let's get into how to secure them.



Securing the host
Securing the host starts with selecting its operating system. Whenever possible, you should use
an operating system distribution that is optimized to run containers. If you're using stock Linux®
distributions or Microsoft® Windows®, make sure to disable or remove unnecessary services
and harden the operating system in general. Then, add a layer of security and monitoring tools to
ensure that your host is running as you expect. Tools like application control or an intrusion
prevention system (IPS) are very useful in this situation.
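For example, on a stock Linux host you might check what is actually listening on the network
and switch off services you don't need. This is a minimal sketch; the service name below is only
an example of something you might find and disable:
$ ss -tlnp                                   # list processes listening on TCP ports
$ sudo systemctl disable --now cups.service  # stop and disable an unneeded service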
Once your container is running in production, it will need to interact with other containers and
resources. This internal traffic must be monitored and secured by ensuring all network traffic
from your containers passes through an IPS. This changes how you deploy the security control.
Instead of implementing a small number of very large traditional IPS engines on the perimeter,
you would implement the IPS on every host, which allows for all traffic to be effectively
monitored without significantly impacting performance.
Securing the application in the container
Once your container is running in production, it is constantly processing data for your
application, generating log files, caching files, etc. Security controls can help ensure that these
are ordinary activities and not malicious. The real-time anti-malware controls running on the
content in the container are critical to success.

An IPS plays a role here as well, in a usage pattern called virtual patching. If a vulnerability is
exposed remotely, the IPS engine can detect attempts to exploit it and drop packets to protect
your application. This buys you the time needed to address the root cause in the next version of
that container instead of pushing out an emergency fix.
Monitoring your application
When deploying your application into a container, a runtime application self-protection (RASP)
security control can help. These security controls run within your application code and often
intercept or hook key calls within your code. Besides security features like Structured Query
Language (SQL) monitoring, dependency checking and remediation, URL verification, and
other controls, RASP can also solve one of the biggest challenges in security: root-cause
identification.
By being positioned within the application code, these security controls can help connect the dots
between a security issue and the line of code that created it. That level of awareness is difficult to
compete with and creates a huge boost in your security posture.
Securing your container management stack
From a security perspective, the management stack that helps coordinate your containers is often
overlooked. Any organization that is serious about its container deployment will inevitably end
up with two critical pieces of infrastructure to help manage the process: a private container
registry, such as Amazon ECR, and Kubernetes to help orchestrate container deployment.

The combination of a container registry and Kubernetes allows you to automatically enforce a
set of quality and security standards for your containers before, and during, their deployment
into your environment.
Registries simplify sharing containers and help teams build on each other’s work. However, to
ensure that each container meets your development and security baselines, you need an
automated scanner. Scanning each container for known vulnerabilities, malware, and any
exposed secrets before it is made available in the registry helps to reduce issues downstream.
Additionally, you’ll want to make sure the registry is well protected. It should be run on a
hardened system or a very reputable cloud service. Even in the service scenario, you need to
understand the shared responsibility model and implement a strong role-based approach to
accessing the registry.
On the orchestration side, once Kubernetes is running and deployed within your environment, it
offers a significant number of advantages that help ensure that your teams get the most out of
your environment. Kubernetes also provides the ability to implement a number of operational
and security controls, such as Pod (cluster level resources) and network security policies,
allowing you to enforce various options to meet your risk tolerance.
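As a small illustration (standard kubectl usage, not something prescribed by the text above), you
can audit which network policies and permissions are actually in effect on a cluster:
$ kubectl get networkpolicies --all-namespaces   # list enforced network policies
$ kubectl auth can-i create pods --as=system:serviceaccount:default:default   # RBAC check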
Building your application on a secure foundation: container scanning
You need a container image scanning workflow in place to ensure that the containers you used as
building blocks are reliable and secure against common threats. This class of tools will scan the
contents of a container, looking for issues before they are used as a building block for your
application. It will also perform a final set of checks before a container is deployed to
production.
When properly implemented, scanning becomes a natural part of your coding process. It's a fully
automated process that can quickly and easily identify any issues introduced as you develop your
application and its containers.
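As one concrete, hedged example of such a scanner, the open-source tool Trivy can be pointed at
a locally built image before it is pushed (this assumes Trivy is installed; the image name is the
one built later in this experiment):
$ trivy image yourusername/catnip   # report known CVEs in the image's packages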
Ensuring the integrity of the build pipeline
Attackers have started to shift their attacks towards earlier stages of your continuous
integration/continuous delivery (CI/CD) pipeline. If an attacker successfully compromises your
build server, code repository, or developer workstations, they can reside in your environment for
significantly longer. You need a strong set of security controls that are kept up to date.
Implement a strong access control strategy throughout the pipeline, starting at your code
repository and branching strategy, extending all the way to the container repository. You need to
ensure that you implement the principle of least privilege – only providing as much access as
needed to accomplish the required tasks – and audit that access regularly.
Tying things together
Securing your containers requires a comprehensive approach to security. You must ensure that
you’re addressing the needs of all teams within your organisation. Make sure your approach can
be automated to fit your DevOps processes, and that you can meet deadlines and deliver
applications quickly while protecting each group. Security can no longer be left out or show up
at the last minute with demands to change your workflow. Building trusted security controls and
automated processes from the start addresses security concerns and makes it easier to bridge the
gap between teams.



Activity
1. Demonstration of creating, finding, building, installing, and running
Linux/Windows application containers inside local machine or cloud platform.
Prerequisites
There are no specific skills needed for this tutorial beyond a basic comfort with the command
line and using a text editor. This tutorial uses git clone to clone the repository locally. If you
don't have Git installed on your system, either install it or remember to manually download the
zip files from GitHub. Prior experience in developing web applications will be helpful but is not
required. As we proceed further along the tutorial, we'll make use of a few cloud services. If
you're interested in following along, please create an account on each of these websites:
● Amazon Web Services
● Docker Hub
Setting up your computer
Getting all the tooling setup on your computer can be a daunting task, but thankfully as Docker
has become stable, getting Docker up and running on your favorite OS has become very easy.
Until a few releases ago, running Docker on OSX and Windows was quite a hassle. Lately
however, Docker has invested significantly into improving the on-boarding experience for its
users on these OSes, thus running Docker now is a cakewalk. The getting started guide on
Docker has detailed instructions for setting up Docker on Mac, Linux and Windows.
Once you are done installing Docker, test your Docker installation by running the following:
$ docker run hello-world

Hello from Docker!


This message shows that your installation appears to be working correctly.
...

HELLO WORLD
Playing with Busybox
Now that we have everything setup, it's time to get our hands dirty. In this section, we are going
to run a Busybox container on our system and get a taste of the docker run command.
To get started, let's run the following in our terminal:
$ docker pull busybox
Note: Depending on how you've installed docker on your system, you might see a permission
denied error after running the above command. If you're on a Mac, make sure the Docker engine
is running. If you're on Linux, then prefix your docker commands with sudo. Alternatively, you
can create a docker group to get rid of this issue.
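For reference, the docker group setup mentioned above takes just a couple of commands (these
follow the official Docker post-install steps); log out and back in, or use newgrp, for the new
membership to take effect:
$ sudo groupadd docker
$ sudo usermod -aG docker $USER
$ newgrp docker   # pick up the new group in the current shell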
The pull command fetches the busybox image from the Docker registry and saves it to our
system. You can use the docker images command to see a list of all images on your system.
$ docker images



REPOSITORY    TAG       IMAGE ID       CREATED       VIRTUAL SIZE
busybox       latest    c51f86c28340   4 weeks ago   1.109 MB
Docker Run
Great! Let's now run a Docker container based on this image. To do that we are going to use the
almighty docker run command.
$ docker run busybox
$
Wait, nothing happened! Is that a bug? Well, no. Behind the scenes, a lot of stuff happened.
When you call run, the Docker client finds the image (busybox in this case), loads up the
container and then runs a command in that container. When we run docker run busybox, we
didn't provide a command, so the container booted up, ran an empty command and then exited.
Well, yeah - kind of a bummer. Let's try something more exciting.
$ docker run busybox echo "hello from busybox"
hello from busybox
Nice - finally we see some output. In this case, the Docker client dutifully ran the echo command
in our busybox container and then exited it. If you've noticed, all of that happened pretty quickly.
Imagine booting up a virtual machine, running a command and then killing it. Now you know
why they say containers are fast! Ok, now it's time to see the docker ps command. The docker ps
command shows you all containers that are currently running.
$ docker ps
CONTAINER ID    IMAGE    COMMAND    CREATED    STATUS    PORTS    NAMES
Since no containers are running, we see a blank line. Let's try a more useful variant: docker ps -a
$ docker ps -a
CONTAINER ID    IMAGE         COMMAND     CREATED          STATUS                      PORTS    NAMES
305297d7a235    busybox       "uptime"    11 minutes ago   Exited (0) 11 minutes ago            distracted_goldstine
ff0a5c3750b9    busybox       "sh"        12 minutes ago   Exited (0) 12 minutes ago            elated_ramanujan
14e5bd11d164    hello-world   "/hello"    2 minutes ago    Exited (0) 2 minutes ago             thirsty_euclid
So what we see above is a list of all containers that we ran. Do notice that the STATUS column
shows that these containers exited a few minutes ago.
You're probably wondering if there is a way to run more than just one command in a container.
Let's try that now:
$ docker run -it busybox sh
/ # ls



bin dev etc home proc root sys tmp usr var
/ # uptime
05:45:21 up 5:58, 0 users, load average: 0.00, 0.01, 0.04
Running the run command with the -it flags attaches us to an interactive tty in the container.
Now we can run as many commands in the container as we want. Take some time to run your
favorite commands.
Danger Zone: If you're feeling particularly adventurous you can try rm -rf bin in the container.
Make sure you run this command in the container and not in your laptop/desktop. Doing this will
make any other commands like ls, uptime not work. Once everything stops working, you can exit
the container (type exit and press Enter) and then start it up again with the docker run -it
busybox sh command. Since Docker creates a new container every time, everything should start
working again.
That concludes a whirlwind tour of the mighty docker run command, which would most likely be
the command you'll use most often. It makes sense to spend some time getting comfortable with
it. To find out more about run, use docker run --help to see a list of all flags it supports. As we
proceed further, we'll see a few more variants of docker run.
Before we move ahead though, let's quickly talk about deleting containers. We saw above that
we can still see remnants of the container even after we've exited by running docker ps -a.
Throughout this tutorial, you'll run docker run multiple times and leaving stray containers will
eat up disk space. Hence, as a rule of thumb, I clean up containers once I'm done with them. To
do that, you can run the docker rm command. Just copy the container IDs from above and paste
them alongside the command.
$ docker rm 305297d7a235 ff0a5c3750b9
305297d7a235
ff0a5c3750b9
On deletion, you should see the IDs echoed back to you. If you have a bunch of containers to
delete in one go, copy-pasting IDs can be tedious. In that case, you can simply run -
$ docker rm $(docker ps -a -q -f status=exited)
This command deletes all containers that have a status of exited. In case you're wondering, the
-q flag only returns the numeric IDs and the -f flag filters output based on the conditions
provided. One last thing that'll be useful is the --rm flag, which can be passed to docker run and
automatically deletes the container once it exits. For one-off docker runs, the --rm flag is very
useful.
In later versions of Docker, the docker container prune command can be used to achieve the
same effect.
$ docker container prune
WARNING! This will remove all stopped containers.
Are you sure you want to continue? [y/N] y
Deleted Containers:
4a7f7eebae0f63178aff7eb0aa39f0627a203ab2df258c1a00b456cf20063
f98f9c2aa1eaf727e4ec9c0283bcaa4762fbdba7f26191f26c97f64090360



Total reclaimed space: 212 B
Lastly, you can also delete images that you no longer need by running docker rmi.
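For example, to remove the busybox image once you're done with it, or to clean up all dangling
(untagged) images in one go:
$ docker rmi busybox
$ docker image prune   # removes all dangling images, after a confirmation prompt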
Docker Images
We've looked at images before, but in this section we'll dive deeper into what Docker images are
and build our own image! Lastly, we'll also use that image to run our application locally and
finally deploy on AWS to share it with our friends! Excited? Great! Let's get started.
Docker images are the basis of containers. In the previous example, we pulled the Busybox
image from the registry and asked the Docker client to run a container based on that image. To
see the list of images that are available locally, use the docker images command.
$ docker images
REPOSITORY                      TAG         IMAGE ID       CREATED        VIRTUAL SIZE
prakhar1989/catnip              latest      c7ffb5626a50   2 hours ago    697.9 MB
prakhar1989/static-site         latest      b270625a1631   21 hours ago   133.9 MB
python                          3-onbuild   cf4002b2c383   5 days ago     688.8 MB
martin/docker-cleanup-volumes   latest      b42990daaca2   7 weeks ago    22.14 MB
ubuntu                          latest      e9ae3c220b23   7 weeks ago    187.9 MB
busybox                         latest      c51f86c28340   9 weeks ago    1.109 MB
hello-world                     latest      0a6ba66e537a   11 weeks ago   960 B
The above gives a list of images that I've pulled from the registry, along with ones that I've
created myself (we'll shortly see how). The TAG refers to a particular snapshot of the image and
the IMAGE ID is the corresponding unique identifier for that image.
For simplicity, you can think of an image as akin to a git repository - images can be committed
with changes and can have multiple versions. If you don't provide a specific version number, the
client defaults to latest. For example, you can pull a specific version of the ubuntu image:
$ docker pull ubuntu:18.04
To get a new Docker image you can either get it from a registry (such as the Docker Hub) or
create your own. There are tens of thousands of images available on Docker Hub. You can also
search for images directly from the command line using docker search.
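For instance, to look up Python images from the terminal (the --limit flag just keeps the output
short):
$ docker search --limit 5 python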
An important distinction to be aware of when it comes to images is the difference between base
and child images.
● Base images are images that have no parent image, usually images with an OS like
ubuntu, busybox or debian.
● Child images are images that build on base images and add additional functionality.
Then there are official and user images, which can be both base and child images.



● Official images are images that are officially maintained and supported by the folks at
Docker. These are typically one word long. In the list of images above, the python,
ubuntu, busybox and hello-world images are official images.
● User images are images created and shared by users like you and me. They build on base
images and add additional functionality. Typically, these are formatted as user/image-name.
Our First Image
Now that we have a better understanding of images, it's time to create our own. Our goal in this
section will be to create an image that sandboxes a simple Flask application. For the purposes of
this workshop, I've already created a fun little Flask app that displays a random cat .gif every
time it is loaded - because you know, who doesn't like cats? If you haven't already, please go
ahead and clone the repository locally like so -
$ git clone https://github.com/prakhar1989/docker-curriculum.git
$ cd docker-curriculum/flask-app
This should be cloned on the machine where you are running the docker commands and not
inside a docker container.
The next step now is to create an image with this web app. As mentioned above, all user images
are based on a base image. Since our application is written in Python, the base image we're going
to use will be Python 3.
Dockerfile
A Dockerfile is a simple text file that contains a list of commands that the Docker client calls
while creating an image. It's a simple way to automate the image creation process. The best part
is that the commands you write in a Dockerfile are almost identical to their equivalent Linux
commands. This means you don't really have to learn new syntax to create your own dockerfiles.
The application directory does contain a Dockerfile but since we're doing this for the first time,
we'll create one from scratch. To start, create a new blank file in your favorite text editor and
save it in the same folder as the Flask app with the name Dockerfile.
We start with specifying our base image. Use the FROM keyword to do that -
FROM python:3.8
The next step usually is to write the commands of copying the files and installing the
dependencies. First, we set a working directory and then copy all the files for our app.
# set a directory for the app
WORKDIR /usr/src/app

# copy all the files to the container
COPY . .
Now, that we have the files, we can install the dependencies.
# install dependencies
RUN pip install --no-cache-dir -r requirements.txt



The next thing we need to specify is the port number that needs to be exposed. Since our flask
app is running on port 5000, that's what we'll indicate.
EXPOSE 5000
The last step is to write the command for running the application, which is simply - python
./app.py. We use the CMD command to do that -
CMD ["python", "./app.py"]
The primary purpose of CMD is to tell the container which command it should run when it is
started. With that, our Dockerfile is now ready. This is how it looks -
FROM python:3.8

# set a directory for the app
WORKDIR /usr/src/app

# copy all the files to the container
COPY . .

# install dependencies
RUN pip install --no-cache-dir -r requirements.txt

# define the port number the container should expose
EXPOSE 5000

# run the command
CMD ["python", "./app.py"]
Now that we have our Dockerfile, we can build our image. The docker build command does the
heavy-lifting of creating a Docker image from a Dockerfile.
The section below shows you the output of running the same. Before you run the command
yourself (don't forget the period), make sure to replace my username with yours. This username
should be the same one you created when you registered on Docker hub. If you haven't done that
yet, please go ahead and create an account. The docker build command is quite simple - it takes
an optional tag name with -t and a location of the directory containing the Dockerfile.
$ docker build -t yourusername/catnip .
Sending build context to Docker daemon 8.704 kB
Step 1 : FROM python:3.8
# Executing 3 build triggers...
Step 1 : COPY requirements.txt /usr/src/app/
---> Using cache
Step 1 : RUN pip install --no-cache-dir -r requirements.txt



---> Using cache
Step 1 : COPY . /usr/src/app
---> 1d61f639ef9e
Removing intermediate container 4de6ddf5528c
Step 2 : EXPOSE 5000
---> Running in 12cfcf6d67ee
---> f423c2f179d1
Removing intermediate container 12cfcf6d67ee
Step 3 : CMD python ./app.py
---> Running in f01401a5ace9
---> 13e87ed1fbc2
Removing intermediate container f01401a5ace9
Successfully built 13e87ed1fbc2
If you don't have the python:3.8 image, the client will first pull the image and then create your
image. Hence, your output from running the command will look different from mine. If
everything went well, your image should be ready! Run docker images and see if your image
shows.
The last step in this section is to run the image and see if it actually works (replacing my
username with yours).
$ docker run -p 8888:5000 yourusername/catnip
* Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
The command we just ran used port 5000 for the server inside the container and exposed this
externally on port 8888. Head over to the URL with port 8888, where your app should be live.
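A quick way to sanity-check the port mapping from a second terminal (assuming curl is
available; substitute your own container ID):
$ curl http://localhost:8888   # should return the app's HTML
$ docker port <container-id>   # shows the container's published port mappings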



Congratulations! You have successfully created your first docker image.
Docker on AWS
What good is an application that can't be shared with friends, right? So in this section we are
going to see how we can deploy our awesome application to the cloud so that we can share it
with our friends! We're going to use AWS Elastic Beanstalk to get our application up and
running in a few clicks. We'll also see how easy it is to make our application scalable and
manageable with Beanstalk!
Docker push
The first thing that we need to do before we deploy our app to AWS is to publish our image on a
registry which can be accessed by AWS. There are many different Docker registries you can use
(you can even host your own). For now, let's use Docker Hub to publish the image.
If this is the first time you are pushing an image, the client will ask you to login. Provide the
same credentials that you used for logging into Docker Hub.
$ docker login
Login with your Docker ID to push and pull images from Docker Hub. If you do not
have a Docker ID, head over to https://hub.docker.com to create one.
Username: yourusername
Password:
WARNING! Your password will be stored unencrypted in
/Users/yourusername/.docker/config.json
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/credential-store

Login Succeeded
To publish, just type the below command remembering to replace the name of the image tag
above with yours. It is important to have the format of yourusername/image_name so that the
client knows where to publish.
$ docker push yourusername/catnip
Once that is done, you can view your image on Docker Hub. For example, here's the web page
for my image.
Note: One thing that I'd like to clarify before we go ahead is that it is not imperative to host your
image on a public registry (or any registry) in order to deploy to AWS. In case you're writing
code for the next million-dollar unicorn startup you can totally skip this step. The reason why
we're pushing our images publicly is that it makes deployment super simple by skipping a few
intermediate configuration steps.
Now that your image is online, anyone who has docker installed can play with your app by
typing just a single command.
$ docker run -p 8888:5000 yourusername/catnip
If you've pulled your hair out in setting up local dev environments / sharing application
configuration in the past, you very well know how awesome this sounds. That's why Docker is
so cool!
Beanstalk
AWS Elastic Beanstalk (EB) is a PaaS (Platform as a Service) offered by AWS. If you've used
Heroku, Google App Engine etc. you'll feel right at home. As a developer, you just tell EB how
to run your app and it takes care of the rest - including scaling, monitoring and even updates. In
April 2014, EB added support for running single-container Docker deployments which is what
we'll use to deploy our app. Although EB has a very intuitive CLI, it does require some setup,
and to keep things simple we'll use the web UI to launch our application.
To follow along, you need a functioning AWS account. If you haven't already, please go ahead
and do that now - you will need to enter your credit card information. But don't worry, it's free
and anything we do in this tutorial will also be free! Let's get started.
Here are the steps:
● Login to your AWS console.
● Click on Elastic Beanstalk. It will be in the compute section on the top left. Alternatively,
you can access the Elastic Beanstalk console directly.

● Click on "Create New Application" in the top right


● Give your app a memorable (but unique) name and provide an (optional) description
● In the New Environment screen, create a new environment and choose the Web Server
Environment.
● Fill in the environment information by choosing a domain. This URL is what you'll share
with your friends so make sure it's easy to remember.
● Under the base configuration section, choose Docker from the predefined platform list.



● Now we need to upload our application code. But since our application is packaged in a
Docker container, we just need to tell EB about our container. Open the
Dockerrun.aws.json file located in the flask-app folder and edit the Name of the image to
your image's name. Don't worry, I'll explain the contents of the file shortly. When you are
done, click on the radio button for "Upload your Code", choose this file, and click on
"Upload".
● Now click on "Create environment". The final screen that you see will have a few
spinners indicating that your environment is being set up. It typically takes around 5
minutes for the first-time setup.
While we wait, let's quickly see what the Dockerrun.aws.json file contains. This file is basically
an AWS specific file that tells EB details about our application and docker configuration.
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "prakhar1989/catnip",
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": 5000,
      "HostPort": 8000
    }
  ],
  "Logging": "/var/log/nginx"
}
The file should be pretty self-explanatory, but you can always reference the official
documentation for more information. We provide the name of the image that EB should use
along with a port that the container should open.
Hopefully by now, our instance should be ready. Head over to the EB page and you should see a
green tick indicating that your app is alive and kicking.



Go ahead and open the URL in your browser and you should see the application in all its glory.
Feel free to email / IM / snapchat this link to your friends and family so that they can enjoy a few
cat gifs, too.

Conclusion
Advantages and limitations of containers

Even if containerization conveys numerous advantages to freight distribution, it does not come
without challenges. The main advantages of containerization are:
● Standardization. The container is a standard transport product that can be handled
anywhere in the world (ISO standard) through specialized modes (ships, trucks, barges,
and wagons), equipment, and terminals. Each container has a unique identification
number and a size-type code, allowing it to be a unique transport unit that can be
managed as such.
● Flexibility. Containers can be used to carry a wide variety of goods such as commodities
(coal, wheat), manufactured goods, cars, and refrigerated (perishable) goods. There are
adapted containers for dry cargo, liquids (oil and chemical products), and refrigerated
cargo. Discarded containers can be recycled and reused for other purposes.
● Costs. Container transportation offers lower transport costs due to the advantages of
standardization. Moving the same amount of break-bulk freight in a container is about 20
times less expensive than conventional means. Containers enable economies of scale at
modes and terminals that were not possible through standard break-bulk handling. The
main cost advantages of containerization are derived from lower intermodal transport
costs.
● Velocity. Transshipment operations are minimal and rapid, and ship port turnaround
times have been reduced from 3 weeks to about 24 hours. Because of this transshipment
advantage, transport chains involving containers are faster. Container shipping networks
are well connected and offer a wide range of shipping options. Containerships are also
faster than regular cargo ships and offer a frequency of port calls that allows a constant
velocity.
● Warehousing. The container is its own warehouse, protecting the cargo it contains. This
implies simpler and less expensive packaging for containerized cargoes, particularly
consumption goods. The stacking capacity on ships, trains (double-stacking), and on the
ground (container yards) is a net advantage of containerization. With the proper
equipment, a container yard can increase its stacking density.
● Security and safety. The container's contents are unknown to carriers, since it can only
be opened at the origin (seller/shipper), at customs, and at the destination (buyer). This
implies reduced spoilage and losses (theft).
The main drawbacks of containerization are:
● Site constraints. Containers are a large consumer of terminal space (mostly for storage),
implying that many intermodal terminals have been relocated to the urban periphery.
Draft issues at the port are emerging with the introduction of larger containerships,
particularly those of the post-Panamax class. A large post-Panamax containership
requires a draft of at least 13 meters.
● Capital intensiveness. Container handling infrastructures and equipment (giant cranes,
warehousing facilities, inland road, rail access) are important capital investments that
require large pools of available capital. This requires the resources of large corporations
or financial institutions. Further, the push towards automation is increasing the capital
intensiveness of intermodal terminals.
● Stacking. The complexity of the arrangement of containers, both on the ground and
modes (containerships and double-stack trains), requires frequent restacking, which
incurs additional costs and time for terminal operators. The larger the load unit or the
yard, the more complex its operational management.
● Repositioning. Because of trade imbalances, many containers are moved empty (20% of
all flows). However, either full or empty, a container takes the same amount of space.
The observed divergence between production and consumption at the global level
requires the repositioning of containerized assets over long distances (transoceanic).



● Theft and losses. High-value goods and a load unit that can be forcefully opened or
carried away (on a truck) imply a level of cargo vulnerability between a terminal and
the final destination. About 1,500 containers are lost at sea each year (falling
overboard), mainly because of bad weather.
● Illicit trade. The container is an instrument used in the illicit trade of goods, drugs, and
weapons, as well as for illegal immigration (rare).

REFERENCES

https://transportgeography.org/contents/chapter5/intermodal-transportation-containerization/containerization-advantages-drawbacks/
https://docker-curriculum.com/#getting-started
