
ECS781P: Cloud Computing

Lab Instructions 3
Docker, Dockerfile, deploying a Flask (python) web app

Dr. Sukhpal Singh Gill


Muhammed Golec

Overview
In the previous lab, we saw how to create a virtual machine (VM) with specifications of our choice on a cloud service provider. This cloud service pattern corresponds to IaaS: Infrastructure-as-a-Service. A VM can be used to run any application; in particular, we used it to run a webserver serving a single HTML page.
And now for this lab, the aims are as follows:

• Introduction to Application Containers and contrasting them with VMs

• Familiarity with docker, as the most popular application container technology

• Learning how to create a custom application container image from a base image

• How to work with a container image repository

• How to create container instances from a container image

• How to deploy a single container in the cloud

Our container will run a basic Flask application. In the next lab, we will expand on this application and deploy it in a container cluster, as opposed to inside a VM.

Application Containers: Concepts


Application containers, or simply containers, are closely related to virtual machines, but have some important distinctions. A VM has its own entire (guest) OS, including all the functionalities of the kernel, along with all the resources (CPU, RAM, storage, networking, I/O) that are obtained by virtualizing the host resources (through a “hypervisor”). This is good when our purpose is exactly to have a working machine that is not tied to its underlying infrastructure, and to provide isolation between multiple machines on the same hardware stack. The disadvantage of using VMs is that they can have a high overhead on the underlying resources. The “image” of a VM can also require a large amount of storage space. This overhead is why we need containers.

The idea of packaging an application together with its required environment, so that it can run on any hardware and OS, was very attractive. So a lighter approach was developed, called application containers: while a virtual machine provides an abstract infrastructure (a distinct OS, CPU, RAM, storage, networking, I/O), the container provides an abstract OS. In particular, while a VM can run any application compatible with the machine it is emulating, a container of a program contains only a subset of an operating system (programs and libraries) and system resources (CPU, RAM, storage, etc.) that are needed to run that specific program. In particular, the OS kernel is shared between the host machine and the containers.1 This increases the utilization of the underlying resources and allows us to pack more applications onto the same physical resources. The “image” of an application container also becomes smaller than the corresponding “image” of a VM with the same application package.
The main company pushing the idea of containers was Docker, Inc. The graph of the number of pulls per year in Figure 1 should give you an idea of its explosive popularity. It is actually pretty outdated by now: Google alone “starts over two billion containers each week” (Ref: https://cloud.google.com/containers/)!

Task-1: Container Concepts


There are many good online resources and documentation on docker, including docker’s own documentation at https://docs.docker.com/, which you can read in your spare time.

1
In fact, this ‘shared kernel’ was a source of some security issues in the early days of containerisation,
since the “isolation” is not fundamentally complete and container escaping due to bugs in the
container runtime was a threat.

Figure 1: Ref.: https://www.docker.com/what-container (Accessed
30/01/2018)

Docker: practical concepts


An important concept is the difference between a container “instance” and a con-
tainer “image”: An image for an application is the template and essential binaries
and libraries that can be used to build a self-sufficient environment for that applica-
tion. So the images are not the encapsulated environment themselves; rather, they
can be used to create as many actual instances of the encapsulated environments as
desired. Each of these actual instances created from an image is a container. To
use an object-oriented-programming analogy: you can think of an “image” as a class,
which can be used to create “containers” as object instances of that class. We will see
that the analogy goes further: we can in fact create our own images by layering base
images on top of each other, not too different from when we “extend” a class! Indeed,
we had the same concept in the context of virtual machines as well: an image of a virtual machine is like the class, from which as many VM “instances” as desired can be created (destroyed, then re-created, etc.).
The programme that provides the virtualisation, creating docker container instances from images, running and managing them, is called the docker engine. It is composed of a docker daemon2 dockerd and a friendly application interface (a command line interface) called the docker cli, which is invoked simply by the docker command. People may just say docker to refer to either one, and which one they actually mean should be understood from the context. It is worth reviewing these terminologies and concepts at least once: Docker-Overview.
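To make the image-versus-container distinction concrete, here is a small sketch (purely for illustration at this point; the commands themselves are introduced step by step in the rest of this lab). Starting from a single nginx image, we can create two completely independent container instances, much like creating two objects of the same class:

sudo docker run -d --name web1 nginx   # first container instance from the nginx image
sudo docker run -d --name web2 nginx   # second, independent instance from the same image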

2
Daemons are just long-running processes in Linux.

Preparation
Here is the big-picture task of this lab: on a “local” machine, we will create a toy web app in Python and package it inside a docker image. Then we run the container “locally”, and if all is well, we deploy the container inside a VM on the GCP platform.

1. Log in to the GCP platform via the link in your email.

2. Connect to GCP via the SSH protocol using the terminal. (See the first lab for detailed information.)

3. You will need to install docker before you can use it, so run
sudo apt update

and
sudo apt install docker.io

to do this.
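To check that the installation succeeded, you can ask docker to report its version (a quick sanity check; the exact version shown will depend on what was installed):
sudo docker --version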

Note: In the rest of the lab, try NOT to copy/paste the provided code/instructions, as some characters may not come out correctly and you will face unexpected errors. Instead, type them in yourself!

Basic operations: search, pull, images, run, logs, ps, inspect, stats, pause, stop, rm, rmi

Before we get to some interesting exercises with Docker, we need to get comfortable with some basic docker commands. (A short end-to-end walk-through that ties these commands together is given at the end of this section.) If you wish to get more information about each of these commands, including their possible “options”, you can issue
sudo docker <command> --help
E.g., docker ps --help will display a short help about the docker ps command.3

1. search: All existing docker images at https://hub.docker.com/explore/ (called the Docker Hub) can be searched by using the command docker search <image-name>. For example, to find an image for nginx, we would use:
sudo docker search nginx

What would be the command to search for an image of redis? What about ubuntu?

2. pull: Once you find an image, it can be downloaded using docker pull <image-name>. Pull the nginx image. Note: using the default name pulls the latest version. If we want a specific version (which is always a much preferred approach), we should explicitly specify it:
sudo docker pull nginx:x.y
3
Again, we are going to run these commands inside the google shell. If you wish to install docker on
your own machine, follow this link: https://docs.docker.com/install/, and after installation, follow
these steps: https://docs.docker.com/install/linux/linux-postinstall/.

3. images: to see a list of locally available images (images pulled onto our host
machine), you can issue:
sudo docker images

or equivalently,
sudo docker image ls

See the list of our pulled images. What do you see? There are multiple columns,
showing the attributes of the image. What does each represent?

4. run: to create container instances from an image, we use:
sudo docker run <options> <image name or ID>

For a bit of educational fun, execute the following command, and importantly, read the logs carefully: docker run hello-world. Note that hello-world is not a “running” container: it just displays a message (if everything is fine) and exits. This is typically used to check the correctness of the installation! Also note that we did not have that image locally, so how could the container be made from it? (The answer is in the displayed logs!)
If we don’t use any of the options with docker run, then by default the container will run in the “foreground”, i.e., the output messages of the application will show in the terminal, and the application will also receive input from the terminal (e.g., Ctrl+C will kill the application)!
If you would like the containerised application to be running as a background
process, the instruction is docker run -d <image name or ID>, or equiv-
alently, docker run --detach <image name or ID>. This is usually how
we want the containers to be running anyway (in the background), because we
do not want the standard input/output of the host machine (the terminal) to be
tied to a running container. Just note that in such a case (when the container
is running in the background), the "logs" (debug/info/error messages, etc.) of
the application inside the container are not going to be shown in the standard
output of the host machine (in the terminal). If you want to access them, you
can issue the command:
sudo docker logs <container name or ID>

If the logs are too long and you only wish to see the last few lines (the “tail”) of them, use docker logs --tail <n> <container name or ID>, where <n> is the number of lines to show. If you want to continually see the logs as they are generated, use:
docker logs --follow <container name or ID>
docker run has many options, some of which are too advanced for our purposes. To see a list of them, type in docker run --help.

5. ps: to see the list of “running” containers and their main attributes, you can use docker ps. Issue this, and analyse what you see. What does each column represent? Especially, what is the funny name in the last column? Also, what do optional arguments like docker ps -a, or equivalently docker ps --all, and docker ps -aq, or equivalently docker ps --all --quiet, do?

6. inspect: To inspect a container or image (return low-level information on
them), we can use:
sudo docker inspect <container or image ID or name>

Use it to gather information on our containers, and try to parse the output as
much as you can.

7. stats: To get a live stream of resource usage statistics (CPU, memory, I/O,
etc.) about a container (or a list of containers), you can issue:
sudo docker stats <container-names>

To get stats of all containers, you can use docker stats --all

8. To get a summary of system-wide information about your docker (including the number of images and containers), we can simply issue docker info.

9. stop: To stop a running container (or many containers) you can issue
sudo docker stop <container name(s) or IDs>

10. rm: Stopping a container does not remove it. For removing a container (or many
containers), you can use:
sudo docker rm <container name(s) or ID(s)>

Note that you cannot remove a “running” container. For that, you either have to first stop it, or use the -f (or equivalently --force) option.

11. rmi: While rm removes a container, if you want to remove (delete) an image (or
many images), you need to issue:
sudo docker rmi <image name(s) or ID(s)>

Note that you cannot remove an image if a container instance is made from it
(even if the container is stopped). For that, either you need to remove those
containers, or use the -f (or equivalently, --force) option.

12. To find out the port-mapping(s) for a container, we can issue:
sudo docker port <container-name or ID>

What does “port mapping” (also known as “port forwarding” or “port binding”) mean? Explanation in the next section!

Container port binding (a.k.a. “port forwarding” or “port mapping”)

When a container is created, the network ports of the application inside the container are not accessible from outside of the container: the containers are completely isolated and sand-boxed. That is, the container does not “publish” any of its ports to the outside world by default.
So in order to communicate with these applications through the host (send and receive requests/traffic to/from them), we need to “publish” their ports by binding them to some network port of the host. For instance, we can bind the port of the nginx container to our host machine’s port using the following:
sudo docker run -d -p 80:80 nginx
or equivalently: docker run --detach --publish 80:80 nginx. You should now be able to see a default web page if you put the public DNS of the instance into a browser, in a similar fashion to the last lab.

• Question 1: Does the first 80 pertain to our host machine or the container?
What about the other 80?

• Question 2: What is the IP address on our host machine on which this port number is used for binding? (Hint: you can use docker ps to gather such information.)

• (optional – advanced) Question 3: How can we bind a specific port on a specific IP address of the host with the container? (Hint: read the docker documents on container-networking.)
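To tie these basic operations together, here is a minimal end-to-end sketch: it pulls a pinned nginx image, runs it detached with port 80 published, inspects it, and then cleans everything up. The tag 1.25 and the container name web are only illustrative examples (check Docker Hub for the tags that are currently available):

sudo docker pull nginx:1.25                           # download a specific version
sudo docker run -d -p 80:80 --name web nginx:1.25     # run detached, publish port 80
sudo docker ps                                        # confirm it is running
sudo docker logs web                                  # view its output
sudo docker stats --no-stream web                     # one-off snapshot of resource usage
sudo docker stop web                                  # stop the container
sudo docker rm web                                    # remove the stopped container
sudo docker rmi nginx:1.25                            # remove the image itself

The --name option gives the container a name of our choosing instead of a randomly generated one, and --no-stream makes docker stats print a single snapshot instead of a live stream. After the stop and rm steps, docker ps --all should no longer list the web container.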

Creating our Web Application
We will be using Python to develop our app, as it seems to be the friendliest choice, given your exposure in the big-data module. In particular, we will use a Python mini-framework for web applications called flask (see its online documentation).

1. Inside our GCP instance, create a directory for our app (let’s call it lab2), and change into it:
cd ~
mkdir lab2
cd lab2

(Recall that cd is for “change directory” and ~ designates the “home” directory in Linux, so cd ~ takes us to the home directory. This is equivalent to cd $HOME. To see the full path of the home directory, you can always issue echo $HOME. Also, mkdir stands for “make directory”, another self-explanatory Linux command.)

2. From there, create the app.py file with the following content (this is the code of our app). You can use the nano text editor to do so, by executing: nano app.py. Once you are done typing in the following code, you can use the F2 key to save and exit.

from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    html = "<h1>Hello {name}!</h1>".format(name="Arman")
    return html

if __name__ == "__main__":
    app.run(host='0.0.0.0', port=80)

Note: the indentation (the spaces at the beginning of some lines) is crucial in Python. It is used to specify the grouping of statements or code-blocks in Python (there is no begin and end as in Pascal, or curly brackets { } as in Java/C/C++/JavaScript, or parentheses as in Lisp, etc.). So don’t omit it!

Description of the code: The first line imports the Flask class from the module (package/library) flask. So we need to make sure this python module (flask) is installed somehow (we will see how this is done soon).
In the second line, we create an object instance from the Flask class, which we call app. The next line, @app.route("/"), just means that if the url of a request is just / (i.e., the root url, e.g. 0.0.0.0/ or just 0.0.0.0), call the function that comes immediately after it, which we have named hello(). What does this function do? It just creates a piece of string, which we named html, and returns it! Your browser then turns this returned text into the visual you see, using markup tags like <h1> </h1> for a header (large font), etc. For more information on the code, visit here: http://flask.pocoo.org/docs/1.0/quickstart/.
3. Create a requirements.txt: As we said, we are using a python module in our application called flask, so this has to be installed wherever our app is going to run. We put the list of required modules/packages (the “dependencies”) inside a text file typically named requirements.txt. In the same directory, create this file (e.g. nano requirements.txt) with the following content (just one word!) (and save and exit with F2):
Flask
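As with docker image tags, it is good practice to pin the dependency to an explicit version so that the image build is reproducible. For example, the file could instead contain a line like the following (the version number here is only an illustration; use whichever release you have tested against):
Flask==1.1.2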

Creating a container image for our app and running it


1. So far, we saw how to pull docker images that are made by others. Here, we see
how to build our own image. To build a custom image, we need a Dockerfile,
which has the directives (the recipe!) to create a docker image. So in the same
directory, create a file with the name of Dockerfile (no extensions!), and fill it
with the following instructions (e.g. nano Dockerfile):
FROM python:3.7-alpine
WORKDIR /myapp
COPY . /myapp
RUN pip install -r requirements.txt
EXPOSE 80
CMD ["python", "app.py"]

When done, save and exit (F2 in nano).

Explanation of the Dockerfile:


FROM python:3.7-alpine The FROM directive tells docker what the “base im-
age” (or parent image) is. Custom images are built by starting from a base
image and modifying it (installing new things in it, setting up its environ-
ment variables, copying new codes in it, etc.) We would like our base image
to be as lightweight as possible (to not include any unnecessary extra stuff).
Because we are creating a python-3 app, we have specified the base image
to be python:3.7-alpine. The alpine here means that this python im-
age itself is built on top of a very lightweight Linux image (about 8MB in
size!) as its base called alpine.
WORKDIR /myapp the instruction WORKDIR sets the “working directory” of the image. That is, from now on, all the relative directory paths in the rest of the instructions in this Dockerfile (e.g. COPY, RUN, CMD, etc.) will be with respect to /myapp. We can still specify an absolute directory path if we start it with a forward-slash /, e.g. /home
COPY . /myapp This instruction tells docker to “copy” everything in the “current directory” of our machine (host) to the /myapp directory inside the image. Note that in Linux, the current directory is designated by a dot. Here, the files that will be copied are requirements.txt, app.py and Dockerfile, copied into the image at the path /myapp.

RUN pip install -r requirements.txt the instruction RUN runs the provided command on the image. So here, we are saying: install, in the image, all the python modules listed in our requirements.txt (the -r flag tells pip to read the list of packages from the given requirements file). Note that any command given to RUN is executed only once, when the image is being built, and not when a container instance is created from that image.
EXPOSE 80: the instruction EXPOSE makes a specific port (here: port 80) available to the docker network space. Note that the port is still not accessible from the “outside”; for that, we need to “publish” this port by binding it to a port of the host when a container instance of this image is created.
CMD ["python", "app.py"]: the instruction CMD tells docker that the specified command should be executed upon running a container of this image. Here, the command is python app.py, which is run in the working-directory of the image which we specified in the WORKDIR directive (so, this is equivalent to python /myapp/app.py in our example). Note that the CMD commands are NOT executed when the image is built, but rather when an instance of the image is created (using docker run <our-image-name>).
2. Before running the new python container, make sure you stop your nginx container. This can be done using the command
sudo docker stop <containerName>

The container name can be found using the command
sudo docker ps

For example, if docker ps shows a running container named compassionate_turing, the command would be
sudo docker stop compassionate_turing

3. Build the docker image with a tag of your choice, by running (still from the same directory):
sudo docker build . --tag=my_first_app_image:v1

Note that --tag is written with two dashes. Again note the dot there! As before, the dot means the current directory. Here, this is where docker build looks for a Dockerfile to read the instructions from to build an image. Pay close attention to the messages to ensure all went well – if there is any error, you should fix it now! You can then issue docker images to see that our image is indeed created.
4. Create an instance of the container and run it with proper port-binding:

sudo docker run -p 80:80 my_first_app_image:v1

where of course you should substitute the tag of your docker image if it is different. (A detached variant of this command is sketched after this list.)
Troubleshooting: if port 80 on the host is already in use, you can find the process using it with sudo lsof -i :80 and then terminate it with sudo kill -9 <PID>.

5. If all has gone well, we should now be able to see a different web page displayed when we enter the external IP of the instance into a browser.4

4
If you still get the old page, try a private browsing or incognito tab, as the page may have been cached by the browser.
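As with the nginx example earlier, you will usually want to run your app container detached (in the background) and give it a friendly name, so the terminal stays free and you can still query the logs. Here is a minimal sketch of this variant (the name myapp is just an illustrative choice; stop the foreground container from step 4 first, since only one container can bind host port 80 at a time):

sudo docker run -d -p 80:80 --name myapp my_first_app_image:v1   # run detached with a name
sudo docker logs --follow myapp                                  # stream the app's logs

Press Ctrl+C to stop following the logs; this stops the log stream only, not the container itself.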

Cleaning UP
1. To remove all your local container instances:
docker rm --force $(docker ps --all --quiet)

or equivalently docker rm -f $(docker ps -aq)

2. To delete all your local docker images:
docker rmi --force $(docker images --all --quiet)

(An alternative single cleanup command is sketched after this list.)

3. Finally, don’t forget to shut down or terminate your instance, and log out from the GCP console!
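As an alternative to the first two commands above, docker also provides a single cleanup command that removes all stopped containers, all unused networks, the build cache and (with --all) every image not used by any container. Use it with care, as it cannot be undone:
sudo docker system prune --all --force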
