Docker for Developers
Rafael Gomes
This book is for sale at http://leanpub.com/docker-for-developers
Contents

Preface
Acknowledgements
Introduction
What is Docker?
Set up
  Setting up on GNU/Linux
  Setting up on MacOS
  Setting up on Windows
Basic commands
  Running a container
  Checking the list of containers
  Managing containers
Codebase
Dependencies
Config
Backing services
Processes
Concurrency
Disposability
Logs
Appendix
  Container or virtual machine?
  Useful commands
  Can I run GUI applications?
Preface
In software development it is common to establish good practices and standards. For web applications in particular, concepts and practices such as DevOps, cloud infrastructure, the Phoenix server, immutable infrastructure and 12-factor apps are widely accepted ideas that improve productivity and maintainability. While these concepts are not new, many tools and systems can help implement them. Docker, however, is one of the first and most talked-about tools and platforms to combine many of these concepts in a cohesive and simple way. Like any tool, Docker is an investment that provides the best return when you understand its purpose and how to use it properly.

There are several presentations, papers and documents about Docker. However, there was room for a book connecting the theory to the practice of the tool, one in which the reader could understand the motivations behind Docker and also how to organize an application in order to get the best from the tool.

I am very pleased that Rafael wrote this book, which I believe is an important contribution to our field.

Rafael is extremely engaged in the Docker and DevOps communities in Brazil, and understands what people seek in terms of knowledge on this subject. In this book you will be able to understand the basics of Docker in simple language and with many practical examples.

I hope this publication becomes one more step to boost your journey. I wish you success and all the best.

Best regards,
How to read this book
This material is divided into two big parts. The first one covers the most basic points of Docker: exactly the minimum a developer needs to know to use this technology properly, that is, to know exactly what happens when each command is executed. In this first part, we try not to approach the "low level" issues of Docker, because they are more appealing to the infrastructure team.

In case you don't know anything about Docker, we strongly advise you to read this first part, so that you can go through the next part, which focuses on building a web application on Docker following the best practices, without pauses. In this book, we use the practices from 12factor (https://12factor.net/pt_br/).

The 12factor model will be detailed at the beginning of the second part, but we can already say that we consider it the "12 commandments for web applications on Docker": once your application follows all the good practices presented in that document, you will possibly be using Docker at its full potential.

This second part is divided by each good practice of 12factor. Therefore, we present a sample code in the first chapter that will evolve as the book develops. The idea is that you can practice with real code, thus absorbing the content in a practical way. We also put together some appendices with extra important subjects that don't fit in the following chapters.
Acknowledgements
My first thanks go to the person who gave me the chance of being
here and to be able to write this book: my mother. The famous
Cigana, or Dona Arlete, a wonderful person and a role model.
I also want to thank my second mother, Dona Maria, who took such good care of me when I was a kid while Dona Arlete was taking care of her two other kids and a nephew. I feel lucky for having two moms while many don't have one.

I take this chance to thank the person who introduced Docker to me, Robinho (https://twitter.com/robinhopeixoto), also known as Robson Peixoto. In a conversation during the Linguágil meeting, in Salvador, Bahia, he told me: "Study Docker!" And here I am, finishing a book that transformed my life. I truly thank you, Robinho!

Thanks to Luís Armando Bianchin, who started writing along with me but could not go on for other reasons. I'm very grateful; your constant feedback kept me writing this book.

Thanks to Paulo Caroli, who encouraged me to write the book and introduced me to the Leanpub platform. If it weren't for him, this book would not have come out so quickly.

Thanks to the amazing Emma Pinheiro (https://twitter.com/n3k00n3) for the beautiful cover. I also want to thank the incredible people from Raul Hacker Club (http://raulhc.cc/), who have strongly encouraged me this whole time.

Thanks to the mother of my son, Eriane Soares, an amazing friend who encouraged me to write the book while we were still living together!
• Gjuniioor - gjuniioor@protonmail.ch
• Marco Antonio Martins Junior - wrote the chapters "Can I run GUI applications?" and "Useful commands"
• Jorge Flávio Costa
• Glesio Paiva
• Bruno Emanuel Silva
• George Moura
• Felipe de Morais
• Waldemar Neto
• Igor Garcia
• Diogo Fernandes
Why use Docker?
Docker has been a much-discussed subject lately; many articles have been written about it, usually covering how to use it, auxiliary tools, integrations and the like. But many people still ask the most basic question when facing the possibility of adopting any new technology: "Why should I use this?" Or: "What does this offer me that is different from what I have today?"

It is natural that people still doubt Docker's potential; some even think it is just hype (http://techfree.com.br/2015/06/sera-que-esse-modelo-de-containers-e-um-hype/). But in this chapter we intend to show some good reasons to use Docker.

It's important to highlight that Docker is not a "silver bullet": it is not intended to solve all problems, much less to be the only solution for every situation.
1 – Similar environments
5 – Community
Questions
Set up
Docker stopped being just one piece of software and became a set of software: an ecosystem. In this ecosystem we have the following software:

We are not mentioning Swarm (https://docs.docker.com/swarm/overview/) and other tools because they are not aligned with the goal of this book: an introduction for developers.

Setting up on GNU/Linux

We will explain the setup in the most generic way possible, so that you can install the tools on whatever GNU/Linux distribution you are using.
1 su - root
1 sudo su - root
[Pip](https://en.wikipedia.org/wiki/Pip_(package_manager)) is a Python package manager and, as Docker Compose is written in this language, it is possible to install it as follows:
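A minimal sketch of that installation, assuming pip itself is already available on the machine:

```
pip install docker-compose
```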
1 su - root
1 sudo su - root
```
curl -L https://github.com/docker/machine/releases/download/v0.10.0/docker-machine-`uname -s`-`uname -m` > /usr/local/bin/docker-machine && \
chmod +x /usr/local/bin/docker-machine
```
1 docker-machine version
Note: the previous example uses the latest version available when this book was published. Check the official documentation (https://docs.docker.com/machine/install-machine/) for a more recent version.
Setting up on MacOS
Setting up on Windows
Basic commands
To use Docker it is necessary to know a few commands and to understand, in a clear and direct way, what they do, as well as some examples of use.

We are not covering the commands for creating images or for troubleshooting on Docker, because there are specific chapters on those subjects.
Running a container
The images that appear are already on your Docker host and do not require any download from the Docker public cloud, unless you wish to
update it. To update the image, just execute the command below:
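As a sketch, assuming we want to refresh the official ubuntu image, the update boils down to pulling it again:

```
docker pull ubuntu
```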
| Parameter | Explanation |
|-----------|-------------|
| -d | Runs the container in the background |
| -i | Interactive mode; keeps STDIN open even without a console attached |
| -t | Allocates a pseudo TTY |
| --rm | Automatically removes the container after it finishes (doesn't work with -d) |
| --name | Names the container |
| -v | Volume mapping |
| -p | Port mapping |
| -m | Limits the amount of RAM the container can use |
| -c | Balances CPU usage |
Volume mapping
To map a volume, just specify the source of the data on the host and where it should be mounted inside the container.
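A minimal sketch of such a mapping, assuming a host folder /opt/data mounted at /var/lib/data inside an ubuntu container:

```
docker container run -it -v /opt/data:/var/lib/data ubuntu bash
```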
Port mapping
To map ports, you just have to know which port will be exposed on the host and which container port should receive the connections.

An example mapping port 80 of the host to port 8080 inside the container uses the following command:
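A sketch of that command, assuming the ubuntu image just for illustration:

```
docker container run -d -p 80:8080 ubuntu
```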
Managing resources
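A sketch of a memory limit, assuming an interactive ubuntu container just for illustration:

```
docker container run -it -m 512M ubuntu bash
```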
With the command above we are limiting this container to use only
512MB of RAM.
To balance CPU usage between containers, we specify weights for each container; the lower the weight, the lower the scheduling priority. The weights can range from 1 to 1024. If the weight of a container is not specified, it will use the highest weight possible: 1024.
We will use the weight 512 as an example:
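A sketch, again assuming an interactive ubuntu container:

```
docker container run -it -c 512 ubuntu bash
```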
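To check the list of containers, the listing command accepts the parameters described in the table below; a minimal sketch that shows every container on the host, including stopped ones:

```
docker ps -a
```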
| Parameter | Explanation |
|-----------|-------------|
| -a | Lists all containers, including stopped ones |
| -l | Lists the last containers, including stopped ones |
| -n | Lists the last N containers, including stopped ones |
| -q | Lists only the containers' IDs, great for use in scripts |
Managing containers
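A sketch of the basic management commands, assuming a container named web (the stop and start references follow below):

```
docker container stop web
docker container start web
```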
Stop command reference: https://docs.docker.com/engine/reference/commandline/stop/
Start command reference: https://docs.docker.com/engine/reference/commandline/start/
Creating your own image
on Docker
Before we explain how to create your image, it’s important to bring
up a question that usually confuses Docker beginners: “Image or
container?”
Anatomy of an image
The official Docker images are those with no user name in their names. The image "ubuntu:16.04" is official; on the other hand, an image whose name is prefixed with a user name is not.
1 apt-get update
2 apt-get install nginx -y
3 exit
To run a test with your new image, let's create a container from it and check whether nginx is installed:
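A sketch of that test, assuming the committed image was named meuubuntu:nginx:

```
docker container run -it --rm meuubuntu:nginx dpkg -l nginx
```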
If you want to validate the change, run the same command against the official ubuntu image:
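The equivalent check against the unchanged base image would be:

```
docker container run -it --rm ubuntu:16.04 dpkg -l nginx
```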
Dockerfile
1 touch arquivo_teste
1 FROM ubuntu:16.04
2 RUN apt-get update && apt-get install nginx -y
3 COPY arquivo_teste /tmp/arquivo_teste
4 CMD bash
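A sketch of the build command being discussed next, assuming it is executed from the folder that holds this Dockerfile:

```
docker build -t meuubuntu:nginx_auto .
```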
This command has the "-t" option, used to inform the name of the image that will be created; in this case it will be meuubuntu:nginx_auto. The "." at the end informs which context must be used in this image build: all files from the current folder will be sent to the Docker service, and only they can be used in Dockerfile operations (for example, with COPY).
1 FROM ubuntu:16.04
2 RUN apt-get update
3 RUN apt-get install nginx
4 RUN apt-get install php5
5 COPY arquivo_teste /tmp/arquivo_teste
6 CMD bash
If we modify the third line of the file and, instead of installing nginx, change it to apache2, the instruction that updates apt will not be executed again; what will run is the installation of apache2, because it has just entered the file, along with php5 and the file copy, because all of them come after the modified line.

As we can see, keeping the Dockerfile gives us an exact notion of which changes were made to the image, and lets us record those modifications in our version control system.
Removing a file
Performance issues
Using volumes
In this model, the user chooses a specific folder on the host (e.g. /var/lib/container1) and maps it into a folder inside the container (e.g. /var). What is written to the /var folder of the container is also written to the /var/lib/container1 folder of the host.

Here is a sample of the command used for this mapping model:
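A sketch using the folders mentioned above:

```
docker container run -it -v /var/lib/container1:/var ubuntu bash
```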
This model is not portable. It needs the host to have a specific folder
so the container works properly.
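A sketch of the alternative discussed next: a data-only container named dbdata exposing the /dbdata folder, reused by a second container named db2 (the postgres image is only an illustration):

```
docker create -v /dbdata --name dbdata postgres /bin/true
docker container run -d --volumes-from dbdata --name db2 postgres
```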
Now the container db2 has a /dbdata folder that is the same as the one from the container dbdata, making this model completely portable.

A disadvantage is the need to keep a container just for that; because in some environments containers are removed with some frequency, it becomes necessary to take special care with these special containers. In a certain way, this is an additional management problem.
Mapping volumes
This model is the most recommended since its release, because it gives you portability. The volume is not easily removed when the container is deleted and, still, it is very easy to manage.
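A sketch of this model, assuming a named volume called dados mounted into an nginx container:

```
docker volume create dados
docker container run -d -v dados:/usr/share/nginx/html nginx
```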
Understanding the network
on Docker
What Docker calls a network is, in fact, an abstraction created to ease the management of data communication between containers and the nodes external to the Docker environment.

Don't confuse the Docker network with the already well-known network used to group IP addresses (e.g. 192.168.10.0/24). Therefore, every time we mention this second type of network, we'll say "IP network".
Bridge
None
Host
This network has the objective of delivering to the container all the interfaces that exist on the Docker host. In a way, it can speed up packet delivery, since there is no bridge in the way of the messages. But usually this overhead is minimal, and the use of a bridge can be important for security and traffic management.
Bridge
This is the simplest network driver to use, for it requires little configuration. A network created by the user with the bridge driver is similar to the Docker standard network named "bridge".

Networks created by the user with the bridge driver have all the features described for the standard bridge network; however, they have additional features.

Among those features: a network created by the user doesn't need to use the old "--link" option, because every network created by the user with the bridge driver can use the Docker internal DNS, which automatically associates every container name on that network with its respective IP on the corresponding IP network.

To make it clearer: containers using the standard bridge network cannot enjoy the Docker internal DNS feature. If you are using that network, it is necessary to specify the legacy "--link" option for translating names into the IP addresses dynamically allocated by Docker.
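A minimal sketch of the feature, assuming a user-defined network named rede_interna and two containers resolving each other by name through the internal DNS:

```
docker network create --driver bridge rede_interna
docker container run -d --network rede_interna --name db redis
docker container run -it --network rede_interna busybox ping db
```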
Isolated network
Overlay
The subject deserves a whole article of its own, so we'll just point to an interesting link (https://docs.docker.com/compose/networking/) for further reference on the subject.
Concluding
We can see that the use of user-defined networks makes the "--link" option obsolete, as well as providing a new internal Docker DNS service, which makes life easier for those who want to maintain a large and complex Docker infrastructure, in addition to providing network isolation between its services.

Knowing and using these new technologies well is a good practice that avoids future problems and makes it easier to build and maintain big and complex projects.
Using Docker in multiple
environments
Docker host is the name of the asset responsible for managing Docker environments; in this chapter we will demonstrate how it is possible to create and manage them on distinct infrastructures, such as virtual machines, the cloud, and physical machines.
How it works
the Linux kernel; and the client, which we'll call the Docker client, responsible for receiving commands from the user and translating them into management operations on the Docker host.

Each Docker client is configured to connect to a given Docker host, and this is where Docker machine comes into action: it automates the choice of access configuration, pointing the Docker client at distinct Docker hosts.

Docker machine thus enables the use of several different environments just by changing the client configuration to the desired Docker host: basically, modifying some environment variables. Here is an example:
```
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.100:2376"
export DOCKER_CERT_PATH="/Users/gomex/.docker/machine/machines/default"
export DOCKER_MACHINE_NAME="default"
```
By modifying these four variables, the Docker client will be able to use a different environment rapidly, with no need to restart any service.

Creating an environment
Virtual machine
For this example we will use the most common driver, virtualbox; that is, we need VirtualBox installed on our workstation so that this driver works properly.

Before creating the environment, let's understand how the creation command of Docker machine works:

docker-machine create --driver=<driver name> <environment name>
Regarding the virtualbox driver, we have a few parameters that can be used:

| Parameter | Explanation |
|-----------|-------------|
| --virtualbox-memory | Specifies the amount of RAM (in MB) the environment can use. The default value is 1024 MB. |
| --virtualbox-cpu-count | Specifies the number of CPU cores this environment can use. The default value is 1. |
| --virtualbox-disk-size | Specifies the size of the disk (in MB) this environment can use. The default value is 20000 MB. |
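A sketch of the creation command, combining the parameters above into an environment named teste-virtualbox:

```
docker-machine create --driver=virtualbox \
  --virtualbox-disk-size 30000 \
  --virtualbox-cpu-count 1 \
  --virtualbox-memory 1024 \
  teste-virtualbox
```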
The command above creates the machine in VirtualBox with 30 GB of disk space, 1 core and 1 GB of RAM.
To make sure the process happened as expected, just use the
following command:
1 docker-machine ls
To remove the environment when it is no longer needed:

```
docker-machine rm teste-virtualbox
```
Cloud
For this example we will use the most common cloud driver, AWS (http://aws.amazon.com/). For that, we need an AWS account so that this driver (https://docs.docker.com/machine/drivers/aws/) works properly.

It is required that your credentials are in the file ~/.aws/credentials as follows:
1 [default]
2 aws_access_key_id = AKID1234567890
3 aws_secret_access_key = MY-SECRET-KEY
In case you don't want to put this information in a file, you can specify it via environment variables:
1 export AWS_ACCESS_KEY_ID=AKID1234567890
2 export AWS_SECRET_ACCESS_KEY=MY-SECRET-KEY
| Parameter | Explanation |
|-----------|-------------|
| --amazonec2-region | Says which AWS region is used to host your environment. The default value is us-east-1. |
| --amazonec2-zone | The letter that represents the availability zone used. The default value is "a". |
| --amazonec2-subnet-id | Says which subnet is used by this EC2 instance. It needs to be created previously. |
| --amazonec2-security-group | Says which security group is used by this EC2 instance. It needs to be created previously. |
| --amazonec2-use-private-address | Creates an interface with a private IP address, because by default only an interface with a public IP is created. |
| --amazonec2-vpc-id | Says which VPC ID is desired for this EC2 instance. It needs to be created previously. |
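A sketch of the creation command, assuming the default region and an environment named teste-aws:

```
docker-machine create --driver amazonec2 teste-aws
```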
1 docker-machine ls
Check if the environment called teste-aws exists in the list; if so, use
the command below to change the environment:
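A sketch of that switch, pointing the Docker client at the newly created machine:

```
eval $(docker-machine env teste-aws)
```

When the environment is no longer needed, it can be removed with the command below: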
1 docker-machine rm teste-aws
Managing multiple Docker
containers with Docker
Compose
This chapter aims to explain in detail, and with examples, how the process of managing multiple Docker containers works: as your confidence in using Docker grows, your need to use a bigger number of containers increases in the same proportion, and following the good practice of keeping only one service per container commonly results in some extra demand.
Anatomy of docker-compose.yml
The YAML format uses indentation to separate code blocks within the definitions; because of this, the use of indentation is very important: if you don't use it correctly, docker-compose will fail to execute.

Each line of this file can be defined as a key-value pair or as a list. Let's see some examples to make the explanation clearer:
```
version: '2'
services:
  web:
    build:
      context: ./dir
      dockerfile: Dockerfile-alternate
      args:
        versao: 1
    ports:
      - "5000:5000"
  redis:
    image: redis
```
In the file above, the first line defines the version of the docker-compose.yml format; in this case we use the latest version available at the time of writing. If you want to compare the differences between versions, check this link: https://docs.docker.com/compose/compose-file/#versioning.
1 version: '2'
1 version: '2'
2 services:
On the second indentation level (here it’s done with two spaces) we
have the name of the first service of this file, that gets the name
web. It opens the service definitions block, that is, from the next
level of indentation everything that is defined is going to be part of
this service.
```
version: '2'
services:
  web:
```
On the next level of indentation (done again with two more spaces) we have the first definition of the web service which, in this case, is build; it informs that this service will be created not from an existing image, but that its image will need to be built before execution. It also opens a new block of code to parameterize the operation of this image build.
```
version: '2'
services:
  web:
    build:
```
On the next level of indentation (done again with two more spaces) we have a build parameter which, in this case, is context. It is responsible for informing which file context will be used to build the given image; in other words, only files that exist inside this folder can be used in the image build. The chosen context was "./dir", that is, a folder named dir, located at the same file-system level as docker-compose.yml or as the place where this command will be executed, will be used as the context for creating this image. When a value is provided right after the key, it indicates that no block of code will be opened.
```
build:
  context: ./dir
```
The dockerfile definition indicates the name of the file that will be used to build the given image. It is equivalent to the "-f" parameter of the docker build command. If this definition didn't exist, docker-compose would, by default, look for a file called Dockerfile inside the folder informed in the context.
```
build:
  context: ./dir
  dockerfile: Dockerfile-alternate
```
```
build:
  context: ./dir
  dockerfile: Dockerfile-alternate
  args:
    versao: 1
```
Going back two indentation levels (four spaces less in relation to the previous line), we have the ports definition, which is similar to the "-p" parameter of the docker container run command: it maps ports of the service so that it can be reached from outside.
```
web:
  build: .
  ...
  ports:
    - "5000:5000"
```
Going back one indentation level (two spaces less in relation to the previous line), we leave the code block of the web service; this indicates that no definition informed from this line on will be applied to that service. We then need to start the code block of a new service, which in our example is named redis.
```
redis:
  image: redis
```
On the next indentation level (done again with two more spaces), we have the first definition of the redis service, which in this case is image; it is responsible for informing which image will be used to start this container. The image will be fetched from the repository configured on the Docker host, which is hub.docker.com by default.
• build: builds all the images of the services that are described with a build definition in their code block.
• up: starts all the services that are in the docker-compose.yml file.
• stop: stops all the services that are in the docker-compose.yml file.
• ps: lists all the services that were started from the docker-compose.yml file.
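A minimal sketch of the typical cycle with these subcommands, assuming a docker-compose.yml in the current folder:

```
docker-compose build
docker-compose up -d
docker-compose ps
docker-compose stop
```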
How to use Docker without
GNU/Linux
This chapter aims to explain, with details and examples, the use of Docker on MacOS and Windows workstations.

Docker Toolbox

This text is for people who already know Docker, but still don't know how Docker can be used on a "non-Linux" workstation.

As we said previously, Docker uses specific resources of the host operating system. Today we have support for Windows and GNU/Linux systems. This means that it is not possible to start Docker containers natively on a MacOS station, for instance.
• Toolbox (https://www.docker.com/products/docker-toolbox)
• Docker For Mac/Windows

The Toolbox installs the following set of tools:

• VirtualBox (https://www.virtualbox.org/)
• Docker machine (https://docs.docker.com/machine/overview/)
• Docker client (https://docs.docker.com/)
• Docker compose (https://docs.docker.com/compose/overview/)
• Kitematic (https://docs.docker.com/kitematic/userguide/)
1 docker-machine ls
1 docker ps
We can see that the container created from the "alpine" image is running. It's important to emphasize that this process is executed on the Docker host, the machine created inside VirtualBox which, in this example, holds the IP 192.168.99.100.

To verify the machine's IP address, just execute the command below:
1 docker-machine ip
If the container exposes any port to the Docker host, whether via the "-p" parameter of the "docker container run -p porta_host:porta_container" command or via the "ports" parameter of docker-compose.yml, it's good to remember that the IP used to access the exposed service is the IP address of the Docker host; in this example, "192.168.99.100".
At this moment you must be asking yourself: how is it possible to map a folder from the "non-Linux" station into a container? Here enters a new Docker artifice to work around this problem.

Every machine created with the "virtualbox" driver automatically creates a mapping of the type "VirtualBox shared folders" from the user folder to the Docker host root.

To visualize this mapping, we access the virtual machine we've just created in the previous steps:
1 sudo su
2 mount | grep vboxsf
1 touch teste
The line above executed the command “ls /tmp/teste” inside the
container named “test”, created in the previous step.
Now, access Docker Host with the command below, and verify if
the test file is in the user folder:
Turning your application
into a container
We are continually evolving to deliver ever better applications, in less time, replicable and scalable. However, the effort and learning required to reach that level of maturity are often not so simple to achieve.

Currently, we observe the rise of several platforms to facilitate the deployment, configuration and scaling of the applications we develop. However, to increase our maturity level we cannot depend on the platform alone; we need to build our application following best practices.

Aiming to define a series of best practices common to modern web applications, some developers from Heroku wrote the 12factor app (https://12factor.net/) manifesto, drawing on wide experience in developing web applications.
Codebase
Aiming to facilitate the control of code changes by enabling the traceability of alterations, this best practice indicates that each application must have only one code base and that, from it, the application must be deployed to the different environments. It's important to emphasize that this practice is also part of the Continuous Integration (CI, https://www.thoughtworks.com/continuous-integration) practices. Traditionally, most continuous integration systems have, as a starting point, a code base that is built and, later, deployed to development, test and production.
For this explanation, we use the Git version control system and the GitHub hosting service. We created and made available an example repository (https://github.com/gomex/exemplo-12factor-docker.git).

Note that all the code is inside the repository, arranged by best practice in each folder, to make reproduction easier. Remember to enter the folder corresponding to each best practice presented.
Docker offers the possibility of using environment variables to parameterize the infrastructure. Therefore, the same application will behave differently based on the value of its environment variables.

Here we use Docker Compose to compose the different services relevant to the application at execution time. Thus, we must define the configuration of these distinct services and the way they communicate.
```
version: '2'
services:
  web:
    build: .
    ports:
      - "5000:5000"
    volumes:
      - .:/code
    labels:
      - 'app.environment=${ENV_APP}'
  redis:
    image: redis
    volumes:
      - dados_${ENV_APP}:/data
    labels:
      - 'app.environment=${ENV_APP}'
```
We can notice that the “redis” service is used from the official “redis”
image, with no modification. And the web service is generated from
the building of a Docker image.
In order to build the Docker image of the web service, we create the
following Dockerfile, using the official Python 2.7 image as a base:
1 FROM python:2.7
2 COPY requirements.txt requirements.txt
3 RUN pip install -r requirements.txt
4 ADD . /code
5 WORKDIR /code
6 CMD python app.py
After putting all files in the same folder, we start the environment
with the following command:
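A sketch of that startup; the ENV_APP value below (devel) is just an arbitrary example for the variable used in the compose file:

```
export ENV_APP=devel
docker-compose up -d
```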
Dependencies
Moving along the list of the 12factor model, right after we covered the code base in the previous chapter, we have "Dependencies" as the second best practice.

This practice says we should also avoid the need for manual work when preparing the infrastructure that supports the application.
Automating the dependency installation process is the big secret of success for meeting this best practice. If the infrastructure is not automated enough to provide initialization without errors, compliance with this best practice is compromised.

These automated procedures help maintain the integrity of the process, for the names of the dependency packages and their respective versions are specified in a file located in the same repository as the code which, in turn, is tracked in a version control system. Thus, we can conclude that nothing is modified without the due record.
Docker fits this best practice perfectly. It's possible to deliver a minimal infrastructure profile for the application; in turn, the explicit declaration of dependencies is required so that the application runs in that environment.

The example application, written in Python, as we can see a little further in the code below, needs two libraries in order to work correctly:
1 FROM python:2.7
2 ADD requirements.txt requirements.txt
3 RUN pip install -r requirements.txt
4 ADD . /code
5 WORKDIR /code
6 CMD python app.py
1 flask==0.11.1
2 redis==2.10.5
Config
These values can be informed with the "-e" parameter, in case you use the "docker container run" command, or with the "environment" instruction in docker-compose.yml:
1 version: "2"
2 services:
3 web:
4 build: .
5 ports:
6 - "5000:5000"
7 volumes:
8 - .:/code
9 labels:
10 - 'app.environment=${ENV_APP}'
11 environment:
12 - HOST_RUN=${HOST_RUN}
13 - DEBUG=${DEBUG}
14 redis:
15 image: redis:3.2.1
16 volumes:
17 - dados:/data
18 labels:
19 - 'app.environment=${ENV_APP}'
20 volumes:
21 dados:
22 external: false
Backing services
As you can see in the code above, the application now reads environment variables to configure the Redis service host name and port. In other words, it's possible to configure whatever Redis host and port you wish to connect to. And this can and must be specified in the docker-compose.yml, which has also been changed to suit this new best practice:
1 version: "2"
2 services:
3 web:
4 build: .
5 ports:
6 - "5000:5000"
7 volumes:
8 - .:/code
9 labels:
10 - 'app.environment=${ENV_APP}'
11 environment:
Backing services 92
12 - HOST_RUN=${HOST_RUN}
13 - DEBUG=${DEBUG}
14 - PORT_REDIS=6379
15 - HOST_REDIS=redis
16 redis:
17 image: redis:3.2.1
18 volumes:
19 - dados:/data
20 labels:
21 - 'app.environment=${ENV_APP}'
22 volumes:
23 dados:
24 external: false
Build, release, run
The next item on the list of the 12factor model, "Build, release, run", is the fifth best practice.
In the process of automating the software deployment infrastructure, we need to be careful so that the process behavior stays within expectations and so that human errors have low impact on the whole development process, from release to production.

The best practice points out that the application must have explicit separation between the Build, Release and Run stages. Thus, every change in the application code is built only once in the Build stage. Changes in configuration don't need a new build; only the release and run stages are necessary then.

In such a way, it's possible to create clear controls and processes for each stage. In case something goes wrong in the code build, a measure can be taken or the release can even be canceled, so that the code in production is not compromised by a possible error. The separation of responsibilities also makes it possible to know in which stage a problem happened and to fix it manually, if needed.

The artifacts produced must have a unique release identifier. It can be a timestamp (like 2011-04-06-20:32:17) or an incremental number (like v100). With unique artifacts, it's possible to guarantee the use of an old version, whether for a rollback or even to compare behaviors after changing the code.
In order to follow the best practice, we need to build the Docker
image with the application inside of it. It will be our artifact.
We will have a new script, here called build.sh, with the following
content:
```
#!/bin/bash

USER="gomex"
TIMESTAMP=$(date "+%Y.%m.%d-%H.%M")

echo "Building the image ${USER}/app:${TIMESTAMP}"
docker build -t ${USER}/app:${TIMESTAMP} .

echo "Also applying the latest tag"
docker tag ${USER}/app:${TIMESTAMP} ${USER}/app:latest

echo "Pushing the image to the Docker registry"
docker push ${USER}/app:${TIMESTAMP}
docker push ${USER}/app:latest
```
1 version: "2"
2 services:
3 web:
4 image: gomex/app:latest
5 ports:
6 - "5000:5000"
7 volumes:
8 - .:/code
9 labels:
10 - 'app.environment=${ENV_APP}'
11 environment:
12 - HOST_RUN=${HOST_RUN}
13 - DEBUG=${DEBUG}
14 - PORT_REDIS=6379
15 - HOST_REDIS=redis
16 redis:
17 image: redis:3.2.1
18 volumes:
19 - dados:/data
www.dbooks.org
Build, release, run 97
20 labels:
21 - 'app.environment=${ENV_APP}'
22 volumes:
23 dados:
24 external: false
1 docker-compose up -d
Processes
Next on the list of the 12factor model, we present "Processes" as the sixth best practice.
Nowadays, with the automated processes and the due intelligence
in maintaining applications, it is expected that the application can
respond to demand peaks with automatic initialization of new
processes without affecting its behavior.
The best practice says that 12factor application processes are stateless (they don't store state) and share-nothing. Any data that needs to persist must be stored in a stateful backing service, usually a database.

The final goal of this practice makes no distinction between the application being executed on the developer's machine or in production, because what changes in that case is the number of processes started to respond to the demand: on the developer's machine it is only one process; in production this number can be higher.
12factor points out that the memory space or file system of the
server can be used briefly as a single transaction cache. For instance,
the download of a big file, working over it and storing the results
in the database.
We highlight that state should never be stored between requests, regardless of the processing status of the next request.

It's important to emphasize: by following this practice, an application does not assume that anything stored in memory cache or on disk will be available for a future request or job; with many processes of each type running, the chances are high that a future request will be served by a different process, or even by a different server. Even when running on a single process, a restart (triggered by a code deployment, a configuration change, or the execution environment relocating the process to a different physical location) will usually wipe out the local state (memory and file system, for instance).
Some applications require persistent sessions to store user session information and the like. Such sessions are used in future requests from the same visitor; that is, if they are stored together with the process, this clearly violates the best practice. In this case, the advice is to use a backing service, such as redis, memcached or similar, for this kind of job that is external to the process. With that, the next process, no matter where it runs, is able to get the updated information.
The application we are working on does not keep local data; everything it needs is stored on Redis. We don't need to adapt anything in this code to comply with the best practice, as we can see:
Port binding
According to the list of the 12factor model, the seventh best practice is port binding.

It's usual to find applications executed inside web server containers such as Tomcat or JBoss, for instance. Usually, these applications are deployed into those services so they can be accessed by users externally.
The best practice suggests that the application be self-contained and not depend on an application server such as JBoss, Tomcat or similar. The software must export an HTTP service by itself and deal with the requests that come through it. This means that no additional application is necessary for the code to be available for external communication.
Traditionally, deploying to an application server, such as Tomcat or JBoss, requires generating an artifact that is then sent to the given web service. But in the Docker container model the idea is that the artifact of the deployment process is the container itself.

The old process of deploying an artifact to an application server usually didn't give fast feedback, which overly lengthened the process of deploying a service, because each alteration required sending the artifact to the web application service, and the latter was responsible for importing, reading and executing the new artifact.

By using Docker, the application easily becomes self-contained. We build a Dockerfile that describes what the application needs:
1 FROM python:2.7
2 ADD requirements.txt requirements.txt
3 RUN pip install -r requirements.txt
4 ADD . /code
5 WORKDIR /code
6 CMD python app.py
7 EXPOSE 5000
The application exposes port 5000 by default, and you can change it to another one if you think it's necessary. Here's the part of the code that deals with the subject:
```
if __name__ == "__main__":
    app.run(host="0.0.0.0", debug=True)
```
Concurrency
1 version: "2"
2 services:
3 web:
4 container_name: web
5 build: web
6 networks:
7 - backend
8 ports:
9 - "80:80"
10
11 worker:
12 build: worker
13 networks:
14 backend:
15 aliases:
www.dbooks.org
Concurrency 107
16 - apps
17 expose:
18 - 80
19 depends_on:
20 - web
21
22 redis:
23 image: redis
24 networks:
25 - backend
26
27 networks:
28 backend:
29 driver: bridge
1 FROM nginx:1.9
2
3 COPY nginx.conf /etc/nginx/nginx.conf
4 EXPOSE 80
5 CMD ["nginx", "-g", "daemon off;"]
```
user nginx;
worker_processes 2;

events {
    worker_connections 1024;
}

http {
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    resolver 127.0.0.11 valid=1s;

    server {
        listen 80;
        set $alias "apps";

        location / {
            proxy_pass http://$alias;
        }
    }
}
```
"apps" is the network alias specified inside the docker-compose.yml file. In this case, the balancing is done via that alias on the network, and every new container that joins this network under the alias is automatically added to the load balancing.
To build the worker, we have the worker directory containing the Dockerfile (responsible for creating the image used), app.py (the application used in all chapters) and requirements.txt (which describes the dependencies of app.py).

Below are the files used by the worker, modified for this practice:
1 flask==0.11.1
2 redis==2.10.5
1 FROM python:2.7
2 COPY requirements.txt requirements.txt
3 RUN pip install -r requirements.txt
4 COPY . /code
5 WORKDIR /code
6 CMD python app.py
For the redis service there is no image build; we use the official image as is.

To test what was presented so far, clone the repository (https://github.com/gomex/exemplo-12factor-docker) and access the folder factor8, executing the command below in order to start the containers:
1 docker-compose up -d
Access the containers through the browser at the port 80 from the
localhost address. Refresh the page and see that only one name
appears.
By default, Docker Compose executes only one instance of each service made explicit in docker-compose.yml. To increase the number of worker containers from one to two, execute the command below:
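A sketch of that scaling step, using the compose scale subcommand:

```
docker-compose scale worker=2
```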
Refresh the page and see that the host name alternates between two possibilities, that is, the requests are being balanced across both containers.
Disposability
```
if __name__ == "__main__":
    def server_handler(signum, frame):
        print 'Signal handler called with signal', signum
        server.terminate()
        server.join()

    signal.signal(signal.SIGTERM, server_handler)

    def run_server():
        app.run(host="0.0.0.0", debug=True)

    server = Process(target=run_server)
    server.start()
```
Access the folder factor8 (that's right, number 8; we want to show the difference in relation to factor9) and execute the command below to start the containers:
1 docker-compose up -d
1 docker-compose stop -t 5
1 docker-compose up -d
Notice that the worker process finished up faster, for it got the
SIGTERM signal. The application shut down by itself and didn’t
need to receive a SIGKILL signal to be effectively shut down.
Development/production parity
Next on the 12factor model list, we have "Development/production parity" as the tenth best practice.
Logs
1 docker-compose up
Admin processes
1 docker-compose up
Access the application in the browser. In case you are using GNU/Linux
or Docker For Mac and Windows, access the address 127.0.0.1. You’ll
see the following sentence:
Access the application a couple more times so the counter goes up.
Then, execute the admin command from the worker service:
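A sketch of such an administrative task, executed inside the already running worker service; the script name reset.py is hypothetical:

```
docker-compose exec worker python reset.py
```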
Tips for using Docker
If you read the first part of the book, you already know the basics of Docker; but now that you intend to start using it more frequently, some issues may arise, because, as with any tool, Docker has its own set of best practices and tips.

The goal of this chapter is to present some tips for using Docker better. That doesn't mean that your way of using it is necessarily wrong; every tool requires some best practices to make its use more effective and less likely to cause future problems.

This chapter is divided into two sections: tips for running containers ('docker container run') and best practices for image building ('docker build'/'Dockerfile').
Disposable containers
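A minimal sketch of a disposable, interactive container, assuming the ubuntu image:

```
docker container run --rm -it ubuntu bash
```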
Notice that '-it' means '--interactive --tty'. It's used to attach the command line to the container; thus, after this 'docker container run', every command is executed by the 'bash' inside the container. To exit, use 'exit' or press 'Control-d'. These parameters are very useful for running a container in the foreground.
Logs
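A sketch of following a container's output, assuming a container named web:

```
docker logs -f web
```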
Note also the '-f' argument, used to keep following new log messages interactively. If you want to stop, press 'Ctrl-c'.
Backup
Docker container data is exposed and shared via the volume arguments used when creating and starting the container. These volumes don't follow the rules of the Union File System (https://docs.docker.com/engine/reference/glossary/#union-file-system), because the data persists even when the container is removed.

To create a volume in a given container, execute as follows:
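A sketch, assuming an nginx container that exposes its content folder as a volume:

```
docker container run -d --name nginx_app -v /usr/share/nginx/html nginx
```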
By executing this container, we'll have an Nginx service that uses the created volume to persist its data; the data will persist even after the container is removed.

It is a system administration best practice to do periodic backups; to execute this activity (extracting the data), use a command along the lines of the following:
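A sketch of that extraction, borrowing the --volumes-from pattern also shown in the aliases section further on; the backup lands in the current folder:

```
docker container run --rm --volumes-from nginx_app -v $(pwd):/backup busybox \
  tar -cvf /backup/backup_nginx.tar /usr/share/nginx/html
```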
Note: without getting too deep into Docker's low-level concepts, dangling images are simply images without tags, and therefore unnecessary for conventional use.
Depending on the type of application, logs can take up a considerable amount of space too. The management depends a lot on which logging driver is used. With the default driver ('json-file'), the cleanup can be done by executing the following command on the Docker host:
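A sketch of that cleanup, assuming the target container id is stored in CONTAINER_ID:

```
echo "" > $(docker inspect --format='{{.LogPath}}' $CONTAINER_ID)
```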
Aliases
```
  "$1"
}

function docker-get-state {
  # Usage: docker-get-state (friendly-name)
  [ -n "$1" ] && docker inspect --format "{{ .State.Running }}" "$1"
}

function docker-memory {
  for line in `docker ps | awk '{print $1}' | grep -v CONTAINER`; do
    docker ps | grep $line | awk '{printf $NF" "}' && echo $(( `cat /sys/fs/cgroup/memory/docker/$line*/memory.usage_in_bytes` / 1024 / 1024 ))MB
  done
}

# keeps the command history when running a container
function basher() {
  if [[ $1 = 'run' ]]; then
    shift
    docker container run -e HIST_FILE=/root/.bash_history -v $HOME/.bash_history:/root/.bash_history "$@"
  else
    docker "$@"
  fi
}

# backup files from a docker volume into /tmp/backup.tar.gz
function docker-volume-backup-compressed() {
  docker container run --rm -v /tmp:/backup --volumes-from "$1" debian:jessie tar -czvf /backup/backup.tar.gz "${@:2}"
}

# restore files from /tmp/backup.tar.gz into a docker volume
function docker-volume-restore-compressed() {
  docker container run --rm -v /tmp:/backup --volumes-from "$1" debian:jessie tar -xzvf /backup/backup.tar.gz "${@:2}"
  echo "Double checking files..."
  docker container run --rm -v /tmp:/backup --volumes-from "$1" debian:jessie ls -lh "${@:2}"
}

# backup files from a docker volume into /tmp/backup.tar
function docker-volume-backup() {
  docker container run --rm -v /tmp:/backup --volumes-from "$1" busybox tar -cvf /backup/backup.tar "${@:2}"
}

# restore files from /tmp/backup.tar into a docker volume
function docker-volume-restore() {
  docker container run --rm -v /tmp:/backup --volumes-from "$1" busybox tar -xvf /backup/backup.tar "${@:2}"
  echo "Double checking files..."
  docker container run --rm -v /tmp:/backup --volumes-from "$1" busybox ls -lh "${@:2}"
}
```
Sources:

• https://zwischenzugs.wordpress.com/2015/06/14/my-favourite-docker-tip/
• https://website-humblec.rhcloud.com/docker-tips-and-tricks/
• Official documentation
• Project Atomic guide
• Michael Crosby's best practices, part 1
• Michael Crosby's best practices, part 2
Use a “linter”
The basics
The container produced by the image must be as ephemeral as possible: you should be able to stop it, destroy it and replace it with a new container built with minimal effort.
It's usual to put other files, such as documentation, in the same directory as the 'Dockerfile'; to improve build performance, exclude files and directories by creating a '.dockerignore' (https://docs.docker.com/engine/reference/builder/) file in that same directory. This file works similarly to '.gitignore'. Using it helps to minimize the build context sent to docker build.
Avoid adding unnecessary packages and extra dependencies to the application, and minimize complexity, image size, build time and attack surface.

Also minimize the number of layers: whenever possible, group several commands together. However, take into consideration the volatility and maintainability of these layers.

In most cases, run only one process per container. Decoupling applications into several containers eases horizontal scalability, reuse and monitoring of the containers.
The 'ADD' instruction has existed since the beginning of Docker. It's versatile and provides some tricks beyond simply copying files from the build context, and that's what makes it magical and hard to understand. It allows downloading files from URLs and automatically extracting files of known formats (tar, gzip, bzip2, etc.).

On the other hand, 'COPY' is a simpler instruction to put files and folders from the build path inside the Docker image. Thus, choose 'COPY' unless you are absolutely sure that 'ADD' is necessary. For more details, check here: https://labs.ctl.io/dockerfile-add-vs-copy/.
However, in case 'debian' is still too big, there are minimalist images such as 'alpine' or even 'busybox'. Avoid 'alpine' if DNS is required, for there are a few issues still to be solved. In addition, avoid it for languages that use GCC, such as Ruby, Node, Python, etc., because 'alpine' uses the musl libc, which can produce different binaries.

Avoid gigantic images such as 'phusion/baseimage'. This image is too big, it defeats the one-process-per-container philosophy, and much of what makes it up is not essential for Docker containers.
Other sources
1 #!/bin/sh
2 set -e
3 datadir=${APP_DATADIR:="/var/lib/data"}
4 host=${APP_HOST:="127.0.0.1"}
5 port=${APP_PORT:="3306"}
6 username=${APP_USERNAME:=""}
7 password=${APP_PASSWORD:=""}
8 database=${APP_DATABASE:=""}
9 cat <<EOF > /etc/config.json
10 {
11 "datadir": "${datadir}",
12 "host": "${host}",
13 "port": "${port}",
14 "username": "${username}",
15 "password": "${password}",
16 "database": "${database}"
17 }
18 EOF
19 mkdir -p ${APP_DATADIR}
20 exec "/app"
Besides, while still creating the image ('build'), don't add data to paths previously declared as 'VOLUME'. That doesn't work: the data won't be persisted, for data in volumes is not committed into images.

Read more in Jérôme Petazzoni's explanation: https://jpetazzo.github.io/2015/01/19/dockerfile-and-data-in-volumes/.
Ports EXPOSE
Appendix
Container or virtual machine?
Virtual machine
As this model evolved, the software that implements the solution was able to offer more features, such as better interfaces for managing virtual environments and high availability using several physical hosts.

With the new features for managing virtual machine environments, it is possible to specify how much of the physical resources each virtual environment uses, and even to increase it gradually if necessary.
Currently, virtual machines are a reality for any organization that requires IT environments, for they facilitate the management of physical machines and their sharing amongst several environments.
Container
Conclusion
With the data presented, we realize that the point of conflict between the solutions is small. They can, and usually will, be adopted together. You can provision a physical machine with a virtual machine server, on which guest virtual machines will be created that, in turn, will have Docker installed. On this Docker, each environment and its respective services will be made available, each one in a container.
Note that we'll have several isolation levels. In the first one, the physical machine is split into various virtual machines, that is, we already have a layer of operating systems interacting with distinct virtual hardware, such as a virtual network card, disks, processor and memory. In that environment, only the base operating system and Docker would be installed.
At the second isolation level, we have Docker downloading ready-made images and providing running containers that, in turn, create new isolated environments at the level of processing, memory, disk and network. In this case, we can have, on the same virtual machine, a web application environment and a database in different containers; and that would not be a problem for service-management best practice, much less for security.
Useful commands
How?
Adding audio:
Adding webcam:
It works normally. Just mount the X11 socket, define the DISPLAY environment variable in docker-compose.yml, and it will be possible to start multiple applications with only one command.
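A sketch of the idea for a single application, sharing the X11 socket and the DISPLAY variable with the container; gui-image is a hypothetical image containing a graphical application:

```
docker container run --rm -it \
  -v /tmp/.X11-unix:/tmp/.X11-unix \
  -e DISPLAY=$DISPLAY \
  gui-image
```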
Mac OS X
Windows
Install Xming
Install Docker for Windows
Reference: https://github.com/docker/docker/issues/8710#issuecomment-135109677