Docker
3.3 Scenario One: Using Docker to Keep Local Environments Clean
3.3.1 A Bit of Clarification
3.4 Scenario Two: Using Docker to Test Your App in a Clean Environment
Docker is one of the greatest innovations of the last few years. It has opened new horizons in software development, and it has spun off many innovative solutions and projects. Docker images and containers are rapidly becoming THE way to do software development.
We won't go too deep into Docker itself since it already has fantastic documentation, and there are plenty of in-depth Docker guides throughout the internet. Still, there are some concepts you'll need to understand to be able to follow along, since the material gets progressively harder.
Our main goal will be to show off the tremendous power of Docker
and ASP.NET Core when combined.
We will use the example project from the book, so you have nothing to
fear. Once we finish, we’ll have a nice, dockerized application, ready to be
deployed to any server.
The technologies you’ll need to follow along with the complete series:
• Visual Studio 2022 or any other IDE of your choice
• Powershell (recommended)
We recommend using this stack since that is what we've used, and we can vouch for it to work for you too. We've encountered and solved a few issues along the way so you don't have to.
Besides being a phenomenal technology, Docker is one that can potentially improve your career by a large margin. Learning Docker moves you one step closer to a DevOps Engineer title, and it doesn't just help you personally: your employer will be thrilled that you learned it on your own.
According to Glassdoor, the DevOps Engineer job has been hitting the top of the tech job lists for several consecutive years, right after Data Scientist.
Here are the reports for 2017, 2018, 2019, 2020 and, yes, even 2021, respectively:
- https://www.glassdoor.com/List/Best-Jobs-in-America-2017-LST_KQ0,25.htm
- https://www.glassdoor.com/List/Best-Jobs-in-America-2018-LST_KQ0,25.htm
- https://www.glassdoor.com/List/Best-Jobs-in-America-2019-LST_KQ0,25.htm
- https://www.glassdoor.com/List/Best-Jobs-in-America-2020-LST_KQ0,25.htm
- https://www.glassdoor.com/List/Best-Jobs-in-America-LST_KQ0,20.htm
The list has taken job satisfaction and median base salary into
account. So, not only will you get your six-figure job, but you will most
probably enjoy your position as well.
As you can see, businesses look for the best ways to scale up, and the
DevOps engineer is the right choice for the best companies out there like
Google, Amazon, Apple and so on.
So what are we waiting for? Let's take the first step towards a six-figure dream job!
Before we delve deeper into the dockerization of our Web API project, we
need to make some changes.
If this seems confusing, just go with it for now; it will be much clearer later on. The important thing to know is that we can't use localhost inside the Docker container, since localhost means something different inside the container and outside of it.
Instead of localhost, we will use the IP address 0.0.0.0. We're also going to remove the https address for now because it needs some additional work with certificates. We'll add it back later. Let's modify our launchSettings.json file:
{
  "profiles": {
    "CompanyEmployees": {
      "commandName": "Project",
      "dotnetRunMessages": true,
      "launchBrowser": false,
      "launchUrl": "weatherforecast",
      "applicationUrl": "http://0.0.0.0:5000",
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development"
      }
    }
  }
}
Okay, that will do it for now.
To be able to dockerize our application and make it work without relying on external resources (a database), we are going to make some changes to the configuration. We are going to re-introduce MSSQL later in the series, but for now, we are going to rely on an in-memory database to store our data.
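The configuration change itself isn't shown in this extract. A minimal sketch, assuming the RepositoryContext and the service-extension pattern that appear later in the chapter, plus the Microsoft.EntityFrameworkCore.InMemory package (the method name and database name here are illustrative):

// Hypothetical helper in the same static service extensions class as ConfigureSqlContext
public static void ConfigureInMemoryContext(this IServiceCollection services) =>
    services.AddDbContext<RepositoryContext>(opts =>
        opts.UseInMemoryDatabase("CompanyEmployeesDb"));

With something like this registered in Program.cs instead of the SQL Server context, the app has no external dependencies and can run wherever a container can.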
For testing purposes, we’re going to use the xUnit tool and Moq for
mocking our data.
The easiest way to add a testing project in ASP.NET Core is with the
dotnet CLI. Navigate to the root of the solution and type:
dotnet new xunit -o Tests
This will create the project using the xUnit template and restore the needed packages.
And because we need to use the Moq library, we are going to navigate to the Tests project and add it too:
cd Tests
dotnet add package Moq
dotnet restore
Now we can easily add our first unit test to the project.
[Fact]
public void GetAllCompaniesAsync_ReturnsListOfCompanies_WithASingleCompany()
{
    // Arrange
    var mockRepo = new Mock<ICompanyRepository>();
    mockRepo.Setup(repo => repo.GetAllCompaniesAsync(false))
        .Returns(Task.FromResult(GetCompanies()));

    // Act
    var result = mockRepo.Object.GetAllCompaniesAsync(false)
        .GetAwaiter()
        .GetResult()
        .ToList();

    // Assert
    Assert.IsType<List<Company>>(result);
    Assert.Single(result);
}
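The test relies on a GetCompanies() helper that isn't shown in this extract. A minimal version might look like the following (the Company property values are made up for illustration):

private IEnumerable<Company> GetCompanies() =>
    new List<Company>
    {
        new Company
        {
            Id = Guid.NewGuid(),
            Name = "IT_Solutions Ltd",
            Address = "583 Wall Dr. Gwynn Oak, MD 21207",
            Country = "USA"
        }
    };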
Now if you run dotnet test, you should see the test discovered and passing.
Excellent!
Everything is set for the next step, which is why we did all this
preparation. Now it’s time to dockerize our ASP.NET Core application.
To check if everything works alright, run dotnet build and dotnet test. This is how we usually do it during the development phase. This time around, we'll use Docker images and containers to do the same thing and bring our software development skills to another level.
Now that we’ve prepared our ASP.NET Core application, we are going to
learn how to install and use Docker on Windows 10, the reasons behind
using Docker, and some useful Docker CLI commands. Understanding
how the Docker CLI works is crucial, and you’ll most definitely have a
hard time proceeding further without getting a grasp of the basic
commands Docker offers through its powerful CLI.
But before that, let’s go through some very basic concepts regarding
Docker and ASP.NET Core.
On Windows 10, Docker lets you choose between Linux and Windows containers, which is a very nice feature. In this series, we are going to stick with the Linux containers, but the principles apply to both Windows and Linux containers. The syntax and underlying architecture (e.g. paths) may vary a bit.
Understanding how the Docker CLI works will help you later when we
move on to Dockerfiles.
So, we have an application to play with now: it builds, the tests are passing, and we can run it on a local machine.
So we’ll try to explain why we use Docker and how to use the Docker CLI
with our ASP.NET Core app.
Let’s say you have a Java-oriented friend and you want to explain to
him why .NET Core kicks ass. We’ll call him Mike. Mike is quite a stubborn
lad and doesn’t want to install all that Microsoft mumbo-jumbo on his
machine, which is perfectly understandable.
.NET SDK and Visual Studio can take time and disk space to install, and Mike doesn't want that. But Mike has heard how awesome Docker is, and he even installed it on his machine some time ago.
Next, navigate to the root of the project and build the project by typing
(works only in PowerShell):
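The exact command isn't reproduced in this extract; a rough sketch, assuming the official SDK 6.0 image and a PowerShell-style mount of the current folder into /home/app, would be:

docker run --rm -it -v ${PWD}:/home/app -w /home/app mcr.microsoft.com/dotnet/sdk:6.0 dotnet build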
Followed by:
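Again a sketch under the same assumptions, this time running the app and publishing port 5000:

docker run --rm -it -v ${PWD}:/home/app -w /home/app -p 5000:5000 mcr.microsoft.com/dotnet/sdk:6.0 dotnet run --project CompanyEmployees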
And just like that, we have built and run our application on Mike's machine even though he doesn't have any Microsoft dependencies installed. (If these commands give you trouble, it's probably because you've pasted them. Completely normal. Try typing them instead.)
But those commands look really complicated. Let’s break them down.
3.3.1 A Bit of Clarification
These commands look scary when you see them for the first time but
they are not that complicated once you get familiar with the syntax.
The actual command is docker run, followed by a few options, and then
the image name, which is the name of the one we’ve built in the last step.
The -v option mounts our local project folder into the container as a volume, so that Docker can see our source code and build it in the container. You can learn more about Docker volumes here.
The options are followed by the image name, and finally the command we
want to run inside the instantiated container.
So, what does this mean for Mike the Java programmer?
This means Mike can safely change the code on his machine using any IDE, and then run the container to see the results of his work, without having to install any of the dependencies on his machine.
You can now take a break and watch Mike and enjoy his conversion to
C#.
But since Mike is a good developer and knows about the “It works on my
machine” phenomenon, he wants to run the app in a clean environment
to test it out and make sure the app works every time.
But there might be a faster and easier way he can do that: spin up a
local docker container optimized for the ASP.NET Core runtime.
How can Mike do that?
First, he needs to publish his app and make a small Dockerfile in the CompanyEmployees folder that will instruct Docker to copy the contents of the publish folder to the container.
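The publish command itself isn't reproduced in this extract; assuming the default Debug configuration that the Dockerfile below copies from, it is simply run from the CompanyEmployees folder:

dotnet publish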
FROM mcr.microsoft.com/dotnet/aspnet:6.0
WORKDIR /home/app
COPY bin/Debug/net6.0/publish .
ENTRYPOINT ["dotnet", "CompanyEmployees.dll"]
Next, he builds his own image using that Dockerfile (make sure not to forget the dot at the end of the command):
docker build -t companyemployees .
And then simply runs the image with:
docker run -p 8080:80 companyemployees
That’s it, Mike has run the application on a clean environment and now he
can sleep peacefully because he knows that his app can run on more than
just his machine.
Mike has been working now for a few weeks on his ASP.NET Core app. But
he has already planned a nice trip with his family and he won’t be able to
continue working on his desktop machine. But he brings his Mac wherever
he goes.
Up until now, Mike has been working locally and using the local Docker
environment.
So, to continue working in the evening, after his long walks through the
woods, he needs a mechanism to persist his work and bring it with him.
To be able to work on the project, he needs to use the mcr.microsoft.com/dotnet/sdk:6.0 image, which is optimized to build and run ASP.NET Core applications.
FROM mcr.microsoft.com/dotnet/sdk:6.0
WORKDIR /home/app
COPY . .
And then he can log in to his Docker Hub account and push the image
through the Docker CLI:
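The commands aren't shown in this extract; based on the image name Mike pulls later, they would look roughly like this (mikesaccount stands in for Mike's Docker Hub username):

docker build -t mikesaccount/mikesdemoapp .
docker login
docker push mikesaccount/mikesdemoapp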
And now Mike can navigate to the “Tags” tab of the Docker Hub repository to see his image.
Now Mike can safely pull the image to his Mac and work some more on his
app before he goes to sleep.
docker pull mikesaccount/mikesdemoapp
These scenarios are just a few of the unlimited number of possible Docker
usages. You have seen how powerful the Docker CLI can be but we’ve
just scratched the surface.
Docker’s full potential can only be seen when you deploy multiple
microservices and connect them.
Now that we are familiar with the Docker CLI, let’s see what we can do
with Dockerfiles.
Now we are going to focus on dockerizing our ASP.NET Core application with Dockerfiles and on understanding how the Dockerfile syntax works. We are also going to put a lot of effort into optimizing our images to achieve the best results. Let's start with a basic Dockerfile in the root of our solution:
FROM mcr.microsoft.com/dotnet/sdk:6.0
WORKDIR /home/app
COPY . .
RUN dotnet restore
RUN dotnet publish ./CompanyEmployees/CompanyEmployees.csproj -o /publish/
WORKDIR /publish
ENV ASPNETCORE_URLS=http://+:5000
ENTRYPOINT ["dotnet", "CompanyEmployees.dll"]
- WORKDIR /home/app : The WORKDIR command simply sets the
current working directory inside our image. In this case, that is
the /home/app folder.
- COPY . . : The COPY command is pretty straightforward too. In this case, it copies all the files from the local system to the current working directory of the image. Since we don't need to copy all the files to build the project, we're going to use a .dockerignore file, which the COPY command consults when it starts copying the files.
- RUN dotnet restore : The RUN command runs any command in a
new layer and commits it to the base image. Concretely, in this
step, we are restoring the packages for our solution, as we would if
we run it locally, but this time it’s happening inside the image.
- RUN dotnet publish
./CompanyEmployees/CompanyEmployees.csproj -o
/publish/ : After we restore the packages, the next step is to
publish our application. Since the CompanyEmployees project is our
main app, we are going to publish it to the /publish folder inside
our image.
- WORKDIR /publish: We are switching our current directory
to /publish
- ENV ASPNETCORE_URLS="http://+:5000" : Since
launchSettings.json is a file that’s just used by our local IDE, it
won’t make it to the publish folder. This means we need to set the
application URL manually. Once again, we can’t use localhost or we
won’t be able to bind the port from the docker container to the local
environment.
- ENTRYPOINT ["dotnet", "CompanyEmployees.dll"]: The ENTRYPOINT command allows us to configure the container to run as an executable. In this case, after the project has been published and we run the image, a container will spin up by firing the dotnet CompanyEmployees.dll command, which will start our application.
Since we are copying our files to the Docker image on every build, we should create a .dockerignore file and select which files and folders we don't want to copy every time. The advantages of using a .dockerignore file include faster image builds, improved cache performance, and avoiding potential conflicts when building an application.
Why is that? Besides those files not being important to the build process, copying them would mean our COPY step triggers every time one of them changes, and that's not something we want. So you want to put every file that shouldn't trigger a rebuild into the .dockerignore file.
For starters, we are going to put these files and folders in our .dockerignore file:
**/bin/
**/obj/
**/global.json
**/Dockerfile*
**/.dockerignore*
**/*.user
**/.vs/
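The build command itself doesn't appear in this extract; judging by the image name used later in the chapter, it would be something like:

docker build -t codemazeblog/companyemployees .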
The first time you run the docker build, it will take a while since Docker is fetching the base image. Give it some time. After the first build, every following build will use that image from the local machine.
The first build is fun, and you get to see every step of our Dockerfile resolving in real-time.
After the first build, if you don't change the project files, all the steps will be cached and the build finishes almost instantly.
As you can see, every step is cached, and there is no need to rebuild the image. If you want, you can force a rebuild with the --no-cache flag.
If you run the docker images command, you can now find our image on the list (we've cleaned it up a bit with the docker rmi command).
Finally, to run the application you can spin up the container by typing:
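The run command isn't reproduced here; assuming the port configured through ASPNETCORE_URLS in the Dockerfile, it would be along the lines of:

docker run --rm -it -p 8080:5000 codemazeblog/companyemployees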
But take a quick look at the image size. It's the biggest so far. That's because we copied, restored, and published our code in the same image.
So, knowing all this, can you guess what we could have done better in our Dockerfile?
Well, let's have a look at that third step once again: COPY . ., followed by RUN dotnet restore.
What this means is that whenever we make a change to the source code, the COPY layer's cache is invalidated, so dotnet restore has to run all over again.
So instead of copying all the files, we can just copy the project and solution files, do the dotnet restore, and then copy the rest of the files:
FROM mcr.microsoft.com/dotnet/sdk:6.0
WORKDIR /home/app
COPY ./CompanyEmployees/CompanyEmployees.csproj ./CompanyEmployees/
COPY ./Contracts/Contracts.csproj ./Contracts/
COPY ./Repository/Repository.csproj ./Repository/
COPY ./Entities/Entities.csproj ./Entities/
COPY ./LoggerService/LoggerService.csproj ./LoggerService/
COPY ./Tests/Tests.csproj ./Tests/
COPY ./CompanyEmployees.sln .
RUN dotnet restore
COPY . .
RUN dotnet publish ./CompanyEmployees/CompanyEmployees.csproj -o /publish/
WORKDIR /publish
ENV ASPNETCORE_URLS=http://+:5000
ENTRYPOINT ["dotnet", "CompanyEmployees.dll"]
There, now the dotnet restore step won't trigger whenever we change something in our source code. We'll only need to re-publish the assemblies. We can go even further and avoid listing every project file by hand:
FROM mcr.microsoft.com/dotnet/sdk:6.0
WORKDIR /home/app
COPY ./*.sln ./
COPY ./*/*.csproj ./
RUN for file in $(ls *.csproj); do mkdir -p ./${file%.*}/ && mv $file ./${file%.*}/; done
RUN dotnet restore
COPY . .
RUN dotnet publish ./CompanyEmployees/CompanyEmployees.csproj -o /publish/
WORKDIR /publish
ENV ASPNETCORE_URLS=https://+:5001;http://+:5000
ENTRYPOINT ["dotnet", "CompanyEmployees.dll"]
Now we don't need to change the Dockerfile even if we add more projects to our solution or change project names.
There is one more tiny little thing we need to do. We need to run our unit
test to see if the project is even worth publishing!
FROM mcr.microsoft.com/dotnet/sdk:6.0
WORKDIR /home/app
COPY ./*.sln ./
COPY ./*/*.csproj ./
RUN for file in $(ls *.csproj); do mkdir -p ./${file%.*}/ && mv $file ./${file%.*}/; done
RUN dotnet restore
COPY . .
RUN dotnet test ./Tests/Tests.csproj
RUN dotnet publish ./CompanyEmployees/CompanyEmployees.csproj -o /publish/
WORKDIR /publish
ENV ASPNETCORE_URLS=https://+:5001;http://+:5000
ENTRYPOINT ["dotnet", "CompanyEmployees.dll"]
That’s it. Rebuild the image again and check out the result.
If a test fails, the build stops, and the publish step isn't triggered, which is exactly what we want.
Excellent.
We've gone through the entire process, but there is one thing some of you might have noticed: we're still building and running everything with the SDK image, and that's not something you want to run your containers from.
SDK images are powerful, and we use them to build and run applications. Nevertheless, to deploy our application to a production environment, we should create an image that is optimized for that purpose only.
So let's see how to upgrade our Dockerfile and publish our application to a runtime-optimized image.
For this purpose, we are going to use something called a multistage build in the Docker world. Multistage builds can be created by using the FROM command multiple times in a Dockerfile.
So we are going to do just that to upgrade our process. Instead of using our SDK image to publish our application, we are going to introduce another base image to the Dockerfile and publish the artifacts inside it.
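The multistage Dockerfile itself isn't reproduced in this extract. A minimal sketch that combines the previous SDK stage with a runtime stage might look like this (the stage name build is an assumption):

FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
WORKDIR /home/app
COPY ./*.sln ./
COPY ./*/*.csproj ./
RUN for file in $(ls *.csproj); do mkdir -p ./${file%.*}/ && mv $file ./${file%.*}/; done
RUN dotnet restore
COPY . .
RUN dotnet test ./Tests/Tests.csproj
RUN dotnet publish ./CompanyEmployees/CompanyEmployees.csproj -o /publish/

FROM mcr.microsoft.com/dotnet/aspnet:6.0 AS runtime
WORKDIR /publish
COPY --from=build /publish .
ENV ASPNETCORE_URLS=https://+:5001;http://+:5000
ENTRYPOINT ["dotnet", "CompanyEmployees.dll"]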
We've moved the entry point to the runtime image, so that the application runs when we instantiate the runtime container. The SDK container is no longer responsible for running our application.
We've also added an HTTPS port, which we want going forward. HTTPS is the standard these days, and we want to run our application securely.
Type the following command and enjoy the process:
docker build -t codemazeblog/companyemployees:runtime .
After the build finishes, we can see two images. One is tagged and one is
not. One is runtime and one is SDK. In this case, we are not interested in
the SDK image, but you can clearly see the size difference.
So let’s clean the existing ones (prompt Yes):
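The cleanup, certificate, and run commands aren't reproduced in this extract. Based on the password, certificate path, and ports that the Compose file uses later in the chapter, they would look roughly like this in PowerShell (the prune command and the 8080/8081 port mappings are assumptions):

docker image prune
dotnet dev-certs https -ep "$env:USERPROFILE\.aspnet\https\companyemployees.pfx" -p awesomepass
dotnet dev-certs https --trust
docker run --rm -it -p 8080:5000 -p 8081:5001 -e ASPNETCORE_Kestrel__Certificates__Default__Password=awesomepass -e ASPNETCORE_Kestrel__Certificates__Default__Path=/https/companyemployees.pfx -v "$env:USERPROFILE\.aspnet\https:/https/" codemazeblog/companyemployees:runtime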
If you still have problems with the certificate, navigate to the pfx file
manually and click on it. Go through the certificate installation wizard,
enter the password and then try to run the application again.
That’s it, if you did everything correctly, you can access the application at
https://localhost:8081/swagger and check if the certificate is indeed valid
(click on the tiny lock button left of the URL). You might want to clear the
browser cache or run incognito to make sure the certificate is applied.
Let's wrap the chapter up with the commands you might find useful while
following these steps.
Here are some of the Docker commands that might help you along the way:
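The original list isn't reproduced in this extract; some commonly useful commands include:

docker images                 # list local images
docker ps -a                  # list running and stopped containers
docker logs <container>       # show a container's output
docker stop <container>       # stop a running container
docker rm <container>         # remove a stopped container
docker rmi <image>            # remove a local image
docker system prune           # clean up unused containers, networks, and dangling images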
Great, now that we’ve conquered Dockerfiles and multistage builds, it’s
the right time to move forward and see how we can improve our Docker
skills even more.
To make all of our efforts so far legit, we are going to add an MSSQL database as another container and connect it with our application. Since we'll have multiple containers running, we are going to introduce the Docker Compose tool, which is one of the best tools for configuring and running multi-container applications.
But what does that mean exactly and how does it do that?
Docker Compose uses a YAML file to define the services and then run
them with a single command. Once you define your services, you can run
them with the docker-compose up command, and shut them down
with the docker-compose down command.
If you've followed along, you might have noticed that running Docker
images can get pretty complicated. This is not such a big problem once
you learn what each Docker command does, but once you have multiple
images, the pain of running all of them manually rises exponentially.
Docker Compose brings other useful features as well. Overall, Compose is a nifty and powerful tool for managing your multi-container apps, and we are going to see just how to use it for our ASP.NET Core app by adding the MSSQL image as our database container.
Let's drill down into it and see how awesome Compose can be.
First, we are going to create a docker-compose.yml file in the root of our solution. Once we've created the file, we can start adding some commands to it:
version: '3.0'
services:
  companyemployees:
    image: codemazeblog/companyemployees:runtime
    ports:
      - "8080:5000"
      - "8081:5001"
This is the simplest of Compose files, and it's practically the same thing we did with the docker run command in the previous chapter, but without the HTTPS part. So let's add the HTTPS configuration to the file as well:
version: '3.0'
services:
  companyemployees:
    image: codemazeblog/companyemployees:runtime
    ports:
      - "8080:5000"
      - "8081:5001"
    environment:
      - ASPNETCORE_Kestrel__Certificates__Default__Password=awesomepass
      - ASPNETCORE_Kestrel__Certificates__Default__Path=/https/companyemployees.pfx
      - SECRET=CodeMazeSecretKey1234
    volumes:
      - ${USERPROFILE}/.aspnet/https:/https/
What we did is simply add the environment variables and the volume to the docker-compose file itself. We've also added the SECRET environment variable, which will help us work with JWT.
As you might have noticed, the file contains sensitive information now, so the important thing to remember is not to commit it to public repos.
Ideally, this information should not be kept in the docker-compose file
itself.
You can use the power of volumes to add the sensitive data to the
container, by mounting volumes with files containing sensitive data. For
the simplicity of this example, we will keep it this way.
Now the only command we need to run to achieve the same result is
docker-compose up and our application will be running as usual.
One thing to note here is that if we quit a container with Ctrl+C, it won't
kill the container or the network created by Compose. To make sure you
release the resources you need to run docker-compose down.
Okay, so until now, we've just run the existing image with docker
compose. But if we make some changes to the application or the
Dockerfile, Compose would still run the image we've built before those
changes.
We would need to build the image again with the docker build -t
codemazeblog/companyemployees . command, and then run the
docker-compose up command to apply those changes.
To automate this step we can modify our docker-compose.yml file
with the build step:
version: '3.0'
services:
  companyemployees:
    image: codemazeblog/companyemployees:runtime
    build:
      context: .
    ports:
      - "8080:5000"
      - "8081:5001"
    environment:
      - ASPNETCORE_Kestrel__Certificates__Default__Password=awesomepass
      - ASPNETCORE_Kestrel__Certificates__Default__Path=/https/companyemployees.pfx
    volumes:
      - ${USERPROFILE}/.aspnet/https:/https/
The build step sets the context to the current directory and enables us to
build our image using the Dockerfile defined in that context.
So, now we can add the --build flag to our docker-compose command
to force the rebuild of the image: docker-compose up --build.
Now try running the command and see for yourself if that's the case. We
have removed the need for the docker build command just like that.
Okay, great.
Let's move on to the main attraction where the real fun begins.
The first thing we need to do is switch our application from the in-memory database back to SQL Server by modifying the ConfigureSqlContext extension method:
public static void ConfigureSqlContext(this IServiceCollection services, IConfiguration configuration) =>
    services.AddDbContext<RepositoryContext>(opts =>
        opts.UseSqlServer(configuration.GetConnectionString("sqlConnection"),
            b => b.MigrationsAssembly("CompanyEmployees")));
There is one more thing we need to change, and that's the connection
string in the appsettings.json file since we don't want to use the root
user. We are also changing the server from . to db. You'll see why in a
moment:
"ConnectionStrings": {
"sqlConnection": "server=db; database=CompanyEmployee; User Id=sa;
Password=AwesomePass_1234"
}
Okay, excellent.
Now that we've prepared our application, let's add the MSSQL image to our docker-compose.yml. We are going to use the mcr.microsoft.com/mssql/server:2019-latest image since it's compatible with our application:
version: '3.0'
services:
  db:
    image: mcr.microsoft.com/mssql/server:2019-latest
    ports:
      - "1433:1433"
    environment:
      - ACCEPT_EULA=Y
      - SA_PASSWORD=AwesomePass_1234
    restart: always
  companyemployees:
    depends_on:
      - db
    image: codemazeblog/companyemployees:runtime
    build:
      context: .
    ports:
      - "8080:5000"
      - "8081:5001"
    environment:
      - ASPNETCORE_Kestrel__Certificates__Default__Password=awesomepass
      - ASPNETCORE_Kestrel__Certificates__Default__Path=/https/companyemployees.pfx
      - SECRET=CodeMazeSecretKey
    volumes:
      - ${USERPROFILE}/.aspnet/https:/https/
We need to open the port 1433 if we want to access the server locally.
The MSSQL image has some predefined environment variables that it looks for while initializing. These are:
- ACCEPT_EULA : needs to be set to Y to confirm the acceptance of the End-User Licensing Agreement.
- SA_PASSWORD : the password for the sa user, which has to satisfy the SQL Server password policy.
- MSSQL_PID : we are not using this one because the default value is "Developer" and that's what we need in this case. Other values are: Express, Standard, Enterprise, EnterpriseCore. Use the one that suits your needs.
38
which order to start, but can’t guarantee that the containers are indeed
ready. For example, we can try to seed the data even though the
database has not started yet.
And now that we use the real database, let’s create a simple Migrations
manager class that will run our migrations on docker-compose up.
            _numberOfRetries++;
            Console.WriteLine($"The server was not found or was not accessible. Retrying... #{_numberOfRetries}");
            MigrateDatabase(host);
        }
        throw;
    }
}
return host;
}
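Only the retry fragment of that class survives in this extract. A minimal sketch of the complete extension method, assuming the RepositoryContext registered earlier and the Microsoft.Data.SqlClient package (the class name, retry limit, and delay are illustrative):

using System;
using System.Threading;
using Microsoft.Data.SqlClient;
using Microsoft.EntityFrameworkCore;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

public static class MigrationManager
{
    private static int _numberOfRetries = 0;

    public static IHost MigrateDatabase(this IHost host)
    {
        using (var scope = host.Services.CreateScope())
        {
            var appContext = scope.ServiceProvider.GetRequiredService<RepositoryContext>();
            try
            {
                // Apply any pending EF Core migrations against the db container.
                appContext.Database.Migrate();
            }
            catch (SqlException)
            {
                // The database container may not be ready yet -- wait and retry a few times.
                if (_numberOfRetries < 10)
                {
                    Thread.Sleep(5000);
                    _numberOfRetries++;
                    Console.WriteLine($"The server was not found or was not accessible. Retrying... #{_numberOfRetries}");
                    MigrateDatabase(host);
                }
                else
                {
                    throw;
                }
            }
        }
        return host;
    }
}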
Now the only thing we need to do is to extend our Program.cs code a bit:
app.MigrateDatabase().Run();
We’ve just added MigrateDatabase() before we actually run the
application.
Great, now let's try it out through the Swagger UI by sending the api/companies request.
NOTE: If you have problems with authentication, refer to the chapter
“JWT and Identity in ASP.NET Core” in the book. We have explained
how to create a user, login and then authorize in Swagger UI.
We've learned how to make a Compose file and how to make it build and
run our images with the docker-compose up and docker-compose
down commands.
Docker Compose has allowed us to easily add an MSSQL database container that our ASP.NET Core app can persist its data in. This setup will help us in the next chapter, where we are going to learn how to make a local registry and push our images to it instead of to Docker Hub.
In this chapter, we will learn more about image management and
distribution. There are several ways to do that, whether locally or in the
cloud, so you should probably take some time to learn these concepts
before starting with continuous integration and application deployment.
We can use one of the existing and well-established cloud registries like
Docker Hub, Quay, Google Container Registry, Amazon Elastic Container
Registry or any other. We can also make our own registry and host it
locally.
Each image has its own tag. For example, for the entire series, we've been using the mcr.microsoft.com/dotnet/aspnet:6.0 image, which contains the ASP.NET Core 6.0 runtime. We can choose which one we want to pull by typing docker pull image-name:tag, similar to a GitHub repo and its commits: we can go back to whichever commit we want and pull it to the local machine.
That's putting it in very simple terms. But now that we've cleared the air around these terms, we can proceed to the next section.
As we've mentioned, Docker Hub is just one of the registry providers. And
a good one at that. We can find all sorts of images over there and push
our own. We can create unlimited public repositories and one private repo
free of charge. If you need more private repositories, you can choose one
of the Docker Hub monthly plans.
You can create your own account on Docker Hub right now and try it out.
We did something like that in chapter 2, but now we are going in-depth.
To push the image from the local machine to Docker Hub we need to type
docker login and enter the credentials of your account in the prompt.
After that, you can easily push the image by typing docker push
accountname/imagename:tag.
If we don't specify the tag, Docker will apply the :latest tag to it.
If we want to pull the image from Docker Hub to the local machine, we need to type docker pull accountname/imagename:tag. The same rule applies here: if you don't specify the tag, you are going to pull the image tagged :latest.
Docker Hub is super neat and very intuitive and offers a great deal of
functionality for free.
But what if we need more privacy? Or what if our client wants to use their own server? If that's the case, we can make our own Docker Registry.
So how do we do that?
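The command that creates the registry container doesn't appear in this extract; based on the container name, image, and port used below, it would be roughly:

docker run -d -p 50000:5000 --name my-registry registry:latest

If we then navigate to http://localhost:50000/v2/_catalog, we get a JSON response listing the repositories in the registry: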
// http://localhost:50000/v2/_catalog
{
  "repositories": [ ... ]
}
The same thing can be done with Docker Compose, which we introduced in the previous chapter of the series.
Let's navigate to the root of our solution and make a new folder
Infrastructure and in it, another one called Registry. In the Registry
folder, we are going to create a docker-compose.yml file:
version: '3.0'
services:
  my-registry:
    image: registry:latest
    container_name: my-registry
    volumes:
      - registry:/var/lib/registry
    ports:
      - "50000:5000"
    restart: unless-stopped
volumes:
  registry:
We defined the same thing we did with the Docker command, but with some additional goodies. We added a volume to persist our data and defined the restart policy as unless-stopped, to keep the registry up unless it is explicitly stopped.
Now we can stop the registry we've spun up before with docker stop my-registry, and then remove the attached container with docker rm my-registry.
After that, run docker-compose up -d in the /Infrastructure/Registry folder.
We get the exact same registry we spun up before, and we can access it by navigating to http://localhost:50000/v2/_catalog.
We should also add an entry to the Windows hosts file so we can use my-registry instead of localhost (the file is usually at C:/Windows/system32/drivers/etc/hosts):
127.0.0.1 my-registry
Ok, so now we have our own local registry. Let's push some images to it.
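The tag and push commands don't appear in this extract; based on the image name used below, they would look like:

docker tag codemazeblog/companyemployees:runtime my-registry:50000/codemazeblog/companyemployees:runtime
docker push my-registry:50000/codemazeblog/companyemployees:runtime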
Moreover, if we navigate to http://localhost:50000/v2/codemazeblog/companyemployees/tags/list we'll see:
{
  "name": "codemazeblog/companyemployees",
  "tags": [
    "runtime"
  ]
}
Here we can find the list of all the available tags of our image.
To test this out, we can remove the local image with docker rmi my-registry:50000/codemazeblog/companyemployees:runtime.
Now you can pull the image from the local registry by typing docker pull my-registry:50000/codemazeblog/companyemployees:runtime and behold, once again, the image is on your local machine!
Setting the registry certificate is beyond the scope of this bonus book, but
you can find more info on how to set a certificate for a local registry here.
So be careful and secure your registries if you want to use them in
production.
Let's wrap this up with some examples of when we might want to set up a local Docker registry.
Now that you know pretty much everything you need to run a local registry, you might wonder: "But why should I use a local registry when I have all those nice options available?".
Now you know how to properly create and persist your Web API
application as a docker image.
We've given you the tools to play with. Docker is a big topic, one that can easily take a whole book to cover. We encourage you to play around with it and explore other possibilities.
Hopefully, this bonus book gave you some ideas and at least a peek at the potential that Docker has.
Happy dockerization, and make sure to share your results with us!