CND - Docker - Unit - IV-I
Containers use the host OS, meaning all containers must be compatible with that OS.
Containers are lightweight, taking only the resources needed to run the application and
the container manager.
Container images are relatively small in size, making them easy to share.
Containers might be isolated only very lightly from each other. A process in one
container could access memory used by another container, for example.
Tools such as Kubernetes make it relatively easy to run multiple containers together,
specifying how and when containers interact. Docker is a popular open source
containerization tool based on Linux containers.
Containers are ephemeral, meaning they stay alive only for as long as the larger system
needs them. Storage is usually handled outside the container.
In traditional virtualization, the host operating system (OS) runs on a physical machine, and each
virtual machine (VM) runs its own OS. This approach can be resource-intensive and lacks flexibility.
Containerization, on the other hand, takes advantage of the host OS, allowing multiple containers
to share the same OS kernel while providing isolated runtime environments.
Docker:
Docker is a software platform that allows you to build, test, and deploy applications quickly.
Docker packages software into standardized units called containers that have everything the
software needs to run including libraries, system tools, code, and runtime.
Using Docker lets you ship code faster, standardize application operations, seamlessly move
code, and save money by improving resource utilization. With Docker, you get a single object
that can reliably run anywhere. Docker's simple and straightforward syntax gives you full
control. Wide adoption means there's a robust ecosystem of tools and off-the-shelf applications
that are ready to use with Docker.
Docker was created to work on the Linux platform, but it was extended to offer greater support
for non-Linux OSes, including Microsoft Windows and Apple OS X. Versions of Docker for
Amazon Web Services (AWS) and Microsoft Azure are available.
Docker Engine: It is the core part of Docker that handles the creation and management of
containers.
Docker Image: It is a read-only template that is used for creating containers, containing
the application code and dependencies.
Docker Hub: It is a cloud-based repository used for finding and sharing
container images.
Docker Registry: It is a storage and distribution system for Docker images, where you can
store images in both public and private modes.
Docker makes use of a client-server architecture. The Docker client talks to the Docker daemon,
which does the work of building, running, and distributing the Docker containers. The Docker
client can run with the daemon on the same system, or we can connect the Docker client to a
Docker daemon remotely. The client and daemon interact with each other through a REST API,
over a UNIX socket or a network interface.
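As a sketch of this client-daemon interaction, the same REST API the docker CLI uses can be called directly over the default UNIX socket. This assumes a local Docker daemon is running and curl is built with unix-socket support; the endpoints shown are standard Docker Engine API paths.

```shell
# Query the Docker Engine REST API directly over the default UNIX socket.
# Requires a running Docker daemon.
curl --unix-socket /var/run/docker.sock http://localhost/version

# List running containers -- the same call `docker ps` makes under the hood.
curl --unix-socket /var/run/docker.sock http://localhost/containers/json
```

This is exactly the channel the docker CLI uses; pointing the client at a remote daemon simply swaps the UNIX socket for a TCP address.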
Docker packages, provisions and runs containers. Container technology is available through the
operating system: A container packages the application service or function with all of the
libraries, configuration files, dependencies and other necessary parts and parameters to operate.
Each container shares the services of one underlying OS. Docker images contain all the
dependencies needed to execute code inside a container, so containers that move between Docker
environments with the same OS work with no changes.
Docker uses resource isolation in the OS kernel to run multiple containers on the same OS. This
is different than virtual machines (VMs), which encapsulate an entire OS with executable code
on top of an abstracted layer of physical hardware resources.
Docker Image
A Docker image is a file used to execute code in a Docker container. Docker images act as a set
of instructions to build a Docker container, such as a template. Docker images also act as the
starting point when using Docker. An image is comparable to a snapshot in virtual machine
(VM) environments.
Docker is an open source project that's used to create, run and deploy applications in containers.
A Docker image contains application code, libraries, tools, dependencies and other files needed
to make an application run. When a user runs an image, it can become one or many instances of
a container. A Docker daemon operates in the background to oversee images, containers and
related tasks. Communication between a client and the daemon is facilitated through sockets or
a RESTful API.
Docker images have multiple layers, each building on the previous layer while differing from it.
The layers speed up Docker builds while increasing reusability and decreasing disk use. Layers
also help avoid transferring redundant data: build steps whose inputs have not changed are
skipped thanks to the Docker build cache.
Image layers are also read-only files. Once a container is created, a writable layer is added on top
of the unchangeable images, letting a user make changes.
References to disk space in Docker images and containers can be confusing. It's important to
distinguish between size and virtual size. Size refers to the disk space used by the writable layer
of a container, while virtual size is the disk space used by the read-only image layers plus the
writable layer. The read-only layers of an image can be shared between any containers started
from the same image.
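Both measurements can be inspected from the CLI. This is a usage sketch: it requires a running Docker daemon, and ubuntu:latest is just an illustrative image.

```shell
# Show per-container disk usage: SIZE is the writable layer, and the
# "virtual" value in parentheses adds the shared read-only image layers.
docker ps --size

# Inspect the layer stack of an image to see what those shared layers are.
docker history ubuntu:latest
```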
Components of Docker Image
The following are the terminologies and components related to Docker Image:
Layers: Immutable filesystem layers stacked to form a complete image.
Base Image: The foundational layer, often a minimal OS or runtime environment.
Dockerfile: A text file containing instructions to build a Docker image.
Image ID: A unique identifier for each Docker image.
Tags: Labels used to manage and version Docker images.
Structure of Docker Image
The layers of software that make up a Docker image make it easier to configure the
dependencies needed to execute the container.
Base Image: The base image is the starting point for the majority of
Dockerfiles, and it can even be built from scratch.
Parent Image: The parent image is the image that our image is based on. We can
refer to the parent image in the Dockerfile using the FROM command, and each
declaration after that affects the parent image.
Layers: Docker images have numerous layers. To create a sequence of
intermediary images, each layer is created on top of the one before it.
Docker Registry: The storage and distribution system for images; completed
images are pushed to and pulled from a registry such as Docker Hub.
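The structure above can be sketched as a minimal Dockerfile, where each instruction after FROM adds a layer on top of the parent image. The parent image tag and file names here are illustrative assumptions, and the file is only written to disk, not built.

```shell
# Write a minimal Dockerfile; each instruction below FROM creates a new layer.
cat > Dockerfile.demo <<'EOF'
# Parent image: everything that follows is layered on top of it
FROM ubuntu:22.04
# Layer: install a dependency
RUN apt-get update && apt-get install -y curl
# Layer: add application files
COPY app.sh /usr/local/bin/app.sh
# Metadata: default command (does not add a filesystem layer)
CMD ["app.sh"]
EOF

# Building it (needs a Docker daemon) would be:
#   docker build -f Dockerfile.demo -t demo .
```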
The following are some of the subcommands used with docker image:

Command               Description
docker image build    Builds an image from a Dockerfile
docker image history  Shows the history of a docker image
docker image inspect  Displays detailed information on one or more images
docker image prune    Removes unused images that are not associated with any containers
docker image tag      Creates a tag for the target image that refers to the source image
Docker images support a wide range of use cases and provide the following benefits:
Development and deployment efficiency. A Docker image has everything needed to run a
containerized application, including code, config files, environment variables, libraries and
runtimes. When the image is deployed to a Docker environment, it can be executed as a
Docker container. The docker run command creates a container from a specific image.
Consistency. Docker offers a consistent environment for applications, letting them function
identically across all environments from development to production. Because an image runs the
same way regardless of the server or laptop it runs on, this parity saves time when configuring
environments and troubleshooting issues that are unique to each one.
Portability. Docker images are lightweight, small and fast, which makes them extremely
portable across all different versions of Linux, laptops or the cloud.
Speed and agility. Docker enables users to create and deploy containers instantly, without
the need to boot the OS. With the ability to easily create, destroy, stop or start containers and
automate deployment through YAML configuration files, Docker streamlines infrastructure
scaling. By using container images throughout the pipeline and enabling non-dependent jobs
to perform concurrently, it speeds up CI/CD pipelines, resulting in a faster time to market
and increased productivity.
Isolation and security. Docker images provide isolation by running applications in
containers. Because each container has its own filesystem, processes and network stack,
dependencies and programs are kept separate from both the host system and each other. This
isolation improves security and prevents conflicts between applications.
Reusability. Docker images are a reusable asset deployable on any host. Developers can take
the static image layers from one project and use them in another. This saves the user time
because they don't have to recreate an image from scratch.
Docker containers and Docker images are both fundamental Docker concepts, but they serve
distinct purposes. The main differences between a Docker container and a Docker
image include the following.
Docker container
It's used to create, run and deploy applications that are isolated from the underlying
hardware.
A Docker container can use one machine, share its kernel and virtualize the OS to run more
isolated processes. As a result, Docker containers are lightweight.
Docker containers can be scaled rapidly to meet the demands of a changing workload. This
makes them suitable for microservices architectures and cloud-native applications.
Docker image
Docker images are immutable. While they can't be changed, they can be duplicated,
shared or deleted. This feature is useful for testing new software or configurations because,
whatever happens, the image remains unchanged.
Containers depend on Docker images and need a runnable image to exist, because
images are used to construct runtime environments and are needed to run an application.
Docker images are created with the build command and are housed in a Docker registry.
Because of their layered structure where multiple layers of images are built upon one
another, they require minimal data transfer across networks.
Docker Installation
Run a Container: The following runs the ubuntu image and starts an interactive terminal (-it flag) with a bash shell:
docker run -it ubuntu bash
Map Ports: If the container needs to interact with the host, you can map ports:
docker run -p <host_port>:<container_port> <image>
Mount Volumes: To persist data or share files between your host and a container:
docker run -v <host_path>:<container_path> <image>
4. Managing Containers
List running containers:
docker ps
List all containers, including stopped ones:
docker ps -a
Remove a Container:
docker rm <container_id>
Attach to a Running Container: This allows you to interact with the container’s
primary process:
docker attach <container_id>
6. Docker Networking
List available networks:
docker network ls
7. Docker Volumes
Volumes are used to persist data that is generated by and used by Docker containers.
Create a Volume:
docker volume create <volume_name>
List Volumes:
docker volume ls
Inspect a Volume:
docker volume inspect <volume_name>
Remove a Volume:
docker volume rm <volume_name>
8. Docker Compose
Docker Compose is a tool for defining and running multi-container Docker applications. It uses
a docker-compose.yml file to configure application services.
version: '3'
services:
  web:
    image: nginx
    ports:
      - "8080:80"
  app:
    image: myapp
    build: ./app
Start all services defined in the file:
docker-compose up
Stop and remove the services:
docker-compose down
Monitor resource usage of running containers:
docker stats
10. Dockerfile
A simple Dockerfile:
# Use a base image
FROM ubuntu:latest
# Install dependencies
RUN apt-get update && apt-get install -y python3
# Set the working directory
WORKDIR /app
# Copy the application code
COPY . /app
# Run the application
CMD ["python3", "app.py"]
11. Cleaning Up
Remove stopped containers, unused networks, dangling images, and build cache:
docker system prune
Docker Engine
Docker Engine is the actual technology behind building, shipping, and running container applications.
However, it does its work in a client-server model, which requires using many components and services
for such operations.
When people refer to "Docker," they are probably referring to either Docker Engine itself or Docker
Inc., the company that provides several versions of containerization technology based on Docker
Engine.
Components Of Docker Engine
Docker Engine is an open-source technology that includes a server running a background process called
`dockerd`, a REST API, and a command-line interface (CLI) known as `docker`. The engine works as
follows: it runs a server-side daemon that manages images, containers, networks, and storage volumes,
and users interact with this daemon through the CLI or directly through the API.
An essential aspect of Docker Engine is its declarative nature: administrators describe a desired
state for the system, and Docker Engine works to keep the actual state aligned with the desired
state at all times.
Docker Engine Architecture
Docker's client-server design streamlines managing images, containers, networks, and volumes,
which makes developing and moving workloads easier. As more businesses adopt Docker for
its efficiency and scalability, understanding its engine components, usage, and benefits is key to
using container technology properly.
Docker Daemon: The Docker daemon, called dockerd, is essential. It manages and runs Docker
containers and handles their creation. It acts as a server in Docker's setup, receiving requests and
commands from other components.
Docker Client: Users communicate with Docker through the CLI client (docker). This client talks
to the Docker daemon using Docker APIs, allowing for direct command-line interaction or
scripting. This flexibility enables diverse operational approaches.
Docker Images and Containers: At Docker's core, you find images and containers. Images act as
unchanging blueprints. Containers are created from these blueprints. Containers provide the
surroundings needed to run apps.
Docker Registries: These are places where Docker images live and get shared. Registries are vital.
They enable reusability and spreading of containers.
Networking and Volumes: Docker has networking capabilities. They control how containers talk
to one another and the host system. Volumes in Docker allow data storage across containers. This
enhances data handling within Docker.
To create docker containers from a docker image, we must first have an image. We can get
the required docker image either from Docker Hub, or we can create a custom docker image
using a Dockerfile. Once we have the required image, follow these steps:
Step 1: List all the docker images that are available locally. Enter the following command
to do this:
Command
docker images
Step 2: Copy the image ID of the target image that we want to containerize. The image ID is a
unique ID of any docker image. Let’s say we want to create a container from the ubuntu image
with the latest tag. We will copy the image ID of the ubuntu image present at the 2nd position,
which is 3b418d7b466a.
Step 3: The third and last step is to start a container for our target image, using the docker
run command. Below is the syntax of the command:
Command
docker run <options> <image_ID>
The <options> for the run command are explained in docker’s documentation, you can check it
out from here – https://docs.docker.com/engine/reference/run
Command
docker run -it 3b418d7b466a
Step 1: List all the docker images, that are available locally. Enter the following command to do
this:
docker images
Step 2: In this example, we want to run the first docker image, which is a NodeJS application:
kartikkala/mirror_website. So we will copy its image ID and run it with the necessary
volume mounted and port 8080 mapped to the host PC. We map port 8080 because the NodeJS
app is programmed to listen on port 8080. To know which port you need to map to
the host PC, you can refer to the article Managing Ports.
Command:
docker run -p 8080:8080 -d --mount type=bind,src=$(pwd)/../volume1,dst=/downloadables
kartikkala/mirror_website:beta
First, port 8080 of the container is exposed on port 8080 of the host machine with the -p
flag.
Second, the volume1 directory is bind mounted as a volume on the /downloadables folder, where
the volume1 folder belongs to the host machine and /downloadables is inside the container. This
causes all changes inside the /downloadables folder to be reflected directly in the volume1
folder.
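That reflection can be checked with a quick sketch. This requires a running Docker daemon; the host path, container name, and sleep duration are illustrative assumptions.

```shell
# Create a host directory and bind mount it into a container.
mkdir -p /tmp/volume1
docker run -d --name mount-demo \
  --mount type=bind,src=/tmp/volume1,dst=/downloadables ubuntu sleep 300

# A file created inside the container...
docker exec mount-demo touch /downloadables/hello.txt

# ...appears immediately on the host side of the bind mount.
ls /tmp/volume1
```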
Step 3: Now we will open up our browser and check on localhost or any other IP address
associated with our local machine. Below is the screenshot of our web app running in the
browser:
Output:
Our web app is up and running on port 8080
If you want to find images with a specific name, you can use the following command.
sudo docker images <image-name>
To pull Docker images with a specific tag or version, you can use the following command.
sudo docker pull <image-name>:<tag>
Docker Compose
Basically, through the docker-compose.yml file, we define the configuration for each container:
build context, environment variables, ports to be exposed, and the relationship between services.
Running all the defined services can be done by one command, the docker-compose
up command, ensuring they work together accordingly.
Key Concepts of Docker Compose
Docker Compose introduces several essential concepts that are necessary to understand and be
able to use the tool effectively. These consist of the architecture of a Docker Compose file
written in YAML, services, networks, volumes, and environment variables. Let’s discuss each of
these concepts.
Ordinarily, the Docker Compose file would be a docker-compose.yml file using YAML. The file
describes the configuration your application might require regarding services, networks, and
volumes. It gives a guide on spinning up the environment the application will run under.
Understanding the structure of this file is crucial for effectively using Docker Compose.
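As a sketch of these concepts, the following writes a docker-compose.yml that touches each of them. The service names, image names, network name, and environment variable are illustrative assumptions, not part of the original material; the file is only written, not brought up.

```shell
# Write a docker-compose.yml exercising services, networks, volumes,
# and environment variables. Names here are illustrative.
cat > docker-compose.yml <<'EOF'
version: '3'
services:
  web:
    image: nginx
    ports:
      - "8080:80"
    networks:
      - frontend
  db:
    image: postgres
    environment:
      POSTGRES_PASSWORD: example
    volumes:
      - dbdata:/var/lib/postgresql/data
    networks:
      - frontend
networks:
  frontend:
volumes:
  dbdata:
EOF

# Bringing it up (needs Docker Compose installed) would be:
#   docker-compose up -d
```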
Docker Hub:
Docker Hub is a service provided by Docker for finding and sharing container images.
It's the world’s largest repository of container images with an array of content sources including
container community developers, open source projects, and independent software vendors (ISV)
building and distributing their code in containers.
Docker Hub is also where you can go to carry out administrative tasks for organizations. If you
have a Docker Team or Business subscription, you can also carry out administrative tasks in the
Docker Admin Console.
Builds: Automatically build container images from GitHub and Bitbucket and push them to
Docker Hub.
Webhooks: Trigger actions after a successful push to a repository to integrate Docker Hub with
other services.
Docker Trusted Registry (DTR)
Docker Trusted Registry (DTR) is the enterprise-grade image storage solution from Docker. You
install it behind your firewall so that you can securely store and manage the Docker images you
use in your applications.
Docker Trusted Registry, or simply Docker registry, is an enterprise offering from Docker. The
most common terminology you will hear with Docker Enterprise Edition is DTR and UCP
(Universal Control Plane).
In order for DTR to work, UCP has to be installed, and for UCP to be installed you need
Docker Enterprise Edition. Once you install Docker EE you can get a free license from
Docker Hub.
DTR Features:
DTR can be installed on any platform where you can store your Docker images securely, behind
your firewall. DTR has a user interface that allows authorized users in your organization to
browse Docker images and review repository events. It even allows you to see what Dockerfile
lines were used to produce the image and, if security scanning is enabled, to see a list of all of the
software installed in your images.
Availability
DTR is highly available as it has multiple replicas of containers in case anything fails.
Efficiency
DTR can clean up unreferenced manifests and also cache images for faster
pulling of images.
Built-in access control
DTR has strong authentication mechanisms such as RBAC and LDAP sync. It uses the same
authentication as UCP.
Security scanning
Image scanning is a built-in feature provided out of the box by DTR.
Image signing
DTR has built in Notary, you can use Docker Content Trust to sign and verify images.
Docker Swarm:
A Docker Swarm is a container orchestration tool for machines that run the Docker application
and have been configured to join together in a cluster. The activities of the cluster are controlled
by a swarm manager, and machines that have joined the cluster are referred to as nodes.
Key Features:
One of the key benefits associated with the operation of a docker swarm is the high level of availability
offered for applications.
Docker Swarm lets you connect containers to multiple hosts similar to Kubernetes.
Docker Swarm has two types of services: replicated and global.
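The two service modes can be sketched with docker service create. This is a usage sketch: it requires an initialized swarm, and the service names and image are illustrative assumptions.

```shell
# Initialize this node as a swarm manager (done once per cluster).
docker swarm init

# Replicated mode: run a fixed number of task replicas across the nodes.
docker service create --name web --replicas 3 nginx

# Global mode: run exactly one task on every node in the swarm.
docker service create --name monitor --mode global nginx

# Inspect where the tasks were scheduled.
docker service ps web
```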
Docker attach
The docker attach command is used to attach your terminal to a running container. This
allows you to interact directly with the container's main process (usually the process that was
started by the CMD or ENTRYPOINT in the Dockerfile). It essentially connects your terminal to
the standard input (stdin), output (stdout), and error (stderr) streams of the running container.
You typically use docker attach when you want to interact with a running container's primary
process, like a web server, a command-line process, or a running application.
Syntax:
docker attach [OPTIONS] CONTAINER
Example Usage:
docker attach <container_id>
When you attach to a container, you may want to detach and return to your host terminal without
stopping the container. You can do this by pressing:
Ctrl + C — This will stop the container (if the main process supports termination via SIGINT).
Ctrl + P, Ctrl + Q — This will detach from the container without stopping it.
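A small workflow tying this together (a sketch: it requires a running Docker daemon, and the container name is an assumption):

```shell
# Start a container detached with a TTY so we can attach to it later.
docker run -dit --name attach-demo ubuntu bash

# Attach the terminal to the container's main process (bash here).
docker attach attach-demo
# Inside the attached session: Ctrl+P then Ctrl+Q detaches without stopping;
# Ctrl+C would send SIGINT to bash and typically stop the container.

# Verify the container is still running after detaching.
docker ps --filter name=attach-demo
```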
Key Considerations:
docker attach connects to the primary process (the one specified by the
Dockerfile’s CMD or ENTRYPOINT), so if the container is running something like a web
server, you'll see its output. If it's running an interactive application (like bash), you can interact
with it as though you're inside the container.
If you attach to a container running in detached mode (-d), you'll see the output of the main
process in the terminal. However, it won't create a new terminal session or allow you to run
additional commands directly unless the main process allows interaction.
docker attach only allows interaction with the main process of the container. If the container
runs multiple processes or has additional background tasks, you won't be able to interact with
those unless they're specifically configured to interact with the terminal. For example, if you're
running an application that’s logging to stdout or stderr, docker attach will show you those logs.
If you want to run commands or interact with a container after it has started, and you're not just
interested in its primary process, you may find it more useful to use docker exec instead.
docker exec: Executes a new command inside the container (like bash or sh), and you get a new
interactive shell. This is useful if you want to run commands in a running container without
attaching to its main process.
Example:
docker exec -it <container_id> bash
Example Workflow:
docker run -d --name sleeper ubuntu sleep 500
docker attach sleeper
You’ll be connected to the sleep command’s standard output, which won’t show much since it’s
just sleeping, but you could interact with the process if it were running an interactive shell.
Docker File
A Dockerfile is a script that uses the Docker platform to generate containers automatically. It is
essentially a text document that contains all the instructions that a user may use to create an
image from the command line. The Docker platform is a Linux-based platform that allows
developers to create and execute containers, self-contained programs, and systems that are
independent of the underlying infrastructure. Docker, which is based on the
Linux kernel’s resource isolation capabilities, allows developers and system administrators to
transfer programs across multiple systems and machines by executing them within containers.
2. MAINTAINER
This statement is a kind of documentation; it defines the author who created this
Dockerfile and whom to contact if it has bugs. (Note that MAINTAINER is deprecated in
current Docker releases in favor of LABEL maintainer="...".)
Example:
MAINTAINER Firstname Lastname <example@geeksforgeeks.com>
3. RUN
The RUN statement executes a command through the shell at image build time, waits for it to
finish, and saves the result as a new layer of the image. Note that RUN runs at build time; it
does not define the process that runs inside the container at run time (that is the job of CMD
and ENTRYPOINT).
Example:
RUN unzip install.zip /opt/install
RUN echo hello
4. ADD
If we need to add files, the ADD statement is used. It instructs Docker to copy
new files, directories, or remote file URLs and add them to the filesystem of the image.
In short, it can add local files, the contents of tar archives, and URLs.
Example:
Local Files: ADD run.sh /run.sh
Tar Archives: ADD project.tar.gz /install/
URLs: ADD https://project.example-gfg.com/downloads/1.0/testingproject.rpm/test
5. ENV
The ENV statement sets environment variables, both during the build and when running the
result. They can be used in the Dockerfile and in any scripts that the Dockerfile calls. They
also persist into the container and can be referred to at any moment.
Example:
ENV URL_POST=production.example-gfg.com
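A sketch showing that same ENV value being used both at build time and at run time. The file is only written here, not built; the Dockerfile name and RUN/CMD contents are illustrative assumptions.

```shell
cat > Dockerfile.env <<'EOF'
FROM ubuntu:latest
# Available to subsequent build steps AND inside the running container.
ENV URL_POST=production.example-gfg.com
# Build-time use: this RUN step can read the variable.
RUN echo "building against $URL_POST" > /build-info.txt
# Run-time use: the started process sees the same variable.
CMD ["sh", "-c", "echo running against $URL_POST"]
EOF
```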
6. ENTRYPOINT
ENTRYPOINT specifies the start of the command to run when your container starts. If your
container acts as a command-line program, you can use ENTRYPOINT.
Example:
ENTRYPOINT ["/start.sh"]
7. CMD
CMD specifies the whole command to run. We can say CMD is the default argument passed
into the ENTRYPOINT. The main purpose of the CMD command is to launch the software
required in a container.
Example:
CMD ["program-foreground"]
CMD ["executable", "program1", "program2"]
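The relationship between ENTRYPOINT and CMD can be sketched in a small Dockerfile: CMD supplies default arguments that are appended to the ENTRYPOINT, and arguments given to docker run replace the CMD part. The file is only written here, not built; its name and contents are illustrative.

```shell
cat > Dockerfile.cmd <<'EOF'
FROM ubuntu:latest
# The fixed part of the command line.
ENTRYPOINT ["echo", "greeting:"]
# Default arguments, overridable at `docker run` time.
CMD ["hello", "world"]
EOF

# With a daemon: `docker build -f Dockerfile.cmd -t greet . && docker run greet`
# runs `echo greeting: hello world`, while `docker run greet hi`
# replaces the CMD part and runs `echo greeting: hi`.
```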
Docker Commands
Docker is a platform that enables the creation, deployment, and running of applications with
the help of containers. A container is a unit of software that packages the code and all its
dependencies together so that the application becomes runnable irrespective of the
environment.
The container isolates the application and its dependencies into a self-contained unit that can
run anywhere. Container removes the need for physical hardware, allowing for more efficient
use of computing resources. Containers provide operating-system-level virtualization.
Additionally, using Docker commands, developers can easily manage these containers,
enhancing their productivity and workflow efficiency.
Let's understand a few of the above commands along with their usage in detail. The following
are the most used docker basic commands for beginners and experienced docker
professionals.
1. docker --version
Syntax:
docker --version
This command displays the installed Docker version.
2. docker pull
Syntax:
docker pull <image>:<tag>
To download an image or a set of images (i.e. a repository), one can use the docker pull
command.
Example:
docker pull ubuntu:latest
3. docker run
Syntax:
docker run [OPTIONS] IMAGE [COMMAND] [ARG...]
The docker run command creates a writeable container layer over the specified image and
then starts it using the specified command.
The docker run command can be used with many variations; one can refer to the official
docker run documentation.
4. docker ps
Syntax:
docker ps [OPTIONS]
The above command can be used with other options like --all or -a
Example:
$ docker ps
$ docker ps -a
5. docker exec
Syntax
docker exec [OPTIONS] CONTAINER COMMAND [ARG...]
Refer to Docker's documentation for more detail regarding the usage of the docker exec
command.
6. docker stop
Syntax:
docker stop [OPTIONS] CONTAINER [CONTAINER...]
The main process inside the container will receive SIGTERM, and after a grace period,
SIGKILL. The first signal can be changed with the STOPSIGNAL instruction in the
container’s Dockerfile, or the --stop-signal option to docker run.
Example:
docker stop <container_id>
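The STOPSIGNAL override mentioned above can be sketched as follows. The file is only written here, not built; nginx is chosen because SIGQUIT is its graceful-shutdown signal.

```shell
cat > Dockerfile.stop <<'EOF'
FROM nginx:latest
# `docker stop` will now send SIGQUIT (nginx's graceful-shutdown signal)
# instead of the default SIGTERM, before the SIGKILL grace period expires.
STOPSIGNAL SIGQUIT
EOF

# The same override at run time, without a Dockerfile change, would be:
#   docker run --stop-signal SIGQUIT nginx
```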
7. docker restart
This command restarts one or more containers.
Example:
docker restart <container_id>
8. docker kill
This command kills a container's main process immediately by sending SIGKILL.
Example:
docker kill <container_id>
9. docker commit
This command is used to create a new image from a container. It allows users to take an
existing running container and save its current state as an image.
10. docker image push
Use docker image push to share your images to the Docker Hub registry or to a self-hosted
one.
Example:
docker image push <username>/<image>:<tag>
11. docker rm
This command is used to remove one or more docker containers. We can use options such as -
f i.e. force removal of running container which internally uses SIGKILL. Or -v which
removes any anonymous volumes associated with the container.
Example:
docker rm container1
docker rm -v container1
docker rm -f running_container
12. docker rmi
This command is used to remove one or more docker images from the system. We can use
some common options such as -f for force removal of an image or --no-prune for
not deleting untagged parent images.
Syntax:
docker rmi <image_id>
13. docker push
This command is used to upload a docker image to a Docker registry such as Docker Hub
or a private registry.
Example:
docker push <username>/<image>:<tag>
14. docker login
This command is used to log in to a Docker registry such as Docker Hub, a private registry,
or any other third-party registry. We can use some common options such as -u for the
username of the registry and -p for the password of the registry.
Examples:
docker login -u <username>
15. docker start
This command is used to start one or more stopped containers. We can use common options
such as -a to attach stdout/stderr and forward signals. The -i option can be used for
interactive mode, where the container's STDIN is attached.
Example:
docker start -a -i <container_id>