CND - Docker - Unit - IV-I

Containerization:

Containerization is a method of virtualization that allows you to isolate and package an
application and its dependencies into a standardized unit called a container.

 Containers use the host OS, meaning all containers must be compatible with that OS.
 Containers are lightweight, taking only the resources needed to run the application and
the container manager.
 Container images are relatively small in size, making them easy to share.
 Containers are isolated from each other more lightly than virtual machines; a privileged or
misconfigured container could, for example, access resources used by another container.
 Tools such as Kubernetes make it relatively easy to run multiple containers together,
specifying how and when containers interact. Docker is a popular open source
containerization tool based on Linux containers.
 Containers are ephemeral, meaning they stay alive only for as long as the larger system
needs them. Storage is usually handled outside the container.

In traditional virtualization, the host operating system (OS) runs on a physical machine, and each
virtual machine (VM) runs its own OS. This approach can be resource-intensive and lacks flexibility.
Containerization, on the other hand, takes advantage of the host OS, allowing multiple containers
to share the same OS kernel while providing isolated runtime environments.

Docker:

Docker is a software platform that allows you to build, test, and deploy applications quickly.
Docker packages software into standardized units called containers that have everything the
software needs to run including libraries, system tools, code, and runtime.

Using Docker lets you ship code faster, standardize application operations, seamlessly move
code, and save money by improving resource utilization. With Docker, you get a single object
that can reliably run anywhere. Docker's simple and straightforward syntax gives you full
control. Wide adoption means there's a robust ecosystem of tools and off-the-shelf applications
that are ready to use with Docker.

Docker was created to work on the Linux platform, but it was extended to offer greater support
for non-Linux OSes, including Microsoft Windows and Apple OS X. Versions of Docker for
Amazon Web Services (AWS) and Microsoft Azure are available.

Key Components of Docker

The following are the some of the key components of Docker:

 Docker Engine: The core part of Docker that handles the creation and management of
containers.
 Docker Image: A read-only template used for creating containers, containing the
application code and its dependencies.

 Docker Hub: A cloud-based repository for finding and sharing container images.

 Dockerfile: A script containing instructions to build a Docker image.

 Docker Registry: A storage and distribution system for Docker images, where images can
be stored in both public and private repositories.

Docker Architecture and How Docker Works:

Docker makes use of a client-server architecture. The Docker client talks to the Docker daemon,
which builds, runs, and distributes the Docker containers. The Docker client can run on the same
system as the daemon, or it can connect to a remote Docker daemon. The client and daemon
communicate using a REST API, over a UNIX socket or a network interface.

Docker packages, provisions and runs containers. Container technology is available through the
operating system: A container packages the application service or function with all of the
libraries, configuration files, dependencies and other necessary parts and parameters to operate.
Each container shares the services of one underlying OS. Docker images contain all the
dependencies needed to execute code inside a container, so containers that move between Docker
environments with the same OS work with no changes.
Docker uses resource isolation features in the OS kernel to run multiple containers on the same OS.
This is different from virtual machines (VMs), which encapsulate an entire OS with executable code
on top of an abstracted layer of physical hardware resources.

Docker Image

A Docker image is a file used to execute code in a Docker container. A Docker image acts as a set
of instructions, like a template, for building a Docker container. Docker images also act as the
starting point when using Docker. An image is comparable to a snapshot in virtual machine
(VM) environments.

Docker is an open source project that's used to create, run and deploy applications in containers.
A Docker image contains application code, libraries, tools, dependencies and other files needed
to make an application run. When a user runs an image, it can become one or many instances of
a container. A Docker daemon operates in the background to oversee images, containers and
related tasks. Communication between a client and the daemon is facilitated through sockets or
a RESTful API.

Layers of Docker images

Docker images have multiple layers, each building on the previous layer with its own changes. The
layers speed up Docker builds while increasing reusability and decreasing disk use. Layers help
to avoid transferring redundant data and allow Docker to skip build steps whose results are
already in the build cache.

Image layers are also read-only files. Once a container is created, a writable layer is added on top
of the unchangeable images, letting a user make changes.

References to disk space in Docker images and containers can be confusing. It's important to
distinguish between size and virtual size. Size refers to the disk space used by the writable layer
of a container, while virtual size is the total disk space used by the container: the read-only
image layers plus the writable layer. The read-only layers of an image can be shared between any
containers started from the same image.
Components of Docker Image
The following are the terminologies and components related to Docker Image:
 Layers: Immutable filesystem layers stacked to form a complete image.
 Base Image: The foundational layer, often a minimal OS or runtime environment.
 Dockerfile: A text file containing instructions to build a Docker image.
 Image ID: A unique identifier for each Docker image.
 Tags: Labels used to manage and version Docker images.
Structure of Docker Image
The layers of software that make up a Docker image make it easier to configure the
dependencies needed to execute the container.
 Base Image: The base image is the starting point for the majority of
Dockerfiles; it can even be built from scratch.
 Parent Image: The parent image is the image that our image is based on. We
refer to the parent image in the Dockerfile using the FROM command, and each
instruction after that builds on the parent image.
 Layers: Docker images have numerous layers. Each layer is created on top of
the one before it, forming a sequence of intermediate images.
 Docker Registry: A storage and distribution system where built images are
pushed, stored, and pulled.
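As an illustration of these terms, a minimal Dockerfile might look like the following sketch (the application file app.py and the python:3.11-slim parent image are only example choices); each instruction after FROM adds a new layer on top of the parent image:

# Parent image: every instruction below builds a layer on top of it
FROM python:3.11-slim
# Layer containing the installed dependency
RUN pip install flask
# Layer containing the application code
COPY app.py /app/app.py
# Metadata only; defines the default command, no filesystem layer
CMD ["python3", "/app/app.py"]

Building this with docker build -t my_app:1.0 . produces an image whose ID and tag can then be listed with docker images.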

Commands of Docker Image:

The following are some of the subcommands used with docker image:

docker image build: Builds an image from a Dockerfile.
docker image history: Shows the history of a Docker image.
docker image inspect: Displays detailed information on one or more images.
docker image prune: Removes unused images that are not associated with any containers.
docker image save: Saves one or more Docker images into a tar archive.
docker image tag: Creates a tag for the target image that refers to the source image.
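For example, a typical sequence using these subcommands might look like this (the image, repository, and file names are illustrative):

# Build an image from the Dockerfile in the current directory
docker image build -t myapp:1.0 .
# Show the layers that make up the image
docker image history myapp:1.0
# Give the same image an additional tag
docker image tag myapp:1.0 myrepo/myapp:latest
# Export the image to a tar archive
docker image save -o myapp.tar myapp:1.0
# Remove dangling images that no container uses
docker image prune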

Docker image use cases

Docker images support a wide range of use cases and provide the following benefits:

 Development and deployment efficiency. A Docker image has everything needed to run a
containerized application, including code, config files, environment variables, libraries and
runtimes. When the image is deployed to a Docker environment, it can be executed as a
Docker container. The docker run command creates a container from a specific image.

 Consistency. Docker offers a consistent environment for applications, letting them function
consistently across all environments from development to production. Docker's
environment parity ensures that images behave the same regardless of the server or
laptop they're running on, which saves time when configuring environments and
troubleshooting issues that are unique to each one.

 Platform independence. A Docker image is a cross-platform artifact. For example, it can be
created in a Windows environment, pushed to Docker Hub, and fetched by users
running Linux and other operating systems (OSes).

 Portability. Docker images are lightweight, small and fast, which makes them extremely
portable across all different versions of Linux, laptops or the cloud.

 Speed and agility. Docker enables users to create and deploy containers instantly, without
the need to boot the OS. With the ability to easily create, destroy, stop or start containers and
automate deployment through YAML configuration files, Docker streamlines infrastructure
scaling. By using container images throughout the pipeline and enabling non-dependent jobs
to perform concurrently, it speeds up CI/CD pipelines, resulting in a faster time to market
and increased productivity.
 Isolation and security. Docker images provide isolation by running applications in
containers. Because each container has its own filesystem, processes and network stack,
dependencies and programs are kept separate from both the host system and each other. This
isolation improves security and prevents conflicts between applications.

 Versioning and rollback. Docker's change-committing and version-controlling capabilities
enable instant rollback to previous versions if new changes disrupt the environment.

 Reusability. Docker images are a reusable asset deployable on any host. Developers can take
the static image layers from one project and use them in another. This saves the user time
because they don't have to recreate an image from scratch.

 Scalability. By spinning up several instances of containers, Docker images facilitate easy
horizontal application scaling. With the use of orchestration and management options such
as Docker Swarm or Kubernetes, organizations can automate load balancing and scaling in
response to demand.

Docker container vs. Docker image

Docker containers and Docker images are both fundamental Docker concepts, each with its own
characteristics. The main differences between a Docker container and a Docker
image include the following.

Docker container

 A Docker container is a virtualized runtime environment used in application development.

 It's used to create, run and deploy applications that are isolated from the underlying
hardware.

 Docker containers running on one machine share its OS kernel and virtualize at the
operating-system level to run isolated processes. As a result, Docker containers are lightweight.

 Docker containers can be scaled rapidly to meet the demands of a changing workload. This
makes them suitable for microservices architectures and cloud-native applications.
Docker image

 A Docker image is similar to a snapshot in other types of VM environments. It's a record of a
Docker container at a specific point in time.

 Docker images are also immutable. While they can't be changed, they can be duplicated,
shared or deleted. This feature is useful for testing new software or configurations because
whatever happens, the image remains unchanged.

 Containers are dependent on Docker images and need a runnable image to exist, because
images are used to construct runtime environments and are needed to run an application.

 Docker images are created with the build command and are housed in a Docker registry.
Because of their layered structure where multiple layers of images are built upon one
another, they require minimal data transfer across networks.

Docker Installation

Step-By-Step Docker Installation on Windows


1. Go to https://docs.docker.com/docker-for-windows/install/ and download the Docker
Desktop installer.
Note: A 64-bit processor and 4 GB of system RAM are the hardware prerequisites required to
successfully run Docker on Windows 10.
2. Then, double-click on the Docker Desktop Installer.exe to run the installer.
Note: Suppose the installer (Docker Desktop Installer.exe) is not downloaded; you can get it
from Docker Hub and run it whenever required.
3. When the installation starts, make sure the option to enable the Hyper-V Windows feature is
selected on the Configuration page.
4. Then, follow the installation process to allow the installer and wait till the process is done.
5. After completion of the installation process, click Close and restart.

Working with Docker Containers


A container is an instance of an image. You can run containers from an image with the docker
run command.
 Start a New Container:

docker run <image_name>

Example:

docker run -it ubuntu bash

This will run the ubuntu image and start an interactive terminal (-it flag) with a bash shell.

 Run a Container in Detached Mode: To run a container in the background (detached
mode):

docker run -d <image_name>

Example:

docker run -d nginx

 Map Ports: If the container needs to interact with the host, you can map ports:

docker run -p <host_port>:<container_port> <image_name>

Example:

docker run -p 8080:80 nginx

 Mount Volumes: To persist data or share files between your host and a container:

docker run -v <host_path>:<container_path> <image_name>

Example:

docker run -v /host/data:/container/data ubuntu
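These options can be combined in a single docker run command. A minimal sketch, assuming an example host path and container name:

# Run nginx in the background, name it, publish port 8080 and bind-mount a host directory
docker run -d --name web -p 8080:80 -v /host/site:/usr/share/nginx/html nginx

Requests to http://localhost:8080 on the host are then forwarded to port 80 inside the container, and files placed in /host/site appear in nginx's web root.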

4. Managing Containers

 View Running Containers:

docker ps

 View All Containers (Including Stopped Ones):

docker ps -a

 Stop a Running Container:

docker stop <container_id>


 Start a Stopped Container:

docker start <container_id>

 Remove a Container:

docker rm <container_id>

 Remove a Stopped Container Automatically: You can run a container and
automatically remove it after it exits using the --rm flag:

docker run --rm <image_name>
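A typical container lifecycle using these commands might look like this sketch (the container name app is illustrative):

docker run -d --name app nginx   # create and start a container
docker ps                        # the container shows up as running
docker stop app                  # stop it
docker start app                 # start it again
docker rm -f app                 # force-remove it (stops and deletes)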

5. Interacting with Containers

 Execute a Command in a Running Container: If you want to execute a command
(like bash) in a running container:

docker exec -it <container_id> <command>

Example:

docker exec -it <container_id> bash

 Attach to a Running Container: This allows you to interact with the container’s
primary process:

docker attach <container_id>

6. Docker Networking

 View Docker Networks:

docker network ls

 Create a Custom Network:

docker network create <network_name>

 Connect a Container to a Network:

docker network connect <network_name> <container_id>

 Disconnect a Container from a Network:

docker network disconnect <network_name> <container_id>
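As a sketch of how these commands fit together, the following creates a user-defined network and lets one container reach another by name (the container and network names are assumptions):

docker network create mynet
docker run -d --name web --network mynet nginx
# Containers on the same user-defined network can resolve each other by name
docker run --rm --network mynet alpine ping -c 2 web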

7. Docker Volumes
Volumes are used to persist data that is generated by and used by Docker containers.

 Create a Volume:

docker volume create <volume_name>

 List Volumes:

docker volume ls

 Inspect a Volume:

docker volume inspect <volume_name>

 Remove a Volume:

docker volume rm <volume_name>
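A short sketch showing that data written to a named volume survives the container that wrote it (the volume and file names are illustrative):

docker volume create mydata
# Write a file into the volume from a throwaway container
docker run --rm -v mydata:/data alpine sh -c "echo hello > /data/greeting.txt"
# Read it back from a different container
docker run --rm -v mydata:/data alpine cat /data/greeting.txt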

8. Docker Compose

Docker Compose is a tool for defining and running multi-container Docker applications. It uses
a docker-compose.yml file to configure application services.

 Install Docker Compose (if not already installed): see the official Docker Compose installation guide.

 Create a docker-compose.yml file: Here's a basic example:

version: '3'
services:
  web:
    image: nginx
    ports:
      - "8080:80"
  app:
    image: myapp
    build: ./app

 Start Services with Docker Compose:

docker-compose up

 Stop Services with Docker Compose:

docker-compose down

9. Logging and Monitoring

 View Logs of a Running Container:

docker logs <container_id>

 Follow Logs in Real-Time:

docker logs -f <container_id>


 View Resource Usage:

docker stats
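For example, to follow only the most recent log lines of a container and take a one-off snapshot of resource usage (the container ID is a placeholder):

# Show the last 50 log lines and keep following new output
docker logs --tail 50 -f <container_id>
# Print a single snapshot of CPU, memory and network usage instead of a live stream
docker stats --no-stream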

10. Dockerfile

A Dockerfile is a script containing instructions to build a Docker image. Here's an example:

# Use a base image
FROM ubuntu:latest
# Install dependencies
RUN apt-get update && apt-get install -y python3
# Set the working directory
WORKDIR /app
# Copy the application code
COPY . /app
# Run the application
CMD ["python3", "app.py"]

To build an image from this Dockerfile, you can use:

docker build -t myapp .

11. Cleaning Up

 Remove a Specific Docker Image:

docker rmi <image_name>

 Remove Unused Containers:

docker container prune

 Remove Unused Volumes:

docker volume prune

Docker Engine
Docker Engine is the core technology behind building, shipping, and running container applications.
It does its work in a client-server model, relying on several components and services to carry out
these operations.

When people refer to "Docker," they are usually referring either to Docker Engine itself or to Docker
Inc., the company that provides several versions of containerization technology based on Docker
Engine.
Components Of Docker Engine
Docker Engine is an open-source technology that includes a server running a background process called
`dockerd`, a REST API, and a command-line interface (CLI) known as `docker`. The engine works as
follows: the server-side daemon manages images, containers, networks, and storage volumes, and users
interact with this daemon through the CLI, which talks to the daemon over the API, or by calling the
API directly.

An essential aspect of Docker Engine is its declarative nature. Administrators describe a desired
state for the system, and Docker Engine automatically works to keep the actual state aligned with
the desired state at all times.
Docker Engine Architecture
Docker's client-server setup streamlines the management of images, containers, networks, and
volumes, which makes developing and moving workloads easier. As more businesses adopt Docker for
its efficiency and scalability, understanding the engine's components, usage, and benefits is key to
using container technology properly.
 Docker Daemon: The Docker daemon, called dockerd, is essential. It manages and runs Docker
containers and handles their creation. It acts as a server in Docker's setup, receiving requests and
commands from other components.
 Docker Client: Users communicate with Docker through the CLI client (docker). This client talks
to the Docker daemon using Docker APIs, allowing for direct command-line interaction or
scripting. This flexibility enables diverse operational approaches.
 Docker Images and Containers: At Docker's core, you find images and containers. Images act as
unchanging blueprints. Containers are created from these blueprints. Containers provide the
surroundings needed to run apps.
 Docker Registries: These are places where Docker images live and get shared. Registries are vital.
They enable reusability and spreading of containers.
 Networking and Volumes: Docker has networking capabilities. They control how containers talk
to one another and the host system. Volumes in Docker allow data storage across containers. This
enhances data handling within Docker.
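The client-server split is easy to see from the CLI itself; for example:

# Prints separate Client and Server (Engine) sections, showing the CLI talking to dockerd
docker version
# Summarises the daemon's state: number of containers, images, storage driver, and so on
docker info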

Creating Containers with an Image:

To create Docker containers from a Docker image, we first need an image. We can pull the
required image from Docker Hub or build a custom image using a Dockerfile. Once the required
image is available, follow these steps:

Step 1: List all the docker images that are available locally. Enter the following command
to do this:

Command
docker images

Step 2: Copy the image ID of the target image that we want to containerize. The image ID is a
unique ID of any docker image. Let's say we want to create a container from the ubuntu image
with the latest tag; we copy its image ID, which in this example is 3b418d7b466a.

Step 3: The third and last step is to start a container from our target image, using the docker
run command. Below is the syntax of the command:
Command
docker run <options> <image_ID>

The <options> for the run command are explained in Docker's documentation; you can check them
out here: https://docs.docker.com/engine/reference/run
Command
docker run -it 3b418d7b466a

This creates a container from the image and starts it with an interactive terminal.

Example of Running a Nodejs Web App Docker Image


In this example, we will see a NodeJS docker image getting containerized by the docker run
command. Here are the steps to do it:

Step 1: List all the docker images that are available locally. Enter the following command to do
this:

docker images

Step 2: In this example, we want to run the first docker image, which is a NodeJS application:
kartikkala/mirror_website. We will copy its image ID and run it with the necessary
volume mounted and port 8080 mapped to the host PC. We map port 8080 because the NodeJS app
is programmed to listen on port 8080. To find out which port you need to map to the host PC,
refer to the application's documentation on port configuration.

Command:
docker run -p 8080:8080 -d --mount type=bind,src=$(pwd)/../volume1,dst=/downloadables
kartikkala/mirror_website:beta

As you can see, there are a few things going on here:

First, port 8080 of the container is exposed on port 8080 of the host machine with the -p flag.
Second, the volume1 directory is bind-mounted as a volume on the /downloadables folder, where
volume1 belongs to the host machine and /downloadables is inside the container. As a result, all
changes inside the /downloadables folder are reflected directly in the volume1 folder.
Step 3: Now we open a browser and visit localhost:8080 (or any other IP address associated
with our local machine).
Output:
Our web app is up and running on port 8080.

Build, Name and Tag the Docker Container Images


The following commands are used to build, name, and tag Docker container images:

Build Docker Container Image


The following command is used for building the docker image from a Dockerfile in the current
directory:
docker build -t my_image_name .
Name and Tag the Docker Container Image
The following command is used for naming and tagging the Docker container image:
docker build -t my_image_name:tag .
Example

docker build -t my_web_app:v1 .


Verify the Docker Image
The following command is used to verify that the Docker image was built:
docker images
Running And Viewing Docker Containers
The following are the commands used for running and viewing the Docker Containers:
1. Run a Docker Container
The following is the command used for running a docker container from an Image:

docker run [OPTIONS] IMAGE [COMMAND] [ARG...]


Example:
docker run -d -p 8080:80 my_web_app
2. View Running Containers
The following command is used to view running Docker containers:
docker ps
3. View All Docker Containers
The following command is used to view all Docker containers:
docker ps -a
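Putting these together, a minimal build-and-run workflow might look like this sketch (the image and container names are examples):

docker build -t my_web_app:v1 .
docker run -d -p 8080:80 --name web my_web_app:v1
docker ps        # the new container should appear in the list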
Working with Images

1. Listing Docker Images


To list Docker Images in your local Docker repository, you can use this command.

sudo docker images

2. Listing the Images by their names and tags

If you want to find images with a specific name, you can use the following command.
sudo docker images <image-name>

3. Listing images with full-length IDs


Usually when you list Docker Images, only partial Image IDs (the first 12 characters) are displayed.
To display Docker Images with full-length Image IDs, use the --no-trunc flag.

sudo docker images --no-trunc

4. Using filters to list Images


You can use the --filter option along with the list command to filter out only the desired images.

For example, we will filter out only Ubuntu images below.

sudo docker images --filter=reference='ubuntu'

5. Pulling Docker Images with specific tags

To pull Docker Images with specific tags or versions, you can use the following command.

sudo docker pull <image-name>:<tag-name>
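For instance, pulling a specific tagged version of an image and then listing it might look like this (the tag shown is just an example):

sudo docker pull ubuntu:22.04
sudo docker images ubuntu   # list only images from the ubuntu repository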

Docker Compose

Docker Compose is a tool specifically designed to simplify the management of multi-container
Docker applications. It uses a YAML file that describes the services, networks, and
volumes an application requires.

Basically, through the docker-compose.yml file, we define the configuration for each container:
build context, environment variables, ports to be exposed, and the relationships between services.
All the defined services can then be started with a single command, docker-compose up, which
ensures they work together as configured.
Key Concepts of Docker Compose

Docker Compose introduces several essential concepts that are necessary to understand and be
able to use the tool effectively. These consist of the architecture of a Docker Compose file
written in YAML, services, networks, volumes, and environment variables. Let’s discuss each of
these concepts.

Docker Compose File Mechanism (YAML)

Ordinarily, the Docker Compose file is a docker-compose.yml file written in YAML. The file
describes the configuration your application requires in terms of services, networks, and
volumes, and serves as a guide for spinning up the environment the application will run in.
Understanding the structure of this file is crucial for effectively using Docker Compose.

Key Elements of YAML File


 Version − Defines the format of the Docker Compose file, ensuring
compatibility with different Docker Compose features.
 Services − Lists all the services (containers) composing the application. Each
service is described with various configuration options.
 Networks − Specifies custom networks for inter-container communication, and may
include configuration options and network drivers.
 Volumes − Declares shared volumes used for persistent storage. Volumes
can be shared between services or used to store data outside the container's lifecycle.
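A minimal docker-compose.yml illustrating these elements might look like the following sketch (the service, network, volume, and environment-variable names are assumptions):

version: "3.8"
services:
  web:
    image: nginx
    ports:
      - "8080:80"
    environment:
      - APP_ENV=production
    networks:
      - app-net
    volumes:
      - web-data:/usr/share/nginx/html
networks:
  app-net:
volumes:
  web-data:

Running docker-compose up -d in the directory containing this file creates the network, the volume, and the web service in one step.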

Docker Hub:

Docker Hub is a service provided by Docker for finding and sharing container images.

It's the world’s largest repository of container images with an array of content sources including
container community developers, open source projects, and independent software vendors (ISV)
building and distributing their code in containers.

Docker Hub is also where you can go to carry out administrative tasks for organizations. If you
have a Docker Team or Business subscription, you can also carry out administrative tasks in the
Docker Admin Console.

Key features included in Docker Hub:

Repositories: Push and pull container images.

Builds: Automatically build container images from GitHub and Bitbucket and push them to
Docker Hub.

Webhooks: Trigger actions after a successful push to a repository to integrate Docker Hub with
other services.
Docker Trusted Registry (DTR)

Docker Trusted Registry (DTR) is the enterprise-grade image storage solution from Docker. You
install it behind your firewall so that you can securely store and manage the Docker images you
use in your applications.

Docker Trusted Registry, or simply Docker Registry, is an enterprise offering from Docker. The most
common terminology you will hear with Docker Enterprise Edition is DTR and UCP
(Universal Control Plane).

In order for DTR to work, UCP has to be installed, and for UCP to be installed you need
Docker Enterprise Edition. Once you install Docker EE you can get a free license from
Docker Hub.

DTR Features:

Image and job management

DTR can be installed on any platform where you can store your Docker images securely, behind
your firewall. DTR has a user interface that allows authorized users in your organization to
browse Docker images and review repository events. It even allows you to see what Dockerfile
lines were used to produce the image and, if security scanning is enabled, to see a list of all of the
software installed in your images.

Availability
DTR is highly available, as it keeps multiple replicas of its containers in case anything fails.
Efficiency
DTR can clean up unreferenced manifests and cache images for faster pulling.
Built-in access control
DTR provides authentication mechanisms such as RBAC and LDAP sync. It uses the same
authentication as UCP.
Security scanning
Image scanning is a built-in feature provided out of the box by DTR.
Image signing
DTR has a built-in Notary, so you can use Docker Content Trust to sign and verify images.
Docker Swarm:

Docker Swarm is Docker's native container orchestration tool. A swarm is a group of machines
running Docker that have been configured to join together in a cluster. The activities of the
cluster are controlled by a swarm manager, and machines that have joined the cluster are referred
to as nodes.

Key Features:

 A Docker Swarm is a group of either physical or virtual machines that are running the Docker application
and that have been configured to join together in a cluster.
 The activities of the cluster are controlled by a swarm manager, and machines that have joined the cluster
are referred to as nodes.
 One of the key benefits associated with the operation of a docker swarm is the high level of availability
offered for applications.
 Docker Swarm lets you connect containers to multiple hosts similar to Kubernetes.
 Docker Swarm has two types of services: replicated and global.
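A minimal sketch of these ideas using the standard Swarm commands (the service name and replica count are illustrative):

# Turn the current Docker host into a swarm manager
docker swarm init
# Run three replicas of nginx as a swarm service, published on port 8080
docker service create --name web --replicas 3 -p 8080:80 nginx
# Inspect the nodes and services in the cluster
docker node ls
docker service ls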

Docker attach

The docker attach command is used to attach your terminal to a running container. This
allows you to interact directly with the container's main process (usually the process that was
started by the CMD or ENTRYPOINT in the Dockerfile). It essentially connects your terminal to
the standard input (stdin), output (stdout), and error (stderr) streams of the running container.

When to Use docker attach:

You typically use docker attach when you want to interact with a running container's primary
process, like a web server, a command-line process, or a running application.

Syntax:

docker attach <container_id_or_name>

Example Usage:

Run a container in the background:

docker run -d --name my_nginx nginx

Attach to the running container:

docker attach my_nginx


This will bring up the terminal attached to the container's standard output (stdout) and standard
error (stderr), allowing you to see logs or interact with the container if its process is interactive
(like a shell).

Detaching from the Container:

When you attach to a container, you may want to detach and return to your host terminal without
stopping the container. You can do this by pressing:

Ctrl + C — This will stop the container (if the main process supports termination via SIGINT).

Ctrl + P, Ctrl + Q — This will detach from the container without stopping it.

Key Considerations:

docker attach connects to the primary process (the one specified by the
Dockerfile’s CMD or ENTRYPOINT), so if the container is running something like a web
server, you'll see its output. If it's running an interactive application (like bash), you can interact
with it as though you're inside the container.

If you attach to a container running in detached mode (-d), you'll see the output of the main
process in the terminal. However, it won't create a new terminal session or allow you to run
additional commands directly unless the main process allows interaction.

Limitation of docker attach:

docker attach only allows interaction with the main process of the container. If the container
runs multiple processes or has additional background tasks, you won't be able to interact with
those unless they're specifically configured to interact with the terminal. For example, if you're
running an application that’s logging to stdout or stderr, docker attach will show you those logs.

If you want to run commands or interact with a container after it has started, and you're not just
interested in its primary process, you may find it more useful to use docker exec instead.

docker exec vs. docker attach:

docker exec: Executes a new command inside the container (like bash or sh), and you get a new
interactive shell. This is useful if you want to run commands in a running container without
attaching to its main process.

Example:

docker exec -it <container_id_or_name> bash


docker attach: Attaches to the main process of the container, allowing you to interact directly
with its standard input/output.

Example Workflow:

Start a container in the background with an interactive process:

docker run -d --name my-container ubuntu sleep 1000

Attach to the container:

docker attach my-container

You’ll be connected to the sleep command’s standard output, which won’t show much since it’s
just sleeping, but you could interact with the process if it were running an interactive shell.

Detach from the container without stopping it by pressing Ctrl + P, Ctrl + Q.

Dockerfile

A Dockerfile is a script that the Docker platform uses to build images automatically. It is
essentially a text document that contains all the instructions a user may use to create an
image from the command line. The Docker platform is a Linux-based platform that allows
developers to create and execute containers: self-contained programs and systems that are
independent of the underlying infrastructure. Docker, which is based on the
Linux kernel's resource isolation capabilities, allows developers and system administrators to
transfer programs across multiple systems and machines by executing them within containers.

Syntax and format for writing a Dockerfile


1. FROM
A FROM statement defines which image to download and start from. It must be the first
command in your Dockerfile. A Dockerfile can have multiple FROM statements, which is how
multi-stage builds are written and can also produce more than one image.
Example:
FROM java:8

2. MAINTAINER
This statement is a kind of documentation that records the author of the Dockerfile, i.e. whom
to contact if it has bugs. (It is now deprecated in favor of the LABEL instruction.)
Example:
MAINTAINER Firstname Lastname <example@geeksforgeeks.com>

3. RUN
The RUN statement runs a command through the shell at build time, waits for it to finish, and
saves the result as a new image layer.
Example:
RUN unzip install.zip /opt/install
RUN echo hello

4. ADD
The ADD statement is used to add files to the image. It instructs Docker to copy
new files, directories, or remote file URLs and add them to the filesystem of the image.
In short, it can add local files, the contents of tar archives, as well as URLs.
Example:
Local Files: ADD run.sh /run.sh
Tar Archives: ADD project.tar.gz /install/
URLs: ADD https://project.example-gfg.com/downloads/1.0/testingproject.rpm/test

5. ENV
The ENV statement sets environment variables, both during the build and when running the
result. They can be used in the Dockerfile and in any scripts it calls. These variables persist
in the container and can be referred to at any moment.
Example:
ENV URL_POST=production.example-gfg.com

6. ENTRYPOINT
ENTRYPOINT specifies the beginning of the command to run when the container starts.
If your container acts as a command-line program, you can use ENTRYPOINT.
Example:
ENTRYPOINT ["/start.sh"]

7. CMD
CMD specifies the whole command to run. We can say CMD is the default argument passed
into the ENTRYPOINT. The main purpose of the CMD command is to launch the software
required in a container.
Example:
CMD ["program-foreground"]
CMD ["executable", "program1", "program2"]

Docker Commands

Docker is a platform that enables the creation, deployment, and running of applications with
the help of containers. A container is a unit of software that packages the code and all its
dependencies together so that the application becomes runnable irrespective of the
environment.
The container isolates the application and its dependencies into a self-contained unit that can
run anywhere. Containers remove the need to dedicate physical hardware to each workload,
allowing for more efficient use of computing resources. Containers provide
operating-system-level virtualization.
Additionally, using Docker commands, developers can easily manage these containers,
enhancing their productivity and workflow efficiency.

Let's understand a few of these commands along with their usage in detail. The following
are the most used basic Docker commands for beginners and experienced Docker
professionals.

Top 15 Basic Docker Commands

1. docker --version

This command is used to get the currently installed version of Docker.

Syntax:

docker --version

By default, this will render all version information in an easy-to-read layout.

2. docker pull

Pull an image or a repository from a registry

Syntax:

docker pull [OPTIONS] NAME[:TAG|@DIGEST]

To download an image or a set of images (i.e., a repository), one can use the docker pull
command.

Example:

$ docker pull ubuntu


3. docker run

This command is used to create a container from an image

Syntax:

docker run [OPTIONS] IMAGE [COMMAND] [ARG...]

The docker run command creates a writeable container layer over the specified image and
then starts it using the specified command.

The docker run command can be used with many variations, One can refer to the following
documentation docker run.

4. docker ps

This command is used to list all the containers

Syntax:

docker ps [OPTIONS]

The above command can be used with other options like --all or -a.

docker ps -a: Lists all containers, including stopped ones

Example:

$ docker ps

$ docker ps -a

5. docker exec

This command is used to run a command in a running container

Syntax
docker exec [OPTIONS] CONTAINER COMMAND [ARG...]

Docker exec command runs a new command in a running container.

Refer to the following article for more detail regarding the usage of the docker exec
command docker exec.

6. docker stop

This command is used to stop one or more running containers.

Syntax:

docker stop [OPTIONS] CONTAINER [CONTAINER...]

The main process inside the container will receive SIGTERM, and after a grace period,
SIGKILL. The first signal can be changed with the STOPSIGNAL instruction in the
container’s Dockerfile, or the --stop-signal option to docker run.

Example:

$ docker stop my_container

7. docker restart

This command is used to restart one or more containers.

Syntax: docker restart [OPTIONS] CONTAINER [CONTAINER...]

Example:

$ docker restart my_container

8. docker kill

This command is used to kill one or more containers.

Syntax: docker kill [OPTIONS] CONTAINER [CONTAINER...]


Example:

$ docker kill my_container

9. docker commit

This command is used to create a new image from a container's changes.

Syntax: docker commit [OPTIONS] CONTAINER [REPOSITORY[:TAG]]

Docker commit command allows users to take an existing running container and save its
current state as an image

There are certain steps to be followed before running the command:

 First, pull the image from Docker Hub.
 Deploy a container using the image ID from the first step.
 Modify the container (make any changes, if needed).
 Commit the changes.
Example:

$ docker commit c3f279d17e0a dev/testimage:version3

10. docker push

This docker command is used to push an image or repository to a registry.

Syntax: docker push [OPTIONS] NAME[:TAG]

Use docker image push to share your images to the Docker Hub registry or to a self-hosted
one.

Example:

$ docker image push registry-host:5000/myadmin/rhel-httpd:latest


Apart from the above commands, there are other commands whose details can be
found in the Docker reference documentation.

11. docker rm

This command is used to remove one or more Docker containers. We can use options such as
-f, i.e. force removal of a running container (which internally uses SIGKILL), or -v, which
removes any anonymous volumes associated with the container.

Syntax: docker rm [OPTIONS] CONTAINER [CONTAINER...]

Example:

docker rm container1

Removing multiple containers:

docker rm container1 container2 container3

Removing with -v and -f options:

docker rm -v container1


docker rm -f running_container

12. docker rmi

This command is used to remove one or more Docker images from the system. We can use
some common options such as -f for force removal of an image or --no-prune for
not deleting untagged parent images.

Syntax:

docker rmi [OPTIONS] IMAGE

 Remove a single image:


docker rmi my_image:tag

 Remove multiple images:

docker rmi image1:tag image2:tag image3:tag

 Force remove an image:

docker rmi -f my_image:tag

 Removing image without deleting untagged images:

docker rmi --no-prune my_image:tag

13. docker push

This command is used to upload the docker image to a Docker registry such as Docker Hub
or a private registry.

Syntax: docker push [OPTIONS] NAME[:TAG]

Example:

Command: docker push myusername/myrepository:latest

14. docker login

This command is used to log in to the Docker registry such as Docker Hub, a private registry,
or any other third-party registry. We can use some common options such as -u for the
username of the registry, -p for the password of the registry.

Syntax: docker login [OPTIONS] [SERVER]

Examples:

 Login to Docker Hub: docker login

Command: docker login


 Login with username and password:

Command: docker login -u myusername -p mypassword

Note: In case we need to log in to other Docker registries:

Command: docker login myregistry.com

15. docker start

This command is used to start one or more stopped containers. We can use common
options such as -a to attach STDOUT/STDERR and forward signals, or the -i option
for interactive mode, where the container's STDIN is attached.

Syntax: docker start [OPTIONS] CONTAINER [CONTAINER...]

Example:

1. Starting Single Container:

Command: docker start container1
