Unit V


Define Docker and explain its key purpose in software development.

Docker is an open-source project that automates the deployment of software applications inside containers by providing an additional layer of abstraction and automation of OS-level virtualization on Linux.

Docker is a tool that allows developers, system administrators, and others to easily deploy their applications in a sandbox (called a container) that runs on the host operating system, i.e. Linux. The key benefit of Docker is that it allows users to package an application with all of its dependencies into a standardized unit for software development. Unlike virtual machines, containers do not carry high overhead and hence enable more efficient use of the underlying system and its resources.

Containers offer a logical packaging mechanism in which applications can be abstracted from
the environment in which they actually run. This decoupling allows container-based
applications to be deployed easily and consistently, regardless of whether the target
environment is a private data center, the public cloud, or even a developer’s personal laptop.
This gives developers the ability to create predictable environments that are isolated from the
rest of the applications and can be run anywhere.

The docker ps command shows all containers that are currently running.

$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS
PORTS NAMES

Since no containers are running, we see only the header row. Let's try a more useful variant: docker ps -a

$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS
PORTS NAMES
305297d7a235 busybox "uptime" 11 minutes ago Exited (0) 11 minutes
ago distracted_goldstine
ff0a5c3750b9 busybox "sh" 12 minutes ago Exited (0) 12 minutes
ago elated_ramanujan
14e5bd11d164 hello-world "/hello" 2 minutes ago Exited (0) 2 minutes
ago thirsty_euclid

So what we see above is a list of all containers that we ran. The STATUS column shows that
these containers exited a few minutes ago.
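The exited containers listed above can be reproduced by running short-lived commands. A minimal sketch, assuming Docker is installed locally (busybox and hello-world are real images on Docker Hub):

docker run busybox uptime     # run a one-off command; the container exits when it finishes
docker run hello-world        # prints a greeting and exits
docker ps -a                  # list all containers, including exited ones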

Terminology

- Images - The blueprints of our application, which form the basis of containers.
- Containers - Created from Docker images; they run the actual application.
- Docker Daemon - The background service running on the host that manages building, running, and distributing Docker containers. The daemon is the process that runs in the operating system and that clients talk to.
- Docker Client - The command-line tool that allows the user to interact with the daemon. More generally, there can be other forms of clients too, such as Kitematic, which provides a GUI to users.
- Docker Hub - A registry of Docker images. We can think of the registry as a directory of all available Docker images.
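These pieces fit together on the command line. A hedged sketch of the flow, assuming Docker is installed (busybox is a real image on Docker Hub):

docker pull busybox                              # the client asks the daemon to pull an image from the registry (Docker Hub)
docker run busybox echo "hello from a container" # the daemon creates and runs a container from that image
docker images                                    # list images stored locally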

A Dockerfile is a simple text file that contains a list of commands that the Docker client calls
while creating an image. It's a simple way to automate the image creation process.

client/Dockerfile

FROM mhart/alpine-node

WORKDIR /usr/src/app

COPY package* .

RUN npm install

COPY . .

EXPOSE 3000

CMD [ "npm", "start" ]

This Dockerfile is a basic setup for creating a Docker image for a Node.js application using
the mhart/alpine-node base image, which is a minimal Alpine Linux image with Node.js pre-
installed.
1. FROM mhart/alpine-node: This sets the base image for our Docker image. In this
case, it's using the mhart/alpine-node image, which is a lightweight Alpine Linux
image with Node.js installed.
2. WORKDIR /usr/src/app: This sets the working directory within the container where
subsequent commands will be executed. In this case, it's set to /usr/src/app.
3. COPY package* .: This copies the package.json and package-lock.json (if present)
files from our host machine's current directory into the container's /usr/src/app
directory.
4. RUN npm install: This runs the npm install command inside the container to install
dependencies specified in package.json.
5. COPY . .: This copies the rest of our application code from our host machine's current
directory into the container's /usr/src/app directory. This assumes that our application
code is in the same directory as our Dockerfile.
6. EXPOSE 3000: This documents that the container listens on port 3000. On its own, EXPOSE does not publish the port; a service running inside the container becomes reachable from outside only when the port is published, e.g. with docker run -p or a ports mapping in docker-compose.yml.
7. CMD [ "npm", "start" ]: This sets the default command to run when the container
starts. In this case, it runs npm start, assuming that our package.json has a start script
defined.
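A Dockerfile like this is turned into an image and run with the Docker CLI. A minimal sketch (the tag my-client is an arbitrary name chosen here for illustration):

docker build -t my-client ./client   # build an image from client/Dockerfile
docker run -p 3000:3000 my-client    # run it, publishing container port 3000 on host port 3000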

Overall, this Dockerfile sets up an environment for a Node.js application, installs dependencies, exposes port 3000, and starts the application using npm start.

server/Dockerfile
FROM mhart/alpine-node

# Set the working directory in the container
WORKDIR /usr/src/app

# Copy package.json and package-lock.json to the working directory
COPY package*.json ./

# Install app dependencies
RUN npm install

# Bundle app source
COPY . .

# Expose the port our app runs on
EXPOSE 5000

# Define the command to run our app
CMD ["node", "index.js"]

This Dockerfile is for building a Docker image for a Node.js application using the
mhart/alpine-node base image.
1. FROM mhart/alpine-node: Sets the base image for the Docker image. In this case, it's
using mhart/alpine-node, which is a lightweight Alpine Linux image with Node.js
installed.
2. WORKDIR /usr/src/app: Sets the working directory within the container where
subsequent commands will be executed. In this case, it's set to /usr/src/app.
3. COPY package*.json ./: Copies the package.json and package-lock.json (if it exists)
from the host machine's current directory into the container's /usr/src/app directory.
4. RUN npm install: Runs npm install inside the container to install dependencies
specified in package.json.
5. COPY . .: Copies the rest of our application code from the host machine's current
directory into the container's /usr/src/app directory. This assumes that our application
code is in the same directory as our Dockerfile.
6. EXPOSE 5000: Documents that the container listens on port 5000. By itself, EXPOSE does not publish the port; the service becomes reachable from outside the container only when the port is published, e.g. via docker run -p or a ports mapping in docker-compose.yml.
7. CMD ["node", "index.js"]: Defines the default command to run when the container
starts. In this case, it runs node index.js, assuming that our main entry point file is
named index.js.

Overall, this Dockerfile sets up an environment for a Node.js application, installs dependencies, exposes port 5000, and starts the application using node index.js.
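Because COPY . . copies everything in the build context into the image, it is common to place a .dockerignore file next to each Dockerfile so that host artifacts such as node_modules are excluded (npm install inside the container rebuilds them). A typical sketch:

node_modules
npm-debug.log
.git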

docker-compose.yml

version: '3.8'
services:
  frontend:
    build:
      context: ./client
    ports:
      - "3000:3000"
    networks:
      - my-network
    depends_on:
      - backend

  backend:
    build: ./server
    ports:
      - "5000:5000"
    networks:
      - my-network

networks:
  my-network:
    driver: bridge

This is a docker-compose.yml file that defines two services: frontend and backend, along
with a custom network called my-network.

1. version: Specifies the Compose file format version being used. In this case, it's version 3.8.
2. services:
   - frontend:
     - build: Specifies the build context for the frontend service. It tells Docker to look for a Dockerfile in the ./client directory.
     - ports: Maps port 3000 in the container to port 3000 on the host machine, allowing access to the frontend application.
     - networks: Attaches the service to the my-network network, allowing communication between services.
     - depends_on: Specifies that the frontend service depends on the backend service.
   - backend:
     - build: Specifies the build context for the backend service. It tells Docker to look for a Dockerfile in the ./server directory.
     - ports: Maps port 5000 in the container to port 5000 on the host machine, allowing access to the backend application.
     - networks: Attaches the service to the my-network network, allowing communication between services.
3. networks:
   - my-network:
     - driver: Specifies the network driver to use. In this case, it's set to bridge, which is the default network driver for Docker.

This docker-compose.yml file defines a setup where we have a frontend and a backend service, each running in its own container, and both containers are connected to the same network (my-network). The frontend service is accessible via port 3000 on the host machine, while the backend service is accessible via port 5000. The frontend service depends on the backend service, ensuring that the backend container is started before the frontend (note that depends_on controls start order only; it does not wait for the backend to be ready to accept connections).

To try this out, clone the repository locally. Before we start, we need to make sure the required ports and container names are free. Then navigate to the directory and run docker-compose up -d to start the stack in detached mode.

We can see that Compose went ahead and created a new network and attached both services to that network so that each of them is discoverable by the other. Each container for a service joins this network and is both reachable by other containers on the network and discoverable by them at a hostname identical to the container name.
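Service discovery on the Compose network can be verified from inside one of the containers. A hedged sketch, assuming the stack is up and the backend responds on port 5000 (the names frontend and backend come from the docker-compose.yml above; wget is available in Alpine-based images via BusyBox):

docker-compose ps                                          # confirm both services are running and see port mappings
docker-compose exec frontend wget -qO- http://backend:5000 # reach the backend from the frontend by service name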
