From Containers to Kubernetes with Node.js
Kathleen Juell
2020-05
ISBN 978-0-9997730-5-5
1. About DigitalOcean
2. Preface - Getting Started with this Book
3. Introduction
4. How To Build a Node.js Application with Docker
5. How To Integrate MongoDB with Your Node Application
6. Containerizing a Node.js Application for Development With
Docker Compose
7. How To Migrate a Docker Compose Workflow to Kubernetes
8. How To Scale a Node.js Application with MongoDB on
Kubernetes Using Helm
9. How To Secure a Containerized Node.js Application with Nginx,
Let’s Encrypt, and Docker Compose
About DigitalOcean
To work with the examples in this book, we recommend that you have a
local development environment running Ubuntu 18.04. For examples that
model pushing code to production, we recommend that you provision a
remote Ubuntu 18.04 server. This will be important as you begin exploring
how to deploy to production with containers and SSL certificates.
When working with Kubernetes, we also recommend that you have a
local machine or server with the kubectl command line tool installed.
Each chapter of the book will also have clear requirements that pertain
to the instructions it covers.
Introduction
Prerequisites
To follow this tutorial, you will need:

- One Ubuntu 18.04 server, set up following this Initial Server Setup guide.
- Docker installed on your server, following Steps 1 and 2 of How To Install and Use Docker on Ubuntu 18.04.
- Node.js and npm installed, following these instructions on installing with the PPA managed by NodeSource.
- A Docker Hub account. For an overview of how to set this up, refer to this introduction on getting started with Docker Hub.
"name": "nodejs-image-demo",
"version": "1.0.0",
"license": "MIT",
"main": "app.js",
"keywords": [
"nodejs",
"bootstrap",
"express"
],
"dependencies": {
"express": "^4.16.4"
This file includes the project name, author, and license under which it is being shared. npm recommends making your project name short and descriptive, and avoiding duplicates in the npm registry. We've listed the MIT license in the license field, permitting the free use and distribution of the application code.

Additionally, the file specifies:

- "main": The entrypoint for the application, app.js. You will create this file next.
- "dependencies": The project dependencies; in this case, Express version 4.16.4 or a compatible newer release.
Though this file does not list a repository, you can add one by following
these guidelines on adding a repository to your package.json file. This
is a good addition if you are versioning your application.
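For instance, a repository entry generally takes the following shape (the URL here is only a placeholder):
"repository": {
  "type": "git",
  "url": "https://github.com/your_user/nodejs-image-demo.git"
}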
Save and close the file when you’ve finished making changes.
To install your project’s dependencies, run the following command:
npm install
This will install the packages you’ve listed in your package.json
file in your project directory.
We can now move on to building the application files.
The require function loads the express module, which we then use
to create the app and router objects. The router object will perform
the routing function of the application, and as we define HTTP method
routes we will add them to this object to define how our application will
handle requests.
This section of the file also sets a couple of constants, path and port:

- path: Defines the base directory, which will be the views subdirectory within the current project directory.
- port: Tells the app to listen on and bind to port 8080.
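For reference, here is a minimal sketch of how these objects and constants are typically defined at the top of app.js, assuming the views directory and port described above:
const express = require('express');
const app = express();
const router = express.Router();

// Base directory for the application's static pages
const path = __dirname + '/views/';
// Port the application will listen on and bind to
const port = 8080;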
Next, set the routes for the application using the router object:
~/node_project/app.js
...
router.use(function (req,res,next) {
console.log('/' + req.method);
next();
});
router.get('/', function(req,res){
res.sendFile(path + 'index.html');
});
router.get('/sharks', function(req,res){
res.sendFile(path + 'sharks.html');
});
...
app.use(express.static(path));
app.use('/', router);
app.listen(port, function () {
  console.log('Example app listening on port 8080!')
})
The finished app.js file will look like this:
~/node_project/app.js
const express = require('express');
const app = express();
const router = express.Router();

const path = __dirname + '/views/';
const port = 8080;

router.use(function (req,res,next) {
  console.log('/' + req.method);
  next();
});

router.get('/', function(req,res){
  res.sendFile(path + 'index.html');
});

router.get('/sharks', function(req,res){
  res.sendFile(path + 'sharks.html');
});

app.use(express.static(path));
app.use('/', router);

app.listen(port, function () {
  console.log('Example app listening on port 8080!')
})
Save and close the file when you are finished.
Next, let’s add some static content to the application. Start by creating
the views directory:
mkdir views
Open the landing page file, index.html:
nano views/index.html
Add the following code to the file, which will import Bootstrap and create a jumbotron component with a link to the more detailed sharks.html info page:
~/node_project/views/index.html
<!DOCTYPE html>
<html lang="en">

<head>
    <title>About Sharks</title>
    <meta charset="utf-8">
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.1.3/css/bootstrap.min.css" integrity="sha384-MCw98/SFnGE8fJT3GXwEOngsV7Zt27NXFoaoApmYm81iuXoPkFOJwJ8ERdknLPMO" crossorigin="anonymous">
    <link href="https://fonts.googleapis.com/css?family=Merriweather:400,700" rel="stylesheet" type="text/css">
    <link href="css/styles.css" rel="stylesheet">
</head>

<body>
    <nav class="navbar navbar-dark bg-dark navbar-static-top navbar-expand-md">
        <div class="container">
            <button type="button" class="navbar-toggler collapsed" data-toggle="collapse" data-target="#bs-example-navbar-collapse-1" aria-expanded="false"> <span class="sr-only">Toggle navigation</span>
            </button> <a class="navbar-brand" href="#">Everything Sharks</a>
            <div class="collapse navbar-collapse" id="bs-example-navbar-collapse-1">
                <ul class="nav navbar-nav mr-auto">
                    <li class="active nav-item"><a href="/" class="nav-link">Home</a>
                    </li>
                    <li class="nav-item"><a href="/sharks" class="nav-link">Sharks</a>
                    </li>
                </ul>
            </div>
        </div>
    </nav>

    <div class="jumbotron">
        <div class="container">
            <h1>Want to Learn About Sharks?</h1>
            <p>Are you ready to learn about sharks?</p>
            <br>
            <p><a class="btn btn-primary btn-lg" href="/sharks" role="button">Get Shark Info</a>
            </p>
        </div>
    </div>

    <div class="container">
        <div class="row">
            <div class="col-lg-6">
                <h3>Not all sharks are alike</h3>
                <p>Though some are dangerous, sharks generally do not attack humans. Out of the 500 species known to researchers, only 30 have been known to attack humans.
                </p>
            </div>
            <div class="col-lg-6">
                <h3>Sharks are ancient</h3>
                <p>There is evidence to suggest that sharks lived up to 400 million years ago, dating back much further than humans.
                </p>
            </div>
        </div>
    </div>
</body>

</html>
The top-level navbar here allows users to toggle between the Home and
Sharks pages. In the navbar-nav subcomponent, we are using
Bootstrap’s active class to indicate the current page to the user. We’ve
also specified the routes to our static pages, which match the routes we
defined in app.js:
~/node_project/views/index.html
...
<div class="collapse navbar-collapse" id="bs-example-navbar-collapse-1">
    <ul class="nav navbar-nav mr-auto">
        <li class="active nav-item"><a href="/" class="nav-link">Home</a>
        </li>
        <li class="nav-item"><a href="/sharks" class="nav-link">Sharks</a>
        </li>
    </ul>
</div>
...
We have also created a link to our shark information page in our jumbotron's button:
~/node_project/views/index.html
...
<div class="jumbotron">
    <div class="container">
        <h1>Want to Learn About Sharks?</h1>
        <p>Are you ready to learn about sharks?</p>
        <br>
        <p><a class="btn btn-primary btn-lg" href="/sharks" role="button">Get Shark Info</a>
        </p>
    </div>
</div>
...
Next, create the detailed shark information page. Open the file:
nano views/sharks.html
Add the following code, which imports Bootstrap and the custom style sheet and offers users detailed information about certain sharks:
~/node_project/views/sharks.html
<!DOCTYPE html>
<html lang="en">

<head>
    <title>About Sharks</title>
    <meta charset="utf-8">
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.1.3/css/bootstrap.min.css" integrity="sha384-MCw98/SFnGE8fJT3GXwEOngsV7Zt27NXFoaoApmYm81iuXoPkFOJwJ8ERdknLPMO" crossorigin="anonymous">
    <link href="https://fonts.googleapis.com/css?family=Merriweather:400,700" rel="stylesheet" type="text/css">
    <link href="css/styles.css" rel="stylesheet">
</head>

<nav class="navbar navbar-dark bg-dark navbar-static-top navbar-expand-md">
    <div class="container">
        <button type="button" class="navbar-toggler collapsed" data-toggle="collapse" data-target="#bs-example-navbar-collapse-1" aria-expanded="false"> <span class="sr-only">Toggle navigation</span>
        </button> <a class="navbar-brand" href="/">Everything Sharks</a>
        <div class="collapse navbar-collapse" id="bs-example-navbar-collapse-1">
            <ul class="nav navbar-nav mr-auto">
                <li class="nav-item"><a href="/" class="nav-link">Home</a>
                </li>
                <li class="active nav-item"><a href="/sharks" class="nav-link">Sharks</a>
                </li>
            </ul>
        </div>
    </div>
</nav>

<div class="jumbotron text-center">
    <h1>Shark Info</h1>
</div>

<div class="container">
    <div class="row">
        <div class="col-lg-6">
            <p>
                <div class="caption">Some sharks are known to be dangerous to humans, though many more are not. The sawshark, for example, is not considered a threat to humans.
                </div>
                <img src="https://assets.digitalocean.com/articles/docker_node_image/sawshark.jpg" alt="Sawshark">
            </p>
        </div>
        <div class="col-lg-6">
            <p>
                <div class="caption">Other sharks are known to be friendly and welcoming!</div>
                <img src="https://assets.digitalocean.com/articles/docker_node_image/sammy.png" alt="Sammy the Shark">
            </p>
        </div>
    </div>
</div>

</html>
Note that in this file, we again use the active class to indicate the
current page.
Save and close the file when you are finished.
Finally, create the custom CSS style sheet that you’ve linked to in
index.html and sharks.html by first creating a css folder in the
views directory:
mkdir views/css
Open the style sheet:
nano views/css/styles.css
Add the following code, which will set the desired color and font for our
pages:
~/node_project/views/css/styles.css
.navbar {
    margin-bottom: 0;
}

body {
    background: #020A1B;
    color: #ffffff;
    font-family: 'Merriweather', sans-serif;
}

h1,
h2 {
    font-weight: bold;
}

p {
    font-size: 16px;
    color: #ffffff;
}

.jumbotron {
    background: #0048CD;
    color: white;
    text-align: center;
}

.jumbotron p {
    color: white;
    font-size: 26px;
}

.btn-primary {
    color: #fff;
    text-color: #000000;
    border-color: white;
    margin-bottom: 5px;
}

img,
video,
audio {
    margin-top: 20px;
    max-width: 80%;
}

div.caption {
    float: left;
    clear: both;
}
In addition to setting font and color, this file also limits the size of the
images by specifying a max-width of 80%. This will prevent them from
taking up more room than we would like on the page.
Save and close the file when you are finished.
With the application files in place and the project dependencies
installed, you are ready to start the application.
If you followed the initial server setup tutorial in the prerequisites, you
will have an active firewall permitting only SSH traffic. To permit traffic
to port 8080, run:
sudo ufw allow 8080
To start the application, make sure that you are in your project’s root
directory:
cd ~/node_project
Start the application with node app.js:
node app.js
Navigate your browser to http://your_server_ip:8080. You
will see the following landing page:
You now have an application up and running. When you are ready, quit
the server by typing CTRL+C. We can now move on to creating the
Dockerfile that will allow us to recreate and scale this application as
desired.
~/node_project/Dockerfile
FROM node:10-alpine
This image includes Node.js and npm. Each Dockerfile must begin with
a FROM instruction.
By default, the Docker Node image includes a non-root node user that
you can use to avoid running your application container as root. It is a
recommended security practice to avoid running containers as root and to
restrict capabilities within the container to only those required to run its
processes. We will therefore use the node user’s home directory as the
working directory for our application and set them as our user inside the
container. For more information about best practices when working with
the Docker Node image, see this best practices guide.
To fine-tune the permissions on our application code in the container,
let’s create the node_modules subdirectory in /home/node along
with the app directory. Creating these directories will ensure that they
have the permissions we want, which will be important when we create
local node modules in the container with npm install. In addition to
creating these directories, we will set ownership on them to our node user:
~/node_project/Dockerfile
...
RUN mkdir -p /home/node/app/node_modules && chown -R node:node /home/node/app
~/node_project/Dockerfile
...
WORKDIR /home/node/app
If a WORKDIR isn’t set, Docker will create one by default, so it’s a good
idea to set it explicitly.
Next, copy the package.json and package-lock.json (for npm
5+) files:
~/node_project/Dockerfile
...
COPY package*.json ./
To ensure that all of the application files are owned by the non-root node user, including the contents of the node_modules directory, switch the user to node before running npm install:
~/node_project/Dockerfile
...
USER node
After copying the project dependencies and switching our user, we can
run npm install:
~/node_project/Dockerfile
...
RUN npm install
Next, copy your application code with the appropriate permissions to the application directory on the container:
~/node_project/Dockerfile
...
COPY --chown=node:node . .
This will ensure that the application files are owned by the non-root
node user.
Finally, expose port 8080 on the container and start the application:
~/node_project/Dockerfile
...
EXPOSE 8080

CMD [ "node", "app.js" ]
EXPOSE does not publish the port, but instead functions as a way of
documenting which ports on the container will be published at runtime.
CMD runs the command to start the application — in this case, node
app.js. Note that there should only be one CMD instruction in each
Dockerfile. If you include more than one, only the last will take effect.
There are many things you can do with the Dockerfile. For a complete
list of instructions, please refer to Docker’s Dockerfile reference
documentation.
The complete Dockerfile looks like this:
~/node_project/Dockerfile
FROM node:10-alpine

RUN mkdir -p /home/node/app/node_modules && chown -R node:node /home/node/app

WORKDIR /home/node/app

COPY package*.json ./

USER node

RUN npm install

COPY --chown=node:node . .

EXPOSE 8080

CMD [ "node", "app.js" ]
Save and close the file when you are finished editing.
Before building the application image, let’s add a .dockerignore
file. Working in a similar way to a .gitignore file, .dockerignore
specifies which files and directories in your project directory should not
be copied over to your container.
Open the .dockerignore file:
nano .dockerignore
Inside the file, add your local node modules, npm logs, Dockerfile, and
.dockerignore file:
~/node_project/.dockerignore
node_modules
npm-debug.log
Dockerfile
.dockerignore
If you are working with Git then you will also want to add your .git
directory and .gitignore file.
Save and close the file when you are finished.
You are now ready to build the application image using the docker
build command. Using the -t flag with docker build will allow
you to tag the image with a memorable name. Because we are going to
push the image to Docker Hub, let’s include our Docker Hub username in
the tag. We will tag the image as nodejs-image-demo, but feel free to
replace this with a name of your own choosing. Remember to also replace
your_dockerhub_username with your own Docker Hub username:
docker build -t your_dockerhub_username/nodejs-image-demo .
The . specifies that the build context is the current directory.
It will take a minute or two to build the image. Once it is complete,
check your images:
docker images
You will see the following output, with your new image listed along with the node base image:
Output
REPOSITORY                                    TAG
your_dockerhub_username/nodejs-image-demo     latest
node                                          10-alpine
It is now possible to create a container with this image using docker run. This command uses the -p flag to publish port 8080 on the container as port 80 on the host, the -d flag to run the container in the background, and the --name flag to give the container a memorable name:
docker run --name nodejs-image-demo -p 80:8080 -d your_dockerhub_username/nodejs-image-demo
Once your container is up and running, you can inspect it with docker ps:
docker ps
Output
CONTAINER ID    IMAGE                                        PORTS                   NAMES
e50ad27074a7    your_dockerhub_username/nodejs-image-demo    0.0.0.0:80->8080/tcp    nodejs-image-demo
With your container running, you can now visit your application by
navigating your browser to http://your_server_ip. You will see
your application landing page once again:
Now that you have created an image for your application, you can push
it to Docker Hub for future use.
Step 4 — Using a Repository to Work with Images
By pushing your application image to a registry like Docker Hub, you
make it available for subsequent use as you build and scale your
containers. We will demonstrate how this works by pushing the application
image to a repository and then using the image to recreate our container.
The first step to pushing the image is to log in to the Docker Hub
account you created in the prerequisites:
docker login -u your_dockerhub_username
When prompted, enter your Docker Hub account password. Logging in
this way will create a ~/.docker/config.json file in your user’s
home directory with your Docker Hub credentials.
You can now push the application image to Docker Hub using the tag
you created earlier, your_dockerhub_username/nodejs-image-
demo:
docker push your_dockerhub_username/nodejs-image-demo
Let’s test the utility of the image registry by destroying our current
application container and image and rebuilding them with the image in our
repository.
First, list your running containers:
docker ps
You will see the following output:
Output
CONTAINER ID    IMAGE                                        PORTS                   NAMES
e50ad27074a7    your_dockerhub_username/nodejs-image-demo    0.0.0.0:80->8080/tcp    nodejs-image-demo
Stop the running application container using docker stop and the container ID listed above:
docker stop e50ad27074a7
Next, list all of your images with the -a flag. You will see your image, along with the node base image and the dangling intermediate images from your build:
docker images -a
Output
REPOSITORY                                   TAG
your_dockerhub_username/nodejs-image-demo    latest
<none>                                       <none>
<none>                                       <none>
<none>                                       <none>
<none>                                       <none>
<none>                                       <none>
<none>                                       <none>
<none>                                       <none>
<none>                                       <none>
node                                         10-alpine
Remove the stopped container and all of the images, including unused
or dangling images, with the following command:
docker system prune -a
Type y when prompted in the output to confirm that you would like to
remove the stopped container and images. Be advised that this will also
remove your build cache.
You have now removed both the container running your application
image and the image itself. For more information on removing Docker
containers, images, and volumes, please see How To Remove Docker
Images, Containers, and Volumes.
With all of your images and containers deleted, you can now pull the
application image from Docker Hub:
docker pull your_dockerhub_username/nodejs-image-demo
List your images once again:
docker images
You will see your application image:
Output
REPOSITORY                                   TAG
your_dockerhub_username/nodejs-image-demo    latest
You can now rebuild your container using the command from Step 3:
docker run --name nodejs-image-demo -p 80:8080 -d your_dockerhub_username/nodejs-image-demo
List your running containers:
docker ps
Output
CONTAINER ID    IMAGE                                        PORTS                   NAMES
f6bc2f50dff6    your_dockerhub_username/nodejs-image-demo    0.0.0.0:80->8080/tcp    nodejs-image-demo
Conclusion
In this tutorial you created a static web application with Express and
Bootstrap, as well as a Docker image for this application. You used this
image to create a container and pushed the image to Docker Hub. From
there, you were able to destroy your image and container and recreate
them using your Docker Hub repository.
If you are interested in learning more about how to work with tools like Docker Compose and Docker Machine to create multi-container setups, you can look at the following guides:

- How To Install Docker Compose on Ubuntu 18.04.
- How To Provision and Manage Remote Docker Hosts with Docker Machine on Ubuntu 18.04.
For general tips on working with container data, see:

- How To Share Data between Docker Containers.
- How To Share Data Between the Docker Container and the Host.
If you are interested in other Docker-related topics, please see our
complete library of Docker tutorials.
How To Integrate MongoDB with Your
Node Application
As you work with Node.js, you may find yourself developing a project
that stores and queries data. In this case, you will need to choose a
database solution that makes sense for your application’s data and query
types.
In this tutorial, you will integrate a MongoDB database with an existing
Node application. NoSQL databases like MongoDB can be useful if your
data requirements include scalability and flexibility. MongoDB also
integrates well with Node since it is designed to work asynchronously with
JSON objects.
To integrate MongoDB into your project, you will use the Object
Document Mapper (ODM) Mongoose to create schemas and models for
your application data. This will allow you to organize your application
code following the model-view-controller (MVC) architectural pattern,
which lets you separate the logic of how your application handles user
input from how your data is structured and rendered to the user. Using this
pattern can facilitate future testing and development by introducing a
separation of concerns into your codebase.
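As a rough sketch of how that separation will play out in this tutorial, the application's pieces map onto the pattern like this:
// Model:      models/sharks.js      - Mongoose schema and model for shark data
// View:       views/getshark.html   - template that renders shark data to users
// Controller: controllers/sharks.js - functions that handle user input and queries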
At the end of the tutorial, you will have a working shark information
application that will take a user’s input about their favorite sharks and
display the results in the browser:
Shark Output
Prerequisites
To follow this tutorial, you will need:

- A local development machine or server running Ubuntu 18.04, along with a non-root user with sudo privileges and an active firewall. For guidance on how to set these up on an 18.04 server, please see this Initial Server Setup guide.
- Node.js and npm installed on your machine or server, following these instructions on installing with the PPA managed by NodeSource.
- MongoDB installed on your machine or server, following Step 1 of How To Install MongoDB on Ubuntu 18.04.
To begin, check that the MongoDB service you installed in the prerequisites is running:
sudo systemctl status mongodb
The output will indicate that the service is active:
Output
● mongodb.service - An object/document-oriented database
   ...
   Active: active (running) since ...; 21min ago
...
Next, open the Mongo shell to create your user:
mongo
This will drop you into an administrative shell:
Output
...
>
You will see some administrative warnings when you open the shell due
to your unrestricted access to the admin database. You can learn more
about restricting this access by reading How To Install and Secure
MongoDB on Ubuntu 16.04, for when you move into a production setup.
For now, you can use your access to the admin database to create a user
with userAdminAnyDatabase privileges, which will allow password-
protected access to your application’s databases.
In the shell, specify that you want to use the admin database to create
your user:
use admin
Next, create your user with the db.createUser command, supplying a username, password, and role. After you type this command, the shell will prepend three dots before each line until the command is complete. Be sure to replace the user and password provided here with your own username and password:
db.createUser(
{
user: "sammy",
pwd: "your_password",
roles: [ { role: "userAdminAnyDatabase", db:
"admin" } ]
}
)
This creates an entry for the user sammy in the admin database. The
username you select and the admin database will serve as identifiers for
your user.
The output for the entire process will look like this, including the message indicating that the entry was successful:
Output
> db.createUser(
... {
... user: "sammy",
... pwd: "your_password",
... roles: [ { role: "userAdminAnyDatabase", db: "admin" } ]
... }
...)
Successfully added user: {
	"user" : "sammy",
	"roles" : [
		{
			"role" : "userAdminAnyDatabase",
			"db" : "admin"
		}
	]
}
With your user and password created, you can now exit the Mongo
shell:
exit
Now that you have created your database user, you can move on to
cloning the starter project code and adding the Mongoose library, which
will allow you to implement schemas and models for the collections in
your databases.
Step 2 — Adding Mongoose and Database Information to
the Project
Our next steps will be to clone the application starter code and add
Mongoose and our MongoDB database information to the project.
In your non-root user’s home directory, clone the nodejs-image-
demo repository from the DigitalOcean Community GitHub account. This
repository includes the code from the setup described in How To Build a
Node.js Application with Docker.
Clone the repository into a directory called node_project:
git clone https://github.com/do-community/nodejs-image-demo.git node_project
Change to the node_project directory:
cd node_project
Before modifying the project code, let’s take a look at the project’s
structure using the tree command.
Tip: tree is a useful command for viewing file and directory structures
from the command line. You can install it with the following command:
sudo apt install tree
To use it, cd into a given directory and type tree. You can also provide
the path to the starting point with a command like:
tree /home/sammy/sammys-project
Type the following to look at the node_project directory:
tree
The structure of the current project looks like this:
Output
├── Dockerfile
├── README.md
├── app.js
├── package-lock.json
├── package.json
└── views
├── css
│ └── styles.css
├── index.html
└── sharks.html
First, install the mongoose module with npm:
npm install mongoose
Next, create a file called db.js to hold your database connection information:
nano db.js
Import the mongoose module at the top of the file:
~/node_project/db.js
const mongoose = require('mongoose');
This will give you access to Mongoose’s built-in methods, which you
will use to create the connection to your database.
Next, add the following constants to define information for Mongo’s
connection URI. Though the username and password are optional, we will
include them so that we can require authentication for our database. Be
sure to replace the username and password listed below with your own
information, and feel free to call the database something other than
'sharkinfo' if you would prefer:
~/node_project/db.js
...
const MONGO_USERNAME = 'sammy';
const MONGO_PASSWORD = 'your_password';
const MONGO_HOSTNAME = '127.0.0.1';
const MONGO_PORT = '27017';
const MONGO_DB = 'sharkinfo';
Below these constants, add the connection URI and the mongoose.connect call:
~/node_project/db.js
...
const url = `mongodb://${MONGO_USERNAME}:${MONGO_PASSWORD}@${MONGO_HOSTNAME}:${MONGO_PORT}/${MONGO_DB}?authSource=admin`;

mongoose.connect(url, {useNewUrlParser: true});
Note that in the URI we’ve specified the authSource for our user as
the admin database. This is necessary since we have specified a username
in our connection string. Using the useNewUrlParser flag with
mongoose.connect() specifies that we want to use Mongo’s new
URL parser.
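If you would like to confirm the connection while developing, one optional approach (not part of this tutorial's db.js) is to listen for events on Mongoose's connection object:
// Optional sketch: log connection status using Mongoose's connection events
const connection = mongoose.connection;
connection.on('error', console.error.bind(console, 'connection error:'));
connection.once('open', function () {
  console.log('Connected to MongoDB');
});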
Save and close the file when you are finished editing.
As a final step, add the database connection information to the app.js
file so that the application can use it. Open app.js:
nano app.js
The first lines of the file will look like this:
~/node_project/app.js
const express = require('express');
const app = express();
const router = express.Router();

const path = __dirname + '/views/';
const port = 8080;
...
Below the router constant definition, located near the top of the file,
add the following line:
~/node_project/app.js
...
const db = require('./db');
...
In keeping with this theme, we can have users add new sharks with
details about their overall character. This goal will shape how we create
our schema.
To keep your schemas and models distinct from the other parts of your
application, create a models directory in the current project directory:
mkdir models
Next, open a file called sharks.js to create your schema and model:
nano models/sharks.js
Import the mongoose module at the top of the file:
~/node_project/models/sharks.js
const mongoose = require('mongoose');
Below this, define a Schema object to use as the basis for your shark
schema:
~/node_project/models/sharks.js
...
const Schema = mongoose.Schema;
You can now define the fields you would like to include in your schema.
Because we want to create a collection with individual sharks and
information about their behaviors, let’s include a name key and a
character key. Add the following Shark schema below your constant
definitions:
~/node_project/models/sharks.js
...
const Shark = new Schema ({
	name: { type: String, required: true },
	character: { type: String, required: true },
});
This definition includes information about the type of input we expect
from users — in this case, a string — and whether or not that input is
required.
Finally, create the Shark model using Mongoose’s model() function.
This model will allow you to query documents from your collection and
validate new documents. Add the following line at the bottom of the file:
~/node_project/models/sharks.js
...
module.exports = mongoose.model('Shark', Shark);
This last line makes our Shark model available as a module using the
module.exports property. This property defines the values that the
module will export, making them available for use elsewhere in the
application.
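To illustrate what this export makes possible, a hypothetical consumer elsewhere in the application could require the model and query the collection like this:
// Hypothetical usage sketch
const Shark = require('./models/sharks');

// Find all shark documents currently stored in the collection
Shark.find({}, function (err, sharks) {
  if (err) return console.error(err);
  console.log(sharks);
});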
The finished models/sharks.js file looks like this:
~/node_project/models/sharks.js
const mongoose = require('mongoose');
const Schema = mongoose.Schema;

const Shark = new Schema ({
	name: { type: String, required: true },
	character: { type: String, required: true },
});

module.exports = mongoose.model('Shark', Shark);
Save and close the file when you are finished editing.
With the Shark schema and model in place, you can start working on
the logic that will determine how your application will handle user input.
Next, create a controllers directory and open a file in it called sharks.js:
mkdir controllers
nano controllers/sharks.js
At the top of the file, import your Shark model along with the path module, which you will use to work with file paths in your controller functions:
~/node_project/controllers/sharks.js
const path = require('path');
const Shark = require('../models/sharks');
Next, we'll write a sequence of functions that we will export with the controller module using Node's exports shortcut. These functions will include the three tasks related to our user's shark data:

- Sending users the shark input form.
- Creating a new shark entry.
- Displaying the sharks back to users.
To begin, create an index function to display the sharks page with the
input form. Add this function below your imports:
~/node_project/controllers/sharks.js
...
exports.index = function (req, res) {
    res.sendFile(path.resolve('views/sharks.html'));
};
Next, below the index function, add a create function to make a new shark entry in your sharks collection from a user's POST request:
~/node_project/controllers/sharks.js
...
exports.create = function (req, res) {
    var newShark = new Shark(req.body);
    console.log(req.body);
    newShark.save(function (err) {
        if (err) {
            res.status(400).send('Unable to save shark to database');
        } else {
            res.redirect('/sharks/getshark');
        }
    });
};
This function will be called when a user posts shark data to the form on
the sharks.html page. We will create the route with this POST
endpoint later in the tutorial when we create our application’s routes. With
the body of the POST request, our create function will make a new
shark document object, here called newShark, using the Shark model
that we’ve imported. We’ve added a console.log method to output the
shark entry to the console in order to check that our POST method is
working as intended, but you should feel free to omit this if you would
prefer.
Using the newShark object, the create function will then call
Mongoose’s model.save() method to make a new shark document
using the keys you defined in the Shark model. This callback function
follows the standard Node callback pattern: callback(error,
results). In the case of an error, we will send a message reporting the
error to our users, and in the case of success, we will use the
res.redirect() method to send users to the endpoint that will render
their shark information back to them in the browser.
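For reference, the error-first callback pattern mentioned above has this general shape:
// General shape of a Node error-first callback (sketch)
function callback(err, results) {
  if (err) {
    // handle or report the error, then stop
    return;
  }
  // otherwise, work with results
}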
Finally, the list function will display the collection’s contents back to
the user. Add the following code below the create function:
~/node_project/controllers/sharks.js
...
exports.list = function (req, res) {
    Shark.find({}).exec(function (err, sharks) {
        if (err) {
            return res.send(500, err);
        }
        res.render('getshark', {
            sharks: sharks
        });
    });
};
The finished file will look like this:
~/node_project/controllers/sharks.js
const path = require('path');
const Shark = require('../models/sharks');

exports.index = function (req, res) {
    res.sendFile(path.resolve('views/sharks.html'));
};

exports.create = function (req, res) {
    var newShark = new Shark(req.body);
    console.log(req.body);
    newShark.save(function (err) {
        if (err) {
            res.status(400).send('Unable to save shark to database');
        } else {
            res.redirect('/sharks/getshark');
        }
    });
};

exports.list = function (req, res) {
    Shark.find({}).exec(function (err, sharks) {
        if (err) {
            return res.send(500, err);
        }
        res.render('getshark', {
            sharks: sharks
        });
    });
};
Keep in mind that though we are not using arrow functions here, you
may wish to include them as you iterate on this code in your own
development process.
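For example, a hypothetical arrow-function version of the create function could rely on the promise that Mongoose's save() returns when it is called without a callback:
// Hypothetical rewrite of create using arrow functions and promises
exports.create = (req, res) => {
  const newShark = new Shark(req.body);
  newShark.save()
    .then(() => res.redirect('/sharks/getshark'))
    .catch(() => res.status(400).send('Unable to save shark to database'));
};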
Save and close the file when you are finished editing.
Before moving on to the next step, you can run tree again from your
node_project directory to view the project’s structure at this point.
This time, for the sake of brevity, we’ll tell tree to omit the
node_modules directory using the -I option:
tree -I node_modules
With the additions you’ve made, your project’s structure will look like
this:
Output
├── Dockerfile
├── README.md
├── app.js
├── controllers
│ └── sharks.js
├── db.js
├── models
│ └── sharks.js
├── package-lock.json
├── package.json
└── views
├── css
│ └── styles.css
├── index.html
└── sharks.html
Now that you have a controller component to direct how user input gets
saved and returned to the user, you can move on to creating the views that
will implement your controller’s logic.
First, open app.js to add the express.urlencoded function, which will enable the application to parse the data your users will submit through the shark input form:
nano app.js
Above the express.static function, add the following line:
~/node_project/app.js
...
app.use(express.urlencoded({ extended: true }));
app.use(express.static(path));
...
Adding this function will enable access to the parsed POST data from
our shark information form. We are specifying true with the extended
option to enable greater flexibility in the type of data our application will
parse (including things like nested objects). Please see the function
documentation for more information about options.
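To make the difference concrete, here is a sketch of how the two settings parse the same hypothetical form field:
// extended: true uses the qs library, so a field named shark[name]
// parses to a nested object: { shark: { name: 'Sammy' } }
// extended: false uses Node's querystring module, which yields the
// flat key instead: { 'shark[name]': 'Sammy' }
app.use(express.urlencoded({ extended: true }));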
Save and close the file when you are finished editing.
Next, we will add template functionality to our views. First, install the
ejs package with npm install:
npm install ejs
Next, open the sharks.html file in the views folder:
nano views/sharks.html
In Step 3, we looked at this page to determine how we should write our
Mongoose schema and model:
Shark Info Page
Now, rather than having a two column layout, we will introduce a third
column with a form where users can input information about sharks.
As a first step, change the dimensions of the existing columns to 4 to create three equal-sized columns. Note that you will need to make this change on the two lines that currently read <div class="col-lg-6">. These will both become <div class="col-lg-4">:
~/node_project/views/sharks.html
...
<div class="container">
    <div class="row">
        <div class="col-lg-4">
            <p>
                <div class="caption">Some sharks are known to be dangerous to humans, though many more are not. The sawshark, for example, is not considered a threat to humans.
                </div>
                <img src="https://assets.digitalocean.com/articles/docker_node_image/sawshark.jpg" alt="Sawshark">
            </p>
        </div>
        <div class="col-lg-4">
            <p>
                <div class="caption">Other sharks are known to be friendly and welcoming!</div>
                <img src="https://assets.digitalocean.com/articles/docker_node_image/sammy.png" alt="Sammy the Shark">
            </p>
        </div>
    </div>
</div>

</html>
Now, add a third column with the shark input form. Your code will go between the closing </div> tag of the second column and the closing tags for the row and container. The form will post data to the /sharks/addshark endpoint, which you will define in your routes:
~/node_project/views/sharks.html
...
<div class="col-lg-4">
    <p>
        <form action="/sharks/addshark" method="post">
            <div class="caption">Enter Your Shark</div>
            <input type="text" placeholder="Shark Name" name="name">
            <input type="text" placeholder="Shark Character" name="character">
            <button type="submit">Submit</button>
        </form>
    </p>
</div>
...
<div class="container">
<div class="row">
<div class="col-lg-4">
<p>
dangerous to humans, though many more are not. The sawshark, for
</div>
<img
src="https://assets.digitalocean.com/articles/docker_node_image/saw
shark.jpg" alt="Sawshark">
</p>
</div>
<div class="col-lg-4">
<p>
<img
src="https://assets.digitalocean.com/articles/docker_node_image/sam
</p>
</div>
<div class="col-lg-4">
<p>
<button type="submit">Submit</button>
</form>
</p>
</div>
</div>
</div>
</html>
Save and close the file when you are finished editing.
Now that you have a way to collect your user’s input, you can create an
endpoint to display the returned sharks and their associated character
information.
Copy the newly modified sharks.html file to a file called
getshark.html:
cp views/sharks.html views/getshark.html
Open getshark.html:
nano views/getshark.html
Inside the file, we will modify the column that we used to create our
sharks input form by replacing it with a column that will display the
sharks in our sharks collection. Again, your code will go between the
existing </p> and </div> tags from the preceding column and the
closing tags for the row, container, and HTML document. Remember to
leave these tags in place as you add the following code to create the
column:
~/node_project/views/getshark.html
...
<div class="col-lg-4">
    <p>
        <div class="caption">Your Sharks</div>
        <ul>
            <% sharks.forEach(function(shark) { %>
                <p>Name: <%= shark.name %></p>
                <p>Character: <%= shark.character %></p>
            <% }); %>
        </ul>
    </p>
</div>
...
Here you are using EJS template tags and the forEach() method to
output each value in your sharks collection, including information about
the most recently added shark.
The entire container with all three columns, including the column with
your sharks collection, will look like this when finished:
~/node_project/views/getshark.html
...
<div class="container">
    <div class="row">
        <div class="col-lg-4">
            <p>
                <div class="caption">Some sharks are known to be dangerous to humans, though many more are not. The sawshark, for example, is not considered a threat to humans.
                </div>
                <img src="https://assets.digitalocean.com/articles/docker_node_image/sawshark.jpg" alt="Sawshark">
            </p>
        </div>
        <div class="col-lg-4">
            <p>
                <div class="caption">Other sharks are known to be friendly and welcoming!</div>
                <img src="https://assets.digitalocean.com/articles/docker_node_image/sammy.png" alt="Sammy the Shark">
            </p>
        </div>
        <div class="col-lg-4">
            <p>
                <div class="caption">Your Sharks</div>
                <ul>
                    <% sharks.forEach(function(shark) { %>
                        <p>Name: <%= shark.name %></p>
                        <p>Character: <%= shark.character %></p>
                    <% }); %>
                </ul>
            </p>
        </div>
    </div>
</div>

</html>
Save and close the file when you are finished editing.
In order for the application to use the templates you’ve created, you will
need to add a few lines to your app.js file. Open it again:
nano app.js
Above where you added the express.urlencoded() function, add
the following lines:
~/node_project/app.js
...
app.engine('html', require('ejs').renderFile);
app.set('view engine', 'html');
app.use(express.static(path));
...
With these additions, the app.js file will now look like this:
~/node_project/app.js
const express = require('express');
const app = express();
const router = express.Router();

const path = __dirname + '/views/';
const port = 8080;

const db = require('./db');

router.use(function (req,res,next) {
  console.log('/' + req.method);
  next();
});

router.get('/',function(req,res){
  res.sendFile(path + 'index.html');
});

router.get('/sharks',function(req,res){
  res.sendFile(path + 'sharks.html');
});

app.engine('html', require('ejs').renderFile);
app.set('view engine', 'html');
app.use(express.urlencoded({ extended: true }));
app.use(express.static(path));
app.use('/', router);

app.listen(port, function () {
  console.log('Example app listening on port 8080!')
})
Now that you have created views that can work dynamically with user
data, it’s time to create your project’s routes to bring together your views
and controller logic.
First, create a routes directory:
mkdir routes
Next, open a file in it called index.js:
nano routes/index.js
At the top of the file, import the express, router, and path modules, and add middleware that will log the application's requests:
~/node_project/routes/index.js
const express = require('express');
const router = express.Router();
const path = require('path');

router.use(function (req,res,next) {
  console.log('/' + req.method);
  next();
});
Requests to our application’s root will be directed here first, and from
here users will be directed to our application’s landing page, the route we
will define next. Add the following code below the router.use function
to define the route to the landing page:
~/node_project/routes/index.js
...
router.get('/',function(req,res){
res.sendFile(path.resolve('views/index.html'));
});
When users visit our application, the first place we want to send them is
to the index.html landing page that we have in our views directory.
Finally, to make these routes accessible as importable modules
elsewhere in the application, add a closing expression to the end of the file
to export the router object:
~/node_project/routes/index.js
...
module.exports = router;
The finished file will look like this:
~/node_project/routes/index.js
const express = require('express');
const router = express.Router();
const path = require('path');

router.use(function (req,res,next) {
  console.log('/' + req.method);
  next();
});

router.get('/',function(req,res){
  res.sendFile(path.resolve('views/index.html'));
});

module.exports = router;
Save and close this file when you are finished editing.
Next, open a file called sharks.js to define how the application
should use the different endpoints and views we’ve created to work with
our user’s shark input:
nano routes/sharks.js
At the top of the file, import the express and router objects:
~/node_project/routes/sharks.js
const express = require('express');
const router = express.Router();
Next, import the shark controller module:
~/node_project/routes/sharks.js
...
const shark = require('../controllers/sharks');
Now you can create routes using the index, create, and list
functions you defined in your sharks controller file. Each route will be
associated with the appropriate HTTP method: GET in the case of
rendering the main sharks information landing page and returning the list
of sharks to the user, and POST in the case of creating a new shark entry:
~/node_project/routes/sharks.js
...
router.get('/', function(req, res){
    shark.index(req,res);
});

router.post('/addshark', function(req, res) {
    shark.create(req,res);
});

router.get('/getshark', function(req, res) {
    shark.list(req,res);
});
Finally, export the router object so that these routes can be used elsewhere in the application:
~/node_project/routes/sharks.js
...
module.exports = router;
The finished file will look like this:
~/node_project/routes/sharks.js
const express = require('express');
const router = express.Router();
const shark = require('../controllers/sharks');

router.get('/', function(req, res){
    shark.index(req,res);
});

router.post('/addshark', function(req, res) {
    shark.create(req,res);
});

router.get('/getshark', function(req, res) {
    shark.list(req,res);
});

module.exports = router;
Save and close the file when you are finished editing.
The last step in making these routes accessible to your application will
be to add them to app.js. Open that file again:
nano app.js
Below your db constant, add the following import for your routes:
~/node_project/app.js
...
const db = require('./db');
const routes = require('./routes/index');
const sharks = require('./routes/sharks');
Next, replace the app.use('/', router) function that currently mounts your router object with the following lines, which mount the index and sharks router modules:
~/node_project/app.js
...
app.use(express.static(path));
app.use('/', routes);
app.use('/sharks', sharks);

app.listen(port, function () {
  console.log('Example app listening on port 8080!')
})
You can now delete the routes that were previously defined in this file, since you are importing your application's routes with the index and sharks router modules.
The final version of your app.js file will look like this:
~/node_project/app.js
const express = require('express');
const app = express();

const path = __dirname + '/views/';
const port = 8080;

const db = require('./db');
const routes = require('./routes/index');
const sharks = require('./routes/sharks');

app.engine('html', require('ejs').renderFile);
app.set('view engine', 'html');
app.use(express.urlencoded({ extended: true }));
app.use(express.static(path));
app.use('/', routes);
app.use('/sharks', sharks);

app.listen(port, function () {
  console.log('Example app listening on port 8080!')
})
Save and close the file when you are finished editing.
You can now run tree again to see the final structure of your project:
tree -I node_modules
Your project structure will now look like this:
Output
├── Dockerfile
├── README.md
├── app.js
├── controllers
│ └── sharks.js
├── db.js
├── models
│ └── sharks.js
├── package-lock.json
├── package.json
├── routes
│ ├── index.js
│ └── sharks.js
└── views
├── css
│ └── styles.css
├── getshark.html
├── index.html
└── sharks.html
With all of your application components created and in place, you are
now ready to add a test shark to your database!
If you followed the initial server setup tutorial in the prerequisites, you
will need to modify your firewall, since it currently only allows SSH
traffic. To permit traffic to port 8080, run:
sudo ufw allow 8080
Start the application:
node app.js
Next, navigate your browser to http://your_server_ip:8080.
You will see the following landing page:
Click on the Get Shark Info button. You will see the following
information page, with the shark input form added:
Shark Info Form
In the form, add a shark of your choosing. For the purpose of this
demonstration, we will add Megalodon Shark to the Shark Name field,
and Ancient to the Shark Character field:
Shark Output
You will also see output in your console indicating that the shark has
been added to your collection:
Output
{ name: 'Megalodon Shark', character: 'Ancient' }
If you would like to create a new shark entry, head back to the Sharks
page and repeat the process of adding a shark.
You now have a working shark information application that allows users
to add information about their favorite sharks.
Conclusion
In this tutorial, you built out a Node application by integrating a
MongoDB database and rewriting the application’s logic using the MVC
architectural pattern. This application can act as a good starting point for a
fully-fledged CRUD application.
For more resources on the MVC pattern in other contexts, please see our
Django Development series or How To Build a Modern Web Application
to Manage Customer Information with Django and React on Ubuntu 18.04.
For more information on working with MongoDB, please see our library
of tutorials on MongoDB.
Containerizing a Node.js Application for
Development With Docker Compose
Prerequisites
To follow this tutorial, you will need:

- A development server running Ubuntu 18.04, along with a non-root user with sudo privileges and an active firewall. For guidance on how to set these up, please see this Initial Server Setup guide.
- Docker installed on your server, following Steps 1 and 2 of How To Install and Use Docker on Ubuntu 18.04.
- Docker Compose installed on your server, following Step 1 of How To Install Docker Compose on Ubuntu 18.04.
To enable automatic reloading of the application as you develop, add nodemon to the devDependencies section of your package.json file. The relevant sections of the file will look like this:
~/node_project/package.json
...
"dependencies": {
    "ejs": "^2.6.1",
    "express": "^4.16.4",
    "mongoose": "^5.4.10"
},
"devDependencies": {
    "nodemon": "^1.18.10"
}
...
Save and close the file when you are finished editing.
With the project code in place and its dependencies modified, you can
move on to refactoring the code for a containerized workflow.
Open app.js:
nano app.js
Currently, the file defines the port constant and listen function like this:
~/node_project/app.js
...
const port = 8080;
...
app.listen(port, function () {
  console.log('Example app listening on port 8080!');
});
Redefine the port constant to allow for dynamic assignment at runtime, and rewrite the listen function to use a template literal:
~/node_project/app.js
...
const port = process.env.PORT || 8080;
...
app.listen(port, function () {
  console.log(`Example app listening on ${port}!`);
});
Our new constant definition assigns port dynamically using the value
passed in at runtime or 8080. Similarly, we’ve rewritten the listen
function to use a template literal, which will interpolate the port value
when listening for connections. Because we will be mapping our ports
elsewhere, these revisions will prevent our having to continuously revise
this file as our environment changes.
When you are finished editing, save and close the file.
Next, we will modify our database connection information to remove
any configuration credentials. Open the db.js file, which contains this
information:
nano db.js
Currently, the file does the following things: - Imports Mongoose, the
Object Document Mapper (ODM) that we’re using to create schemas and
models for our application data. - Sets the database credentials as
constants, including the username and password. - Connects to the
database using the mongoose.connect method.
For more information about the file, please see Step 3 of How To
Integrate MongoDB with Your Node Application.
Our first step in modifying the file will be redefining the constants that
include sensitive information. Currently, these constants look like this:
~/node_project/db.js
...
const MONGO_USERNAME = 'sammy';
const MONGO_PASSWORD = 'your_password';
const MONGO_HOSTNAME = '127.0.0.1';
const MONGO_PORT = '27017';
const MONGO_DB = 'sharkinfo';
...
Instead of hardcoding this information, edit the file to remove these constants and add the following code, which reads the values from the program's environment at runtime:
~/node_project/db.js
...
const {
MONGO_USERNAME,
MONGO_PASSWORD,
MONGO_HOSTNAME,
MONGO_PORT,
MONGO_DB
} = process.env;
...
Save and close the file when you are finished editing.
At this point, you have modified db.js to work with your application’s
environment variables, but you still need a way to pass these variables to
your application. Let’s create an .env file with values that you can pass
to your application at runtime.
Open the file:
nano .env
This file will include the information that you removed from db.js:
the username and password for your application’s database, as well as the
port setting and database name. Remember to update the username,
password, and database name listed here with your own information:
~/node_project/.env
MONGO_USERNAME=sammy
MONGO_PASSWORD=your_password
MONGO_PORT=27017
MONGO_DB=sharkinfo
Note that we have removed the host setting that originally appeared in
db.js. We will now define our host at the level of the Docker Compose
file, along with other information about our services and containers.
Save and close this file when you are finished editing.
Because your .env file contains sensitive information, you will want to
ensure that it is included in your project’s .dockerignore and
.gitignore files so that it does not copy to your version control or
containers.
Open your .dockerignore file:
nano .dockerignore
Add the following line to the bottom of the file:
~/node_project/.dockerignore
...
.gitignore
.env
Save and close the file when you are finished editing.
The .gitignore file in this repository already includes .env, but
feel free to check that it is there:
nano .gitignore
~/node_project/.gitignore
...
.env
...
Your db.js file now uses these environment variables to define the connection URI:
~/node_project/db.js
...
const {
MONGO_USERNAME,
MONGO_PASSWORD,
MONGO_HOSTNAME,
MONGO_PORT,
MONGO_DB
} = process.env;
const url =
`mongodb://${MONGO_USERNAME}:${MONGO_PASSWORD}@${MONGO_HOSTNAME}:${
MONGO_PORT}/${MONGO_DB}?authSource=admin`;
Next, to make your connection attempts more robust, add an options constant below your environment variable definitions:
~/node_project/db.js
...
const {
MONGO_USERNAME,
MONGO_PASSWORD,
MONGO_HOSTNAME,
MONGO_PORT,
MONGO_DB
} = process.env;
const options = {
useNewUrlParser: true,
reconnectTries: Number.MAX_VALUE,
reconnectInterval: 500,
connectTimeoutMS: 10000,
};
...
Finally, delete the existing connect method and replace it with the following code, which includes the options constant and a promise chain that reports whether the connection succeeded:
~/node_project/db.js
...
mongoose.connect(url, options).then( function() {
  console.log('MongoDB is connected');
})
  .catch( function(err) {
  console.log(err);
});
The finished file will look like this:
~/node_project/db.js
const mongoose = require('mongoose');

const {
  MONGO_USERNAME,
  MONGO_PASSWORD,
  MONGO_HOSTNAME,
  MONGO_PORT,
  MONGO_DB
} = process.env;

const options = {
  useNewUrlParser: true,
  reconnectTries: Number.MAX_VALUE,
  reconnectInterval: 500,
  connectTimeoutMS: 10000,
};

const url = `mongodb://${MONGO_USERNAME}:${MONGO_PASSWORD}@${MONGO_HOSTNAME}:${MONGO_PORT}/${MONGO_DB}?authSource=admin`;

mongoose.connect(url, options).then( function() {
  console.log('MongoDB is connected');
})
  .catch( function(err) {
  console.log(err);
});
Save and close the file when you have finished editing.
You have now added resiliency to your application code to handle cases
where your application might fail to connect to your database. With this
code in place, you can move on to defining your services with Compose.
Create a file called wait-for.sh to hold the script:
nano wait-for.sh
Paste the following code into the file, which comes from Eficode's wait-for script:
~/node_project/wait-for.sh
#!/bin/sh

# original script: https://github.com/eficode/wait-for/blob/master/wait-for

TIMEOUT=15
QUIET=0

echoerr() {
  if [ "$QUIET" -ne 1 ]; then printf "%s\n" "$*" 1>&2; fi
}

usage() {
  exitcode="$1"
  cat << USAGE >&2
Usage:
  $cmdname host:port [-t timeout] [-- command args]
  -q | --quiet                        Do not output any status messages
  -t TIMEOUT | --timeout=timeout      Timeout in seconds, zero for no timeout
  -- COMMAND ARGS                     Execute command with args after the test finishes
USAGE
  exit "$exitcode"
}

wait_for() {
  for i in `seq $TIMEOUT` ; do
    nc -z "$HOST" "$PORT" > /dev/null 2>&1

    result=$?
    if [ $result -eq 0 ] ; then
      if [ $# -gt 0 ] ; then
        exec "$@"
      fi
      exit 0
    fi
    sleep 1
  done
  echo "Operation timed out" >&2
  exit 1
}

while [ $# -gt 0 ]
do
  case "$1" in
    *:* )
    HOST=$(printf "%s\n" "$1"| cut -d : -f 1)
    PORT=$(printf "%s\n" "$1"| cut -d : -f 2)
    shift 1
    ;;
    -q | --quiet)
    QUIET=1
    shift 1
    ;;
    -t)
    TIMEOUT="$2"
    if [ "$TIMEOUT" = "" ]; then break; fi
    shift 2
    ;;
    --timeout=*)
    TIMEOUT="${1#*=}"
    shift 1
    ;;
    --)
    shift
    break
    ;;
    --help)
    usage 0
    ;;
    *)
    echoerr "Unknown argument: $1"
    usage 1
    ;;
  esac
done

if [ "$HOST" = "" -o "$PORT" = "" ]; then
  echoerr "Error: you need to provide a host and port to test."
  usage 2
fi

wait_for "$@"
Save and close the file when you are finished adding the code.
Make the script executable:
chmod +x wait-for.sh
Next, open the docker-compose.yml file:
nano docker-compose.yml
First, define the nodejs application service by adding the following
code to the file:
~/node_project/docker-compose.yml
version: '3'
services:
nodejs:
build:
context: .
dockerfile: Dockerfile
image: nodejs
container_name: nodejs
restart: unless-stopped
env_file: .env
environment:
- MONGO_USERNAME=$MONGO_USERNAME
- MONGO_PASSWORD=$MONGO_PASSWORD
- MONGO_HOSTNAME=db
- MONGO_PORT=$MONGO_PORT
- MONGO_DB=$MONGO_DB
ports:
- "80:8080"
volumes:
- .:/home/node/app
- node_modules:/home/node/app/node_modules
networks:
- app-network
    command: ./wait-for.sh db:27017 -- /home/node/app/node_modules/.bin/nodemon app.js
The nodejs service definition includes the following options:

- build: This defines the configuration options, including the context and dockerfile, that will be applied when Compose builds the application image. If you wanted to use an existing image from a registry like Docker Hub, you could use the image instruction instead, with information about your username, repository, and image tag.
- context: This defines the build context for the image build, in this case the current project directory.
- dockerfile: This specifies the Dockerfile in your current project directory as the file Compose will use to build the application image. For more information about this file, please see How To Build a Node.js Application with Docker.
- image, container_name: These apply names to the image and container.
- restart: This defines the restart policy. The default is no, but we have set the container to restart unless it is stopped.
- env_file: This tells Compose that we would like to add environment variables from a file called .env, located in our build context.
- environment: Using this option allows you to add the Mongo connection settings you defined in the .env file. Note that we are not setting NODE_ENV to development, since this is Express's default behavior if NODE_ENV is not set. When moving to production, you can set this to production to enable view caching and less verbose error messages. Also note that we have specified the db database container as the host, as discussed in Step 2.
- ports: This maps port 80 on the host to port 8080 on the container.
- volumes: We are including two types of mounts here:
  - The first is a bind mount that mounts our application code on the host to the /home/node/app directory on the container. This will facilitate rapid development, since any changes you make to your host code will be populated immediately in the container.
  - The second is a named volume, node_modules. When Docker runs the npm install instruction listed in the application Dockerfile, npm will create a new node_modules directory on the container that includes the packages required to run the application. The bind mount we just created will hide this newly created node_modules directory, however. Since node_modules on the host is empty, the bind will map an empty directory to the container, overriding the new node_modules directory and preventing our application from starting. The named node_modules volume solves this problem by persisting the contents of the /home/node/app/node_modules directory and mounting it to the container, hiding the bind.
Keep the following points in mind when using this approach:

- Your bind will mount the contents of the node_modules directory on the container to the host, and this directory will be owned by root, since the named volume was created by Docker.
- If you have a pre-existing node_modules directory on the host, it will override the node_modules directory created on the container. The setup that we're building in this tutorial assumes that you do not have a pre-existing node_modules directory and that you won't be working with npm on your host. This is in keeping with a twelve-factor approach to application development (https://12factor.net/), which minimizes dependencies between execution environments.
- networks: This specifies that our application service will join the app-network network, which we will define at the bottom of the file.
- command: This option lets you set the command that should be executed when Compose runs the image. Note that this will override the CMD instruction that we set in our application Dockerfile. Here, we are running the application using the wait-for script, which will poll the db service on port 27017 to test whether or not the database service is ready. Once the readiness test succeeds, the script will execute the command we have set, /home/node/app/node_modules/.bin/nodemon app.js, to start the application with nodemon. This will ensure that any future changes we make to our code are reloaded without our having to restart the application.
Next, create the db service by adding the following code below the
application service definition:
~/node_project/docker-compose.yml
...
db:
image: mongo:4.1.8-xenial
container_name: db
restart: unless-stopped
env_file: .env
environment:
- MONGO_INITDB_ROOT_USERNAME=$MONGO_USERNAME
- MONGO_INITDB_ROOT_PASSWORD=$MONGO_PASSWORD
volumes:
- dbdata:/data/db
networks:
- app-network
Some of the settings we defined for the nodejs service remain the same, but we've also made the following changes to the image, environment, and volumes definitions:

- image: To create this service, Compose will pull the 4.1.8-xenial Mongo image from Docker Hub. We are pinning a particular version to avoid possible future conflicts as the Mongo image changes. For more information about version pinning, please see the Docker documentation on Dockerfile best practices.
- MONGO_INITDB_ROOT_USERNAME, MONGO_INITDB_ROOT_PASSWORD: The mongo image makes these environment variables available so that you can modify the initialization of your database instance. MONGO_INITDB_ROOT_USERNAME and MONGO_INITDB_ROOT_PASSWORD together create a root user in the admin authentication database and ensure that authentication is enabled when the container starts. We have set MONGO_INITDB_ROOT_USERNAME and MONGO_INITDB_ROOT_PASSWORD using the values from our .env file, which we pass to the db service using the env_file option. Doing this means that our sammy application user will be a root user on the database instance, with access to all of the administrative and operational privileges of that role. When working in production, you will want to create a dedicated application user with appropriately scoped privileges.
- dbdata:/data/db: The named volume dbdata will persist the data stored in Mongo's default data directory, /data/db, so that it survives container removal.

Note: Keep in mind that these variables will not take effect if you start the container with an existing data directory in place.
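For instance, a more narrowly scoped user could be created in the mongo shell with something like the following sketch (the user, password, and database names are illustrative):
use sharkinfo
db.createUser({
  user: "shark_app",
  pwd: "your_password",
  roles: [ { role: "readWrite", db: "sharkinfo" } ]
})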
As a final step, add the network and volume definitions at the bottom of the file:
~/node_project/docker-compose.yml
...
networks:
app-network:
driver: bridge
volumes:
dbdata:
node_modules:
The finished docker-compose.yml file will look like this:
~/node_project/docker-compose.yml
version: '3'
services:
nodejs:
build:
context: .
dockerfile: Dockerfile
image: nodejs
container_name: nodejs
restart: unless-stopped
env_file: .env
environment:
- MONGO_USERNAME=$MONGO_USERNAME
- MONGO_PASSWORD=$MONGO_PASSWORD
- MONGO_HOSTNAME=db
- MONGO_PORT=$MONGO_PORT
- MONGO_DB=$MONGO_DB
ports:
- "80:8080"
volumes:
- .:/home/node/app
- node_modules:/home/node/app/node_modules
networks:
- app-network
    command: ./wait-for.sh db:27017 -- /home/node/app/node_modules/.bin/nodemon app.js
db:
image: mongo:4.1.8-xenial
container_name: db
restart: unless-stopped
env_file: .env
environment:
- MONGO_INITDB_ROOT_USERNAME=$MONGO_USERNAME
- MONGO_INITDB_ROOT_PASSWORD=$MONGO_PASSWORD
volumes:
- dbdata:/data/db
networks:
- app-network
networks:
app-network:
driver: bridge
volumes:
dbdata:
node_modules:
Save and close the file when you are finished editing.
With your service definitions in place, you are ready to start the
application.
Start your services by running docker-compose up with the -d flag, which will run the nodejs and db containers in the background:
docker-compose up -d
You will see output confirming that your services have been created:
Output
...
Creating db ... done
Creating nodejs ... done
You can also get more detailed information about the startup processes
by displaying the log output from the services:
docker-compose logs
You will see something like this if everything has started correctly:
Output
...
...
You can also check the status of your containers with docker-
compose ps:
docker-compose ps
You will see output indicating that your containers are running:
Output
  Name               Command                        State    Ports
--------------------------------------------------------------------------
db       docker-entrypoint.sh mongod             Up       27017/tcp
nodejs   ./wait-for.sh db:27017 -- ...           Up       0.0.0.0:80->8080/tcp
With your services running, visit the application by navigating your browser to http://your_server_ip. You will see the application landing page once again.
Click on the Get Shark Info button. You will see a page with an entry
form where you can enter a shark name and a description of that shark’s
general character:
Click on the Submit button. You will see a page with this shark
information displayed back to you:
Shark Output
As a final step, we can test that the data you’ve just entered will persist
if you remove your database container.
Back at your terminal, type the following command to stop and remove
your containers and network:
docker-compose down
Note that we are not including the --volumes option; hence, our
dbdata volume is not removed.
The following output confirms that your containers and network have
been removed:
Output
Stopping nodejs ... done
Stopping db     ... done
Removing nodejs ... done
Removing db     ... done
Removing network node_project_app-network
With your containers removed, recreate them and revisit the application's Sharks page:
docker-compose up -d
Enter a new shark of your choosing. We’ll go with Whale Shark and
Large:
Enter New Shark
Once you click Submit, you will see that the new shark has been added
to the shark collection in your database without the loss of the data you’ve
already entered:
Conclusion
By following this tutorial, you have created a development setup for your
Node application using Docker containers. You’ve made your project more
modular and portable by extracting sensitive information and decoupling
your application’s state from your application code. You have also
configured a boilerplate docker-compose.yml file that you can revise
as your development needs and requirements change.
As you develop, you may be interested in learning more about designing
applications for containerized and Cloud Native workflows. Please see
Architecting Applications for Kubernetes and Modernizing Applications
for Kubernetes for more information on these topics.
To learn more about the code used in this tutorial, please see How To
Build a Node.js Application with Docker and How To Integrate MongoDB
with Your Node Application. For information about deploying a Node
application with an Nginx reverse proxy using containers, please see How
To Secure a Containerized Node.js Application with Nginx, Let’s Encrypt,
and Docker Compose.
How To Migrate a Docker Compose
Workflow to Kubernetes
Prerequisites
To verify that kompose is installed, check its version:
kompose version
You will see output like the following:
Output
1.18.0 (06a2e56)
With kompose installed and ready to use, you can now clone the
Node.js project code that you will be translating to Kubernetes.
Build the application image with docker build, naming the image node-kubernetes and replacing your_dockerhub_username with your own Docker Hub username:
docker build -t your_dockerhub_username/node-kubernetes .
Once the build completes, list your images:
docker images
You will see the following output:
Output
REPOSITORY                                TAG
your_dockerhub_username/node-kubernetes   latest
node                                      10-alpine
Next, log in to the Docker Hub account you created in the prerequisites:
docker login -u your_dockerhub_username
When prompted, enter your Docker Hub account password. Logging in
this way will create a ~/.docker/config.json file in your user’s
home directory with your Docker Hub credentials.
Push the application image to Docker Hub with the docker push
command. Remember to replace your_dockerhub_username with
your own Docker Hub username:
docker push your_dockerhub_username/node-kubernetes
You now have an application image that you can pull to run your
application with Kubernetes. The next step will be to translate your
application service definitions to Kubernetes objects.
Open the docker-compose.yaml file:
nano docker-compose.yaml
Your nodejs service definition currently includes the following instructions:
~/node_project/docker-compose.yaml
...
services:
nodejs:
build:
context: .
dockerfile: Dockerfile
image: nodejs
container_name: nodejs
restart: unless-stopped
env_file: .env
environment:
- MONGO_USERNAME=$MONGO_USERNAME
- MONGO_PASSWORD=$MONGO_PASSWORD
- MONGO_HOSTNAME=db
- MONGO_PORT=$MONGO_PORT
- MONGO_DB=$MONGO_DB
ports:
- "80:8080"
volumes:
- .:/home/node/app
- node_modules:/home/node/app/node_modules
networks:
- app-network
    command: ./wait-for.sh db:27017 -- /home/node/app/node_modules/.bin/nodemon app.js
...
Make the following edits to your service definition:

- Use your node-kubernetes image instead of the local Dockerfile.
- Change the container restart policy from unless-stopped to always.
- Remove the volumes list and the command instruction.
The finished service definition will now look like this:
~/node_project/docker-compose.yaml
...
services:
nodejs:
image: your_dockerhub_username/node-kubernetes
container_name: nodejs
restart: always
env_file: .env
environment:
- MONGO_USERNAME=$MONGO_USERNAME
- MONGO_PASSWORD=$MONGO_PASSWORD
- MONGO_HOSTNAME=db
- MONGO_PORT=$MONGO_PORT
- MONGO_DB=$MONGO_DB
ports:
- "80:8080"
networks:
- app-network
...
Next, scroll down to the db service definition. Here, make the following edits:

- Change the restart policy for the service to always.
- Remove the .env file. Instead of using values from the .env file, we will pass the values for our MONGO_INITDB_ROOT_USERNAME and MONGO_INITDB_ROOT_PASSWORD to the database container using the Secret we will create in Step 4.
The db service definition will now look like this:
~/node_project/docker-compose.yaml
...
db:
image: mongo:4.1.8-xenial
container_name: db
restart: always
environment:
- MONGO_INITDB_ROOT_USERNAME=$MONGO_USERNAME
- MONGO_INITDB_ROOT_PASSWORD=$MONGO_PASSWORD
volumes:
- dbdata:/data/db
networks:
- app-network
...
Finally, remove the node_modules volume from the top-level volumes key, leaving only the dbdata volume:
~/node_project/docker-compose.yaml
...
volumes:
dbdata:
Save and close the file when you are finished editing.
Before translating our service definitions, we will need to write the
.env file that kompose will use to create the ConfigMap with our non-
sensitive information. Please see Step 2 of Containerizing a Node.js
Application for Development With Docker Compose for a longer
explanation of this file.
In that tutorial, we added .env to our .gitignore file to ensure that
it would not be copied to version control. This means that it was not included
when we cloned the node-mongo-docker-dev repository in Step 2 of this
tutorial. We will therefore need to recreate it now.
Create the file:
nano .env
kompose will use this file to create a ConfigMap for our application.
However, instead of assigning all of the variables from the nodejs
service definition in our Compose file, we will add only the MONGO_DB
database name and the MONGO_PORT. We will assign the database
username and password separately when we manually create a Secret
object in Step 4.
Add the following port and database name information to the .env file.
Feel free to rename your database if you would like:
~/node_project/.env
MONGO_PORT=27017
MONGO_DB=sharkinfo
Save and close the file when you are finished editing.
You are now ready to create the files with your object specs. kompose
offers multiple options for translating your resources. You can: - Create
yaml files based on the service definitions in your docker-
compose.yaml file with kompose convert. - Create Kubernetes
objects directly with kompose up. - Create a Helm chart with kompose
convert -c.
For now, we will convert our service definitions to yaml files and then
add to and revise the files kompose creates.
Convert your service definitions to yaml files with the following
command:
kompose convert
You can also name specific or multiple Compose files using the -f flag.
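For example, if you maintained a separate Compose file (the docker-compose.prod.yaml name here is purely illustrative), you could convert it directly:
kompose convert -f docker-compose.prod.yaml
This tutorial works with the default docker-compose.yaml file in the current directory.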
After you run this command, kompose will output information about the
files it has created:
Output
...
Among the files kompose creates will be Deployment manifests for your nodejs and db services, a Service for nodejs, a ConfigMap with your non-sensitive environment values, and a PersistentVolumeClaim for your database data. You will add to and revise these files in the steps that follow. First, create a file called secret.yaml to define a Secret object with your base64-encoded database username and password:
nano secret.yaml
Add the following code to the file, replacing the dummy values with your own encoded credentials:
~/node_project/secret.yaml
apiVersion: v1
kind: Secret
metadata:
name: mongo-secret
data:
MONGO_USERNAME: your_encoded_username
MONGO_PASSWORD: your_encoded_password
We have named the Secret object mongo-secret, but you are free to
name it anything you would like.
Save and close this file when you are finished editing. As you did with
your .env file, be sure to add secret.yaml to your .gitignore file
to keep it out of version control.
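If you still need to generate the encoded values referenced above, you can do so with the base64 utility, as a quick sketch (substitute your own credentials; the -n flag keeps a trailing newline out of the encoded value):
echo -n 'your_database_username' | base64
echo -n 'your_database_password' | base64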
With secret.yaml written, our next step will be to ensure that our
application and database Pods both use the values we added to the file.
Let’s start by adding references to the Secret to our application
Deployment.
Open the file called nodejs-deployment.yaml:
nano nodejs-deployment.yaml
The file’s container specifications include the following environment
variables defined under the env key:
~/node_project/nodejs-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
...
spec:
containers:
- env:
- name: MONGO_DB
valueFrom:
configMapKeyRef:
key: MONGO_DB
name: nodejs-env
- name: MONGO_HOSTNAME
value: db
- name: MONGO_PASSWORD
- name: MONGO_PORT
valueFrom:
configMapKeyRef:
key: MONGO_PORT
name: nodejs-env
- name: MONGO_USERNAME
Add references to your mongo-secret Secret under the MONGO_USERNAME and MONGO_PASSWORD variables so that your application will have access to those values. The file will now look like this:
~/node_project/nodejs-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
...
spec:
containers:
- env:
- name: MONGO_DB
valueFrom:
configMapKeyRef:
key: MONGO_DB
name: nodejs-env
- name: MONGO_HOSTNAME
value: db
- name: MONGO_PASSWORD
valueFrom:
secretKeyRef:
name: mongo-secret
key: MONGO_PASSWORD
- name: MONGO_PORT
valueFrom:
configMapKeyRef:
key: MONGO_PORT
name: nodejs-env
- name: MONGO_USERNAME
valueFrom:
secretKeyRef:
name: mongo-secret
key: MONGO_USERNAME
Save and close the file when you are finished editing.
Next, we’ll add the same values to the db-deployment.yaml file.
Open the file for editing:
nano db-deployment.yaml
In this file, we will add references to our Secret for the following variable
keys: MONGO_INITDB_ROOT_USERNAME and
MONGO_INITDB_ROOT_PASSWORD. The mongo image makes these
variables available so that you can modify the initialization of your
database instance. MONGO_INITDB_ROOT_USERNAME and
MONGO_INITDB_ROOT_PASSWORD together create a root user in the
admin authentication database and ensure that authentication is enabled
when the database container starts.
Using the values we set in our Secret ensures that we will have an
application user with root privileges on the database instance, with
access to all of the administrative and operational privileges of that role.
When working in production, you will want to create a dedicated
application user with appropriately scoped privileges.
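As a minimal sketch of what creating such a user might look like in the mongo shell (the shark_app user name and readWrite role scope here are illustrative assumptions, using the sharkinfo database from this book):
use sharkinfo
db.createUser({
  user: "shark_app",
  pwd: "your_password",
  roles: [ { role: "readWrite", db: "sharkinfo" } ]
})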
Under the MONGO_INITDB_ROOT_USERNAME and
MONGO_INITDB_ROOT_PASSWORD variables, add references to the
Secret values:
~/node_project/db-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
...
spec:
containers:
- env:
- name: MONGO_INITDB_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: mongo-secret
key: MONGO_PASSWORD
- name: MONGO_INITDB_ROOT_USERNAME
valueFrom:
secretKeyRef:
name: mongo-secret
key: MONGO_USERNAME
image: mongo:4.1.8-xenial
...
Save and close the file when you are finished editing.
With your Secret in place, you can move on to creating your database
Service and ensuring that your application container only attempts to
connect to the database once it is fully set up and initialized.
Create a file called db-service.yaml:
nano db-service.yaml
Add the following code to define a Service for your database:
~/node_project/db-service.yaml
apiVersion: v1
kind: Service
metadata:
annotations:
creationTimestamp: null
labels:
io.kompose.service: db
name: db
spec:
ports:
- port: 27017
targetPort: 27017
selector:
io.kompose.service: db
status:
loadBalancer: {}
The selector that we have included here will match this Service
object with our database Pods, which have been defined with the label
io.kompose.service: db by kompose in the db-
deployment.yaml file. We’ve also named this service db.
Save and close the file when you are finished editing.
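Once you have created your objects in the final step of this tutorial, you can confirm that this selector matches your database Pods by querying on the label that kompose applied:
kubectl get pods -l io.kompose.service=db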
Next, let’s add an Init Container field to the containers array in
nodejs-deployment.yaml. This will create an Init Container that we
can use to delay our application container from starting until the db
Service has been created with a Pod that is reachable. This is one of the
possible uses for Init Containers; to learn more about other use cases,
please see the official documentation.
Open the nodejs-deployment.yaml file:
nano nodejs-deployment.yaml
Within the Pod spec and alongside the containers array, we are
going to add an initContainers field with a container that will poll
the db Service.
Add the following code below the ports and resources fields for the
nodejs container, and above the restartPolicy key, within the Pod spec:
~/node_project/nodejs-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
...
spec:
containers:
...
name: nodejs
ports:
- containerPort: 8080
resources: {}
      initContainers:
      - name: init-db
        image: busybox
        command: ['sh', '-c', 'until nc -z db:27017; do echo waiting for db; sleep 2; done']
      restartPolicy: Always
...
This Init Container uses the BusyBox image, a lightweight image that
includes many UNIX utilities. In this case, we’ll use the netcat utility to
poll whether or not the Pod associated with the db Service is accepting
TCP connections on port 27017.
This container command replicates the functionality of the wait-for
script that we removed from our docker-compose.yaml file in Step
3. For a longer discussion of how and why our application used the wait-
for script when working with Compose, please see Step 4 of
Containerizing a Node.js Application for Development with Docker
Compose.
Init Containers run to completion; in our case, this means that our Node
application container will not start until the database container is running
and accepting connections on port 27017. The db Service definition
allows us to guarantee this functionality regardless of the exact location of
the database container, which is mutable.
Save and close the file when you are finished editing.
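Once your Pods are running in the next step, you can also watch the Init Container's polling output by naming it with the -c flag (init-db is the container name defined above):
kubectl logs your_pod -c init-db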
With your database Service created and your Init Container in place to
control the startup order of your containers, you can move on to checking
the storage requirements in your PersistentVolumeClaim and exposing
your application service using a LoadBalancer.
Check the StorageClass resources available in your cluster:
kubectl get storageclass
Output
...
If you are not working with a DigitalOcean cluster, you will need to
create a StorageClass and configure a provisioner of your choice. For
details about how to do this, please see the official documentation.
When kompose created dbdata-
persistentvolumeclaim.yaml, it set the storage resource to
a size that does not meet the minimum size requirements of our
provisioner. We will therefore need to modify our
PersistentVolumeClaim to use the minimum viable DigitalOcean Block
Storage unit: 1GB. Please feel free to modify this to meet your storage
requirements.
Open dbdata-persistentvolumeclaim.yaml:
nano dbdata-persistentvolumeclaim.yaml
Replace the storage value with 1Gi:
~/node_project/dbdata-persistentvolumeclaim.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
creationTimestamp: null
labels:
io.kompose.service: dbdata
name: dbdata
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
status: {}
Save and close the file when you are finished editing.
Next, open the nodejs-service.yaml file and modify the Service type to LoadBalancer so that your application is accessible outside of the cluster:
nano nodejs-service.yaml
~/node_project/nodejs-service.yaml
apiVersion: v1
kind: Service
...
spec:
type: LoadBalancer
ports:
...
Save and close the file when you are finished editing.
You can now create your objects with kubectl create and the -f flag, naming all of the files you have created and revised:
kubectl create -f nodejs-service.yaml,nodejs-deployment.yaml,nodejs-env-configmap.yaml,db-service.yaml,db-deployment.yaml,dbdata-persistentvolumeclaim.yaml,secret.yaml
You will see the following output indicating that your objects have been created:
Output
service/nodejs created
deployment.extensions/nodejs created
configmap/nodejs-env created
service/db created
deployment.extensions/db created
persistentvolumeclaim/dbdata created
secret/mongo-secret created
To check that your Pods are running, type:
kubectl get pods
While the Init Container runs and the database container is created, you will see output like this:
Output
NAME         READY     STATUS              RESTARTS   AGE
db-...       0/1       ContainerCreating   0          10s
nodejs-...   0/1       Init:0/1            0          10s
Once that container has run and your application and database
containers have started, you will see this output:
Output
NAME         READY     STATUS    RESTARTS   AGE
db-...       1/1       Running   0          ...
nodejs-...   1/1       Running   0          ...
The Running STATUS indicates that your Pods are bound to nodes and
that the containers associated with those Pods are running. READY
indicates how many containers in a Pod are running. For more
information, please consult the documentation on Pod lifecycles.
Note: If you see unexpected phases in the STATUS column, remember
that you can troubleshoot your Pods with the following commands:
kubectl describe pods your_pod
kubectl logs your_pod
With your containers running, you can now access the application. To
get the IP for the LoadBalancer, type:
kubectl get svc
You will see the following output:
Output
NAME         TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
db           ClusterIP      ...          <none>        27017/TCP      93s
kubernetes   ClusterIP      ...          <none>        443/TCP        25m12s
nodejs       LoadBalancer   ...          your_lb_ip    80:30729/TCP   93s
The EXTERNAL-IP associated with the nodejs Service is the IP where you can access the application; if it shows as <pending>, the LoadBalancer is still being provisioned. Once an IP is available, navigate to it in your browser. You will see your application landing page.
Click on the Get Shark Info button. You will see a page with an entry
form where you can enter a shark name and a description of that shark’s
general character:
Shark Info Form
Click on the Submit button. You will see a page with this shark
information displayed back to you:
Shark Output
Conclusion
The files you have created in this tutorial are a good starting point to build
from as you move toward production. As you develop your application,
you can work on implementing the following: - Centralized logging and
monitoring. Please see the relevant discussion in Modernizing
Applications for Kubernetes for a general overview. You can also look at
How To Set Up an Elasticsearch, Fluentd and Kibana (EFK) Logging Stack
on Kubernetes to learn how to set up a logging stack with Elasticsearch,
Fluentd, and Kibana. Also check out An Introduction to Service Meshes
for information about how service meshes like Istio implement this
functionality. - Ingress Resources to route traffic to your cluster. This is a
good alternative to a LoadBalancer in cases where you are running
multiple Services, each of which requires its own LoadBalancer, or where
you would like to implement application-level routing strategies (A/B &
canary tests, for example). For more information, check out How to Set Up
an Nginx Ingress with Cert-Manager on DigitalOcean Kubernetes and the
related discussion of routing in the service mesh context in An
Introduction to Service Meshes. - Backup strategies for your Kubernetes
objects. For guidance on implementing backups with Velero (formerly
Heptio Ark) with DigitalOcean’s Kubernetes product, please see How To
Back Up and Restore a Kubernetes Cluster on DigitalOcean Using Heptio
Ark.
How To Scale a Node.js Application with
MongoDB on Kubernetes Using Helm
Prerequisites
To complete this tutorial, you will need: - A Kubernetes 1.10+ cluster with
role-based access control (RBAC) enabled. This setup will use a
DigitalOcean Kubernetes cluster, but you are free to create a cluster using
another method. - The kubectl command-line tool installed on your
local machine or development server and configured to connect to your
cluster. You can read more about installing kubectl in the official
documentation. - Helm installed on your local machine or development
server and Tiller installed on your cluster, following the directions
outlined in Steps 1 and 2 of How To Install Software on Kubernetes
Clusters with the Helm Package Manager. - Docker installed on your local
machine or development server. If you are working with Ubuntu 18.04,
follow Steps 1 and 2 of How To Install and Use Docker on Ubuntu 18.04;
otherwise, follow the official documentation for information about
installing on other operating systems. Be sure to add your non-root user to
the docker group, as described in Step 2 of the linked tutorial. - A
Docker Hub account. For an overview of how to set this up, refer to this
introduction to Docker Hub.
The project's database connection information is defined in db.js, which currently includes the following constants and connection URI:
~/node_project/db.js
...
const {
MONGO_USERNAME,
MONGO_PASSWORD,
MONGO_HOSTNAME,
MONGO_PORT,
MONGO_DB
} = process.env;
...
const url =
`mongodb://${MONGO_USERNAME}:${MONGO_PASSWORD}@${MONGO_HOSTNAME}:${
MONGO_PORT}/${MONGO_DB}?authSource=admin`;
...
To ensure that your application can connect to each member of the replica set, add the MONGO_REPLICASET constant and include it in the connection URI with the replicaSet option:
~/node_project/db.js
...
const {
MONGO_USERNAME,
MONGO_PASSWORD,
MONGO_HOSTNAME,
MONGO_PORT,
MONGO_DB,
MONGO_REPLICASET
} = process.env;
...
const url =
`mongodb://${MONGO_USERNAME}:${MONGO_PASSWORD}@${MONGO_HOSTNAME}:${
MONGO_PORT}/${MONGO_DB}?
replicaSet=${MONGO_REPLICASET}&authSource=admin`;
...
Using the replicaSet option in the options section of the URI allows
us to pass in the name of the replica set, which, along with the hostnames
defined in the MONGO_HOSTNAME constant, will allow us to connect to
the set members.
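As a sketch, with the sharkinfo database name and db replica set name that we will define later in this tutorial, the assembled URI would look something like this (host-0 through host-2 abbreviate the full Pod DNS names; note that the single MONGO_PORT value lands after the final host, and the driver applies the default port to the others):
mongodb://your_username:your_password@host-0,host-1,host-2:27017/sharkinfo?replicaSet=db&authSource=admin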
Save and close the file when you are finished editing.
With your database connection information modified to work with
replica sets, you can now package your application, build the image with
the docker build command, and push it to Docker Hub.
Build the image with docker build and the -t flag, which allows
you to tag the image with a memorable name. In this case, tag the image
with your Docker Hub username and name it node-replicas or a name
of your own choosing:
docker build -t your_dockerhub_username/node-replicas .
The . in the command specifies that the build context is the current
directory.
It will take a minute or two to build the image. Once it is complete,
check your images:
docker images
You will see the following output:
Output
Output
REPOSITORY                              TAG         IMAGE ID   CREATED   SIZE
your_dockerhub_username/node-replicas   latest      ...        ...       ...
node                                    10-alpine   ...        ...       ...
Next, log in to the Docker Hub account you created in the prerequisites:
docker login -u your_dockerhub_username
When prompted, enter your Docker Hub account password. Logging in
this way will create a ~/.docker/config.json file in your non-root
user’s home directory with your Docker Hub credentials.
Push the application image to Docker Hub with the docker push
command. Remember to replace your_dockerhub_username with
your own Docker Hub username:
docker push your_dockerhub_username/node-replicas
You now have an application image that you can pull to run your
replicated application with Kubernetes. The next step will be to configure
specific parameters to use with the MongoDB Helm chart.
The first step is to generate a keyfile that the members of the replica set will use to authenticate to one another. Create the key with openssl and use it to create a Secret object named keyfilesecret:
openssl rand -base64 756 > key.txt
kubectl create secret generic keyfilesecret --from-file=key.txt
You will see the following output indicating that the Secret has been created:
Output
secret/keyfilesecret created
Remove key.txt:
rm key.txt
Alternatively, if you would like to save the file, be sure to restrict its
permissions and add it to your .gitignore file to keep it out of version
control.
Next, create the Secret for your MongoDB admin user. The first step
will be to convert your desired username and password to base64.
Convert your database username:
echo -n 'your_database_username' | base64
Note down the value you see in the output.
Next, convert your password:
echo -n 'your_database_password' | base64
Take note of the value in the output here as well.
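If you would like to double-check an encoded value before using it, you can reverse the encoding:
echo 'your_encoded_value' | base64 --decode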
Open a file for the Secret:
nano secret.yaml
Note: Kubernetes objects are typically defined using YAML, which
strictly forbids tabs and requires two spaces for indentation. If you would
like to check the formatting of any of your YAML files, you can use a
linter or test the validity of your syntax using kubectl create with
the --dry-run and --validate flags:
kubectl create -f your_yaml_file.yaml --dry-run --validate=true
In general, it is a good idea to validate your syntax before creating
resources with kubectl.
Add the following code to the file to create a Secret that will define a
user and password with the encoded values you just created. Be sure to
replace the dummy values here with your own encoded username and
password:
~/node_project/secret.yaml
apiVersion: v1
kind: Secret
metadata:
name: mongo-secret
data:
user: your_encoded_username
password: your_encoded_password
Save and close the file when you are finished editing, then create the Secret object with kubectl create:
kubectl create -f secret.yaml
You will see the following output:
Output
secret/mongo-secret created
Next, check the StorageClass resources available in your cluster:
kubectl get storageclass
Output
...
If you are not working with a DigitalOcean cluster, you will need to
create a StorageClass and configure a provisioner of your choice. For
details about how to do this, please see the official documentation.
Now that you have ensured that you have a StorageClass configured,
open mongodb-values.yaml for editing:
nano mongodb-values.yaml
You will set values in this file that will do the following: - Enable
authorization. - Reference your keyfilesecret and mongo-secret
objects. - Specify 1Gi for your PersistentVolumes. - Set your replica set
name to db. - Specify 3 replicas for the set. - Pin the mongo image to the
latest version at the time of writing: 4.1.9.
Paste the following code into the file:
~/node_project/mongodb-values.yaml
replicas: 3
port: 27017
replicaSetName: db
podDisruptionBudget: {}
auth:
enabled: true
existingKeySecret: keyfilesecret
existingAdminSecret: mongo-secret
imagePullSecrets: []
installImage:
repository: unguiculus/mongodb-install
tag: 0.7
pullPolicy: Always
copyConfigImage:
repository: busybox
tag: 1.29.3
pullPolicy: Always
image:
repository: mongo
tag: 4.1.9
pullPolicy: Always
extraVars: {}
metrics:
enabled: false
image:
repository: ssalaues/mongodb-exporter
tag: 0.6.1
pullPolicy: IfNotPresent
port: 9216
path: /metrics
socketTimeout: 3s
syncTimeout: 1m
prometheusServiceDiscovery: true
resources: {}
podAnnotations: {}
securityContext:
enabled: true
runAsUser: 999
fsGroup: 999
runAsNonRoot: true
init:
resources: {}
timeout: 900
resources: {}
nodeSelector: {}
affinity: {}
tolerations: []
extraLabels: {}
persistentVolume:
enabled: true
#storageClass: "-"
accessModes:
- ReadWriteOnce
size: 1Gi
annotations: {}
serviceAnnotations: {}
terminationGracePeriodSeconds: 30
tls:
enabled: false
configmap: {}
readinessProbe:
initialDelaySeconds: 5
timeoutSeconds: 1
failureThreshold: 3
periodSeconds: 10
successThreshold: 1
livenessProbe:
initialDelaySeconds: 30
timeoutSeconds: 5
failureThreshold: 3
periodSeconds: 10
successThreshold: 1
You can now install the mongodb-replicaset chart with the following command, naming the release mongo and passing in the values file you just created:
helm install --name mongo -f mongodb-values.yaml stable/mongodb-replicaset
You will see output like the following as the release is deployed:
Output
NAME: mongo
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
==> v1/ConfigMap
mongo-mongodb-replicaset-init 1 1s
mongo-mongodb-replicaset-mongodb 1 1s
mongo-mongodb-replicaset-tests 1 0s
...
You can now check on the creation of your Pods with the following
command:
kubectl get pods
You will see output like the following as the Pods are being created:
Output
NAME                         READY     STATUS              RESTARTS   AGE
mongo-mongodb-replicaset-0   1/1       Running             0          ...
mongo-mongodb-replicaset-1   0/1       ContainerCreating   0          ...
Once all three members are up, you will see each Pod with a Running status:
Output
NAME                         READY     STATUS    RESTARTS   AGE
mongo-mongodb-replicaset-0   1/1       Running   0          ...
mongo-mongodb-replicaset-1   1/1       Running   0          ...
mongo-mongodb-replicaset-2   1/1       Running   0          ...
The Running STATUS indicates that your Pods are bound to nodes and
that the containers associated with those Pods are running. READY
indicates how many containers in a Pod are running. For more
information, please consult the documentation on Pod lifecycles.
Note: If you see unexpected phases in the STATUS column, remember
that you can troubleshoot your Pods with the following commands:
kubectl describe pods your_pod
kubectl logs your_pod
Each of the Pods in your StatefulSet has a name that combines the name
of the StatefulSet with the ordinal index of the Pod. Because we created
three replicas, our StatefulSet members are numbered 0-2, and each has a
stable DNS entry comprised of the following elements:
$(statefulset-name)-$(ordinal).$(service
name).$(namespace).svc.cluster.local.
In our case, the StatefulSet and the Headless Service created by the
mongodb-replicaset chart have the same names:
kubectl get statefulset
Output
NAME                       DESIRED   CURRENT   AGE
mongo-mongodb-replicaset   3         3         4m35s
Check the Services running in your cluster:
kubectl get svc
Output
NAME                              TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)     AGE
kubernetes                        ClusterIP   ...          <none>        443/TCP     42m
mongo-mongodb-replicaset          ClusterIP   None         <none>        27017/TCP   4m35s
mongo-mongodb-replicaset-client   ClusterIP   None         <none>        27017/TCP   4m35s
This means that the first member of our StatefulSet will have the
following DNS entry:
mongo-mongodb-replicaset-0.mongo-mongodb-
replicaset.default.svc.cluster.local
Because we need our application to connect to each MongoDB instance,
it’s essential that we have this information so that we can communicate
directly with the Pods, rather than with the Service. When we create our
custom application Helm chart, we will pass the DNS entries for each Pod
to our application using environment variables.
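If you would like to verify these DNS entries from inside the cluster, one option is a short-lived Pod running nslookup (the busybox:1.28 image here is an assumption, chosen because its nslookup handles cluster DNS well):
kubectl run -i --tty --rm dns-test --image=busybox:1.28 --restart=Never -- nslookup mongo-mongodb-replicaset-0.mongo-mongodb-replicaset.default.svc.cluster.local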
With your database instances up and running, you are ready to create the
chart for your Node application.
Create a chart called nodeapp with the helm create command:
helm create nodeapp
Next, open the nodeapp/values.yaml file and modify it so that the replica count, application image, and Service definition match your application:
nano nodeapp/values.yaml
~/node_project/nodeapp/values.yaml
replicaCount: 3
image:
repository: your_dockerhub_username/node-replicas
tag: latest
pullPolicy: IfNotPresent
nameOverride: ""
fullnameOverride: ""
service:
type: LoadBalancer
port: 80
targetPort: 8080
...
Save and close the file when you are finished editing.
Next, open a secret.yaml file in the nodeapp/templates
directory:
nano nodeapp/templates/secret.yaml
In this file, add values for your MONGO_USERNAME and
MONGO_PASSWORD application constants. These are the constants that
your application will expect to have access to at runtime, as specified in
db.js, your database connection file. As you add the values for these
constants, remember to use the base64-encoded values that you used
earlier in Step 2 when creating your mongo-secret object. If you need
to recreate those values, you can return to Step 2 and run the relevant
commands again.
Add the following code to the file:
~/node_project/nodeapp/templates/secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: {{ .Release.Name }}-auth
data:
MONGO_USERNAME: your_encoded_username
MONGO_PASSWORD: your_encoded_password
The name of this Secret object will depend on the name of your Helm
release, which you will specify when you deploy the application chart.
Save and close the file when you are finished.
Next, open a file to create a ConfigMap for your application:
nano nodeapp/templates/configmap.yaml
In this file, we will define the remaining variables that our application
expects: MONGO_HOSTNAME, MONGO_PORT, MONGO_DB, and
MONGO_REPLICASET. Our MONGO_HOSTNAME variable will include the
DNS entry for each instance in our replica set, since this is what the
MongoDB connection URI requires.
According to the Kubernetes documentation, when an application
implements liveness and readiness checks, SRV records should be used
when connecting to the Pods. As discussed in Step 3, our Pod SRV records
follow this pattern: $(statefulset-
name)-$(ordinal).$(service
name).$(namespace).svc.cluster.local. Since our MongoDB
StatefulSet implements liveness and readiness checks, we should use these
stable identifiers when defining the values of the MONGO_HOSTNAME
variable.
Add the following code to the file to define the MONGO_HOSTNAME,
MONGO_PORT, MONGO_DB, and MONGO_REPLICASET variables. You
are free to use another name for your MONGO_DB database, but your
MONGO_HOSTNAME and MONGO_REPLICASET values must be written as
they appear here:
~/node_project/nodeapp/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-config
data:
MONGO_HOSTNAME: "mongo-mongodb-replicaset-0.mongo-mongodb-
replicaset.default.svc.cluster.local,mongo-mongodb-replicaset-
1.mongo-mongodb-replicaset.default.svc.cluster.local,mongo-mongodb-
replicaset-2.mongo-mongodb-replicaset.default.svc.cluster.local"
MONGO_PORT: "27017"
MONGO_DB: "sharkinfo"
MONGO_REPLICASET: "db"
Because we have already created the StatefulSet object and replica set,
the hostnames listed here must appear in your file exactly as
they appear in this example. If you destroy these objects and rename your
MongoDB Helm release, then you will need to revise the values included
in this ConfigMap. The same applies for MONGO_REPLICASET, since we
specified the replica set name with our MongoDB release.
Also note that the values listed here are quoted, which is the expectation
for environment variables in Helm.
Save and close the file when you are finished editing.
With your chart parameter values defined and your Secret and
ConfigMap manifests created, you can edit the application Deployment
template to use your environment variables.
Step 5 — Integrating Environment Variables into Your Helm
Deployment
With the files for our application Secret and ConfigMap in place, we will
need to make sure that our application Deployment can use these values.
We will also customize the liveness and readiness probes that are already
defined in the Deployment manifest.
Open the application Deployment template for editing:
nano nodeapp/templates/deployment.yaml
Though this is a YAML file, Helm templates use a different syntax from
standard Kubernetes YAML files in order to generate manifests. For more
information about templates, see the Helm documentation.
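As a quick illustration, with a release named nodejs, a template line like name: {{ .Release.Name }}-config renders to name: nodejs-config in the final manifest, which matches the ConfigMap name you will see in the helm install output later in this step.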
In the file, first add an env key to your application container
specifications, below the imagePullPolicy key and above ports:
~/node_project/nodeapp/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
...
spec:
containers:
- name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
env:
ports:
Next, add your Secret and ConfigMap references under the env key. The container specifications will now look like this:
~/node_project/nodeapp/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
...
spec:
containers:
- name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
env:
          - name: MONGO_USERNAME
            valueFrom:
              secretKeyRef:
                key: MONGO_USERNAME
                name: {{ .Release.Name }}-auth
          - name: MONGO_PASSWORD
            valueFrom:
              secretKeyRef:
                key: MONGO_PASSWORD
                name: {{ .Release.Name }}-auth
- name: MONGO_HOSTNAME
valueFrom:
configMapKeyRef:
key: MONGO_HOSTNAME
name: {{ .Release.Name }}-config
          - name: MONGO_PORT
            valueFrom:
              configMapKeyRef:
                key: MONGO_PORT
                name: {{ .Release.Name }}-config
          - name: MONGO_DB
            valueFrom:
              configMapKeyRef:
                key: MONGO_DB
                name: {{ .Release.Name }}-config
          - name: MONGO_REPLICASET
            valueFrom:
              configMapKeyRef:
                key: MONGO_REPLICASET
                name: {{ .Release.Name }}-config
Below the env section, the container's ports are defined. Confirm that the containerPort matches the 8080 port that your application exposes:
~/node_project/nodeapp/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
...
spec:
containers:
...
env:
...
ports:
- name: http
containerPort: 8080
protocol: TCP
...
Next, let’s modify the liveness and readiness checks that are included in
this Deployment manifest by default. These checks ensure that our
application Pods are running and ready to serve traffic: - Readiness probes
assess whether or not a Pod is ready to serve traffic, stopping all requests
to the Pod until the checks succeed. - Liveness probes check basic
application behavior to determine whether or not the application in the
container is running and behaving as expected. If a liveness probe fails,
Kubernetes will restart the container.
For more about both, see the relevant discussion in Architecting
Applications for Kubernetes.
In our case, we will build on the httpGet request that Helm has
provided by default and test whether or not our application is accepting
requests on the /sharks endpoint. The kubelet service will perform
the probe by sending a GET request to the Node server running in the
application Pod’s container and listening on port 8080. If the status code
for the response is between 200 and 400, then the kubelet will conclude
that the container is healthy. Otherwise, in the case of a 400 or 500 status,
kubelet will either stop traffic to the container, in the case of the
readiness probe, or restart the container, in the case of the liveness probe.
Add the following modification to the stated path for the liveness and
readiness probes:
~/node_project/nodeapp/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
...
spec:
containers:
...
env:
...
ports:
- name: http
containerPort: 8080
protocol: TCP
livenessProbe:
httpGet:
path: /sharks
port: http
readinessProbe:
httpGet:
path: /sharks
port: http
Save and close the file when you are finished editing.
You are now ready to create your application release with Helm. Run the
following helm install command, which includes the name of the
release and the location of the chart directory:
helm install --name nodejs ./nodeapp
Remember that you can run helm install with the --dry-run
and --debug options first, as discussed in Step 3, to check the generated
manifests for your release.
Again, because we are not including the --namespace flag with
helm install, our chart objects will be created in the default
namespace.
You will see the following output indicating that your release has been
created:
Output
NAME: nodejs
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
==> v1/ConfigMap
nodejs-config 4 1s
==> v1/Deployment
nodejs-nodeapp 0/3 3 0 1s
...
Again, the output will indicate the status of the release, along with
information about the created objects and how you can interact with them.
Check the status of your Pods:
kubectl get pods
Output
NAME                         READY     STATUS    RESTARTS   AGE
mongo-mongodb-replicaset-0   1/1       Running   0          ...
mongo-mongodb-replicaset-1   1/1       Running   0          ...
mongo-mongodb-replicaset-2   1/1       Running   0          ...
nodejs-nodeapp-...           1/1       Running   0          ...
nodejs-nodeapp-...           1/1       Running   0          ...
nodejs-nodeapp-...           1/1       Running   0          ...
Now that your replicated application is working, let’s add some test data
to ensure that replication is working between members of the replica set.
To access the application, get the external IP for your nodejs LoadBalancer Service with kubectl get svc and navigate to it in your browser. Click on the Get Shark Info button. You will see a page with an entry form where you can enter a shark name and a description of that shark's general character:
Shark Info Form
Enter a new shark of your choosing. We'll go with Whale Shark and Large. Click on the Submit button. You will see a page with this shark information displayed back to you:
Shark Output
Let’s check that the data we’ve entered has been replicated between the
primary and secondary members of our replica set.
Get a list of your Pods:
kubectl get pods
Output
NAME                         READY     STATUS    RESTARTS   AGE
mongo-mongodb-replicaset-0   1/1       Running   0          ...
mongo-mongodb-replicaset-1   1/1       Running   0          ...
mongo-mongodb-replicaset-2   1/1       Running   0          ...
...
To access the mongo shell on your Pods, you can use the kubectl
exec command and the username you used to create your mongo-
secret in Step 2. Access the mongo shell on the first Pod in the
StatefulSet with the following command:
kubectl exec -it mongo-mongodb-replicaset-0 -- mongo -u your_database_username -p --authenticationDatabase admin
When prompted, enter the password associated with this username:
Output
Enter password:
...
db:PRIMARY>
Though the prompt itself includes this information, you can manually
check to see which replica set member is the primary with the
rs.isMaster() method:
rs.isMaster()
You will see output like the following, indicating the hostname of the
primary:
Output
db:PRIMARY> rs.isMaster()
"hosts" : [
"mongo-mongodb-replicaset-0.mongo-mongodb-
replicaset.default.svc.cluster.local:27017",
"mongo-mongodb-replicaset-1.mongo-mongodb-
replicaset.default.svc.cluster.local:27017",
"mongo-mongodb-replicaset-2.mongo-mongodb-
replicaset.default.svc.cluster.local:27017"
],
...
"primary" : "mongo-mongodb-replicaset-0.mongo-mongodb-
replicaset.default.svc.cluster.local:27017",
...
Switch to the sharkinfo database you created and list its collections:
use sharkinfo
Output
switched to db sharkinfo
show collections
Output
sharks
Running db.sharks.find() on this primary member will display the Whale Shark entry you created. Next, exit the shell and use the same kubectl exec command to access the mongo shell on one of your secondary Pods, such as mongo-mongodb-replicaset-1. Once connected, switch to the sharkinfo database:
use sharkinfo
Output
switched to db sharkinfo
Allow read operations on this secondary member with the rs.slaveOk() method, then query the sharks collection. The data you entered on the primary will be displayed, confirming that replication is working between the members of your replica set:
db:SECONDARY> db.sharks.find()
Conclusion
You have now deployed a replicated, highly-available shark information
application on a Kubernetes cluster using Helm charts. This demo
application and the workflow outlined in this tutorial can act as a starting
point as you build custom charts for your application and take advantage
of Helm’s stable repository and other chart repositories.
As you move toward production, consider implementing the following:
- Centralized logging and monitoring. Please see the relevant discussion in
Modernizing Applications for Kubernetes for a general overview. You can
also look at How To Set Up an Elasticsearch, Fluentd and Kibana (EFK)
Logging Stack on Kubernetes to learn how to set up a logging stack with
Elasticsearch, Fluentd, and Kibana. Also check out An Introduction to
Service Meshes for information about how service meshes like Istio
implement this functionality. - Ingress Resources to route traffic to your
cluster. This is a good alternative to a LoadBalancer in cases where you
are running multiple Services, each of which requires its own LoadBalancer,
or where you would like to implement application-level routing strategies
(A/B & canary tests, for example). For more information, check out How
to Set Up an Nginx Ingress with Cert-Manager on DigitalOcean
Kubernetes and the related discussion of routing in the service mesh
context in An Introduction to Service Meshes. - Backup strategies for your
Kubernetes objects. For guidance on implementing backups with Velero
(formerly Heptio Ark) with DigitalOcean’s Kubernetes product, please see
How To Back Up and Restore a Kubernetes Cluster on DigitalOcean Using
Heptio Ark.
To learn more about Helm, see An Introduction to Helm, the Package
Manager for Kubernetes, How To Install Software on Kubernetes Clusters
with the Helm Package Manager, and the Helm documentation.
How To Secure a Containerized Node.js
Application with Nginx, Let’s Encrypt, and
Docker Compose
There are multiple ways to enhance the flexibility and security of your
Node.js application. Using a reverse proxy like Nginx offers you the
ability to load balance requests, cache static content, and implement
Transport Layer Security (TLS). Enabling encrypted HTTPS on your
server ensures that communication to and from your application remains
secure.
Implementing a reverse proxy with TLS/SSL on containers involves a
different set of procedures from working directly on a host operating
system. For example, if you were obtaining certificates from Let’s Encrypt
for an application running on a server, you would install the required
software directly on your host. Containers allow you to take a different
approach. Using Docker Compose, you can create containers for your
application, your web server, and the Certbot client that will enable you to
obtain your certificates. By following these steps, you can take advantage
of the modularity and portability of a containerized workflow.
In this tutorial, you will deploy a Node.js application with an Nginx
reverse proxy using Docker Compose. You will obtain TLS/SSL
certificates for the domain associated with your application and ensure
that it receives a high security rating from SSL Labs. Finally, you will set
up a cron job to renew your certificates so that your domain remains
secure.
Prerequisites
To follow this tutorial, you will need: - An Ubuntu 18.04 server, a non-root
user with sudo privileges, and an active firewall. For guidance on how to
set these up, please see this Initial Server Setup guide. - Docker and
Docker Compose installed on your server. For guidance on installing
Docker, follow Steps 1 and 2 of How To Install and Use Docker on Ubuntu
18.04. For guidance on installing Compose, follow Step 1 of How To
Install Docker Compose on Ubuntu 18.04. - A registered domain name.
This tutorial will use example.com throughout. You can get one for free at
Freenom, or use the domain registrar of your choice. - Both of the
following DNS records set up for your server. You can follow this
introduction to DigitalOcean DNS for details on how to add them to a
DigitalOcean account, if that’s what you’re using:
An A record with example.com pointing to your server’s public IP
address.
An A record with www.example.com pointing to your server’s
public IP address.
With the project code in place, take a look at the Dockerfile that builds the application image:
~/node_project/Dockerfile
FROM node:10-alpine
RUN mkdir -p /home/node/app/node_modules && chown -R node:node /home/node/app
WORKDIR /home/node/app
COPY package*.json ./
USER node
RUN npm install
COPY --chown=node:node . .
EXPOSE 8080
CMD [ "node", "app.js" ]
These instructions build a Node image by copying the project code from
the current directory to the container and installing dependencies with
npm install. They also take advantage of Docker’s caching and image
layering by separating the copy of package.json and package-
lock.json, containing the project’s listed dependencies, from the copy
of the rest of the application code. Finally, the instructions specify that the
container will be run as the non-root node user with the appropriate
permissions set on the application code and node_modules directories.
For more information about this Dockerfile and Node image best
practices, please see the complete discussion in Step 3 of How To Build a
Node.js Application with Docker.
To test the application without SSL, you can build and tag the image
using docker build and the -t flag. We will call the image node-
demo, but you are free to name it something else:
docker build -t node-demo .
Once the build process is complete, you can list your images with
docker images:
docker images
You will see the following output, confirming the application image
build:
Output
REPOSITORY   TAG       IMAGE ID   CREATED           SIZE
node-demo    latest    ...        ... seconds ago   70.7MB
Next, create the container with docker run. We will include three
flags with this command: - -p: This publishes the port on the container
and maps it to a port on our host. We will use port 80 on the host, but you
should feel free to modify this as necessary if you have another process
running on that port. For more information about how this works, see this
discussion in the Docker docs on port binding. - -d: This runs the
container in the background. - --name: This allows us to give the
container a memorable name.
Run the following command to create and start the container:
docker run --name node-demo -p 80:8080 -d node-demo
Inspect your running containers with docker ps:
docker ps
You will see output confirming that your application container is
running:
Output
CONTAINER ID   IMAGE       COMMAND         CREATED   STATUS   PORTS                  NAMES
...            node-demo   "node app.js"   ...       Up ...   0.0.0.0:80->8080/tcp   node-demo
Now that you have tested the application, you can stop the container and
remove the images. Use docker ps again to get your CONTAINER ID:
docker ps
Output
CONTAINER ID        IMAGE       ...   NAMES
your_container_id   node-demo   ...   node-demo
Stop the container with docker stop, substituting your own CONTAINER ID, and remove the images and any unused resources with docker system prune:
docker stop your_container_id
docker system prune -a
With the application image tested, you can move on to your web server configuration. Create a directory for it and open the configuration file for editing:
mkdir nginx-conf
nano nginx-conf/nginx.conf
Add the following server block to proxy requests to your application container and to serve Certbot's validation requests:
~/node_project/nginx-conf/nginx.conf
server {
        listen 80;
        listen [::]:80;
        root /var/www/html;
        index index.html index.htm index.nginx-debian.html;

        server_name example.com www.example.com;

        location / {
                proxy_pass http://nodejs:8080;
        }

        location ~ /.well-known/acme-challenge {
                allow all;
                root /var/www/html;
        }
}
This server block will allow us to start the Nginx container as a reverse
proxy, which will pass requests to our Node application container. It will
also allow us to use Certbot’s webroot plugin to obtain certificates for our
domain. This plugin depends on the HTTP-01 validation method, which
uses an HTTP request to prove that Certbot can access resources from a
server that responds to a given domain name.
Once you have finished editing, save and close the file. To learn more
about Nginx server and location block algorithms, please refer to this
article on Understanding Nginx Server and Location Block Selection
Algorithms.
With the web server configuration details in place, we can move on to
creating our docker-compose.yml file, which will allow us to create
our application services and the Certbot container we will use to obtain
our certificates.
In your project directory, create and open the docker-compose.yml file:
nano docker-compose.yml
First, define the nodejs application service:
~/node_project/docker-compose.yml
version: '3'
services:
nodejs:
build:
context: .
dockerfile: Dockerfile
image: nodejs
container_name: nodejs
restart: unless-stopped
Next, below the nodejs service definition, add the webserver service for Nginx:
~/node_project/docker-compose.yml
services:
nodejs:
...
networks:
- app-network
...
webserver:
image: nginx:mainline-alpine
container_name: webserver
restart: unless-stopped
ports:
- "80:80"
volumes:
- web-root:/var/www/html
- ./nginx-conf:/etc/nginx/conf.d
- certbot-etc:/etc/letsencrypt
- certbot-var:/var/lib/letsencrypt
depends_on:
- nodejs
networks:
- app-network
Some of the settings we defined for the nodejs service remain the
same, but we’ve also made the following changes: - image: This tells
Compose to pull the latest Alpine-based Nginx image from Docker Hub.
For more information about alpine images, please see Step 3 of How To
Build a Node.js Application with Docker. - ports: This exposes port 80
to enable the configuration options we’ve defined in our Nginx
configuration.
We have also specified the following named volumes and bind mounts:
- web-root:/var/www/html: This will add our site’s static assets,
copied to a volume called web-root, to the /var/www/html
directory on the container. - ./nginx-conf:/etc/nginx/conf.d:
This will bind mount the Nginx configuration directory on the host to the
relevant directory on the container, ensuring that any changes we make to
files on the host will be reflected in the container. - certbot-
etc:/etc/letsencrypt: This will mount the relevant Let’s Encrypt
certificates and keys for our domain to the appropriate directory on the
container. - certbot-var:/var/lib/letsencrypt: This mounts
Let’s Encrypt’s default working directory to the appropriate directory on
the container.
Next, add the configuration options for the certbot container. Be sure
to replace the domain and email information with your own domain name
and contact email:
~/node_project/docker-compose.yml
...
certbot:
image: certbot/certbot
container_name: certbot
volumes:
- certbot-etc:/etc/letsencrypt
- certbot-var:/var/lib/letsencrypt
- web-root:/var/www/html
depends_on:
- webserver
...
volumes:
certbot-etc:
certbot-var:
web-root:
driver: local
driver_opts:
type: none
device: /home/sammy/node_project/views/
o: bind
networks:
app-network:
driver: bridge
version: '3'
services:
nodejs:
build:
context: .
dockerfile: Dockerfile
image: nodejs
container_name: nodejs
restart: unless-stopped
networks:
- app-network
webserver:
image: nginx:mainline-alpine
container_name: webserver
restart: unless-stopped
ports:
- "80:80"
volumes:
- web-root:/var/www/html
- ./nginx-conf:/etc/nginx/conf.d
- certbot-etc:/etc/letsencrypt
- certbot-var:/var/lib/letsencrypt
depends_on:
- nodejs
networks:
- app-network
certbot:
image: certbot/certbot
container_name: certbot
volumes:
- certbot-etc:/etc/letsencrypt
- certbot-var:/var/lib/letsencrypt
- web-root:/var/www/html
depends_on:
- webserver
command: certonly --webroot --webroot-path=/var/www/html --email sammy@example.com --agree-tos --no-eff-email --staging -d example.com -d www.example.com
volumes:
certbot-etc:
certbot-var:
web-root:
driver: local
driver_opts:
type: none
device: /home/sammy/node_project/views/
o: bind
networks:
app-network:
driver: bridge
With the service definitions in place, you are ready to start the
containers and test your certificate requests.
Create the services with docker-compose up and the -d flag, which will run the containers in the background:
docker-compose up -d
You will see output confirming that your services have been created. Check the status of the services with docker-compose ps:
docker-compose ps
If everything was successful, your nodejs and webserver services will be Up and the certbot container will have exited with a 0 status message:
Output
  Name                 Command               State          Ports
------------------------------------------------------------------------
certbot     certbot certonly --webroot ...   Exit 0
nodejs      node app.js                      Up       8080/tcp
webserver   nginx -g daemon off;             Up       0.0.0.0:80->80/tcp
If you see anything other than Up in the State column for the nodejs
and webserver services, or an exit status other than 0 for the certbot
container, be sure to check the service logs with the docker-compose
logs command:
docker-compose logs service_name
You can now check that your credentials have been mounted to the
webserver container with docker-compose exec:
docker-compose exec webserver ls -la
/etc/letsencrypt/live
If your request was successful, you will see output like this:
Output
total 16
drwx------    3 root     root          4096 ...  .
drwxr-xr-x    9 root     root          4096 ...  ..
-rw-r--r--    1 root     root           740 ...  README
drwxr-xr-x    2 root     root          4096 ...  example.com
Now that you know your request will be successful, you can edit the
certbot service definition to remove the --staging flag.
Open docker-compose.yml:
nano docker-compose.yml
Find the section of the file with the certbot service definition, and
replace the --staging flag in the command option with the --force-
renewal flag, which will tell Certbot that you want to request a new
certificate with the same domains as an existing certificate. The certbot
service definition should now look like this:
~/node_project/docker-compose.yml
...
certbot:
image: certbot/certbot
container_name: certbot
volumes:
- certbot-etc:/etc/letsencrypt
- certbot-var:/var/lib/letsencrypt
- web-root:/var/www/html
depends_on:
- webserver
command: certonly --webroot --webroot-path=/var/www/html --email sammy@example.com --agree-tos --no-eff-email --force-renewal -d example.com -d www.example.com
...
Save and close the file when you are finished editing. You can now recreate the certbot service with docker-compose up. Include the --no-deps option to tell Compose that it can skip starting the webserver service, since it is already running:
docker-compose up --force-recreate --no-deps certbot
You will see output indicating that your certificate request was successful:
Output
certbot      | IMPORTANT NOTES:
certbot      |  - Congratulations! Your certificate and chain have been
certbot      |    saved at:
certbot      |    /etc/letsencrypt/live/example.com/fullchain.pem
certbot      |    Your key file has been saved at:
certbot      |    /etc/letsencrypt/live/example.com/privkey.pem
...
certbot      |  - If you like Certbot, please consider supporting our work by:
certbot      |    Donating to ISRG / Let's Encrypt:   https://letsencrypt.org/donate
certbot      |    Donating to EFF:                    https://eff.org/donate-le
certbot exited with code 0
With your certificates in place, you can modify your Nginx configuration to redirect HTTP traffic to HTTPS and add SSL directives and security headers. Remove your existing configuration file and open a new one:
rm nginx-conf/nginx.conf
nano nginx-conf/nginx.conf
Add the following code to the file, substituting your own domain for example.com:
~/node_project/nginx-conf/nginx.conf
server {
        listen 80;
        listen [::]:80;
        server_name example.com www.example.com;

        location ~ /.well-known/acme-challenge {
                allow all;
                root /var/www/html;
        }

        location / {
                rewrite ^ https://$host$request_uri? permanent;
        }
}

server {
        listen 443 ssl http2;
        listen [::]:443 ssl http2;
        server_name example.com www.example.com;
        server_tokens off;

        ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
        ssl_buffer_size 8k;
        ssl_dhparam /etc/ssl/certs/dhparam-2048.pem;
        ssl_prefer_server_ciphers on;
        ssl_ciphers ECDH+AESGCM:ECDH+AES256:ECDH+AES128:DH+3DES:!ADH:!AECDH:!MD5;
        ssl_ecdh_curve secp384r1;
        ssl_session_tickets off;
        ssl_stapling on;
        ssl_stapling_verify on;
        resolver 8.8.8.8;

        location / {
                try_files $uri @nodejs;
        }

        location @nodejs {
                proxy_pass http://nodejs:8080;
                add_header X-Frame-Options "SAMEORIGIN" always;
                add_header X-XSS-Protection "1; mode=block" always;
                add_header X-Content-Type-Options "nosniff" always;
                add_header Referrer-Policy "no-referrer-when-downgrade" always;
                add_header Content-Security-Policy "default-src * data: 'unsafe-eval' 'unsafe-inline'" always;
                # add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;
        }

        root /var/www/html;
        index index.html index.htm index.nginx-debian.html;
}
The HTTP server block specifies the webroot for Certbot renewal
requests to the .well-known/acme-challenge directory. It also
includes a rewrite directive that directs HTTP requests to the root
directory to HTTPS.
The HTTPS server block enables ssl and http2. To read more about
how HTTP/2 iterates on HTTP protocols and the benefits it can have for
website performance, please see the introduction to How To Set Up Nginx
with HTTP/2 Support on Ubuntu 18.04. This block also includes a series of
options to ensure that you are using the most up-to-date SSL protocols and
ciphers and that OCSP stapling is turned on. OCSP stapling allows you to
offer a time-stamped response from your certificate authority during the
initial TLS handshake, which can speed up the authentication process.
The block also specifies your SSL and Diffie-Hellman credentials and
key locations.
Finally, we’ve moved the proxy pass information to this block,
including a location block with a try_files directive, pointing requests
to our aliased Node.js application container, and a location block for that
alias, which includes security headers that will enable us to get A ratings
on things like the SSL Labs and Security Headers server test sites. These
headers include X-Frame-Options, X-Content-Type-Options,
Referrer-Policy, Content-Security-Policy, and X-XSS-
Protection. The HTTP Strict Transport Security (HSTS)
header is commented out — enable this only if you understand the
implications and have assessed its “preload” functionality.
Once you have finished editing, save and close the file.
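After you recreate the webserver service later in this step, you can spot-check these settings from the command line, assuming example.com resolves to your server:
curl -I https://example.com
echo | openssl s_client -connect example.com:443 -status 2>/dev/null | grep -i -A 2 'OCSP response'
The first command prints the response headers, which should include the add_header values above; the second checks for a stapled OCSP response.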
Before recreating the webserver service, you will need to add a few
things to the service definition in your docker-compose.yml file,
including relevant port information for HTTPS and a Diffie-Hellman
volume definition.
Open the file:
nano docker-compose.yml
In the webserver service definition, add the following port mapping
and the dhparam named volume:
~/node_project/docker-compose.yml
...
webserver:
image: nginx:mainline-alpine
container_name: webserver
restart: unless-stopped
ports:
- "80:80"
- "443:443"
volumes:
- web-root:/var/www/html
- ./nginx-conf:/etc/nginx/conf.d
- certbot-etc:/etc/letsencrypt
- certbot-var:/var/lib/letsencrypt
- dhparam:/etc/ssl/certs
depends_on:
- nodejs
networks:
- app-network
...
volumes:
...
dhparam:
driver: local
driver_opts:
type: none
device: /home/sammy/node_project/dhparam/
o: bind
Save and close the file when you are finished editing.
Next, create the Diffie-Hellman key referenced in your Nginx configuration with the openssl command, writing it to the dhparam directory on your host:
mkdir -p ~/node_project/dhparam
sudo openssl dhparam -out /home/sammy/node_project/dhparam/dhparam-2048.pem 2048
Recreate the webserver service and check your services with docker-compose ps:
docker-compose up -d --force-recreate --no-deps webserver
docker-compose ps
Output
  Name                 Command               State                     Ports
----------------------------------------------------------------------------------------------
certbot     certbot certonly --webroot ...   Exit 0
nodejs      node app.js                      Up      8080/tcp
webserver   nginx -g daemon off;             Up      0.0.0.0:443->443/tcp, 0.0.0.0:80->80/tcp
Finally, you can visit your domain to ensure that everything is working
as expected. Navigate your browser to https://example.com,
making sure to substitute example.com with your own domain name.
You will see the following landing page:
To renew your certificates automatically, create a script that will renew them and reload your Nginx configuration. Open a file called ssl_renew.sh in your project directory:
nano ssl_renew.sh
Add the following code, which renews the certificates with a dry run and reloads Nginx by sending a SIGHUP signal to the webserver container:
~/node_project/ssl_renew.sh
#!/bin/bash
COMPOSE="/usr/local/bin/docker-compose --no-ansi"
DOCKER="/usr/bin/docker"
cd /home/sammy/node_project/
$COMPOSE run certbot renew --dry-run && $COMPOSE kill -s SIGHUP webserver
$DOCKER system prune -af
Save and close the file, then make it executable:
chmod +x ssl_renew.sh
Next, open the root crontab file to run the renewal script at a specified interval:
sudo crontab -e
If this is your first time editing this file, you will be asked to choose an editor:
crontab
no crontab for root - using an empty one

Select an editor.  To change later, run 'select-editor'.
  1. /bin/ed
  2. /usr/bin/nano        <---- easiest
  3. /usr/bin/vim.basic
  4. /usr/bin/vim.tiny

Choose 1-4 [2]:
...
At the bottom of the file, add the following line:
crontab
...
*/5 * * * * /home/sammy/node_project/ssl_renew.sh >> /var/log/cron.log 2>&1
This will set the job interval to every five minutes, so you can test
whether or not your renewal request has worked as intended. We have also
created a log file, cron.log, to record relevant output from the job.
After five minutes, check cron.log to see whether or not the renewal
request has succeeded:
tail -f /var/log/cron.log
You should see output confirming a successful renewal:
Output
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
** DRY RUN: simulating 'certbot renew' close to cert expiry
**          (The test certificates below have not been saved.)

Congratulations, all renewals succeeded. The following certs have
been renewed:
  /etc/letsencrypt/live/example.com/fullchain.pem (success)
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
You can now modify the crontab file to set a daily interval. To run the
script every day at noon, for example, you would modify the last line of
the file to look like this:
crontab
...
0 12 * * * /home/sammy/node_project/ssl_renew.sh >>
/var/log/cron.log 2>&1
You will also want to remove the --dry-run option from your
ssl_renew.sh script:
~/node_project/ssl_renew.sh
#!/bin/bash

COMPOSE="/usr/local/bin/docker-compose --no-ansi"
DOCKER="/usr/bin/docker"

cd /home/sammy/node_project/
$COMPOSE run certbot renew && $COMPOSE kill -s SIGHUP webserver
$DOCKER system prune -af
Your cron job will ensure that your Let’s Encrypt certificates don’t
lapse by renewing them when they are eligible. You can also set up log
rotation with the Logrotate utility to rotate and compress your log files.
Conclusion
You have used containers to set up and run a Node application with an
Nginx reverse proxy. You have also secured SSL certificates for your
application’s domain and set up a cron job to renew these certificates
when necessary.
If you are interested in learning more about Let’s Encrypt plugins,
please see our articles on using the Nginx plugin or the standalone plugin.
You can also learn more about Docker Compose by looking at the
following resources: - How To Install Docker Compose on Ubuntu 18.04. -
How To Configure a Continuous Integration Testing Environment with
Docker and Docker Compose on Ubuntu 16.04. - How To Set Up Laravel,
Nginx, and MySQL with Docker Compose.
The Compose documentation is also a great resource for learning more
about multi-container applications.