Jenkins Guided Tour
This guided tour introduces you to the basics of using Jenkins and its main feature,
Jenkins Pipeline. This tour uses the "standalone" Jenkins distribution, which runs
locally on your own machine.
Prerequisites
A machine with:
- 256 MB of RAM, although more than 2 GB is recommended
- 10 GB of drive space (for Jenkins and your Docker image)
The following software installed:
- Java 8 or 11 (either a JRE or a Java Development Kit (JDK) is fine)
- Docker (navigate to the Get Docker site to access the Docker download that is suitable for your platform)
Download and run Jenkins
1. Download Jenkins.
2. Open up a terminal in the download directory.
3. Run java -jar jenkins.war --httpPort=8080.
4. Browse to http://localhost:8080.
5. Follow the instructions to complete the installation.
When the installation is complete, you can start putting Jenkins to work!
For more information about Pipeline and what a Jenkinsfile is, refer to the
respective Pipeline and Using a Jenkinsfile sections of the User Handbook.
1. Copy one of the examples below into your repository and name it Jenkinsfile.
2. Click the New Item menu within Jenkins.
3. Provide a name for your new item (e.g. My-Pipeline) and select Multibranch Pipeline.
4. Click the Add Source button, choose the type of repository you want to use, and fill in the details.
5. Click the Save button and watch your first Pipeline run!
You may need to modify one of the example Jenkinsfiles to make it run with your
project. Try modifying the sh command to run the same command you would run on
your local machine.
After you have set up your Pipeline, Jenkins will automatically detect any new
branches or pull requests that are created in your repository and start running
Pipelines for them.
Example Jenkinsfiles are available for the following languages: Java, Node.js / JavaScript, Ruby, Python, PHP, and Go.
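For instance, a minimal Jenkinsfile for a Java project using Maven might look like the following sketch (the image tag and mvn command are illustrative; substitute your own toolchain):

```groovy
pipeline {
    agent { docker { image 'maven:3-alpine' } }   // ephemeral container with Maven and a JDK
    stages {
        stage('Build') {
            steps {
                // Replace with the same build command you would run locally
                sh 'mvn -B -DskipTests clean package'
            }
        }
    }
}
```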
Think of a "step" like a single command which performs a single action. When a step
succeeds it moves onto the next step. When a step fails to execute correctly the
Pipeline will fail.
When all the steps in the Pipeline have successfully completed, the Pipeline is
considered to have successfully executed.
On Linux, BSD, and macOS (Unix-like) systems, the sh step is used to execute a
shell command in a Pipeline.
Windows
Windows-based systems should use the bat step for executing batch commands.
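For example, the same step on each platform might look like this sketch (the echo commands are illustrative):

```groovy
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                // On Linux, BSD, and macOS agents:
                sh 'echo "Hello World"'
                // On Windows agents, use bat instead:
                // bat 'echo Hello World'
            }
        }
    }
}
```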
The "Deploy" stage retries the flakey-deploy.sh script 3 times, and then waits for up
to 3 minutes for the health-check.sh script to execute. If the health check script does
not complete in 3 minutes, the Pipeline will be marked as having failed in the
"Deploy" stage.
"Wrapper" steps such as timeout and retry may contain other steps,
including timeout or retry.
We can compose these steps together. For example, if we wanted to retry our
deployment 5 times, but never want to spend more than 3 minutes in total before
failing the stage:
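That composition might look like this sketch, with the timeout wrapping the retry so the 3-minute limit applies to all attempts combined:

```groovy
pipeline {
    agent any
    stages {
        stage('Deploy') {
            steps {
                // Fail the stage if everything inside takes longer than 3 minutes in total
                timeout(time: 3, unit: 'MINUTES') {
                    // Retry the flaky deployment up to 5 times within that window
                    retry(5) {
                        sh './flakey-deploy.sh'
                    }
                }
            }
        }
    }
}
```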
Finishing up
When the Pipeline has finished executing, you may need to run clean-up steps or
perform some actions based on the outcome of the Pipeline. These actions can be
performed in the post section.
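A sketch of a post section with several outcome conditions (the echo messages are illustrative placeholders for your clean-up steps):

```groovy
pipeline {
    agent any
    stages {
        stage('Test') {
            steps {
                sh 'echo "Running tests..."'
            }
        }
    }
    post {
        always {
            echo 'This will always run'
        }
        success {
            echo 'This will run only if the Pipeline succeeded'
        }
        failure {
            echo 'This will run only if the Pipeline failed'
        }
        unstable {
            echo 'This will run only if the run was marked as unstable'
        }
    }
}
```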
Under the hood, there are a few things the agent directive causes to happen:
- All the steps contained within the block are queued for execution by Jenkins. As soon as an executor is available, the steps will begin to execute.
- A workspace is allocated, which will contain files checked out from source control as well as any additional working files for the Pipeline.
There are several ways to define agents for use in Pipeline; for this tour we will
focus only on using an ephemeral Docker container.
Pipeline is designed to make it easy to run inside Docker images and containers. This
allows the Pipeline to define the environment and tools required without having to
configure various system tools and dependencies on agents manually. This
approach lets you use practically any tool that can be packaged in a Docker
container.
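For example, a Pipeline can run its steps inside a Node.js container like this (the image tag is illustrative; pick whichever version your project needs):

```groovy
pipeline {
    agent {
        docker { image 'node:14-alpine' }   // ephemeral container providing Node.js
    }
    stages {
        stage('Test') {
            steps {
                sh 'node --version'
            }
        }
    }
}
```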
[Pipeline] stage
[Pipeline] { (Test)
[Pipeline] sh
[guided-tour] Running shell script
+ node --version
v14.15.0
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
Environment variables can be set for the whole Pipeline with the environment directive, as in this example:

pipeline {
    agent any   // any available agent will do for this example
    environment {
        DISABLE_AUTH = 'true'
        DB_ENGINE = 'sqlite'
    }
    stages {
        stage('Build') {
            steps {
                echo "Database engine is ${DB_ENGINE}"
                echo "DISABLE_AUTH is ${DISABLE_AUTH}"
                sh 'printenv'
            }
        }
    }
}
To collect our test results and artifacts, we will use the post section.
This will always grab the test results and let Jenkins track them, calculate trends and
report on them. A Pipeline that has failing tests will be marked as "UNSTABLE",
denoted by yellow in the web UI. That is distinct from the "FAILED" state, denoted by
red.
Pipeline execution will, by default, proceed even when the build is unstable. To skip
deployment after test failures in Declarative syntax, use the skipStagesAfterUnstable
option. In Scripted syntax, you may check currentBuild.currentResult == 'SUCCESS'.
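In Declarative syntax, the option might be set like this sketch (the junit pattern and deploy script are placeholders):

```groovy
pipeline {
    agent any
    options {
        // Once tests mark the build UNSTABLE, skip the remaining stages
        skipStagesAfterUnstable()
    }
    stages {
        stage('Test') {
            steps {
                junit 'build/reports/**/*.xml'   // failing tests set the build to UNSTABLE
            }
        }
        stage('Deploy') {
            steps {
                sh './deploy'   // skipped when the build is UNSTABLE
            }
        }
    }
}
```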
When there are test failures, it is often useful to grab built artifacts from Jenkins for
local analysis and investigation. This is made practical by Jenkins’s built-in support
for storing "artifacts", files generated during the execution of the Pipeline.
This is easily done with the archiveArtifacts step and a file-globbing expression, as
is demonstrated in the example below:
post {
    always {
        archiveArtifacts artifacts: 'build/libs/**/*.jar', fingerprint: true
        junit 'build/reports/**/*.xml'
    }
}
If more than one parameter is specified in the archiveArtifacts step, then each
parameter’s name must explicitly be specified in the step code - i.e. artifacts for the
artifact’s path and file name and fingerprint to choose this option. If you only need
to specify the artifacts' path and file name/s, then you can omit the parameter
name artifacts - e.g.
archiveArtifacts 'build/libs/**/*.jar'
Recording tests and artifacts in Jenkins is useful for quickly and easily surfacing
information to various members of the team. In the next section we’ll talk about how
to tell those members of the team what’s been happening in our Pipeline.
See Glossary - Build Status for the different build statuses: SUCCESS, UNSTABLE, and FAILED.
There are plenty of ways to send notifications. Below are a few snippets
demonstrating how to send notifications about a Pipeline to an email address, a
Hipchat room, or a Slack channel.
Email
post {
    failure {
        mail to: 'team@example.com',
             subject: "Failed Pipeline: ${currentBuild.fullDisplayName}",
             body: "Something is wrong with ${env.BUILD_URL}"
    }
}
Hipchat
post {
    failure {
        hipchatSend message: "Attention @here ${env.JOB_NAME} #${env.BUILD_NUMBER} has failed.",
            color: 'RED'
    }
}
Slack
post {
    success {
        slackSend channel: '#ops-room',
            color: 'good',
            message: "The pipeline ${currentBuild.fullDisplayName} completed successfully."
    }
}
Now that the team is being notified when things are failing, unstable, or even
succeeding we can finish our continuous delivery pipeline with the exciting part:
shipping!
Deployment
The most basic continuous delivery pipeline will have, at minimum, three stages
which should be defined in a Jenkinsfile: Build, Test, and Deploy. For this section
we will focus primarily on the Deploy stage, but it should be noted that stable Build
and Test stages are an important precursor to any deployment activity.
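A skeleton of such a Jenkinsfile might look like the following (the shell commands are placeholders for your project's own build, test, and deploy scripts):

```groovy
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh './build'       // placeholder: your build command
            }
        }
        stage('Test') {
            steps {
                sh './run-tests'   // placeholder: your test command
            }
        }
        stage('Deploy') {
            steps {
                sh './deploy'      // placeholder: your deploy command
            }
        }
    }
}
```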
stage('Deploy - Staging') {
    steps {
        sh './deploy staging'
        sh './run-smoke-tests'
    }
}
stage('Deploy - Production') {
    steps {
        sh './deploy production'
    }
}
In this example, we’re assuming that whatever "smoke tests" are run by
our ./run-smoke-tests script are sufficient to qualify or validate a release to the
production environment. This kind of pipeline that automatically deploys code all the
way through to production can be considered an implementation of "continuous
deployment." While this is a noble ideal, for many there are good reasons why
continuous deployment might not be practical, but those teams can still enjoy the
benefits of continuous delivery. [1] Jenkins Pipeline readily supports both.
A common pattern is to pause for human approval before deploying to production, which can be done with the input step:

pipeline {
    agent any
    stages {
        /* "Build" and "Test" stages omitted */
        stage('Deploy - Staging') {
            steps {
                sh './deploy staging'
                sh './run-smoke-tests'
            }
        }
        stage('Sanity check') {
            steps {
                input "Does the staging environment look ok?"
            }
        }
        stage('Deploy - Production') {
            steps {
                sh './deploy production'
            }
        }
    }
}
Conclusion
This Guided Tour introduced you to the basics of using Jenkins and Jenkins
Pipeline. Because Jenkins is extremely extensible, it can be modified and configured
to handle practically any aspect of automation. To learn more about what Jenkins
can do, check out the User Handbook, or the Jenkins blog for the latest events,
tutorials, and updates.