Devops Complete Package: Apache As The Web Server PHP As The Object-Oriented Scripting Language
LAMP
=====
So when it comes to LAMP we are talking about { Linux, Apache, MySQL & PHP }
LAMP is an open source Web development platform that uses Linux as the operating system, Apache as the Web
server, MySQL as the relational database management system and PHP as the object-oriented scripting language.
Because the platform has four layers, LAMP is sometimes referred to as a LAMP stack. Stacks can be built on different
operating systems.
Advantages Of LAMP
==================
Open Source
Easy to code with PHP
Easy to deploy an application
Develop locally
Cheap Hosting
Easy to build CMS application
○ WordPress, Drupal, Joomla, Moodle, etc.
Web Server
A web server is a program which serves web pages to users in response to their requests, which are forwarded by their
computers' HTTP clients (browsers).
All computers that host websites must run a web server program.
Web servers are the gateway between the average individual and the world wide web.
LAMP Architecture
Apache
=======
An open source web server used mostly for Unix and Linux platforms.
It is fast, secure and reliable.
Since 1996 Apache has been the most popular web server; at the time these notes were written it held roughly 49.5% of the
market (i.e., 49.5% of all websites), followed by Nginx at 34% and Microsoft IIS at 11%.
# cat /etc/redhat-release
# cat /etc/os-release
When you are doing server configuration you need to be careful: a small typo can leave you wondering why the server is
not starting. To confirm your config is correct, use the following command:
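The check command itself is not shown in the notes; on CentOS/RHEL the usual Apache syntax check is:
# httpd -t { or: # apachectl configtest }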
# vi /var/www/html/sample.php
<?php
echo "Today is " . date("Y/m/d") . "<br>";
echo "Today is " . date("l");
?>
http://ip-address/sample.php
{ the page does not render the PHP because the PHP engine is not installed yet }
<VirtualHost *:81>
DocumentRoot /var/www/html/website2
</VirtualHost>
<VirtualHost *:82>
DocumentRoot /var/www/html/website3
</VirtualHost>
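Note: for the VirtualHosts on ports 81 and 82 to work, Apache also has to listen on those ports. The notes do not show it,
but you typically add the following to httpd.conf:
Listen 81
Listen 82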
Nginx Server
============
Nginx is an HTTP server used by many high-traffic websites like GitHub, Heroku, etc.
In load balancing, say you have a high-traffic website which gets 1000 requests per second, maybe even more. If you have
only one server, all 1000 requests will be served by that single server and response times will suffer.
So we put in more servers and distribute the load across all of them equally.
The 1000 requests will then be distributed across the servers, roughly 333 requests each assuming we have 3.
Reverse proxy: we are running multiple applications on the server, and only one application can listen on a given port. Say
one app is running on port 80; you cannot run another application on port 80, so you have to use another port. { app1.com ==>
app1.com:80 }
Since we are running multiple applications, we need to give each one a different port, something like { app2.com:81 }.
But I do not want to specify port 81 in the URL; I just want app2.com. To deal with this problem we can use nginx.
Nginx intercepts the requests, sees that a request is for app2.com, and routes it to the application running on port 81.
That is what a reverse proxy is.
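A minimal sketch of that reverse-proxy idea in nginx, assuming app2.com is the hostname and the app listens on local port 81:
server {
    listen 80;
    server_name app2.com;
    location / {
        proxy_pass http://127.0.0.1:81;
    }
}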
The word "proxy" describes someone or something acting on behalf of someone else.
In the computer realm, we are talking about one server acting on the behalf of another computer.
Forward Proxy
===============
Forward Proxy: acting on behalf of a requestor (or service consumer).
A forward proxy retrieves data from another web site on behalf of the original requester.
a) Someone with administration authority over X's internet connection has decided to block all access to site Z.
Employees at a large company have been wasting too much time on facebook.com, so management wants access blocked
during business hours.
Torrent downloads can be blocked by our ISP, so to reach those torrent sites we can use a forward proxy.
Examples:
The administrator of Z has noticed hacking attempts coming from X, so the administrator has decided to block X's ip
address.
Z is a forum web site. X is spamming the forum. Z blocks X.
Reverse Proxy
===============
Reverse Proxy: Acting on behalf of service/content producer
However, in some scenarios, it is better for the administrator of Z to restrict or disallow direct access, and force visitors to
go through Y first. So, as before, we have data being retrieved by Y --> Z on behalf of X, which chains as follows: X --> Y -->
Z.
a) Z has a large web site that millions of people want to see, but a single web server cannot handle all the traffic. So Z sets
up many servers, and puts a reverse proxy on the internet that will send users to the server closest to them when they try
to visit Z. This is part of how the Content Distribution Network (CDN) concept works.
Examples:
Apple Trailers uses Akamai
Jquery.com hosts its javascript files using CloudFront CDN (sample).
b) The administrator of Z is worried about retaliation for content hosted on the server and does not want to expose the
main server directly to the public.
M1 IP-ADD: 104.154.22.74
M2 IP-ADD: 104.198.254.34
M3 IP-ADD: 104.198.70.78
# yum -y install nginx
# systemctl enable nginx
# systemctl start nginx
# allow port 80 in firewall
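Assuming firewalld on CentOS 7, allowing port 80 typically looks like:
# sudo firewall-cmd --permanent --add-port=80/tcp
# sudo firewall-cmd --reload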
# vi /etc/nginx/nginx.conf { delete everything under http section }
I am going to define a group of web servers with the upstream directive and give this group the name lbmysite.
upstream defines a cluster that you can proxy requests to. It is commonly used for defining either a web server cluster for
load balancing, or an app server cluster for routing / load balancing.
nginx.conf
=========
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
events {
    worker_connections 1024;
}
## replace the http section as below and remove the old server block
http {
    sendfile            on;
    tcp_nopush          on;
    tcp_nodelay         on;
    keepalive_timeout   65;
    types_hash_max_size 2048;
    include             /etc/nginx/mime.types;
    default_type        application/octet-stream;
    include             /etc/nginx/conf.d/*.conf;

    upstream lbmysite {
        server 104.197.127.92:90;
        server 35.194.22.247;
    }

    server {
        listen 80 default_server;
        listen [::]:80 default_server;

        location / {
            proxy_pass http://lbmysite;
        }
    }
}
Database
========
Installation Version 5.6
==================
# wget http://repo.mysql.com/mysql-community-release-el7-5.noarch.rpm
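The notes stop at the wget; the usual next steps with this repo rpm are roughly:
# sudo rpm -ivh mysql-community-release-el7-5.noarch.rpm
# sudo yum -y install mysql-server
# sudo systemctl start mysqld
# sudo systemctl enable mysqld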
CMS Application
=============
Log in to phpMyAdmin and create a database and a user, both called something like wordpress.
GIT
Why Version Control ??
=================
● Made a change to code, realised it was a mistake and wanted to revert back?
● Lost code and didn’t have a backup of that code ?
● Had to maintain multiple versions of a product ?
● Wanted to see the difference between two (or more) versions of your code ?
● Wanted to prove that a particular change broke or fixed the application ?
● Wanted to review the history of some code ?
● Wanted to submit a change to someone else's code ?
● Wanted to share your code, or let other people work on your code ?
● Wanted to see how much work is being done, and where, when and by whom ?
● Wanted to experiment with a new feature without interfering with working code ?
In these cases, and no doubt others, a version control system should make your life easier.
Key Points
● Backup
● Collaboration
● Storing Versions
● Restoring Previous Versions
● Understanding What Happened
Version Control / Revision Control / Source Control is software that helps software developers work together
and maintain a complete history of their work.
It lets you save a snapshot of your complete project at any time you want. When you later take a look at an older snapshot
("version"), your VCS shows you exactly how it differed from the previous one.
A version control system records the changes you make to your project’s files.
Types of VCS
Centralized VCS (CVCS): uses a central server to store all files and enables team collaboration.
But the major drawback of CVCS is its single point of failure, i.e., failure of the central server. Unfortunately, if the central
server goes down for an hour, then during that hour, no one can collaborate at all.
Distributed VCS (DVCS): does not rely on a central server, which is why you can perform many operations while offline. You
can commit changes, create branches, view logs, and perform other operations offline. You only require a network
connection to publish your changes and pull the latest changes.
How the Typical VCS works
A typical VCS uses something called a two-tree architecture; this is what most VCSs other than Git use.
1. Working Copy
2. Repository
These are our two trees, we call them trees because they represent a file structure.
Working copy [CLIENT] is the place where you make your changes.
Whenever you edit something, it is saved in the working copy, which is physically stored on disk.
Repository [SERVER] is the place where all the versions of the files, commits, logs, etc. are stored. It is also saved on disk
and has its own set of files.
You cannot, however, change or read the files in a repository directly; in order to retrieve a specific file from there, you
have to check it out.
Checking-out is the process of getting files from repository to your working copy. This is because you can only edit files
when it is on your working copy. When you are done editing the file, you will save it back to the repository by committing
it, so that it can be used by other developers.
Committing is the process of putting back the files from working copy to repository.
Hence, this architecture is called the two-tree architecture,
because you have two trees in it: the working copy and the repository.
How Git Works (Three Trees)
===========================
1. Working Copy
2. Staging
3. Repository
Git is a distributed version control system, so let's talk about Git, which will give you an understanding of DVCS.
Git was initially designed and developed by Linus Torvalds in 2005 for Linux kernel development. Git is an open source
tool.
History
For developing and maintaining the Linux kernel, Linus Torvalds used BitKeeper, another VCS, which was free for the kernel
community to use until around 2005, when that arrangement ended and Git was written as a replacement.
Well interestingly Git has the Working Copy and Repository as well but it has added an extra tree Staging in between:
This is one of the fundamental differences that sets Git apart from other VCSs: this staging tree (usually called the
staging area) is the place where you prepare everything that you are going to commit.
In Git, you do not move things directly from your working copy to the repository; you have to stage them first. One of the
main benefits of this is being able to break your working changes into smaller, self-contained pieces.
Staging gives you finer control over exactly how you want to approach version control.
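A quick illustration of staging only part of your changes (file names here are just examples):
# git add index.html { stage only this file }
# git status { index.html now shows under "Changes to be committed" }
# git commit -m "Update homepage"
# git add style.css
# git commit -m "Tweak styles"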
Advantages Of Git
===================
Git works on most operating systems: Linux, Windows, Solaris and macOS.
Installing Git
# git --version
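If Git is not already installed, on CentOS/RHEL it is typically just:
# sudo yum -y install git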
Configuring Git
===============
When we set up Git, before adding a bunch of files we need to fill in a username & email; it is basically Git's way of
identifying the author of each commit.
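The commands for this (not shown above) are the standard git config ones:
# git config --global user.name "Your Name"
# git config --global user.email "you@example.com"
# git config --list { verify the settings }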
=================
# mkdir website
The purpose of Git is to manage a project, or a set of files, as they change over time. Git stores this information in a data
structure called a repository.
# git init is only for when you create your own new repository from scratch.
=============
# mkdir website
# git init
# vi index.html
# git status
# git add index.html { moves the file from the working copy to the staging area }
# git commit -m "Message" { moves the file from the staging area to the local repo }
# git status
You can skip the staging area with # git commit -a -m "New Changes" { works only for files Git already tracks }
Commit History - How many Commits have happened ??
# git log
It gives commit history basically commit number, author info, date and commit message.
# git log { shows the latest commit on top, with older commits below }
Git diff
========
The diff command gives the difference between two commits.
As we keep making changes, the log keeps growing: there is still only one file, but there are now several versions of it.
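Typical ways to use it:
# git diff { working copy vs staging area }
# git diff --staged { staging area vs last commit }
# git diff <commit-id-1> <commit-id-2> { difference between two commits }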
Git Branching
================
In a collaborative environment, it is common for several developers to share and work on the same source code.
Some developers will be fixing bugs while others would be implementing new features.
Therefore, there has got to be a manageable way to maintain different versions of the same code base.
This is where the branch function comes to the rescue. Branch allows each developer to branch out from the original
code base and isolate their work from others. Another good thing about branch is that it helps Git to easily merge the
versions later on.
It is a common practice to create a new branch for each task (eg. bug fixing, new features etc.)
Branching means you diverge from the main line(master-working copy of application) of development and
continue to do work without messing with that main line.
Basically, you have your master branch and you don't want to mess anything up on that branch.
In many VCS tools, branching is an expensive process, often requiring you to create a new copy of your source code
directory, which can take a long time for large projects.
Some people refer to Git’s branching model as its “killer feature” and it certainly sets Git apart in the VCS community.
Why is it so special?
The way Git branches is incredibly lightweight, making branching operations nearly instantaneous, and switching back
and forth between branches generally just as fast.
A branch in Git is simply a lightweight movable pointer to one of these commits. The default branch name in Git is master.
As you start making commits, you’re given a master branch that points to the last commit you made. Every time you
commit, it moves forward automatically.
What happens if you create a new branch? Well, doing so creates a new pointer for you to move around:
# git branch testing
This creates a new pointer to the same commit you are currently on.
HEAD is a special pointer to the branch (and therefore the commit) you are currently on; it moves as you switch branches
and make new commits.
In this case, you’re still on master. The git branch command only created a new branch — it didn’t switch to that branch.
This command shows you where the branch pointers are pointing:
# git log --oneline --decorate
You can see the “master” and “testing” branches that are right there next to the f30ab commit.
# git checkout testing { switch to the testing branch }
# vim test.rb
# git commit -a -m 'made a change'
This is interesting, because now your testing branch has moved forward, but your master branch still points to the
commit you were on when you ran git checkout to switch branches.
That command(git checkout master) did two things. It moved the HEAD pointer back to point to the master branch, and
it reverted the files in your working directory back to the snapshot that master points to.
Merging & Merge Conflicts
=========================
A merge conflict happens when two branches both modify the same region of a file and are subsequently merged.
Git does not know which of the changes to keep, and thus needs human intervention to resolve the conflict.
# we do get a merge conflict here, open the services.html in vi and resolve the conflicts that occurred.
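Inside the file, Git marks the conflicting region like this (branch name is a placeholder); keep the version you want, or a
mix, and delete the markers:
<<<<<<< HEAD
content from your current branch
=======
content from the branch being merged
>>>>>>> branch-name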
# git add .
# git commit -m “Conflict resolved”
Git ignore
============
It’s a list of files you want git to ignore in your working directory.
It is usually used to avoid committing transient files from your working directory that are not useful to other collaborators,
such as temp files IDEs create, compilation output, OS files, etc.
============
https://www.gitignore.io/
*.bk { ignore all .bk backup files }
*.class { ignore compiled .class files }
*.php { ignore all .php files... }
!index.php { ...except index.php }
[aeiou]*.txt { ignore .txt files whose names start with a vowel }
Stashing
=============
# git stash
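A typical stash workflow looks like this:
# git stash { save uncommitted changes and get a clean working copy }
# git stash list { see what has been stashed }
# git stash pop { re-apply the latest stash and drop it from the list }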
Cherry Picking
==============
Cherry-picking in Git is designed to apply a specific commit from one branch onto another branch.
For example, if a commit landed on the wrong branch, you can revert it there and cherry-pick it onto the correct branch.
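For example (the commit id is a placeholder):
# git checkout master
# git cherry-pick <commit-id> { apply that single commit on top of master }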
Tagging
============
In release management we work as a team; I am working on a module, and whenever I change some files I push them to the
remote master.
Now I have some 10 files which are a perfect working copy, and I do not want these files to be messed up by my other team
members; these 10 files can go directly for release.
But if I keep them in the repository, since my team is working together there is always a chance that somebody might mess
them up, so to avoid this we can use TAGGING.
You can tag a particular commit id; imagine all the files up to this point are my working copy:
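For example (version number and commit id are placeholders):
# git tag -a v1.0 -m "Release 1.0" <commit-id>
# git tag { list tags }
# git push origin v1.0 { publish the tag to the remote }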
Go to GitHub and look at Releases; click on one and you can download all the files as of that commit.
What we download there is often the executable form of the code; that is called the build result of the code.
What is build ??
==============
Build is the end result of your source code.
A build tool takes your source code and converts it into a machine-executable format (for example a JAR or WAR).
Build
====
The term build may refer to the process by which source code is converted into a stand-alone form that can be run on a
computer.
One of the most important steps of a software build is the compilation process, where source code files are converted into
executable code.
The process of building software is usually managed by a build tool i.e, maven.
Builds are created when a certain point in development has been reached or the code has been ready for implementation,
either for testing or outright release.
Build: Developers write the code, compile it, compress the code and save it in a compressed folder. This is called Build.
Release: As a part of this, starting from System study, developing the software and testing it for multiple cycles and deploy the
same in the production server. In short, one release consists of multiple builds.
Maven Objectives
===============
● A comprehensive model for projects which is reusable, maintainable, and easier to comprehend(understand).
● plugins
Before Maven we had ANT, which was pretty popular.
Disadvantages of ANT
==================
ANT - build scripts have to be written by hand
[ in build.xml you need to tell ANT where the sources and classes are ]
ANT - There is no dependency management
ANT - No project structure is defined
Advantages of Maven
==================
No script is required for building [automatically generated - pom.xml]
Dependencies are automatically downloaded
Project structure is generated by maven
Documentation for project can be generated
Maven is also called a project management tool: earlier, when we created projects, we had to create the directory structure
and everything ourselves, but now Maven takes care of that process.
Whenever I generate a project using Maven I get src and test directories by default.
MAVEN FEATURES
================
Dependency System
=================
In any kind of build, whenever a dependency is needed, with ANT I have to download the dependency myself and
keep it in a place where ANT can find it.
If I do not provide the dependency manually, my build will fail due to dependency issues.
Maven handles dependency in a beautiful manner, there is place called MAVEN CENTRAL. Maven central is a centralized
location where all the dependencies are stored over web/internet.
For example, if my project has a dependency on JUnit, whenever my build reaches a phase where it needs JUnit, Maven will
download the dependency automatically and store it on my machine. A directory called .m2 is created in your home
directory, where all the dependencies are saved. The next time Maven comes across the same dependency, it does not
download it again because it is already available in the .m2 directory.
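For example, declaring the JUnit dependency in pom.xml is all that is needed; Maven resolves and caches it in ~/.m2:
<dependencies>
  <dependency>
    <groupId>junit</groupId>
    <artifactId>junit</artifactId>
    <version>4.12</version>
    <scope>test</scope>
  </dependency>
</dependencies>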
Plugin Oriented
=============
Maven has many plugins that I can integrate: JUnit, JMeter, SonarQube, Tomcat, Cobertura and many others.
The default lifecycle is made up of phases such as validate, compile, test, package, verify, install and deploy, which are
executed sequentially to complete a build. There are two other Maven lifecycles of note beyond the default one:
● clean: cleans up artifacts created by prior builds
● site: generates site documentation for this project
GAV
===
Every Maven artifact is identified by its GAV coordinates: groupId, artifactId and version.
Archetype
=========
Maven archetypes are project templates which can be generated for you by Maven.
In other words, when you are starting a new project you can generate a template for that project with Maven.
In Maven a template is called an archetype.
Each Maven archetype thus corresponds to a project template that Maven can generate.
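For example, generating a simple Java project from the quickstart archetype (the group/artifact ids here are just examples):
# mvn archetype:generate -DgroupId=com.digital.proj1 -DartifactId=proj1 -DarchetypeArtifactId=maven-archetype-quickstart -DinteractiveMode=false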
Installation
==========
Maven depends on Java since we are building Java applications, so to run Maven we also need Java on the system.
Install java
=========
Java package : java program --- java-1.8.0-openjdk
Java package : java compiler --- java-1.8.0-openjdk-devel
Install Maven
===========
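The install commands are not shown in the notes; on CentOS 7 the simplest route is the distro package:
# sudo yum -y install maven
# mvn -version { verify the installation }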
Local
=====
The repository that resides on our local machine; it caches downloads from the remote/central repositories so they are
ready for use.
Remote
=======
This repository as the name suggests resides in the remote server. Remote repository will be used for both downloading
and uploading the dependencies and artifacts.
Central
======
This is the repository provided by the Maven community. It contains a large set of commonly used libraries for Java
projects. An internet connection is required to make use of the central repository, but no configuration is needed to
access it.
● It scans through the local repositories for all the configured dependencies. If found, then it continues with the
further execution. If the configured dependencies are not found in the local repository, then it scans through the
central repository.
● If the specified dependencies are found in the central repository, then those dependencies are downloaded to the
local repository for the future reference and usage. If not found, then maven starts scanning into the remote
repositories.
● If no remote repository has been configured, then maven will throw an exception saying not able to find the
dependencies & stops processing. If found, then those dependencies are downloaded to the local repository for
the future reference and usage.
We have pom.xml which contains all the definitions for your project generated, this is the main file of the project.
# ls -l ~/.m2/repository
# mvn validate { checks whether the pom.xml and project structure are valid }
Let’s make some mistakes and try to fail this phase,
# mv pom.xml pom.xml.bk
# mvn validate { build failure }
# mv pom.xml.bk pom.xml
# vi App.java { welcome to Devops }
# mvn compile { after changing code we do compilation right }
{ this generates a new structure - # tree -a . with class files}
# mvn test { test the application }
# mvn package { generates the artifact - jar }
# java -cp target/xxxx.jar com.digital.proj1.App { fully qualified class name: groupId + class }
Plugins
=======
We saw that Maven performs phases like validate, compile, test, package, install and deploy, but if you remember, none of
them executes the jar file.
Can you see any phase that runs the jar file? No, right?
Executing the jar file is not part of the lifecycle.
Anything beyond the standard phases (validate, compile, test, package, install, deploy) is handled by plugins, configured
under <build> inside <plugins> </plugins>.
These plugins define behaviour beyond the regular Maven lifecycle phases, for example:
<build>
  <plugins>
    <plugin>
      <groupId>org.codehaus.mojo</groupId>
      <artifactId>exec-maven-plugin</artifactId>
      <version>1.2.1</version>
      <configuration>
        <mainClass>com.digi.App</mainClass>
        <arguments>
          <argument>-jar</argument>
          <argument>target/*.jar</argument>
        </arguments>
      </configuration>
    </plugin>
  </plugins>
</build>
</project>
We need to run
# mvn exec:java
WEB APP SETUP
# vi conf/tomcat-users.xml
<user username="tomcat1" password="tomcat1" roles="manager-script"/>
Add Maven-Tomcat Authentication
============================
# vi ~/.m2/settings.xml
<settings>
  <servers>
    <server>
      <id>TomcatServer</id>
      <username>tomcat1</username>
      <password>tomcat1</password>
    </server>
  </servers>
</settings>
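For the tomcat7 goals below to work, the pom.xml also needs the tomcat7-maven-plugin pointing at the same server id; a
typical snippet (the Tomcat IP and context path are placeholders) looks like:
<plugin>
  <groupId>org.apache.tomcat.maven</groupId>
  <artifactId>tomcat7-maven-plugin</artifactId>
  <version>2.2</version>
  <configuration>
    <url>http://<tomcat-ip>:8080/manager/text</url>
    <server>TomcatServer</server>
    <path>/myapp</path>
  </configuration>
</plugin>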
# mvn tomcat7:deploy
# mvn tomcat7:undeploy
# mvn tomcat7:redeploy
Profiles
========
So in general, what is the meaning of a profile? Take Windows as an example: each user profile has its own settings, yet it
is the same machine. Similarly, in pom.xml the project is the same, but you can create multiple profiles for multiple
purposes.
For my project I want different profiles for the dev, QA and prod environments.
The requirements are different for each environment; for example, in the dev environment we do not need any of the tests
to run.
Let's say there are 4 people with different requirements. Instead of creating 4 projects, within a single project I can have
4 profiles. That is the profile concept.
<profiles> does not go under <build>; it goes after </dependencies>.
So the pom.xml looks like this with <profiles>:
</dependencies>
<profiles>
  <profile>
    <id>DEV</id>
    <build>
      <plugins>
        <plugin>
https://github.com/ravi2krishna/Maven-Build-Profiles.git
NEXUS
⇒ Nexus is a Binary Repository Manager
The problem: suppose DevA builds a library (Hello.jar) that DevB's project depends on, but that jar is not available in
Maven Central.
The simple solution is for DevB to ask DevA to provide the Hello.jar, so that DevB can keep it in the .m2 directory on
DevB's machine.
This works because Maven looks first in the .m2 directory [local repo].
If it is not found there, Maven looks in Maven Central [Apache repo].
Now imagine DevA keeps changing the Hello.jar code. Let's say DevA changes it 100 times; then DevB would have to ask DevA
100 times, which does not make sense.
Tomorrow your project may have 100 developers who all have to exchange their libraries, and it will not be easy.
It is a really cumbersome process; nobody knows which version is with whom.
So there is a solution, but it is complex and time-consuming.
It may work for 2-3 developers, but not for more than that.
Now we will introduce a new server within our organization. This server we call it as Remote Repository.
Just like how apache is maintaining a MavenCentral, similarly we will maintain our own remote repository.
Now DevA instead of sharing his Hello.jar with DevB, DevC etc
He will push it to Our Remote Repository and now DevB, DevC everyone who needs that Hello.jar will pull it from the
Remote Repository.
I mean they[DevB, DevC] will add that info in the pom.xml, now pom will take care of downloading it from remote repo.
Now we have three kinds of repos in total:
1. Local
2. Public Repo
3. Private Repo
Now even the libraries are secured, coz they are present within your organization.
Sonatype Nexus
==============
Installation of Nexus
=================
ERROR: Change the permissions of two nexus directories with logged in username.
http://ip-addr:8081/nexus
Once Nexus is up we will create two repositories: Snapshot & Release.
Search Google for the Maven distributionManagement tag for Nexus, and paste it after the </dependencies> tag.
Log in to Nexus using admin & admin123. We already have some default repos; one of them is Central, which proxies Apache
Maven Central.
Add → Hosted Repo → Repo Id: releaseRepo → Repo Name: releaseRepo → Repo Policy: Release → Deployment Policy:
Disable Redeploy → Save
Add → Hosted Repo → Repo Id: snapshotRepo → Repo Name: snapshotRepo → Repo Policy: Snapshot → Deployment
Policy: Allow Redeploy → Save
<distributionManagement>
  <repository>
    <id>releaseRepo</id>
    <name>releaseRepo</name>
    <url>http://192.168.56.101:8081/nexus/content/repositories/releaseRepo/</url>
  </repository>
  <snapshotRepository>
    <id>snapshotRepo</id>
    <name>snapshotRepo</name>
    <url>http://192.168.56.101:8081/nexus/content/repositories/snapshotRepo/</url>
  </snapshotRepository>
</distributionManagement>
Go to the Maven project and run # mvn deploy again; now we get a new error, a 401.
Nexus is strictly authenticated; you cannot deploy unless you log in.
We do not put usernames and passwords in pom.xml, because pom.xml files are stored in the VCS and shared with other
developers as well.
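Instead, the credentials go into ~/.m2/settings.xml, with ids matching the distributionManagement entries (admin/admin123
being the default Nexus login used above):
<settings>
  <servers>
    <server>
      <id>releaseRepo</id>
      <username>admin</username>
      <password>admin123</password>
    </server>
    <server>
      <id>snapshotRepo</id>
      <username>admin</username>
      <password>admin123</password>
    </server>
  </servers>
</settings>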
Nexus Assignment
================
https://github.com/ravi2krishna/CalculatorTestCases.git
Package the application and implement add(), subtract() and multiply() using the JAR which will be generated from
the above repository.
SONARQUBE
Prerequisites :
1. Java 1.7 + { recommended 1.8 }
2. MySql 5.6+
# wget http://repo.mysql.com/mysql-community-release-el7-5.noarch.rpm
# sudo mysql_secure_installation
Sonarqube Installation
===================
Visit https://www.sonarqube.org/downloads/
# wget the latest version [LTS]
# unzip <file>
# cd sonar/conf
# vim sonar.properties
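The properties to set are not listed in the notes; assuming a MySQL database and user both named sonar, they typically
look like:
sonar.jdbc.username=sonar
sonar.jdbc.password=sonar
sonar.jdbc.url=jdbc:mysql://localhost:3306/sonar?useUnicode=true&characterEncoding=utf8
sonar.web.host=0.0.0.0
sonar.web.port=9000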
# cd sonarqube-6.7/bin
# cd linux-x86-64
# ./sonar.sh start
# cd SampleApp
# mvn compile sonar:sonar
JENKINS
INTRODUCTION
=============
What is jenkins ??
Jenkins is an application that monitors executions of repeated jobs, such as building a software project.
Jenkins can do a lot of things in an automated fashion: if a task is repeatable and can be done the same way over time,
Jenkins can not only do it but automate the whole process. For example, Jenkins can notify a team when a build fails and
can run automated testing (functional & performance) on builds.
Traditionally, development makes software available in a repository, then calls operations or submits a ticket to the
helpdesk, and operations builds and deploys that software to one or more environments. Once this is done, there is
usually a QA team which runs load and performance tests on that build and makes it ready for production.
So what jenkins does is a lot of these are repeatable tasks, which can be automated by using the jenkins.
Jenkins has a large number of plugins which helps in this automation process.
Continuous Integration
===================
is a development practice that requires developers to integrate code into a shared repository several times per day (repos
in subversion, CVS, mercurial or git). Each check-in is then verified by an automated build, allowing everyone to detect and
be notified of problems with the package immediately.
Build Pipeline
============
is a process by which the software build is broken down in sections:
• Unit test
• Acceptance test
• Packaging
• Reporting
• Deployment
• Notification
The concepts of Continuous Integration, Build Pipeline and the new “DevOps” movement are revolutionizing how we
build, deploy and use software.
Tools that are effective in automating multiple phases of these processes (like Jenkins) become more valuable in
organizations where resources, time or both are at a premium.
Installation of Jenkins
=====================
● We need java to work with jenkins.
○ # sudo yum -y install java-1.8.0-openjdk
○ # sudo yum -y install java-1.8.0-openjdk-devel
○ # java -version { confirm java version}
● Generally jenkins runs on port 8080, so we need to make sure that there is no other service that is running and
listening on port 8080
● Now we need to add jenkins repo, to our repository list, so that we can pull down and install jenkins package.
○ # sudo wget -O /etc/yum.repos.d/jenkins.repo https://pkg.jenkins.io/redhat-stable/jenkins.repo
○ We will run a key so that we can trust this repo and pull down jenkins package
○ # sudo rpm --import https://pkg.jenkins.io/redhat-stable/jenkins.io.key
● Now our key has been imported we can do
○ # sudo yum -y install jenkins
● Now let’s enable the service
○ # sudo systemctl enable jenkins
● Now let’s enable the service
○ # sudo systemctl start jenkins
● Now if you do # ip-address:8080 you can see jenkins home
The installation creates a user called jenkins and keeps this user from logging in; this is the default
behaviour of Jenkins because right now we are on the master and building everything there.
But in real environments we have two kinds of nodes: the master and build slaves.
Master: where Jenkins is installed and where the administration console runs.
Build slaves: servers configured to off-load jobs so that the master stays free.
Creating Users
============
Manage Jenkins → Manage Users → Create User Left Side → Fill details
Create Users : tester 1, tester 2, developer 1 and developer 2
Now login with the tester1 and see the list of jobs.
Now login with the developer1 and see the list of jobs.
As you can see, both the testing and development team jobs are visible across different teams, which would be a security
concern, but this is the default behavior of jenkins.
Now let’s see how we can secure the jenkins to make jobs only visible to testing and development teams.
For this we need to install new plugin called Role-based Authorization Strategy.
Installing Plugin
==============
Manage Jenkins → Manage Plugins → Available → Search Role-based → Install without restart.
Manage Jenkins → Configure Global Security → Authorization, now we can see the new option Role-Based Strategy.
Now let’s see how we can go and create some roles and based on roles we should grant access to users:
Manage Jenkins → Manage and Assign Roles → Manage roles →
Global Roles: check Role to add, give something like employee and click add
and give overall read access and over all view access
Role to add: developer && Pattern: dev.* and check everything, now developers will only have access to projects that
start with dev but nothing else. Click Apply and Save
Role to add: tester && Pattern: test.* and check everything, now testers will only have access to projects that start with
test but nothing else. Click Apply and save
● So we have created an employee role at global level and we created two roles developer and tester at project
level.
Assigning Roles To Users
======================
● Manage Jenkins → Manage and assign roles → Assign roles → Global Roles → User/group to add → Add
Users tester1, tester2, developer1 and developer2 to Global roles as employee → Apply
Under Item Roles User/group to add → Add Users tester1, tester2, developer1 and developer2 to Item roles →
Now add Users tester1 & tester2 to Tester Roles and Users developer1 & developer2 to Developer Roles →
Apply
○ So we created users, we created roles and we assigned roles
Now if you login with tester users you can only see jobs related to testing, and similarly if you login with developer you
can see only development related jobs.
We know we have the user jenkins, who has no shell; this user owns the Jenkins application but beyond that it is a normal
user. So what we are going to do is manage the global credentials.
We decide that when we run jobs, either locally or remotely, we run them as the jenkins user over SSH so that we can
control the slaves.
So on our system (master) we are going to change the jenkins user so that we can log in with it; let's go and do it:
Click on Credentials on homepage of jenkins → Click on Global credentials → Adding some credentials → Kind (select SSH
username with private key) → Scope (global) →
Username (jenkins) → Private Key (From jenkins master ~/.ssh) → Ok
Now this sets jenkins user account to be available for SSH key exchange with other servers. This is imp coz we want the
ability so that single jenkins user can be in control to run our jobs remotely so that master can off load its jobs to slaves.
Slave
=====
Now create a new centos server(machine) where you will be getting new ip {1.2.3.4}
Create a user jenkins # useradd jenkins
# usermod -aG google-sudoers jenkins
# sudo su - jenkins
# cd { make sure you are in jenkins home /home/jenkins}
We are going to use this slave nodes in order to off-load the build processing to other machines so that the master server
doesn’t get CPU, IO or N/W load etc in managing large number of jobs across multiple servers multiple times a day.
# Manage Jenkins (scroll down) → Manage Nodes → New Node → Node name : Remote slave 1 → # of executors : 3 (up to 3
concurrent jobs) → Remote root Directory: (/home/jenkins) Labels: remote1 → Usage: (as much as possible) → Launch
method: Launch slave via SSH → Host: ip of slave → Credentials: Service acc(Give settings of Kind: SSH username with
private key && Private key: From jenkins master ~/.ssh) → Availability: Keep online as much as possible → Save
Install java and javac on all slaves.
Then click on node and see the log.
Now if i goto jenkins home i can see both master and slave and their executors
VAGRANT
Vagrant
=======
Vagrant is a tool for building and managing virtual machine environments.
With an easy-to-use workflow and focus on automation, Vagrant lowers development environment setup time, increases
production parity, and makes the "works on my machine" excuse a relic of the past.
Vagrant is well suited to development environments.
Why Vagrant ??
==============
Vagrant provides easy to configure, reproducible, and portable work environments built on top of industry-standard
technology and controlled by a single consistent workflow to help maximize the productivity and flexibility of you and
your team.
To achieve its magic, Vagrant stands on the shoulders of giants. Machines are provisioned on top of VirtualBox, VMware,
AWS, or any other provider. Then, industry-standard provisioning tools such as shell scripts, Chef, or Puppet, can
automatically install and configure software on the virtual machine.
For Developers
If you are a developer, Vagrant will isolate dependencies and their configuration within a single disposable, consistent
environment, without sacrificing any of the tools you are used to working with (editors, browsers, debuggers, etc.). Once
you or someone else creates a single Vagrantfile, you just need to vagrant up and everything is installed and configured for
you to work. Other members of your team create their development environments from the same configuration, so
whether you are working on Linux, Mac OS X, or Windows, all your team members are running code in the same
environment, against the same dependencies, all configured the same way. Say goodbye to "works on my machine" bugs.
For Operators
If you are an operations engineer or DevOps engineer, Vagrant gives you a disposable environment and consistent
workflow for developing and testing infrastructure management scripts. You can quickly test things like shell scripts, Chef
cookbooks, Puppet modules, and more using local virtualization such as VirtualBox or VMware. Then, with the same
configuration, you can test these scripts on remote clouds such as AWS or RackSpace with the same workflow. Ditch your
custom scripts to recycle EC2 instances, stop juggling SSH prompts to various machines, and start using Vagrant to bring
sanity to your life.
For Everyone
Vagrant is designed for everyone as the easiest and fastest way to create a virtualized environment!
Let's say you quickly need a machine for R&D purposes: you want to configure it and be able to use it very quickly. Using
Vagrant we can cut that setup time drastically.
Typically, installing an OS by hand takes around 30-45 minutes.
With Vagrant you can spin up a development environment very quickly.
On the Vagrant site you will see something called "Find Boxes"; before that, let's understand some terminology.
So when we are going with Base Installation of Linux, we need couple of things
● You need a CD or ISO image
● You need a physical machine
● You define some CPU & RAM
● Storage
● Network
So whenever we are going with installation we need to go with all these steps always.
So if we are working with cloud we have some images like CentOS, ubuntu etc
● AMI (Amazon Machine Images)
● Virtualization Layer
● CPU {how much CPU i need}
● RAM {how much RAM i need}
● Storage {how much storage i need}
● Network
As mentioned, setting up the OS manually takes around 30-45 minutes, but with Vagrant it hardly takes 5-10 minutes,
depending on internet speed.
Right now I don't have any box, so let's download one:
# goto vagrantup.com
# find BOXES { gives a list of all boxes }
Once you go to Find Boxes, it shows which virtualization layer (provider) each box is for and the different boxes available.
Getting Started
=============
We will use Vagrant with VirtualBox, since it is free, available on every major platform, and built-in to Vagrant.
But do not forget that Vagrant can work with many other providers.
Providers
========
While Vagrant ships out of the box with support for VirtualBox, Hyper-V, and Docker, Vagrant has the ability to manage
other types of machines as well. This is done by using other providers with Vagrant.
Before you can use another provider, you must install it. Installation of other providers is done via the Vagrant plugin
system.
Once the provider is installed, usage is straightforward and simple, as you would expect with Vagrant.
Your project was always backed with VirtualBox. But Vagrant can work with a wide variety of backend providers, such as
VMware, AWS, and more.
Once you have a provider installed, you do not need to make any modifications to your Vagrantfile, just vagrant up with
the proper provider and Vagrant will do the rest:
# vagrant up --provider=vmware_fusion
# vagrant up --provider=aws
Up and Running
===============
# vagrant init centos/7
# vagrant up
After running the above two commands, you will have a fully running virtual machine in VirtualBox running.
You can SSH into this machine with # vagrant ssh, and when you are done playing around, you can terminate the virtual
machine with # vagrant destroy.
Project Setup
============
The first step in configuring any Vagrant project is to create a Vagrantfile. The purpose of the Vagrantfile is:
1. Mark the root directory of your project. Many of the configuration options in Vagrant are relative to this root directory.
2. Describe the kind of machine and resources you need to run your project, as well as what software to install and how
you want to access it.
Vagrant has a built-in command for initializing a directory for usage with Vagrant:
# vagrant init
This will place a Vagrantfile in your current directory. You can take a look at the Vagrantfile if you want, it is filled with
comments and examples. Do not be afraid if it looks intimidating, we will modify it soon enough.
You can also run vagrant init in a pre-existing directory to setup Vagrant for an existing project.
# vagrant init centos/7
Vagrantfile
=========
The primary function of the Vagrantfile is to describe the type of machine required for a project, and how to configure
and provision these machines.
Vagrant is meant to run with one Vagrantfile per project, and the Vagrantfile is supposed to be committed to version
control.
The syntax of Vagrantfiles is Ruby, but knowledge of the Ruby programming language is not necessary to make
modifications to the Vagrantfile, since it is mostly simple variable assignment.
# vagrant up
==========
In less than a minute, this command will finish and you will have a virtual machine running centos 7. You will not actually
see anything though, since Vagrant runs the virtual machine without a UI.
# vagrant ssh
===========
This command will drop you into a full-fledged SSH session. Go ahead and interact with the machine and do whatever you
want.
Open Vagrantfile
→ It's a Ruby configuration file
→ I'll remove unwanted things like the commented sections and just keep:
Vagrant.configure("2") do |config|
config.vm.box = "base"
end
# vagrant up
Network
=======
In order to access the Vagrant environment created, Vagrant exposes some high-level networking options for things such
as forwarded ports, connecting to a public network, or creating a private network.
Port Forwarding
=============
Port forwarding allows you to specify ports on the guest machine to share via a port on the host machine. This allows you
to access a port on your own machine, but actually have all the network traffic forwarded to a specific port on the guest
machine.
Let us setup a forwarded port so we can access Apache in our guest. Doing so is a simple edit to the Vagrantfile, which now
looks like this:
Vagrant.configure("2") do |config|
config.vm.box = "hashicorp/precise64"
config.vm.provision :shell, path: "bootstrap.sh"
config.vm.network :forwarded_port, guest: 80, host: 4567
end
Run a # vagrant reload or # vagrant up. Once the machine is running again, load http://127.0.0.1:4567 in your browser.
You should see a web page that is being served from the virtual machine that was automatically setup by Vagrant.
Vagrant also has other forms of networking, allowing you to assign a static IP address to the guest machine, or to bridge
the guest machine onto an existing network.
By default Vagrant uses the NAT network provided by your provider, i.e. VirtualBox.
But say you want to use a bridged network so the VM connects directly to your router: just uncomment the "public_network"
line, and when you do # vagrant up, it asks which network interface to bridge to.
# config.vm.network "public_network"
# config.vm.network :public_network, bridge: "en0: Wi-Fi (AirPort)"
Configure Hostname
=================
# config.vm.hostname = "centos"
Changing RAM
=============
# Customize the amount of memory on the VM:
config.vm.provider "virtualbox" do |vb|
vb.memory = "1024"
end
Changing CPU
=============
# Customize the number of CPU’s on the VM:
config.vm.provider "virtualbox" do |vb|
vb.cpus = "2"
end
Provisioning
===========
Provisioners in Vagrant allow you to automatically install software, alter configurations, and more on the machine as part
of the vagrant up process.
Of course, if you want to just use vagrant ssh and install the software by hand, that works. But by using the provisioning
systems built-in to Vagrant, it automates the process so that it is repeatable. Most importantly, it requires no human
interaction.
Vagrant gives you multiple options for provisioning the machine, from simple shell scripts to more complex, industry-
standard configuration management systems.
Provisioning happens at certain points during the lifetime of your Vagrant environment:
● On the first vagrant up that creates the environment, provisioning is run. If the environment was already created
and the up is just resuming a machine or booting it up, they will not run unless the --provision flag is explicitly
provided.
● When vagrant provision is used on a running environment.
● When vagrant reload --provision is called. The --provision flag must be present to force provisioning.
You can also bring up your environment and explicitly not run provisioners by specifying --no-provision.
Say we want to install a web server on our VM. We could just SSH in, install a web server and be on our way, but then
every person who used this Vagrant environment would have to do the same thing. Instead, Vagrant has built-in support for automated
provisioning. Using this feature, Vagrant will automatically install software when you vagrant up so that the guest machine
can be repeatedly created and ready-to-use.
We will just setup Apache for our basic project, and we will do so using a shell script. Create the following shell script and
save it as bootstrap.sh in the same directory as your Vagrantfile.
bootstrap.sh
==========
yum -y install git
yum -y install httpd
systemctl start httpd
systemctl enable httpd
git clone https://github.com/devopsguy9/food.git /var/www/html/
systemctl restart httpd
Next, we configure Vagrant to run this shell script when setting up our machine. We do this by editing the Vagrantfile,
which should now look like this:
Vagrant.configure("2") do |config|
config.vm.box = "centos/7"
config.vm.provision :shell, path: "bootstrap.sh"
end
The "provision" line is new, and tells Vagrant to use the shell provisioner to setup the machine, with the bootstrap.sh file.
The file path is relative to the location of the project root (where the Vagrantfile is).
After everything is configured, just run vagrant up to create your machine and Vagrant will automatically provision it. You
should see the output from the shell script appear in your terminal. If the guest machine is already running from a
previous step, run vagrant reload --provision, which will quickly restart your virtual machine.
After Vagrant completes running, the web server will be up and running. You can see the website from your own browser.
This works because in shell script above we installed Apache and setup the website.
Load Balancing
=============
In this project we are going to work with three different machines.
In our previous load-balancing example we took one nginx load balancer and two Apache web servers; this time we build a
similar setup with Vagrant.
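The multi-machine Vagrantfile itself is not included in the notes; a minimal sketch, assuming an apt-based box (the
provisioning scripts below use apt-get), machine names matching the vagrant port commands further down, and placeholder
private IPs that must also appear in the load balancer's upstream block:
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"

  config.vm.define "web1" do |web1|
    web1.vm.network :private_network, ip: "192.168.56.11"
    web1.vm.provision :shell, path: "provision-web1.sh"
  end

  config.vm.define "web2" do |web2|
    web2.vm.network :private_network, ip: "192.168.56.12"
    web2.vm.provision :shell, path: "provision-web2.sh"
  end

  config.vm.define "lb1" do |lb1|
    lb1.vm.network :private_network, ip: "192.168.56.10"
    lb1.vm.network :forwarded_port, guest: 80, host: 8080
    lb1.vm.provision :shell, path: "provision-nginx.sh"
  end
end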
provision-nginx.sh
================
#!/bin/bash
echo "Starting Provision on LB1"
sudo apt-get update
sudo apt-get install -y nginx
# the start of this config-writing command is missing in the original notes; the upstream block
# is a reconstruction, and <web1-ip>/<web2-ip> are placeholders for your web machines.
# We overwrite (>) the default site so the existing default_server block does not clash.
echo "upstream testapp {
server <web1-ip>;
server <web2-ip>;
}
server {
listen 80 default_server;
listen [::]:80 default_server ipv6only=on;
server_name localhost;
root /usr/share/nginx/html;
#index index.html index.htm;
location / {
proxy_pass http://testapp;
}
}" > /etc/nginx/sites-enabled/default
sudo service nginx start
echo "MACHINE: LOAD BALANCER" >> /usr/share/nginx/html/index.html
echo "Provision LB1 complete"
provision-web1.sh
==================
echo "Starting Provision on A"
sudo apt-get install -y nginx
echo "<h1>MACHINE: A</h1>" >> /usr/share/nginx/html/index.html
echo "Provision A complete"
provision-web2.sh
==================
echo "Starting Provision on B"
sudo apt-get install -y nginx
echo "<h1>MACHINE: B</h1>" >> /usr/share/nginx/html/index.html
echo "Provision B complete"
# vagrant status
# vagrant global-status
# vagrant port lb1
# vagrant port web1
Running Provisioners
==================
Provisioners are run in three cases:
# vagrant up
# vagrant provision
# vagrant reload --provision
DOCKER
Docker
=======
The whole idea of Docker is for developers to easily develop applications, ship them into containers which can then be
deployed anywhere.
Release of Docker was in March 2013 and since then, it has become the buzzword for modern world development.
Features of Docker
=================
● Docker has the ability to reduce the size of development by providing a smaller footprint of the operating
system via containers.
● With containers, it becomes easier for teams across different units, such as development, QA and Operations to
work seamlessly across applications.
● You can deploy Docker containers anywhere, on any physical and virtual machines and even on the cloud.
● Since Docker containers are pretty lightweight, they are easily scalable.
Why Virtualization ??
==================
● Hardware Utilization
● To reduce no of physical servers
● Reduce Cost
● More different OS
Your whole design of virtualization, is to target the Applications.
Let's say we have 3 VMs and the minimum requirement for each is 1 CPU & 1 GB RAM; that means I need 3 CPUs and 3 GB of RAM
just for the guest OSes.
If the same thing is needed for 1000+ machines, it becomes far more cumbersome.
Sometimes an application works fine on one Linux OS but does not work on another Unix or on Windows; these kinds of
problems can be avoided by using Docker.
Using Docker we can package the entire application together with its OS-dependent files.
Box → VM's
AMI → Instances
Images → Containers
Installation
============
# get.docker.com { convenience install script; typically run as: curl -fsSL https://get.docker.com | sh }
# sudo usermod -aG docker <user-name>
Docker comes in two components SERVER & CLIENT
# systemctl start docker
# docker info
# docker images
# open two sessions of the same instance {we can do things simultaneously}
# s1 - sudo docker ps {shows running containers}
# s2 - sudo docker run -it centos /bin/bash
{now we are inside container}
-it = interactive terminal
If we were booting a full VM it would take at least a couple of minutes, but here the container is up in a few seconds.
# s1 - top {so many tasks}
# s2 - top {literally 2 process}
# s2 - ps -ef {same 2 process}
# s2 - cat /etc/hosts {my hostname is container_id}
s1 - sudo docker ps
# s1 - sudo docker ps
# s2 - exit {bash is finished - container is gone}
# s1 - sudo docker ps
# docker images
# docker run -it { attached mode runs in foreground }
naming container
================
s2 - # sudo docker run --rm -ti --name "web-server01" docker.io/centos /bin/bash
s1 - sudo docker ps
s2 - exit
setting hostname
================
s2 - sudo docker run --rm -ti --name "web-server1" --hostname "web-server" docker.io/centos /bin/bash
s2 - cat /etc/hostname
s2 - hostname
s2 - exit
s1 - sudo docker ps
# apt-get update
I'll exit the container:
# exit
Now, what is the command to see the exited containers?
# docker ps -a
Container lifetime and Persistent data
================================
Containers are usually immutable and ephemeral, just fancy buzzwords for unchanging and temporary or
disposable, but the idea here is that we can just throw away the container and create a new one from an image right!!!!.
Containers are Ephemeral and once a container is removed, it is gone.
What about scenarios where you want the applications running inside the container to write to some files/data and then
ensure that the data is still present. For e.g. let’s say that you are running an application that is generating data and it
creates files or writes to a database and so on. Now, even if the container is removed and in the future you launch another
container, you would like that data to still be there.
In other words, the fundamental thing that we are trying to get over here is to separate out the container lifecycle from the
data. Ideally we want to keep these separate so that the data generated is not destroyed or tied to the container lifecycle
and can thus be reused. This is done via Volumes, which we shall see via several examples.
So we are not talking about actual limitation of containers, but more of design goal or best practise, this is the idea of
immutable infrastructure
So docker has two solutions to this problem known as Volumes and Bind mounts.
By Docker Volumes, we are essentially going to look at how to manage data within your Docker containers.
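The run command for this demo is not shown in the notes; something like the following (an anonymous volume mounted at
/data) matches the steps that follow:
# docker run -it -v /data --name container1 ubuntu bash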
In the container do # ls
Notice that a volume named data is visible now.
Let us do a cd inside the data volume and create a file named file1.txt
# cd data
# touch file1.txt
# exit
# docker ps
# docker ps -a
# docker inspect container1
# sudo ls /var/lib/docker/volumes
Now, let’s do an interesting thing. Exit the container and remove the container.
# docker rm container1
# sudo ls /var/lib/docker/volumes
This shows to you that though you have removed the container1, the data volume is still present on the host. This is a
dangling or ghost volume and could remain there on your machine consuming space.
Do remember to clean up if you want. Alternatively, there is also a -v option while removing the container.
Anonymous volumes: don’t have a name it’s not so easy to work with Anonymous volumes if there are multiple
Anonymous volumes.
# docker run -it -v /my-data --name container1 ubuntu bash
Named volumes: have a name to identify them so it’s easy to work with names if there are multiple volumes we can easily
identify them with names.
# docker run -it -v vol1:/my-data --name container1 ubuntu bash
Bind Mounts: same process of mounting a volume but this time we will mount an existing host folder in the Docker
container. This is an interesting concept and is very useful if you are looking to do some development where you
regularly modify a file in a folder
# docker run -it -v /home/path:/my-data --name container1 ubuntu bash
# goto hub.docker.com
# search for mysql and go to → Details → Click on the latest Dockerfile → Scroll down and you can see VOLUME
/var/lib/mysql; this is the default location of the MySQL databases.
This mysql image is programmed in a way to tell docker, when we start a new container from it, it actually creates a new
volume location and assign it to this directory /var/lib/mysql, in the container,
Which means any files we put in the container will outlive the container, until we manually delete the volume.
Volumes need manual deletion, you can’t clean them up just by removing the container that’s an extra setup with volumes,
the whole point of volume command is to say that this data is particularly important at least much more important than
container itself.
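The commands for this part are not shown in the notes; a typical sequence, matching the container names used below, would be:
# docker container run -d --name mysql -e MYSQL_ALLOW_EMPTY_PASSWORD=True mysql
# docker container inspect mysql { look at the Mounts section in the output }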
# docker volume ls
If you look at the Mounts section in the inspect output for the running container, the container thinks it is reading and
writing its data at /var/lib/mysql, but the Source line shows where that data actually lives on the host.
So let’s do:
# docker volume ls
If you are doing this on a linux machine, You can actually navigate to the volume Source location { /var/lib/docker } and
can see the data, i.e some databases.
We can see two volumes but we can see the problem right ??
There is no easy way to tell which volume belongs to which container.
# docker container ls
# docker volume ls
# docker container rm -f mysql mysql2 mysql3
# docker volume ls
{ my volumes are still there, my data is still safe, so we solved one prob }
That is where named volumes come in, the ability for us to specify names for docker volumes.
Named volume [ I can put a name in front of the path, separated by a colon; that is known as a named volume, e.g. -v mysql-db:/var/lib/mysql ]
# docker volume ls
{ you can see my new container is using a new volume and it’s using a friendly name }
And if i run another container with some other name and same volume:
# docker container run -d --name mysql3 -e MYSQL_ALLOW_EMPTY_PASSWORD=True -v mysql-db:/var/lib/mysql mysql
{ and in the inspect output, under Mounts, we can see the Source location is a little friendlier as well }
The following command line will give you a bash shell inside your mysql container:
# docker exec -it some-mysql bash
# mysql -u root -p
Bind mounts are actually cool; they show how to use Docker for local development.
So really a bind mount is just a mapping of host files or directories into a container file or directory.
In the background, it is just two locations pointing to the same physical location (file) on the disk.
You specify a full path rather than just a name like with volumes; the way Docker tells the difference between a named volume
and a bind mount is that bind mounts start with a forward slash / { root }.
Now where this really shines is with development: running services inside your container that access the
files you are using and changing on your host. So let’s do that with nginx:
Installation of Nginx
==================
# docker container run -d --name nginx1 -p 8080:80 nginx
# docker container run -d --name nginx2 -p 80:80 nginx
I do this because I don’t want to go into the container every time to check the logs.
Now I have all the logs on the host machine itself rather than in the container.
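A sketch of the kind of bind mount being described here (the host path is just an example):
# docker container run -d --name nginx3 -p 8081:80 -v /home/user/nginx-logs:/var/log/nginx nginx
Whatever nginx writes to /var/log/nginx inside the container now shows up in /home/user/nginx-logs on the host.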
## Creating DB
# docker run -d --name=wp-mysql -e MYSQL_ROOT_PASSWORD=mypassword -v ~/mysql-data:/var/lib/mysql mysql
# docker exec -it wp-mysql bash
# mysql -u root -p
# create database wordpress;
## Creating WP
# docker run --name my-wordpress --link wp-mysql:mysql -p 8080:80 -d -v ~/wp-data:/var/www/html wordpress
In your organization, you will be working on different environments like dev, test, pre-prod, prod etc.
Now let's say you have 1000 systems in your infrastructure; it is a tedious task to go and check the
status of these 1000 devices every day.
What we monitor ??
================
1. Health {device is up/down}
2. Performance {RAM & CPU utilization}
3. Capacity {Watch HDD capacity}
Threshold of Monitoring
======================
Warning 85%
Critical 95%
Parameters to monitor
=====================
CPU
RAM
Storage
Network etc
NAGIOS CORE
=============
On Host Machine
# mkdir nagios-software
# cd nagios-software
# sudo yum install -y wget httpd php php-cli gcc glibc glibc-common gd gd-devel make net-snmp unzip openssl-devel
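The notes jump straight to the plugins, but Nagios Core itself also has to be built from source on this machine; a typical sequence looks roughly like this (URL/version are placeholders and the exact make targets vary slightly between Nagios Core releases):
# wget <link-nagios-core>
# tar -xzf nagios-x.x.x.tar.gz
# cd nagios-x.x.x
# sudo useradd nagios
# ./configure
# make all
# sudo make install
# sudo make install-commandmode
# sudo make install-init
# sudo make install-config
# sudo make install-webconf
# sudo htpasswd -c /usr/local/nagios/etc/htpasswd.users nagiosadmin
# sudo systemctl start httpd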
NAGIOS PLUGINS
================
# wget <link-nagios-plugins>
# extract the tar
# cd nagios-plugin-x
# sudo ./configure
# make
# sudo make install
NRPE PLUGIN
=============
To monitor a remote system we are going to install the NRPE plugin,
NRPE - Nagios Remote Plugin Executor
Goto nagios.org → Nagios core plugin → Find more plugins → General Addons → NRPE → Copy download URL
# wget <link-nrpe>
# cd nrpe
# ./configure
NAGIOS PLUGINS
================
# wget <link-nagios-plugins>
# extract the tar
# cd nagios-plugin-x
# ./configure
# make all
# sudo make install
NRPE PLUGIN
To monitor a remote system we are going to install the NRPE plugin,
NRPE - Nagios Remote Plugin Executor
Goto nagios.org → Nagios core plugin → Find more plugins → General Addons → NRPE → Copy download URL
# wget <link-nrpe>
# cd nrpe
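The remaining build steps are not written out in these notes; on the machine where NRPE will run they typically look something like this (the exact make targets differ between NRPE 2.x and 3.x, so treat this as a sketch):
# ./configure
# make all
# sudo make install-plugin
# sudo make install-daemon
# sudo yum install -y xinetd { NRPE is run under xinetd in this setup }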
NRPE runs under xinetd here, so the service definition goes into /etc/xinetd.d/nrpe:
# sudo vi /etc/xinetd.d/nrpe
service nrpe
{
flags = REUSE
port = 5666
socket_type = stream
wait = no
user = nagios
group = nagios
server = /usr/local/nagios/bin/nrpe
server_args = -c /usr/local/nagios/etc/nrpe.cfg --inetd
log_on_failure += USERID
disable = no
only_from = 127.0.0.1 <ip-add-server>
}
# vi /etc/services
Add → nrpe 5666/tcp # NRPE service
# ls -l /usr/local/nagios { it should be owned by nagios:nagios }
# chown -R nagios:nagios /usr/local/nagios
# sudo service xinetd start
# netstat -ntpl
Configuring the Agent on the Server
In the server machine:
# cd /usr/local/nagios/etc
# sudo touch hosts.cfg
# sudo touch services.cfg
# sudo vi /usr/local/nagios/etc/nagios.cfg [ goto OBJECT CONFIGURATION FILE(S) ]
Add the following lines below templates.cfg
# This config is to add agents
cfg_file=/usr/local/nagios/etc/hosts.cfg
cfg_file=/usr/local/nagios/etc/services.cfg
## Default
define host{
use generic-host ; Inherit default values from a template
host_name c1 ; The name we're giving to this server
alias CentOS 7 ; A longer name for the server
address 192.168.44.11; IP address of Remote Linux host
max_check_attempts 5;
}
# sudo vi /usr/local/nagios/etc/services.cfg
define service{
use generic-service
host_name c1
service_description CPU Load
check_command check_nrpe!check_load
}
define service{
use generic-service
host_name c1
service_description Total Processes
check_command check_nrpe!check_total_procs
}
# sudo vi /usr/local/nagios/etc/objects/commands.cfg
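The services above call check_nrpe, so commands.cfg needs a command definition for it; the standard definition from the Nagios/NRPE documentation is:
define command{
command_name check_nrpe
command_line $USER1$/check_nrpe -H $HOSTADDRESS$ -c $ARG1$
}
After saving, verify the configuration and restart Nagios:
# sudo /usr/local/nagios/bin/nagios -v /usr/local/nagios/etc/nagios.cfg
# sudo service nagios restart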
CHEF
When it comes to learning chef
● You bring your business and problems
● You know about your infrastructure, you knew the challenges you face within your infrastructure and
you know how your infrastructure works
● Chef will provide a framework to solve those problems
COMPLEXITY
===========
System administrators will have a lot of complexity to manage.
This complexity can be from many items (Resources) across the infrastructure.
We call these items resources.
Resources
=========
The resources in your infrastructure can be files, directories, users that you need to manage, packages that should be
installed, services that should be running, and the list goes on.
To set up a node we may need to install packages, manage their configurations, install a database, install a web
server, install an application server, and do lots of other things to make the application up and running.
Now over time you want to take this application and make it available to the public/clients.
So you are going to have a database server, and maybe now we are going to have multiple environments like staging, dev,
qa or even production, i.e. a multi-tier environment.
We have one server that handles all the Application requests and a separate server for Database.
Of course once we have a database server, we want to make sure we don’t lose data, so we decide to make database
server redundant. We want to prevent failure and loss of the data.
========
But of course, as time goes on, load increases on our application, and we need to scale out the number of application servers
we have; in order to do that we need to put a load balancer in front of our app servers, so that requests can be evenly
distributed.
============
Eventually we are going to reach web scale; this application has grown and grown, getting bigger and bigger, and we are now
at web scale. Look at all these servers we now have to manage.
========
Now we run into another problem: our database just isn’t keeping up with the demands from our users, so we decide to
add a caching layer.
Now I have even more servers and more complexity in the infrastructure.
==========
As if this number of servers were not complex enough, each infrastructure also has its own topology. There are
connections between the load balancer and the application servers.
The load balancers need to know which application servers to talk to, which application server they are sending requests
to.
The Application server in turn needs to know about the database server.
The database cache servers need to know which backend database servers they are caching for.
All of this just adds up more and more complexity to your infrastructure.
=========
And your complexity is only going to increase, and it is increasing quickly.
============
So how can we manage complexity with Chef ??
Chef gives us a number of tools with which we can manage this complexity; we are going to look at these now.
We are going to look at organizations, and how organizations can help you manage complexity.
If you think about organizations, Digital Lync itself has its own infrastructure, TCS has its own infrastructure,
and so on.
=====
Organizations are independent tenants on Enterprise Chef.
Nothing is shared across organizations; your organizations may represent different companies, take the Tata group as an
example.
=======
Roles are a way of identifying or classifying the different types of servers that you have within your infrastructure.
Within your infrastructure you have multiple instances of servers in each one of these roles; you may have
multiple application servers, multiple database servers, etc.
You specify these as roles in Chef.
===========
Roles allow you to define policy; they may include a list of Chef configuration files that should be applied to the servers or
nodes within that role.
We call this list of Chef configuration files that should be applied a run list.
================
Roles: each node may have zero or more roles, so a node may be a database server or an application server, or it can have
both roles; the same node can be both the database and the application server.
===========
The chef-client does all of the heavy lifting: it makes updates and configures the node so that it adheres to the policy
specified on the Chef server.
========
By capturing your infrastructure as code, you can reconstruct all of your business applications from three things: a code
repository, a backup of your data, and the bare metal or compute resources that are required to run your
applications.
This puts you in a really really nice state and it’s a great way to manage the complexity of your infrastructure; with these
three simple things I can rebuild my applications.
================
Store the configuration of infrastructure in version control:
So gone are the days when you had a server that was hand-crafted lovingly by a system administrator, and that system
administrator is the only person in your organization who knows all of the knobs, dials, tweaks, tricks, packages etc. that
have been placed onto that server. Now we can take the knowledge that system administrator has and move it into a
framework that can be stored in a source code repository.
The framework allows for abstraction so that you can build up bits and pieces and transform the way you manage your
infrastructure.
============
With Chef you can define policies that your infrastructure should follow.
Your policy states what state each resource should be in, but not how to get there.
For example, in our Chef configuration file we will say that a package should be installed, but we will not need to
specify how to install that particular package; Chef is smart enough to sort that out.
So for example if you want to install a package on a debian based system you may use apt package manager to install that
package, # apt-get install <pkg-name> and
If you are running on a Red Hat based distribution such as CentOS, then apt will not work; here we would rather use the yum
package manager # yum install <pkg-name>
Chef abstracts all of that away from you so that you can write configuration files that will work across the various
platforms.
========
You will take resources and gather them together into recipes.
=========
Recipes are the real workhorses within Chef.
Resources are the building blocks, and we gather those building blocks together into recipes
that help bring our systems in line with policy.
I will walk you through what happens when the chef-client encounters this recipe code.
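The recipe being walked through is not reproduced in these notes; a minimal sketch of the kind of recipe being described (an apache2 package plus a template, matching the apache2 cookbook mentioned below) would be:
package 'apache2' do
  action :install
end
template '/etc/apache2/apache2.conf' do
  source 'apache2.conf.erb'
end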
Again we are not telling the chef-client, how to install apache2, it’s smart enough to figure that out on its own.
So our policy states that the package apache2 should be installed, if this is the case already, then the chef-client will move
on to the next resource in our recipe.
If that package is not yet installed, chef-client will take care to install it and then move on to next resource in our recipe.
With Chef we are going to take these recipes and package them up into cookbooks.
=========
So a cookbook is a container that we use to describe configuration data and configuration policies about our
infrastructure.
A cookbook may certainly contain recipes, but it can also include templates, files etc.
So the recipe we just looked at had a template resource in it; the template resource itself had a source file,
apache2.conf.erb, and that source file is stored as part of the same apache2 cookbook.
Cookbooks allow code reuse and modularity.
========
Let’s look at what happens, at a very high level, when the chef-client runs on the node.
A node is a server in our infrastructure; on the node we have an application called chef-client, which is typically configured
to execute on a regular interval, maybe every 15-30 minutes, using a cron job.
When the chef-client executes, it asks the Chef server what policy it should follow. All of our policy is
described on the Chef server; our policy includes things like our environments, our roles and our cookbooks.
So the chef-client downloads all of the necessary components that make up the run-list and moves them down to
the node. You’ll see here that the run-list includes the ntp-client recipe, a recipe to manage users, and the role that makes this
node a web server.
==========
Once the run-list has been downloaded to the node, the chef-client’s job is to look at each of the recipes within that run-list
and ensure that the policy is enforced on that particular node, i.e. it brings the node in line with policy.
=========
So the run-list is how we specify policy for each node within our infrastructure.
The run-list is the collection of policies that the node should follow.
The chef-client obtains this run-list from the Chef server.
Then the chef-client ensures the node complies with the policy in the run-list.
=====
So with Chef this is how you manage complexity:
The first thing you do is determine the desired state of your infrastructure.
Once we have done that, you identify the resources required to meet that state: users, services, packages etc.
You gather each of these resources into recipes, and the recipes into cookbooks that make up a run-list.
Again, the run-list is the thing that gets applied to the node; you apply a run-list to each node within your environment, and
as the chef-client runs on those nodes, your infrastructure adheres to the policy modeled in Chef.
This is a great way to launch new servers; this is how you add new capacity to an existing infrastructure.
Adding new capacity to an existing infrastructure is not the only challenge we face; the other challenge is
Configuration Drift.
=======
Install Chef-Server
===============
Let’s get the Chef-server from https://downloads.chef.io/chef-server
# mkdir chef-sw
# wget <link-of-rhel-7-distro-rpm>
# sudo rpm -ivh chef-server.rpm
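After installing the rpm, the server still has to be configured and its services started; the standard command for that is:
# sudo chef-server-ctl reconfigure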
To check the status of the Chef server we can use # sudo chef-server-ctl status, which lists all the services running in
the background.
Now let’s set up the web interface to manage the Chef server; to get the GUI we need to install another package, which is
chef-manage.
This installation of chef-manage glues chef-manage to chef-core, i.e. it integrates chef-manage with your
chef-core.
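The commands themselves are not shown in the notes; the usual way is through chef-server-ctl:
# sudo chef-server-ctl install chef-manage
# sudo chef-server-ctl reconfigure
# sudo chef-manage-ctl reconfigure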
Next accept the license agreement and go on with the further steps: press q to quit the pager and answer yes.
Once the installation is successful use the ip of machine to see the web interface:
https://ip_address_machine
Creating User
============
We need to create an admin user. This user will have access to make changes to the infrastructure components in the
organization we will be creating.
The command below generates an RSA private key automatically; save it to a safe location.
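A sketch of the two commands (the user name admin, the organization short name dl, and the email/password values are placeholders to adapt):
# sudo chef-server-ctl user-create admin Admin User admin@example.com 'P@ssw0rd' -f /etc/chef/admin.pem
# sudo chef-server-ctl org-create dl 'Digital Lync' --association_user admin -f /etc/chef/dl-validator.pem
The first command creates the admin user and writes its private key; the second creates the organization and writes the validator key.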
The pem file is your private key, which authenticates you while logging into the server; the corresponding public key
is stored on the server. We should not share this pem file, because anyone with it can log into the server.
https://ip_address_machine
Once we logged in we can see Nodes, but we didn’t setup any node so will see this in later part.
Setting Up Workstation
===================
Setting up the workstation is very important, because even if you don’t know how to set up the chef-server it’s okay (in real
environments the chef-server is usually already set up), but the workstation you need to set up yourself.
Steps involved
============
● Create a new centos machine for workstation
● Login to the workstation machine
● Setup hostname and add all three in /etc/hosts
● Download chef-dk on workstation (laptop/desktop/vm)
○ https://downloads.chef.io/chefdk
○ # wget <link-of-chef-dk>
○ # sudo rpm -ivh chefdk-1.3.43-1.el7.x86_64.rpm
○ # sudo chef-client -v
● # cd ~ { workstation }
● Generate Chef-Repo using “chef generate repo” command.
○ # chef generate repo chef-repo
○ This command places the basic chef repo structure into a directory called “chef-repo” in your home
directory.
● In the chef server we created two pem files, one for the user and another for the organization; now we need to bring these two
pem files to the workstation
● Now we need to create a directory .chef in chef-repo where we put RSA keys,
○ # cd ~/chef-repo; # mkdir .chef; cd .chef;
● Copy the RSA keys to the workstation ~/chef-repo/.chef
○ Make the handshake b/w chef-server & workstation
○ # scp /etc/chef/*.pem vagrant@workstation:~/chef-repo/.chef/
Run the above in chef-server
● Knife is the command line interface between a local chef-repo and the Chef server. To make knife work
with your chef environment, we need to configure it by creating knife.rb in the “~/chef-repo/.chef/” directory.
○ Goto Web interface of chef-server → Click Administration → Select Organization (dl) → Click on
settings wheel(extreme right) → Click on the Generate Knife Config
○ Copy the file contents and paste them into ~/chef-repo/.chef/knife.rb (a sample knife.rb is sketched after this list)
● Testing knife: test the configuration by running knife client list command. Make sure you are in ~/chef-repo/
directory.
○ # cd ~/chef-repo/
○ # knife client list
○ # knife ssl check { this may report that the Chef server’s certificate is not yet trusted }
○ To resolve this issue, we need to fetch the Chef server’s SSL certificate on our workstation
○ # knife ssl fetch
The above command will add the Chef server’s certificate file to trusted certificate directory. ( #
tree .chef)
● Once the SSL certificate has been fetched, run the previous command to test the knife configuration.
# knife client list { output then verification completed successfully }
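As referenced above, a generated knife.rb looks roughly like this (node_name, key file and organization name depend on what you created on the chef-server; treat it as a sample):
current_dir = File.dirname(__FILE__)
log_level                :info
log_location             STDOUT
node_name                "admin"
client_key               "#{current_dir}/admin.pem"
chef_server_url          "https://chef-server/organizations/dl"
cookbook_path            ["#{current_dir}/../cookbooks"]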
Bootstrapping a node is the process of installing the chef-client on a target machine so that it can run as a chef-client node and
communicate with the Chef server.
From the workstation, you can bootstrap the node with elevated user privileges.
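A sketch of the bootstrap command run from the workstation (the node1 host name and the vagrant user/password are assumptions from this lab setup):
# knife bootstrap node1 -x vagrant -P vagrant --sudo -N node1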
Let’s write first recipe [ Recipes are in Workstation under cookbooks dir ]
============================================================
# chef generate cookbook <cook-book> → New style (Chef 12+)
knife cookbook create is the legacy way to create a cookbook, it builds the structure of a cookbook with all possible
directories.
chef generate cookbook is part of Chef-DK and aims at generating a testable cookbook structure with minimal
directories to use in test-kitchen.
Both can be tweaked; chef generate is the easiest to tweak, as the command has been written in a way that allows everyone to
build the cookbook structure that best fits their needs.
Once we create this cookbook we need to understand its structure, so let’s see the tree structure of our
cookbook # tree file-test
The number one directory you need to check is recipes, under which we have default.rb, where you will write your code,
your actual work.
1 default.rb
=========
So what will you write in a recipe ?? What is a recipe ??
A recipe is a collection of resources with a desired state.
So your resource can be a file, directory, package etc.
2 metadata.rb
===========
metadata.rb has information about the version number of the cookbook, dependencies, organization info and so on; all the
information related to the cookbook is here.
If this cookbook is dependent on another cookbook, then that info is also stored here.
default.rb
========
file '/tmp/sample.txt' do
  content 'This file is created with CHEF'
  owner 'root'
  group 'root'
  mode '644'
  #action :create
end
Every resource has got a default action, the action here is to create.
Now we have created the cookbook, but we need to upload this cookbook to the server.
If we check the web interface [ the Policy tab shows cookbooks ], there is no file-test cookbook yet, so we will upload the
cookbook.
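The upload is done with knife from the workstation:
# knife cookbook upload file-test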
Goto web interface → Select node1 → Click on Edit run list { nothing there yet }, let’s add one.
Running the command below makes the node talk to the server and ask: do you have any run-list for me ?? Then it runs all the
cookbooks mentioned in the run_list, i.e. it starts executing them.
# sudo chef-client
1. Runs the ohai process
2. It will upload the latest host info to server
3. It will get the list of runlist and its dependencies
4. Downloads the cookbooks as mentioned in run_list
5. Execute the run_list
Let’s do one thing: forcefully damage the system (someone made changes to the sample file content), then run # sudo chef-client again.
Now you can see that the file was not in the desired state, and chef-client brings it back to the desired state.
Now if you open the file, # vim /tmp/sample.txt, the content is back to the original; all the old tampered data is gone.
That’s the desired state, if you define the desired state, it will make sure that it has the desired state.
Now that’s what Chef is trying to do: whatever you define, it will try to control and make sure it stays that way, but
if it is already in the desired state then it will not make any changes, and this is called IDEMPOTENCY.
That means if the state is already achieved then it will not disturb the state.
Now imagine that you have 10000 systems; you don’t have to worry about any of them individually. You write the
recipe and execute it, that’s it.
Now let’s say you want to edit some content in sample.txt: you do that in the recipe, and it automatically gets
reflected on your node machines.
This is the basic resource we worked on, but in the upcoming session we would see some more recipes to work.
Today we will install a web server, httpd, make sure the httpd package is installed and the service is up and running, then create an
index.html and see that everything works as expected. Let’s do it one by one.
Now let’s log in to the node machine, disable the firewall and set SELinux to permissive; the commands are below.
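On the node, these are the standard CentOS commands for that:
# sudo systemctl stop firewalld
# sudo systemctl disable firewalld
# sudo setenforce 0
And back on the workstation we generate a fresh cookbook for this exercise (the name webtest matches what is tested later in these notes):
# chef generate cookbook webtest
# vi webtest/recipes/default.rb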
Now we need to install the package, here in chef we have a resource with name package:
package 'httpd' do
  action :install    # :install is the default action
end
By default the action is :install, but we want more information, like what more we can do with package; Chef has
excellent documentation, so let’s go check the docs and see what more we can do with the package resource.
# docs.chef.io/resource_package.html
Now if you want to remove the package you can figure it out from document (actions)
package 'httpd' do
  action :remove
end
Now i need to start the service, so i will use the second resource module,
Now let’s go to document section again and see resource_service
https://docs.chef.io/resources.html { select service}
service 'httpd' do
  action [:start, :enable]   # you can also specify two actions at once
end
Or
service 'httpd' do
  action :start
end
service 'httpd' do
  action :enable
end
Now the third thing is we need to go with index.html, so i’ll use the same resource file
file '/var/www/html/index.html' do
  content '<h1>Welcome to APACHE - By CHEF</h1>'
  owner 'root'
  group 'root'
  mode '644'
end
Now need to save the changes then test it and upload it.
# knife cookbook test webtest { good no syntax error }
Or
# cookstyle webtest
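Then upload the cookbook and attach it to node1’s run list (node1 is the node name used in this lab):
# knife cookbook upload webtest
# knife node run_list add node1 'recipe[webtest]'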
# knife node show node-name {show what’s already attached to this node}
# knife node show node1
Now we have done the association; our task now is to execute the chef-client on the node, which will install the httpd package,
start the service and set up the index page.
# sudo chef-client
Now I’m always running the chef-client manually; I want it to run automatically, and for that we need to set up a cron
job.
What is cron ??
Cron is a linux scheduler with which you can schedule a task at some regular frequency or at regular interval of times.
# crontab -l
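For example, an entry that runs the chef-client every 30 minutes could look like this (added with # crontab -e; /usr/bin/chef-client is where the omnibus package links the binary):
*/30 * * * * /usr/bin/chef-client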
Now let’s see some dynamic things with our node; maybe we can use some of our node attributes to update the home page
on our web server.
Also, how do I make this particular recipe platform-independent? Of course there is no apache2 package in CentOS or RHEL; it’s called httpd
in CentOS/RHEL.
Chef - Attributes
===============
Now we know that in CentOS the package name, service name and document root are different from Ubuntu.
Now I’ll create some variables (attributes) that Chef will be able to use, and I’ll call these attributes
package_name, service_name and document_root.
# vi webtest/attributes/default.rb { attributes usually live in the cookbook's attributes directory }
case node['platform']
when 'centos', 'redhat'
  default['package_name'] = 'httpd'
  default['service_name'] = 'httpd'
  default['document_root'] = '/var/www/html'
when 'ubuntu', 'debian'
  default['package_name'] = 'apache2'
  default['service_name'] = 'apache2'
  default['document_root'] = '/var/www'
end
package node['package_name'] do
  action :install
end
service node['service_name'] do
  action :start
end
service node['service_name'] do
  action :enable
end
file "#{node['document_root']}/index.html" do
  content '<h1>Welcome to APACHE - By CHEF</h1>'
  owner 'root'
  group 'root'
  mode '644'
end
What is template ??
index.html.erb → index.html
httpd.conf.erb → httpd.conf
# cd webtest
# mkdir -p templates/default
# vi templates/default/index.html.erb
<html>
<h1>Welcome to Chef <%= node['hostname'] %></h1>
<br>
<h3>The host has total memory <%= node['memory']['total'] %></h3>
</html>
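To actually render this template, the file resource for index.html in the recipe is replaced with a template resource; a sketch:
template "#{node['document_root']}/index.html" do
  source 'index.html.erb'
end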
SUPER MARKET
===============
With Chef there is a beautiful feature called the SUPERMARKET.
The Supermarket is a place with a huge number of ready-made cookbooks and recipes, so we can just download them, install them and
start working; also, it’s FREE.
# knife supermarket
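knife supermarket has several subcommands; for example (the cookbook names here are just examples):
# knife supermarket search apache
# knife supermarket download haproxy
# knife supermarket install haproxy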
Roles
Roles help you apply multiple cookbooks to a node at once.
Normally we go and run one cookbook at a time, which would be a cumbersome task; to avoid this and
apply multiple cookbooks at once we use roles.
cookbook_file '/var/www/html/index.php' do
  source 'index.php'
end
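The role file loaded below has to exist first; a sketch of webserver.rb (the run_list contents are assumptions, list whichever cookbooks the role should apply):
name 'web-server'
description 'Role applied to all web servers'
run_list 'recipe[apachetest]', 'recipe[webtest]'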
# knife role from file webserver.rb
# knife role list
# knife role show web-server
# knife node run_list add NODE_NAME 'role[ROLE_NAME]'
Go to the Chef server web interface and check Roles; now we have a new role.
Then click on the node, say Edit run list, and you can see the available roles.
Now let’s do one thing: remove apachetest from the Current run list and save the run list.
Now go to node1, change the index.html file and run # sudo chef-client; nothing changes, right? Because we have
removed apachetest from the run list.
Let’s apply the role now, Drag the webserver role from Available roles to Current Run List and save run list.
Environments
=============
Using environments combined with roles we can execute cookbooks on multiple nodes: I can add multiple nodes to an
environment and execute cookbooks on all those nodes.
Let’s see what are environments and where they can be used.
Will see the simple apache cookbook,
# knife cookbook show apache
# knife environment list
The _default environment is read-only and sets no policy at all, you can’t modify any configuration under this
environment.
Similar to roles
# vim environments/dev.rb
name 'dev'
description 'My Dev Env'
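After writing the file, upload the environment to the Chef server:
# knife environment from file environments/dev.rb
# knife environment list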
By default the nodes are part of the _default environment; now we will put our server in the dev environment.
# knife node environment_set node1 dev
# knife node show node1
# sudo chef-client
Databags
=========
Let’s see a scenario where data bags can be used: let’s say we created a cookbook for creating users.
So what we do is create a new redhat user with some password.
# useradd redhat
# password: redhat
# openssl passwd -1 -salt bacon lync123
# perl -e 'print crypt("password*","\$6\$salt\$") . "\n"'
# perl -e 'print crypt("redhat","\$6\$cx56hui\$") . "\n"'
# cd cookbooks
# chef generate cookbook user-test
# vim user-test/recipes/default.rb
group 'redhat' do
action :create
end
user 'redhat' do
  uid '2000'
  password '$1$hPx7gY1X$pDm6ir7zJ2jirr4rriJTZ0$1$aISz6I6w$Q1X/dWJ7F2DBWzKZdABon.$1$oviO3aG4$DdUI5iFi6kSwXwWDo5qhc.'
  group 'redhat'
  shell '/bin/bash'
  action :create
end
Now we have copied and pasted the password hash into the recipe, but never do this, because recipes are easily accessible.
Data bags are global variables that are stored in JSON format.
Databags commands:
# knife data bag list
# knife data bag create redhat_password
# knife data bag from file redhat_password redhat.json
# knife data bag list
# knife data bag show redhat_password
# knife data bag show redhat_password redhat
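The redhat.json file referenced by the from file command above would look something like this (the id must match the item name; put whatever hash you generated earlier in place of the placeholder):
{
  "id": "redhat",
  "password": "<password-hash>"
}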
# vi cookbooks/user-test/recipes/default.rb
# syntax: redhat_password = data_bag_item('dbname', 'item')
redhat_password = data_bag_item('redhat_password', 'redhat')
group 'redhat' do
action :create
end
user 'redhat' do
  uid '2000'
  home '/home/redhat'
  password redhat_password['password']
  group 'redhat'
  shell '/bin/bash'
  action :create
end
In the node machine:
# sudo cat /var/chef/cache/cookbooks/user-test/recipes/default.rb
Earlier I was able to see the password hash right here in the cached recipe; now only the data bag lookup appears.
Configuring JENKINS
Jenkins
1) Download and install JDK.
2) Define JAVA_HOME to <C:\Program Files\Java\jdk1.8.0_**>. Version Used – 8u73
3) Define JRE_HOME to <C:\Program Files\Java\jre1.8.0_**>.
4) Point PATH to <C:\Program Files\Java\jdk1.8.0_**>\bin.
5) Download and install Git from https://git-scm.com
Version Used– 2.8.3
6) Download and install Eclipse IDE for Java Developers from https://eclipse.org
7) Download Jenkins **.*.zip (for Windows) from https://jenkins.io
Version Used – 2.7.1
8) Download ANT from http://ant.apache.org, define ANT_HOME as an environment variable (if
you would be using ANT). Version Used – 1.9.7
9) Download MAVEN from http://maven.apache.org. Verify the installation by entering
“mvn -v” in a command prompt. Define MAVEN_HOME as an environment variable. Version Used – 3.3.9
10)Install Jenkins in C:\. Don’t install in C:\Program Files (x86) as it may cause a permissions
issue.
11)Go to localhost:8080 (Default URL of Jenkins). Enter the secret key.
12)Create a new user. This user will be the “Administrator” of the CI Server.
13)After login using admin credentials, create a new user like this –
Configuring proxy in Jenkins
1) Click “Manage Jenkins->Manage Plugins->Advanced”.
2) Enter “proxy.wdf.sap.corp” in “Server”.
3) Enter “8080” in “Port Number”.
4) Don’t enter your SAP ID in “Username”, just enter your SAP password in “Password”.
5) Click “Submit” and then click “Check Now”.
Note – This configuration is only needed if you are connected to SAP-Corporate. If you are
connected to SAP-Internet or any other network, clear these settings. Assuming you have done
everything correctly in the project and you have properly defined your POM, run your project
first, on SAP Internet, as SAP-Corporate blocks POM from downloading plugins, JARs and
dependencies. Also, do not select the option “Delete workspace before build starts”, as it will
delete all the plugins, JARs and dependencies and download them again. When on SAP-Internet,
don’t use SonarQube as it will be configured for use on SAP-Corporate.
Plugin to be installed
Click “Manage Jenkins -> Manage Plugins”. Check which are installed by default and which
need to be installed. The required plugins are listed below –
Jenkins Global Configuration
1) Configure these only after all the tools are installed!
2) Click “Manage Jenkins”.
3) To configure JDK, Git, SonarQube Scanner, Ant, Maven & Node.js, click “Global Tool
Configuration”.
4) Click “Global Settings” to manage other settings. Please note that you need to have “http://”
before the IPAddress in both Jenkins URL and SonarQube Server URL because the job will fail
since Jenkins would not be able to call SonarQube Server.
ESLint with Jenkins
1) Download and install Node.js from https://nodejs.org/en/download/
Version Used – 4.4.5
2) Add <Install Directory of NodeJS> and <Install Directory of NodeJS>\node_modules\npm\bin
to PATH. C:\User\<SAP User ID>\AppData\Roaming\npm should also be in PATH.
Install ESLint by using command “npm install -g eslint” on command prompt (admin).
3) Use the command “eslint -c eslintrc.js -f checkstyle **.js > eslint.xml” in the Build Step
“Execute Windows Batch Command” where ->
I) -c points to the config file you want to use.
II) “eslintrc.js” is the file which you get after running “eslint --init” at the location
“C:\User\<SAP User ID>\AppData\Roaming\npm\node_modules\eslint” (though the
name of the file generated is “.eslintrc.js”, remove the leading “.”). This file decides the
styling format of the js file.
III) -f points to the output format in which the results should be written.
IV) Checkstyle is the format I am using, as it produces XML that the Checkstyle plugin can parse.
V) **.js is the name of the file(s) which you want to be analyzed.
VI) “>” redirects the results to a file, here I have used “eslint.xml”.
4) Now, you have to publish the contents of the “eslint.xml” file, so I used “Publish Checkstyle
Results” in the Post-Build step and mentioned file name as “eslint.xml”.
5) Now, when you make a build, you will see that execution of build fails, here is the solution->
I) Go to the “Services” app.
II) Find the “Jenkins” service and right-click on it.
III) Select “Properties” and go-to the “Log On” tab.
IV) Select “This Account” and enter your Account ID like this -> “GLOBAL\<Your SAP
ID>”.
V) Enter and reconfirm your SAP password.
VI) Click “Apply” and “OK”.
VII) Stop the Jenkins service and then start the service.
6) Now, click “Build Now” and you will see that “Batch command” executes successfully.
7) In case the above command doesn’t run, run this command: “eslint -f checkstyle > eslint.xml”.
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-site-plugin</artifactId>
<version>3.5.1</version>
<configuration>
<reportPlugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-checkstyle-plugin</artifactId>
<version>2.17</version>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-pmd-plugin</artifactId>
<version>3.6</version>
<configuration>
<!-- PMD options -->
<targetJdk>1.8</targetJdk>
<aggregate>true</aggregate>
<format>xml</format>
<rulesets>
<ruleset>/pmd-rules.xml</ruleset>
</rulesets>
<!-- CPD options -->
<minimumTokens>50</minimumTokens>
<ignoreIdentifiers>true</ignoreIdentifiers>
</configuration>
</plugin>
</reportPlugins>
</configuration>
</plugin>
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>cobertura-maven-plugin</artifactId>
<version>2.4</version>
<configuration>
<instrumentation>
<includes>
<include>**/*.class</include>
</includes>
</instrumentation>
</configuration>
<executions>
<execution>
<id>clean</id>
<phase>pre-site</phase>
<goals>
<goal>clean</goal>
</goals>
</execution>
<execution>
<id>instrument</id>
<phase>site</phase>
<goals>
<goal>instrument</goal>
<goal>cobertura</goal>
</goals>
</execution>
</executions>
</plugin>
</plugins>
</build>
<reporting>
<plugins>
<plugin>
<!-- use mvn cobertura:cobertura to generate cobertura reports -->
<groupId>org.codehaus.mojo</groupId>
<artifactId>cobertura-maven-plugin</artifactId>
<version>2.4</version>
<configuration>
<formats>
<format>html</format>
<format>xml</format>
</formats>
</configuration>
</plugin>
</plugins>
</reporting>
<dependencies>
<dependency>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
<version>4.11</version>
<scope>test</scope>
</dependency>
</dependencies>
</project>
pom.xml
17)Now, set Schema privileges of User “sonar” to access only the DB “sonar” and select all
privileges, except “GRANT OPTION”.
18)Now create a new server instance. Give it a name and IP Address. Clear the password and enter
the password you entered before. Select the default schema later.
19)Go to <Directory of SonarQube>\conf –
Postman
1) Install Newman (the Postman command-line runner) by using the command “npm install -g newman” on command prompt (admin).
2) Get a public postman collections JSON file.
3) Use the command “newman -c *.json -H *.html” in the Build Step “Execute Windows Batch
Command”, where -H points to the HTML file where test results should be written; or else, use
“-o” instead of “-H” to use another file format.
4) Click “Build Now” and see the report published in the file.
QTP/UFT
1) Ensure that your laptop is connected to SAP-Corporate.
2) Download and install QTP/UFT.
3) Create a QTP script and save it.
4) Launch QTP. Select Tools-> Options->Run Sessions-> Configure the results to be saved as .html.
5) Create a simple VB script which launches QTP, run the test and save the results in a specified
location. Ensure that it works while running the script on command prompt.
6) Create a new job in Jenkins.
7) Use the command “CScript “<Path of VBScript>.vbs” “, which will launch QTP, run the test and
close it after execution.
8) If you are making more than 1 build, use “Execute Windows Batch Command” to move the
existing .html files to another folder and save the latest report to original folder so that while
publishing the HTML Report, there are no issues. The command is “move “<Location of .html
files>" "<New Location>" “. Now the “CScript” command should follow after the “move”
command.
Creating .zip using Hudson Post-Build Task Plugin
1) Install Post-Build task “Add Hudson Post-Build Task” plugin.
2) Select Logical Operation “OR”.
3) In the Script box, enter the command “jar -cMf *.zip .”
Here, * is the name of the zip we want to create. “.” refers to the current directory, i.e. the
workspace of the job. So, whenever this command is run, it will create a ZIP of the
workspace. “jar” is used because the jar tool (which ships with Java) can create ZIP archives. Use “jar -cMf” on the
command prompt first for information on the flags.
Nagios:
Nagios is open source software that can be used for network and
infrastructure monitoring. Nagios will monitor servers, switches, applications
and services. It alerts the system administrator when something goes wrong and
also alerts again when the issue has been rectified.
Nagios is useful for keeping an inventory of your servers and making sure your
critical services are up and running. A monitoring system like Nagios is
an essential tool for any production server environment.