AWS DevOps Integration Tools

This document discusses AWS tools for build and deployment using services like CodePipeline, CodeCommit, CodeBuild, and CodeDeploy. CodePipeline provides a visual representation of the end-to-end delivery process. CodeCommit is a version control service that hosts Git repositories. CodeBuild fetches source code from repositories and runs builds based on build specifications. CodeDeploy automates deployment of applications to EC2 instances based on deployment scripts. Together these services provide an integrated continuous delivery pipeline in AWS.


Build and Deployment using AWS Tools

From a Build and Deployment point of view, we will look at the following AWS
services
 AWS CodePipeline
 AWS CodeCommit
 AWS CodeBuild
 AWS CodeDeploy
1. AWS CodePipeline
AWS CodePipeline is similar to a Jenkins pipeline: it gives you a visual view of
the end-to-end delivery process.

In a CodePipeline, you will typically configure the following:


 Source Code Repository – your source code needs to live in either an
AWS CodeCommit or a GitHub repository.
 Build Service – AWS CodeBuild details will be configured as part of the pipeline.
 Deploy – AWS CodeDeploy will be configured into the pipeline.
 Approvals – if any approvals are needed during deployment to different
environments, they can be configured as well.
So when a developer makes a code change, the automated Build and Deploy stages can be
followed through this visual representation.
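For reference, a pipeline can also be inspected or triggered from the AWS CLI; this is only a sketch, and the pipeline name (MyAppPipeline) is a placeholder:

# Show the current state of each stage and action in the pipeline
aws codepipeline get-pipeline-state --name MyAppPipeline

# Manually trigger a new run (normally a commit to the source repository does this)
aws codepipeline start-pipeline-execution --name MyAppPipeline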

Source code repository configuration in AWS CodePipeline

Build configuration in AWS CodePipeline which uses Maven build


Deployment configuration in AWS CodePipeline
Complete Execution is seen in AWS CodePipeline
2. AWS CodeCommit
AWS CodeCommit is a secure online version control service that hosts private Git
repositories. A team need not maintain its own version control repository; instead it
can use AWS CodeCommit to store its source code, or even binaries like the
WAR/JAR/EAR files generated by the build.

With AWS CodeCommit you create a repository, and every developer clones it to their
local machine, adds files to it and pushes them back to the AWS CodeCommit repository.
You use the standard Git commands with the AWS CodeCommit repository.
For example, once the AWS CodeCommit repository is cloned to the local machine you would
use commands like ‘git pull’, ‘git add’, ‘git commit’, ‘git push’ and so on.
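As a rough illustration (the repository name and region below are placeholders), a typical round trip with a CodeCommit repository over HTTPS looks like this:

# Clone the CodeCommit repository (the HTTPS URL is shown in the console)
git clone https://git-codecommit.us-east-1.amazonaws.com/v1/repos/MyDemoRepo
cd MyDemoRepo

# Add a file, commit it locally and push it back to CodeCommit
echo "Hello CodeCommit" > README.md
git add README.md
git commit -m "Add README"
git push origin master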
Illustrative AWS CodeCommit empty repository created

Clone the repository to the local machine

Files added to AWS CodeCommit repository


3. AWS CodeBuild
As we have seen, the source code and other project artifacts are stored in the AWS
CodeCommit repository.

To implement Continuous Integration, AWS CodeBuild, like Jenkins, fetches the latest
source code changes from the AWS CodeCommit or GitHub repository as configured and,
based on the build specification YAML file (created as buildspec.yml), runs the commands
defined in its four phases: Install, Pre-build, Build and Post-build.
Once the build is completed, the artifacts (WAR/ZIP/JAR/EAR) are stored in an S3 bucket.

Sample buildspec.yml file


version: 0.2
phases:
  install:
    commands:
      - echo Nothing in the install phase...
  pre_build:
    commands:
      - echo Nothing in the pre_build phase...
  build:
    commands:
      - echo Build started on `date`
      - mvn clean install
  post_build:
    commands:
      - echo Build completed on `date`
artifacts:
  files:
    - target/HelloWorld-Maven.war
Sample AWS CodeBuild project
Build Success

Artifact (WAR file) copied to S3 bucket
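If you want to kick off the same build from the command line and confirm that the artifact landed in S3, a sketch along these lines should work (the project name and bucket name are assumptions):

# Start a build of the CodeBuild project
aws codebuild start-build --project-name HelloWorld-Maven

# After the build succeeds, list the bucket to confirm the WAR was uploaded
aws s3 ls s3://my-codebuild-artifacts-bucket/ --recursive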


4. AWS CodeDeploy
As the name suggests, AWS CodeDeploy is the deployment service that automates the
deployment of the application (in this case a WAR file) to Amazon EC2 Linux or
Windows instances.

Since the artifacts produced by AWS CodeBuild are now stored in an S3 bucket, they are
picked up from the S3 bucket and deployed appropriately to the application server
(Tomcat, JBoss, etc.) provisioned on the AWS EC2 instance.

AWS CodeDeploy depends on a YAML file called appspec.yml, which contains the instructions
for the deployment to the EC2 instances.

Sample appspec.yml file where the index.html file is copied and deployed to the
Apache server
version: 0.0
os: linux
files:
  - source: /opt/deploy/index.html
    destination: /var/www/html/
hooks:
  BeforeInstall:
    - location: scripts/before_install
      runas: niranjan
  AfterInstall:
    - location: scripts/restart_server
      runas: niranjan

before_install script

restart_server script
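The before_install and restart_server scripts themselves appear only as screenshots in the original post. A minimal sketch of what such hook scripts typically contain, assuming an Apache web server on an Amazon Linux EC2 instance (the package and service names are assumptions), could be:

scripts/before_install

#!/bin/bash
# Install the Apache web server if it is not already present
yum install -y httpd

scripts/restart_server

#!/bin/bash
# Restart Apache so the newly copied index.html is served
service httpd restart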

GitHub repo of all files needed to run AWS CodeDeploy


Deployment execution in AWS CodeDeploy
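A deployment like the one shown above can also be started from the AWS CLI. This is only a sketch: the application name, deployment group, bucket and key are placeholders, and the bundle is assumed to be a zip containing appspec.yml plus the content to deploy:

aws deploy create-deployment \
    --application-name MyWebApp \
    --deployment-group-name MyWebApp-DG \
    --s3-location bucket=my-deploy-bucket,key=MyWebApp.zip,bundleType=zip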

Jenkins Integration with AWS Services


As mentioned earlier, many teams today use Jenkins as the de facto CI tool, and in most
cases they would not really like to move away from it, but rather integrate it
with the AWS services we discussed. There are certain procedures involved,
and I have shown screenshots of the integration.
1. Jenkins integration with AWS CodeCommit

2. Jenkins integration with AWS CodeBuild


3. Jenkins integration with AWS CodeDeploy
Putting it All Together for AWS DevOps Stack:
The stack below shows the AWS services discussed above.

Hope this tutorial on the tools for a pipeline, source code repository, build and
deployment with Amazon Web Services was helpful to you.
Edureka: AWS DevOps Integration Tools

Technology has evolved over time, and with it the ways and needs to
handle technology have also evolved. The last two decades have seen a great shift in
computation and in software development life cycles. We have seen a huge
demand for online DevOps training & AWS certification. Today’s blog focuses on one
such approach known as DevOps, and AWS DevOps in particular.

This blog focuses on the following points:

1. What Is DevOps?
2. What Is AWS?
3. AWS DevOps

So let us get started then, shall we?

What Is DevOps?
In these fast-paced times, we see more emphasis being laid on faster delivery of
software deployments, because in order to stay competitive in the market
companies are expected to deploy quality software in defined timelines. Hence the
roles of software developer and system admin have become very important. A lot of
juggling of responsibilities happens between the two teams. Let us take a look at
how these individuals contribute to the deployment process.
A programmer or software developer is responsible for developing the software. In
simple words, he or she is supposed to develop software which has:

 New features
 Security Upgrades
 Bug Fixes

But a developer may have to wait for weeks for the product to get deployed, a delay which is
also known as ‘Time To Market’ in business terms. This delay may put pressure
on the developer because he is forced to re-adjust dependent activities like:

 Pending code
 Old code
 New products
 New features

Also when the product is put into the production environment, the product may
exhibit some unforeseen errors. This is because the developer writes code in the
development environment which may be different from the production environment.

Let us go ahead and take a look at this process from the operations point of view.
Now the operations team, or the system administrators team, is responsible for
maintaining and assuring the uptime of the production environment. As the
company invests time and money in more products and services, the number of
servers the admins have to take care of also keeps growing.
This gives rise to more challenges, because the tools that were used to manage the
earlier number of servers may not be sufficient to cater to the needs of a
growing number of servers. The operations team also needs to make slight changes
to the code so that it fits into the production environment. Hence the need to
schedule these deployments accordingly also grows, which leads to time delays.


When the code is deployed, the operations team is also responsible for handling code
changes or minor errors in the code. At times the operations team may feel
pressured, and it may seem like developers have pushed their responsibilities to the
operations side of the responsibility wall. As you may come to realise, neither
side can be held culprit.
What if these two teams could work together? What if they:

 could break down silos


 share responsibilities
 start thinking alike
 work as a team

Well, this is what DevOps does: it helps you get software developers and operations
in sync to improve productivity. To define it simply in jargon terms, DevOps is the
process of integrating Developers and Operations teams in order to improve
collaboration and productivity. This is done by automating workflows and
continuously measuring application performance.
DevOps focuses on automating everything, which lets teams write small chunks of code
that can be tested, monitored and deployed in hours, as opposed to writing
large chunks of code that take weeks to deploy. So this was about DevOps. Let us
move ahead and understand what AWS is and how it forms a crucial pairing with
DevOps to give you AWS DevOps.

What Is AWS?
If you go back a decade, the scenario of handling, or more precisely storing, data was
different. Companies preferred storing data on their own private servers. However,
with more and better usage of the internet, the trend has seen a paradigm shift, with
companies moving their data to the cloud. This enables companies to focus
more on core competencies and stop worrying about storage and
computation. The two points below illustrate the significance of the cloud:

Fact: Netflix is a popular video streaming service which the whole world uses today.
Back in 2008 Netflix suffered a major database corruption, and for three days their
operations were halted. The problem was scaling up; that is when they realized the
need for highly reliable, horizontally scalable, distributed systems in the cloud.
In came cloud services, and since then their growth has been off the charts.

Prediction: Gartner says that by 2020, a corporate “No-Cloud” policy will be as rare
as a “No-Internet” policy today. Interesting, isn’t it?


Since every company has started to adopt cloud services, it can be claimed that
cloud is the talk of the town. And AWS, in particular, is the leading cloud service
provider in the market. Let us understand more about it.

AWS

AWS, which stands for Amazon Web Services, is an ‘Amazon.com‘ subsidiary which offers
cloud-computing services at very affordable rates, therefore making its customer base
strong, from small-scale companies like Pinterest (which has just 5 employees) to big
enterprises like D-Link.

What Is Cloud Computing?

It is the use of remote servers on the internet to store, manage and process data
rather than a local server or personal computer.

There are basically 3 categories in cloud computing:

IaaS(Infrastructure as a service)

 IaaS gives you a server in the cloud (a virtual machine) that you have complete control
over.
 In IaaS, you are responsible for managing everything from the Operating System on
up to the application you are running.

PaaS(Platform as a Service)

 With PaaS, you have a combination of flexibility and simplicity.


 Flexible because it can be tailored to the application’s needs.
 Simple as no need for OS maintenance, versions, patches.

SaaS(Software as a Service)

 A software distribution model in which a third-party provider hosts applications.


 Instead of installing and maintaining software, you simply access it via the Internet.
 Automatic updates reduce the burden on in-house IT staff.

When we refer to AWS, it is more of an IaaS.

In case you wish to know about cloud computing in detail, refer to this link: What Is
Cloud Computing?

AWS DevOps
AWS is one of the best cloud service providers, and DevOps on the other hand is the
‘need of the hour’ implementation of the software development lifecycle. The following
reasons make AWS DevOps a highly popular amalgamation:

AWS CloudFormation

DevOps teams are required to create and release cloud instances and services more
frequently than traditional development teams. AWS CloudFormation enables you to
do just that. ‘Templates’ of AWS resources like EC2 instances, ECS containers, and
S3 storage buckets let you set up the entire stack without having to bring
everything together yourself.

AWS EC2

AWS EC2 speaks for itself. You can run containers inside EC2 instances. Hence you
can leverage the AWS Security and management features. Another reason why
AWS DevOps is a lethal combo.

AWS CloudWatch

This monitoring tool lets you track every resource that AWS has to offer. Plus it
makes it very easy to use third party tools for monitoring like Sumo Logic etc

AWS CodePipeline

CodePipeline is one popular feature from AWS which highly simplifies the way you
manage your CI/CD tool set. It lets you integrate with tools like GitHub, Jenkins, and
CodeDeploy enabling you to visually control the flow of app updates from build to
production.

Instances In AWS

AWS frequently creates and adds new instance types to its list, and the level of
customisation these instances allow makes it easy to use AWS and DevOps
together.

All these reasons make AWS one of the best platforms for DevOps. This pretty much
brings us to the end of this AWS DevOps blog. Please let me know in the comments
section below whether you liked the blog or not.

The last two decades have seen a great shift in computation and in software
development life cycles. Thus we see a huge demand for online DevOps & AWS
certification training, which concerns the domains responsible for this paradigm
shift. This article on the AWS Certified DevOps Engineer tells you why a combined
AWS DevOps certification would be a great choice.

Before we dive deeper, let us take a look at the agenda of this article:


1. What Is AWS?
2. What Is DevOps?
3. Why AWS DevOps together?
4. AWS Certified DevOps Engineer

Let us get started then,


What Is AWS?
AWS, which stands for Amazon Web Services, is an ‘Amazon.com‘ subsidiary
that offers cloud-computing services at very affordable rates, therefore making its
customer base strong, from small-scale companies with as few as five
employees to big enterprises with lakhs of employees.

Amazon Web Services (AWS) is a comprehensive, evolving cloud
computing platform. It provides a mix of infrastructure as a service (IaaS), platform
as a service (PaaS) and software as a service (SaaS) offerings. If you wish to know
more, this article may help: What Is AWS?


What Is DevOps?
In these fast-paced times, we see more emphasis being laid on faster delivery of
software deployments, because in order to stay competitive in the market, companies
are expected to deploy quality software in defined timelines. Hence the roles of
software developer and system admin have become very important. A lot of juggling
of responsibilities happens between the two teams.

A developer may have to wait for weeks for the product to get deployed which is
also known as ‘Time To Market’ in business terms. So this delay may put pressure
on the developer because he is forced to re-adjust his dependent activities like:

 Pending code
 Old code
 New products
 New features

Also when the product is put into the production environment, the product may
exhibit some unforeseen errors. This is because the developer writes code in the
development environment which may be different from the production environment.

The operations team, on the other hand, is responsible for maintaining and assuring
the uptime of the production environment. This gives rise to more challenges
because the tools that were used to manage the earlier number of servers may not
be sufficient to cater to the needs of a growing number of servers.

The operations team also needs to make slight changes to the code so that it fits
into the production environment. Hence the need to schedule these deployments
accordingly also grows, which leads to time delays.

At times the operations team may feel pressured, and it may seem like developers
have pushed their responsibilities to the operations side of the responsibility wall.
As you may come to realise, neither side can be held culprit.
What if these two teams could work together? What if they:

 could break down silos


 share responsibilities
 start thinking alike
 work as a team

Well, this is what DevOps does: it helps you get software developers and operations
in sync to improve productivity. If you want to know more about DevOps, refer to
this: DevOps Tutorial

Why AWS DevOps Together?


AWS is one of the best cloud service providers, and DevOps on the other hand is the
‘need of the hour’ implementation of the software development lifecycle. The above
reasons make AWS DevOps a highly popular amalgamation.

Well, DevOps as we know helps bring developers and administrators under one roof.
How does it do that? It uses a methodology of continuous integration and
deployment. These are some of the services provided by AWS that go very well with
the DevOps approach:

 AWS CloudFormation
 AWS EC2
 AWS CloudWatch
 AWS CodePipeline
 Instances In AWS

All these services help in automating the process of continuous integration and
deployment; they also help in improving and automating monitoring and scalability
activities, thus making these two, i.e. DevOps and AWS, a potent combo.

Now that we know about all these terms, let us try to learn about the AWS Certified
DevOps Engineer and see how this certification benefits you.

AWS Certified DevOps Engineer


The AWS Certified DevOps Engineer has technical expertise in provisioning,
operating, and managing distributed application systems on the AWS
platform. The individual is responsible to:

 Implement and manage continuous delivery systems and methodologies on AWS


 Understand, implement, and automate security controls, governance processes, and
compliance validation
 Define and deploy monitoring, metrics, and logging systems on AWS
 Implement systems that are highly available, scalable, and self healing on the AWS
platform
 Design, manage, and maintain tools to automate operational processes

So, if someone undergoes AWS certification for the same, he or she will be skilled in
the above areas. Plus, a DevOps Engineer job is bound to pay very well. Here is an article
if you wish to know about the salary of a DevOps Engineer.

So how does one get certified?

This certification requires the applicants to complete the Associate-level AWS
Certified Developer or AWS Certified SysOps Administrator certification exams and
have two or more years of experience provisioning and managing AWS
architectures. Students must comprehend specific concepts involving continuous
deployment (CD) and automation of AWS processes and know how to implement
them into AWS architectures.

Response Limits

The examinee selects from four or more response options that best complete the
statement or answer the question. Distracters or wrong answers are response
options that examinees with incomplete knowledge or skill would likely choose, but
are generally plausible responses fitting into the content area defined by the test
objective.
Test item formats used in this examination are:

Multiple-choice

Examinee selects one option that best answers the question or completes
a statement. The option can be embedded in a graphic where the examinee “points
and clicks” on their selection choice to complete the test item.

Multiple-response

Examinee selects more than one option that best answers the question or completes
a statement.

Sample Directions

Read the statement or question and, from the response options, select only the
options that represent the most correct or best answers given the information.

Content Limits

The examination blueprint includes weighting, test objectives, and example content.
Example topics and concepts are included to clarify the test objectives. They should
not be construed as a comprehensive listing of all of the content of this examination.

Syllabus and Weightage: AWS Certified DevOps Engineer

Domain                                       Weightage (%)

Continuous Delivery and Process Automation   55
Monitoring, Metrics, and Logging             20
Security, Governance, and Validation         10
High Availability and Elasticity             15

So proper planning and dedication should definitely help you become an AWS
Certified DevOps Engineer and have a successful career in this domain.

Some people love self-preparation and take up the exam, while others prefer
structured training. If you too are looking for a structured training approach, then
check out our certification program for AWS DevOps Engineer, which comes with
instructor-led live training and real-life project experience. This training will help you
understand AWS DevOps fundamentals in depth and help you master various
concepts that are a must for a successful AWS DevOps career.
As a developer wouldn’t you like to keep your entire focus on production instead of
repository administration and maintenance? That’s where AWS CodeCommit comes
into the picture. Providing a secure and fully managed service, it has proved to boost
an organization’s performance in various aspects.

Topics Covered:

 Introduction to AWS CodeCommit


 AWS CodeCommit vs GitHub
 AWS CodeCommit Workflow
 Case Study: How Edmunds.com Reduced Administration & Maintenance Time by 95%
 Demo: Create A Repository In CodeCommit And Explore Its Features

Introduction To AWS CodeCommit


AWS CodeCommit is a source control and code versioning service provided by
Amazon. It helps teams with better code management and collaboration,
exploiting the benefits of CI/CD. It eliminates the need for a third-party version
control system. This service can be used to store assets such as documents, source code,
and binary files. It also helps you manage these assets; managing includes scaling,
integrating, merging, pushing and pulling code changes. Let’s have a better look at the
services provided by CodeCommit:

Fully Managed Service:

If you’re a DevOps engineer, wouldn’t you like to keep your entire focus on
production instead of maintaining updates, managing your own hardware or
software? AWS CodeCommit eliminates the boring tasks of managing your
resources providing high service availability and durability.

Store Code Securely:

Since it’s a version control system, it stores your code. As a matter of fact, it stores
any kind of data, be it documents or binary files. The data stored is pretty secure, as it is
encrypted at rest as well as in transit.

Work Collaboratively With Code:

AWS CodeCommit lets you collaboratively work with the code. You can work on a
section of the code and the other person/team can work on the other section, the
changes/updates can be pushed and merged in the repository. Users can review,
comment on each other’s code helping them write code to their highest potential.

Highly Scalable:

AWS CodeCommit lets you scale up or down to meet your needs. The service can
handle large repositories, a large number of files with large branches and lengthy
commit histories.

Integration:
You can easily integrate AWS CodeCommit with other AWS services. It keeps these
services close to other resources making it easier and faster to fetch and use
increasing the speed and frequency of development life cycle. It also lets you
integrate third-party services pretty easily.

Migration:

You can easily migrate any Git-based repository to CodeCommit.

Interactions Using Git:

Interacting with CodeCommit is pretty simple as it’s Git-based. You can use Git
commands to pull, push, merge or perform other actions. It also gives you the
option to use AWS CLI commands along with its very own APIs.

Cross-Account Access:

CodeCommit lets you cross-link two different AWS accounts making it easier to
share repositories between two accounts securely. There are a few things to keep in
mind like you shouldn’t share your ssh keys or AWS credentials.

Introduction to AWS CodeCommit | AWS Certified DevOps Engineer Training

This video will give you an introduction to the version control system like pushing,
pulling, merging, and committing code using AWS DevOps Service – CodeCommit.

AWS CodeCommit vs GitHub

GitHub is also one of the version control systems. Let’s first look at the similarities
between GitHub and CodeCommit.
1. CodeCommit and GitHub use Git repositories.
2. Both of them support code review.
3. They can be integrated with AWS CodeBuild.
4. Both of them use two methods of authentications, SSH and HTTPS.

Lets now have a look at the differences between them.

1. Security: GitHub repositories are administered using GitHub users, while CodeCommit uses AWS’s
IAM roles and users. This makes it highly secure. Using IAM roles lets you share
your repositories with only specific people while letting you limit their access to the
repository. For example, some users can view the repository while others can make
edits, etc. CodeCommit also lets you add a further authentication step using MFA.
2. Hosting: GitHub is at home with Git, but when used with AWS it acts as a third-party
tool. CodeCommit, on the other hand, is hosted and managed by AWS, making integration
with CodeBuild and its general usage much simpler.
3. User Interface: GitHub is fully featured and has a really nice UI, whereas the
CodeCommit user interface is pretty average.

AWS CodeCommit Workflow


Have a look at the below flow diagram to understand the workflow of CodeCommit. It
consists of three parts – Development Machine, AWS CLI/CodeCommit
Console, AWS CodeCommit Service.

 You can use the AWS CLI or AWS CodeCommit Console to create a
repository(remote) which will be reflected onto your AWS CodeCommit Service to
start off with your project.
 Do a git clone from your development machine, a git clone request will be received
at the CodeCommit service end. This will end up syncing the remote repository
created in step 1 and the local repository that was just cloned.
 Use the local repository on the development machine to modify the code. Run git
add to stage the modified files locally, git commit to commit the files locally and git
push to push the modified changes to CodeCommit. This will, in turn, modify the
remote repository.
 Download changes or modifications that are done by other team members working
on the same repository using git pull. Update the remote repository and send those
updates to your development machine to keep the local repository updated.
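The day-to-day loop described above maps onto a handful of Git commands; this is only illustrative, and the repository and file names are placeholders:

git clone https://git-codecommit.us-east-1.amazonaws.com/v1/repos/TeamRepo
cd TeamRepo
# ... edit application files locally ...
git add app.py
git commit -m "Modify application code"
git push origin master      # publish your changes to CodeCommit
git pull origin master      # pick up changes pushed by teammates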

Case Study
Let’s have a look at a case study to point out my views better.

About the company:

I’m going to talk about this company called Edmunds.com. It’s an online website/app
that lets buyers browse cars, view photos, videos, etc about cars that are out for
sale.

Challenges:

The previously used on-premises SCM had a few issues, as mentioned below:

 Adding new users to the SCM was difficult


 SCM has a huge operational burden
 Difficult and time-consuming to manage and maintain hardware and software
 Repositories lacked backup
 Repositories lacked clustering capabilities
 Service would suffer from downtime

AWS CodeCommit to the rescue:

Edmunds.com started using AWS CodeCommit after researching many
other services. They migrated more than 1,000 repositories and more than 270 users
to AWS. CodeCommit handles hosting, maintenance, backup and scaling for the
company.

1. Fully managed: The company has experienced about a 95 percent reduction in the
time spent on administration and maintenance.
2. Highly Available: Made Git repositories highly available by using Amazon S3 to
store backups across different Availability Zones.
3. Cost Efficient: The company is saving about $450 per user annually.
4. Flexible: Using Amazon CodeCommit made their setup easily scalable in
terms of the number of users, making it very flexible.

Demo: Create a Repository In CodeCommit And Explore Its Features


In this section, I’ll demonstrate the creation of a repository on CodeCommit, create a
branch, commit changes, view the changes and merge repositories. Let’s have a
look.

Step 1: Go to your AWS login page and log into your AWS account. If you do not
have an account, proceed by creating a free account. Once you log-in, you should
see a page as shown below:
Search for CodeCommit and click on that service. Further, click on Create
Repository to create a repository.

You’ll be prompted to add your Repository Name and Description. Add those and
click on Create.

You should get a success message as I got.
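If you prefer the command line over the console, the same repository can be created with the AWS CLI; the name and description below are placeholders:

aws codecommit create-repository \
    --repository-name MyDemoRepo \
    --repository-description "Demo repository for the CodeCommit walkthrough"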


There are two ways of connecting your repository – SSH and HTTPS. In this case,
I’ll be using HTTPS. Now that a repository has been created, go ahead and create
files in the repository. When you create a repository, it’s always empty. You’ll have to
create and add files. Get inside the repository that you’ve created and click
on Create file.

Once you’ve created the file, go ahead and add code to it.
Now that you’ve written your code, you need to commit these changes.
Add Filename, Author name, Email ID, Commit message and click on Commit
Changes.

Now when you navigate to the Repository section by clicking on Repository, you
should see your repository there.

Go ahead and click on your repository, you should see the file that you just created.

What are branches and why are they used?

Now that you’ve created a repository, a file and added the code into the file, let’s
learn how to create branches. Do you guys know why branches are used? In a Dev
or Prod environment, you are not the only one working on these repositories. There
are going to be other people working on different sections of the same repository.
Different people working on the same file can get confusing.

It’s better to use branches in such situations. Branches are basically copies of the
original files which can be allocated to different people. They can make changes,
commit them and push them to CodeCommit. After certain tests, when the code is
verified, it can be merged with the master branch. In the next section, I’ll explain
how to create branches, edit branches, view and compare changes, view commit
history and how to merge these branches with the master branch.
Step 2: To create branches, click on Branches on the extreme right.

And then click on Create branch on the extreme right top corner as shown below:

Add branch name and description and click on Create branch.



You should see something similar to this:

Once you click on the branch, you’ll see that it contains all the files that exist on your
master branch.

Let’s go ahead and make changes to this branch. Click on the file ec2.txt.
Click on Edit as highlighted below.

Make the changes as you wish and commit these changes by adding the Author
name, Email Address, Commit message. Go ahead and click on Commit
changes.

You should get a success message as I got.


Now that you have a master branch and another branch which is a little different than
the master branch, let’s compare them to look for differences. Click on Create Pull
Request.

Select the master branch as you’re comparing the current branch with the master
branch. Click on Compare.

This highlights all the differences in the master and the other branch.
You can also check the commit history. Just click on Commits, next to changes.

Step 3: Suppose you agree with the changes made in this branch and you’d like to
reflect these changes to your master branch, you can merge the two branches.
Add Title and Description.

And click on Create.



You get a success pull request notification.

Click on Merge to finally merge the two branches.

This brings us to the end of the AWS CodeCommit blog. You can integrate this service
with various DevOps tools to make the building process easier. I hope this
blog was helpful. For more such blogs, visit the “Edureka Blog“.

AWS CodeDeploy
Trends have shown a rise in the popularity of DevOps. With AWS being a popular
cloud vendor, many wondered if AWS could incorporate the DevOps approach. So
AWS responded with several services that catered to the mentioned requirement, and
also launched an AWS Certified DevOps Engineer certification in support. In this
article, we will be discussing a popular service for DevOps on AWS known as
AWS CodeDeploy.

This article would precisely focus on the following pointers:

1. Why AWS DevOps?


2. What is AWS CodeDeploy?
3. AWS CodeDeploy Platforms
4. Working of AWS CodeDeploy

Let us get started then.

Why AWS DevOps?


AWS is one of the best cloud service providers and DevOps, on the other hand, is
the ‘need of the hour’ implementation of the software development life-cycle.

Following reasons make AWS DevOps a highly popular amalgamation:

1. AWS CloudFormation

DevOps teams are required to create and release cloud instances and services more
frequently than traditional development teams. AWS CloudFormation enables you to
do just that. ‘Templates’ of AWS resources like EC2 instances, ECS containers, and
S3 storage buckets let you set up the entire stack without you having to bring
everything together by yourself.

2. AWS EC2
AWS EC2 speaks for itself. You can run containers inside EC2 instances. Hence you
can leverage the AWS Security and management features. Another reason why
AWS DevOps is a lethal combo.

3. AWS CloudWatch

AWS CloudWatch lets you track every resource that AWS has to offer. Plus it
makes it very easy to use third-party monitoring tools such as Sumo Logic, Botmetric,
AppDynamics, etc.

4. AWS CodePipeline

AWS CodePipeline is one popular feature from AWS which highly simplifies the way
you manage your CI/CD toolset. It lets you integrate with tools like GitHub, Jenkins,
and CodeDeploy enabling you to visually control the flow of app updates from build
to production.


5. Instances In AWS

AWS frequently creates and adds new instance types to its list, and the level of
customization these instances allow makes it easy to use AWS and DevOps
together.

All these reasons make AWS one of the best platforms for DevOps.

What Is AWS CodeDeploy?


This is what the definition says,
‘CodeDeploy is a deployment service that automates application deployments to Amazon
EC2 instances, on-premises instances, serverless Lambda functions, or Amazon ECS
services.’

With AWS CodeDeploy you can deploy a variety of content and applications. Here is a list of
the same:

 Code
 Serverless AWS Lambda functions
 Web and configuration files
 Executables
 Packages
 Scripts
 Multimedia files

Here are some of the benefits of using AWS CodeDeploy:

 Lets you deploy server, serverless, and container applications


 You can automate deployments
 Minimize downtime
 Stop and rollback
 Get centralized control
 It is easy to adopt
 Supports concurrent deployments

AWS CodeDeploy Platforms


With AWS CodeDeploy, you can deploy code to three different platforms:

1. EC2/On-Premise

Think of it as an instance, a virtual machine or a physical server, which can be on-
premises or on AWS. The applications deployed on top of it can be executable files
or configuration files. It supports both types of traffic management, that is, ‘In-Place’
and ‘Blue/Green’ deployments.

2. AWS Lambda Functions

If your application has an updated version of a Lambda function, you can deploy it
in a serverless environment using AWS Lambda and AWS CodeDeploy.
This arrangement gives you a highly available compute structure.

3. Amazon ECS:

If you wish to deploy containers, you can perform Blue/Green deployment with AWS
ECS and AWS CodeDeploy.

Now let us go ahead and understand how AWS CodeDeploy actually works:

Working of AWS CodeDeploy


So let us try and understand how AWS CodeDeploy works with the help of the image
below:

In order to deploy applications, we need to create or have applications in the first place.

These applications consist of revisions, which can be source code or executable
files, and can be uploaded to a GitHub repository or an AWS S3 bucket.

Then you have a deployment group, which is a set of instances associated
with the application to be deployed. These instances can be added by using a tag or
by using an AWS Auto Scaling group.

Finally, there is the deployment configuration, which holds the AppSpec file that gives
CodeDeploy the specifications on what to deploy and where to deploy the application.
These configuration files (AppSpec) come with a .yml extension.

If I were to put all the blocks above in order, they would answer three questions:

1. What to deploy?
2. How to deploy?
3. Where to deploy?
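To make the three blocks concrete, here is a rough CLI sketch of creating an application, a deployment group keyed off an EC2 tag, and a deployment from S3. All names, the service role ARN, the tag and the bucket/key are assumptions, not values from the original post:

# 1. The application (what to deploy is grouped under this name)
aws deploy create-application --application-name MyWebApp

# 2. The deployment group: instances selected by an EC2 tag (where to deploy)
aws deploy create-deployment-group \
    --application-name MyWebApp \
    --deployment-group-name MyWebApp-DG \
    --ec2-tag-filters Key=Name,Value=WebServer,Type=KEY_AND_VALUE \
    --service-role-arn arn:aws:iam::111122223333:role/CodeDeployServiceRole

# 3. The deployment itself: a revision bundle in S3 containing appspec.yml (how to deploy)
aws deploy create-deployment \
    --application-name MyWebApp \
    --deployment-group-name MyWebApp-DG \
    --s3-location bucket=my-app-bucket,key=MyWebApp.zip,bundleType=zip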

This was the conceptual knowledge that concerns this topic. In case you
wish to explore the actual working of the AWS CodeDeploy service, then check out the
video below:

CodeBuild CodePipeline CodeDeploy CodeCommit in AWS | AWS DevOps


Certification Training | Edureka

This Edureka “CodeBuild CodePipeline CodeDeploy CodeCommit in AWS” video will give
you a thorough and insightful overview of all the concepts related to CI/CD services in AWS.

So this is it folks. This brings us to the end of this article on ‘AWS CodeDeploy’. If
you are looking for a structured training approach then check out our certification
program for AWS Certified DevOps Engineer which comes with instructor-led live
training and real-life project experience. This training will help you understand AWS
DevOps Fundamentals in depth and help you master various concepts that are a
must for a successful AWS DevOps Career.
EC2 Troubleshooting

Insufficient instance capacity


Description
You get the InsufficientInstanceCapacity error when you try to launch a new
instance or restart a stopped instance.

Cause
If you get this error when you try to launch an instance or restart a stopped instance,
AWS does not currently have enough available On-Demand capacity to fulfill your
request.

Solution
To resolve the issue, try the following:

 Wait a few minutes and then submit your request again; capacity can shift frequently.
 Submit a new request with a reduced number of instances. For example, if you're making a
single request to launch 15 instances, try making 3 requests for 5 instances, or 15 requests for
1 instance instead.
 If you're launching an instance, submit a new request without specifying an Availability
Zone.
 If you're launching an instance, submit a new request using a different instance type (which
you can resize at a later stage). For more information, see Change the instance type.
 If you are launching instances into a cluster placement group, you can get an insufficient
capacity error. For more information, see Placement group rules and limitations.
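For example, a run-instances call that omits the placement (Availability Zone) lets EC2 pick any zone that still has capacity; the AMI ID below is a placeholder:

aws ec2 run-instances \
    --image-id ami-0abcdef1234567890 \
    --instance-type t3.micro \
    --count 1
# No --placement AvailabilityZone=... is given, so EC2 chooses a zone with capacity.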
Instance terminates immediately
Description
Your instance goes from the pending state to the terminated state.

Cause
The following are a few reasons why an instance might immediately terminate:

 You've exceeded your EBS volume limits. For more information, see Instance volume limits.
 An EBS snapshot is corrupted.
 The root EBS volume is encrypted and you do not have permissions to access the CMK for
decryption.
 A snapshot specified in the block device mapping for the AMI is encrypted and you do not
have permissions to access the CMK for decryption or you do not have access to the CMK to
encrypt the restored volumes.
 The instance store-backed AMI that you used to launch the instance is missing a required part
(an image.part.xx file).

For more information, get the termination reason using one of the following methods.

To get the termination reason using the Amazon EC2 console

1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.


2. In the navigation pane, choose Instances, and select the instance.
3. On the first tab, find the reason next to State transition reason.
To get the termination reason using the AWS Command Line Interface

1. Use the describe-instances command and specify the instance ID.


aws ec2 describe-instances --instance-ids instance_id
2. Review the JSON response returned by the command and note the values in
the StateReason response element.

The following code block shows an example of a StateReason response element.


"StateReason": {
"Message": "Client.VolumeLimitExceeded: Volume limit exceeded",
"Code": "Server.InternalError"
},

To get the termination reason using AWS CloudTrail


For more information, see Viewing events with CloudTrail event history in the AWS
CloudTrail User Guide.

Solution
Depending on the termination reason, take one of the following actions:

 Client.VolumeLimitExceeded: Volume limit exceeded — Delete unused volumes. You


can submit a request to increase your volume limit.
 Client.InternalError: Client error on launch — Ensure that you have the permissions
required to access the CMKs used to decrypt and encrypt volumes. For more information,
see Using key policies in AWS KMS in the AWS Key Management Service Developer Guide.

Load Balancer

Clients cannot connect to an internet-facing load balancer
If the load balancer is not responding to requests, check for the following issues:

Your internet-facing load balancer is attached to a private subnet


You must specify public subnets for your load balancer. A public subnet has a route
to the Internet Gateway for your virtual private cloud (VPC).
A security group or network ACL does not allow traffic
The security group for the load balancer and any network ACLs for the load
balancer subnets must allow inbound traffic from the clients and outbound traffic to
the clients on the listener ports.
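A quick way to check both points from the CLI is sketched below; the load balancer name and security group ID are placeholders. The Scheme, subnets and inbound rules in the output tell you whether the balancer is internet-facing, in public subnets, and reachable on its listener ports:

# Confirm the scheme (internet-facing) and the subnets attached to the load balancer
aws elbv2 describe-load-balancers --names my-alb

# Inspect the inbound rules of the load balancer's security group
aws ec2 describe-security-groups --group-ids sg-0123456789abcdef0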

VPC

VPC (Virtual Private Cloud) is an AWS service that’s getting more recognition in the
technology job market nowadays. Knowing the essentials of VPC can give an upper hand to job
hunters who aspire to an AWS career. Our role is to make you ready for that. So here we
bring the best AWS VPC interview questions that usually come up in AWS interviews. Before that,
let’s go through some basics about this technology that a beginner needs to know while
pursuing AWS training.

As most of you know, AWS is an Amazon subsidiary that provides access to cloud computing
services based on user demand. Users have to pay on a subscription basis. Amazon provides
different services to seamlessly blend your local resources with the cloud. AWS S3 (Simple
Storage Service) is an AWS service that allows object storage through different web service
interfaces like SOAP, BitTorrent, etc. Knowing how to answer top AWS interview questions can
help you to gain an upper edge over candidates who wish to be a part of the AWS teams.

If S3 is for storage, then there’s Amazon EC2 (Elastic Compute Cloud) for the compute domain
in AWS. It allows its users to access instances or virtual machines within AWS infrastructure.
EC2 is generally considered as the pioneer in modern cloud computing technologies. For
developers, EC2 provides scalable compute capacity. If you are one who wants to work in a fast-
evolving computing environment aspiring to solve hard problems along with smart people, then
practicing AWS EC2 interview questions will be a decisive step in your career.

Finally, VPC; It is a service that allows AWS customers to access their services in a customized
private network. We can find this service under Networking & Content Delivery menu of AWS
dashboard. This private cloud from Amazon is known to be one of the most secure private cloud
services available now. Here, users will have absolute control of their private cloud. They can
choose their own IP range, can configure network gateways and create subnets. It’s best used in
conjunction with EC2.

Also Read: Top 30 AWS Cloud Support Engineer Interview Questions

Now you’d have understood at least some of the basic services AWS offers. This
understanding helps not only you but also us, as we want to suggest some of the top AWS
VPC interview questions and answers. We’re not claiming that this guide is all-inclusive, but it’ll
definitely help you out if you are approaching this career option seriously. So, let’s get started.

Top 20 AWS VPC interview questions


Below we’ve detailed a list of the 20 most popular AWS VPC interview questions. First, go
through the title of each question and then get to the heart of each answer one by one. The answers
have been simplified as much as possible.
1. What is the actual definition of the term “VPC”?
Answer: Well, VPC is a private network space within the Amazon cloud that enables you to
launch AWS resources. It’s the actual networking layer of Amazon EC2, about which we have
already discussed. Each private network you create on the cloud will be logically separated from
other virtual networks in the cloud.

Although the structure of a VPC looks similar to a standard network that you’d operate in a data
center, a VPC has the benefits of the scalable infrastructure of AWS. Another major
advantage of VPC is that it is fully customizable. You can create subnets, set up route tables,
configure network gateways, set up network access control lists, choose an IP address range, and
much more in a Virtual Private Cloud.

2. What are the components of Amazon VPC?


Answer: The foremost element in the Amazon VPC architecture is the VPC network itself. It’s a
logically separated part of the AWS cloud. It’s possible to define your Virtual Private Cloud’s IP
address range from the range you’ve chosen. The second element is the Internet Gateway, which is the
connecting point between your VPC and the public internet. Subnets are the functional parts of
your private cloud’s IP address range.

NAT Gateways are used to connect instances in your private subnet with the internet or
other AWS services. Customer Gateways are your side of a VPN connection in AWS, while
Virtual Private Gateways are the Amazon VPC side of the VPN connection. This type of question lies
among the general or basic AWS VPC interview questions. Whether you are a fresher or have
some experience, you may come across such questions, so get prepared with the answer.

Components of Amazon VPC with Brief description:

Element: Brief description

Virtual Private Cloud (VPC): A logically isolated virtual network in the AWS cloud. You define a VPC’s IP address space from a range you select.

Subnet: A segment of a VPC’s IP address range where you can place groups of isolated resources.

Internet Gateway: The Amazon VPC side of a connection to the public Internet.

NAT Gateway: A highly available, managed Network Address Translation (NAT) service for your resources in a private subnet to access the Internet.

Hardware VPN Connection: A hardware-based VPN connection between your Amazon VPC and your datacenter, home network, or co-location facility.

Virtual Private Gateway: The Amazon VPC side of a VPN connection. The Customer Gateway is the customer side of a VPN connection.

Peering Connection: A peering connection enables you to route traffic via private IP addresses between two peered VPCs.

VPC Endpoint: Enables Amazon S3 access from within your VPC without using an Internet gateway or NAT, and allows you to control the access using VPC endpoint policies.

3. What are Internet Gateways in VPC?


Answer: An Internet Gateway is a highly available, horizontally scaled VPC component.
The gateway establishes a connection between your Amazon VPC network and the internet.
There can be only one gateway associated with each VPC. Internet Gateways are the VPC components that
provide NAT (Network Address Translation) for instances which have already been assigned public IP
addresses. For internet-routable traffic, such a gateway provides a target in your VPC
route tables.

Also Read: How to Build Virtual Private Cloud (VPC) in AWS

4. What is a NAT Device?


Answer: A NAT device in your VPC enables instances in a private subnet to initiate
outbound IPv4 traffic to other AWS services or the internet while preventing inbound traffic initiated on
the internet. When traffic goes out to the internet, the instance’s IP address is replaced by the NAT device’s
address, and when the response comes back to the instances, the device translates the addresses
back to the instances’ private IP addresses. AWS has two types of NAT devices: the NAT instance
and the NAT gateway. Linux AMIs are configured to run as NAT instances. NAT devices do not support
IPv6 traffic.

5. What is a subnet in VPC?


Answer: According to the AWS documentation, subnets are nothing but a range of IP addresses in
your VPC. It is possible to launch AWS resources into your desired subnet. For resources
that need internet access, you can use a public subnet, whereas for resources that don’t need the
internet, a private subnet is sufficient.

The default subnets in your VPC have a netmask value of /20, which gives up to 4096
addresses per subnet. A subnet is always confined within a single Availability Zone, whereas a
VPC can span multiple zones.

Want to become an AWS Certified Architect? Start your preparation now for the AWS Certified
Solutions Architect Associate exam.

6. What is the default VPC? Explain its advantages.


Answer: The questions based on default VPC are among the top AWS VPC interview
questions. It’s a logically isolated virtual network that gets created automatically in AWS cloud
for an account when the user makes use of Amazon EC2 resources for the first time.

You can alter the components of the default VPC as per your need. There are several advantages
of a default VPC. Here, a user can access high-level features such as different IPs, network
interfaces without creating a separate VPC or launching instances.

7. What is ELB (Elastic Load Balancing) and how does it affect VPC?
Answer: As the name implies, ELB is a load-balancing service for AWS deployments. A load
balancer divides the amount of work a computer has to do among multiple computers so that it
gets done faster. In the same way, ELB distributes incoming application traffic across multiple
targets like EC2 instances.

There are 3 types of ELBs to ensure scalability, availability, and security, making your
applications fault tolerant. These are the classic, network, and application load
balancers. Network and application load balancers can be used in conjunction with a VPC and
can route traffic to targets within VPCs.
Also, learn about Amazon Route 53 and Route 53 Pricing.

8. What do you know about VPC Peering?


Answer: You may be asked about AWS VPC peering bandwidth in an AWS VPC interview.
VPC peering is simply a networking connection between two VPCs. It’s possible to
create a VPC peering connection between your own VPCs, or between your VPC and a VPC in
another AWS account within the same region. AWS does not need to break the existing VPC
infrastructure to enable VPC peering, and no special hardware is required for this purpose. It does
not create a VPN connection or network gateway within AWS.

The main intention behind such a connection is to facilitate data transfer across multiple VPCs
spanning different AWS accounts. This type of peering is a one-to-one relationship, wherein
transitive connections are not supported. And while talking about AWS VPC peering bandwidth,
there are no bandwidth limitations for peering connections.
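For illustration, a peering connection is requested from one VPC and accepted from the other side; the VPC and peering connection IDs below are placeholders:

# Request a peering connection between two VPCs
aws ec2 create-vpc-peering-connection \
    --vpc-id vpc-11111111 \
    --peer-vpc-id vpc-22222222

# The owner of the peer VPC then accepts the request
aws ec2 accept-vpc-peering-connection \
    --vpc-peering-connection-id pcx-0123456789abcdef0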

Know More: https://www.whizlabs.com/blog/vpc-peering-basics/

9. What are the differences between Private, Public & Elastic IP Addresses?
Answer: The questions based on Elastic Network Interfaces are among the most common
AWS VPC interview questions.

As the name implies, private IP addresses are IP addresses that aren’t accessible over the internet.
If you want to communicate between instances in the same network, private IPs are used. At an
instance launching time, a private IP from subnet’s IP address range and a DNS hostname is
assigned to eth0 of the instance (default network interface).

A private IP address remains associated with the network interface and will be released only when
the instance is terminated (not when the instance is stopped or restarted). On the contrary, a
public IP address is easily accessible over the internet.

When you launch a VPC instance, one public IP is automatically assigned to the instance; this IP
isn’t associated with your AWS account. Every time you stop and start the instance, AWS
allocates a new public IP to it. The main difference between a public and an elastic IP is
that an elastic IP is persistent: it’ll be associated with your AWS account until you release it.
Anyhow, you can detach an elastic IP from one instance and attach the same IP to a different
instance. An elastic IP is also accessible over the internet.
10. Is there any limit to the number of VPCs, subnets,
gateways, VPNs that I can create?
Answer: Yes, there is definitely a limit. You can create 5 VPCs per region. If you want
to increase this limit, you have to increase the number of internet gateways by the same number.
Per VPC, 200 subnets are allowed, and 5 elastic IP addresses are allowed per region. The number
of Internet, VPN and NAT gateways per region is also set to 5.

However, up to 50 customer gateways are allowed per region, and one can create 50 VPN
connections per region. It is highly recommended to cover questions based on connectivity while
going through the top AWS VPC interview questions.

Read Now: Amazon Braket

11. Can you illustrate what is CIDR Routing in VPC?


Answer: Questions based on IP addressing are common among the frequently asked AWS
VPC interview questions. This CIDR question can be answered in the following
manner. Classless Inter-Domain Routing (CIDR) is a set of Internet Protocol (IP) standards that are
used to allocate IP addresses for networks and individual devices. With CIDR, a single IP address
can be used to designate many unique IP addresses.

Generally, a CIDR IP looks like a normal IP address except that it is followed by a slash and a
number; this part is called the IP network prefix. In a VPC, the CIDR block size can range
from /16 to /28 in the case of IPv4. When you’re creating a VPC, you have to specify a range
of IP addresses in the form of a CIDR block, such as 10.0.0.0/16. This CIDR is the primary CIDR
block of your VPC.

CIDR offers the benefits of effective management of the available IP address space and reduces the
number of routing table entries. If you are still wondering what CIDR stands for, learn more!
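As a small illustration (the CIDR ranges and VPC ID below are only examples), creating a VPC with a /16 primary block and carving a /24 subnet out of it looks like this; the /16 leaves room for 65,536 addresses, and each /24 subnet provides 256, minus the five addresses AWS reserves per subnet:

# Create a VPC whose primary CIDR block is 10.0.0.0/16
aws ec2 create-vpc --cidr-block 10.0.0.0/16

# Carve a /24 subnet (10.0.1.0 - 10.0.1.255) out of that VPC
aws ec2 create-subnet --vpc-id vpc-11111111 --cidr-block 10.0.1.0/24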

12. What are Security Groups in VPC?


Answer: In VPC, a security group’s function is to manage the traffic for the instances.
Instances can be single in number or many. Actually, it does act as a virtual firewall that can
control inbound and outbound traffic for different EC2 instances. You can manually add rules to
each security group to control the traffic within the associated instances.

In the AWS console, security groups can be found in both the VPC and EC2 sections. By default,
all security groups allow all outbound traffic, and you define rules to allow inbound traffic. Note
that you can only create "allow" rules; security groups have no explicit "deny" rules. It is also
possible to change the rules of a security group at any time, and the changes take effect
immediately. You may come across questions on security in an AWS VPC interview, so it is
included in this list of AWS VPC interview questions; an illustrative sketch follows.
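As an illustrative sketch only (the group name, VPC ID and port below are placeholders, and boto3 with configured credentials is assumed), creating a security group and adding an "allow" ingress rule looks roughly like this:

import boto3

ec2 = boto3.client("ec2")

# Create a security group inside a VPC (the VPC ID is a placeholder).
sg = ec2.create_security_group(
    GroupName="web-sg",
    Description="Allow HTTPS from anywhere",
    VpcId="vpc-0123456789abcdef0",
)

# Security groups only accept "allow" rules; this one allows inbound TCP 443.
ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)
# All outbound traffic is already allowed by the default egress rule.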


13. What do you mean by Network ACLs (Access Control Lists) in VPC?
Answer: A network ACL performs a similar function to a security group in a VPC, i.e. controlling
inbound and outbound traffic. The main difference between a network ACL and a security group
is that the latter acts as a firewall for associated EC2 instances, whereas an ACL acts as a firewall
for associated subnets. Your VPC generates a default ACL automatically, and it is modifiable.
Unlike a security group, this default network ACL allows all inbound and outbound traffic by
default. A network ACL can be associated with multiple subnets, but a subnet can be associated
with only one network ACL at a time.

You can also create your own custom ACL and associate it with a subnet. Such an ACL denies all
inbound and outbound traffic until you add rules to it, as in the sketch below.
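A minimal sketch of that workflow with boto3 (the VPC ID is a placeholder; credentials are assumed to be configured):

import boto3

ec2 = boto3.client("ec2")

# Create a custom network ACL; it denies all traffic until rules are added.
acl = ec2.create_network_acl(VpcId="vpc-0123456789abcdef0")
acl_id = acl["NetworkAcl"]["NetworkAclId"]

# Add an inbound rule allowing TCP 443 from anywhere. Rules are evaluated
# in order of RuleNumber; anything not explicitly allowed stays denied.
ec2.create_network_acl_entry(
    NetworkAclId=acl_id,
    RuleNumber=100,
    Protocol="6",                        # protocol number 6 = TCP
    RuleAction="allow",
    Egress=False,                        # False = inbound rule
    CidrBlock="0.0.0.0/0",
    PortRange={"From": 443, "To": 443},
)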

14. What is stateful and stateless filtering?


Answer: Stateful filtering tracks the origin of a request and automatically allows the reply back to
the originating host. Stateless filtering, on the other hand, only examines the source and
destination addresses, ignoring whether a packet is a new request or a reply to a request.

In a VPC, security groups carry out stateful filtering whereas network ACLs perform stateless
filtering. For example, if a security group allows inbound traffic on port 80, the response traffic is
allowed out automatically, while a network ACL needs an explicit outbound rule for the ephemeral
response ports. Filtering questions are commonly asked among popular AWS VPC interview
questions, so prepare your answer.


15. What are the functions of an Amazon VPC router?


Answer: The VPC router allows Amazon EC2 instances within one subnet to communicate with
Amazon EC2 instances in other subnets of the same VPC. It also routes traffic between subnets
and the virtual private gateways, internet gateways and other gateways attached to the VPC.
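You don’t manage the VPC router directly, but its implicit routing appears as the "local" route in every route table. A small, illustrative sketch (the VPC ID is a placeholder) for inspecting it with boto3:

import boto3

ec2 = boto3.client("ec2")

# Fetch the route tables belonging to one VPC (the VPC ID is a placeholder).
response = ec2.describe_route_tables(
    Filters=[{"Name": "vpc-id", "Values": ["vpc-0123456789abcdef0"]}]
)

for table in response["RouteTables"]:
    for route in table["Routes"]:
        # The route whose target is "local" is what lets subnets in the
        # same VPC reach each other via the VPC router.
        print(route.get("DestinationCidrBlock"), "->", route.get("GatewayId"))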


16. How much does Amazon charge you for sharing their cloud space with you?
Answer: For a VPN connection to your VPC, Amazon charges about $0.05 per connection-hour.
You can terminate your VPN connection through the AWS console if you no longer want to be
charged for it.

AWS NAT gateway pricing (internet gateways themselves carry no hourly charge) varies across
geographic locations: you will be charged roughly $0.045 up to $0.054 per gateway-hour, plus a
charge per GB of data processed, depending on your region. Similarly, in the case of VPC peering
pricing, the rates depend on the location of the VPCs in the peering connection. If both are in the
same region, the charge for transferring data over the peering connection is the same as
transferring data across Availability Zones within that region.

If they are placed in different regions, inter-region data transfer rates apply. You may come
across at least one question based on VPC peering pricing, so it is covered here among the most
common AWS VPC interview questions and answers.

17. What is PrivateLink from AWS?


Answer: AWS PrivateLink provides highly available and scalable connectivity that lets customers
access services while keeping the traffic within the AWS network. It delivers private connections
between VPCs, AWS services and on-premises applications, securely on the Amazon network.
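PrivateLink is typically consumed through interface VPC endpoints. A hedged sketch (the service name and all IDs below are placeholders, not values from this article) of creating one with boto3:

import boto3

ec2 = boto3.client("ec2")

# Create an interface endpoint that privately connects this VPC to a service
# exposed over PrivateLink. Every ID and the service name are placeholders.
endpoint = ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.kms",
    SubnetIds=["subnet-0123456789abcdef0"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
)
print(endpoint["VpcEndpoint"]["VpcEndpointId"])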

18. What is ClassicLink in VPC?


Answer: If you want to connect Amazon EC2-Classic instances to a VPC, you have to use
ClassicLink. It works only within the same region and makes use of private IP addresses. Its
operation is simple: you enable ClassicLink for your VPC and associate a security group from the
VPC with the EC2-Classic instance. (Note that EC2-Classic and ClassicLink have since been
retired by AWS.)

Questions of this type are additional AWS VPC interview questions that you shouldn’t miss, so
prepare yourself with the answer.

19. What is so special about VPC that makes it stand out from other private clouds?
Answer: There is no need for dedicated hardware, physical data centers or separate virtual
private networks if you want a private network within the cloud; AWS VPC provides it. The
advanced security features of VPC make it highly resistant to privacy and security threats.

20. What is a VPS?


Answer: Beginners attempting AWS VPC interview questions for the first time often get confused
by this question, since the two terms look similar.

Actually, a VPS or Virtual Private Server is the kind of hosting server offered by web hosting
companies like BlueHost and GoDaddy (these companies also provide shared hosting services,
wherein one server is shared by several users). Here, a single host is divided into multiple virtual
units, each functioning independently. Each of these units is a virtual private server that can work
without depending on the others, and you get root access to your own virtual server.

In the case of a VPC, its functions are similar to those of a VPS, but its servers do not have to be
placed in a single physical location.

Linux interview questions


1. What is Linux?

Linux is an operating system based on UNIX and was first introduced by Linus
Torvalds. It is based on the Linux Kernel and can run on different hardware
platforms manufactured by Intel, MIPS, HP, IBM, SPARC, and Motorola. Another
popular element in Linux is its mascot, a penguin figure named Tux.

2. What is the difference between Linux and Unix?

The main differences between Linux and UNIX are as follows:

Parameter | Linux | Unix

Price | Both free distributions and paid distributions are available. | Different levels of UNIX have different cost structures.

Target User | Everyone (home user, developer, etc.) | Mainly internet servers and mainframes.

File System Support | Ext2, Ext3, Ext4, Jfs, ReiserFS, Xfs, Btrfs, FAT, FAT32, NTFS | jfs, gpfs, hfs, hfs+, ufs

GUI | KDE and Gnome | Common Desktop Environment

Viruses listed | 60-100 | 80-120

Bug Fix Speed | Faster, because Linux is community driven | Slow

Portability | Yes | No

Examples | Ubuntu, Fedora, Red Hat, Kali Linux, Debian, Archlinux, Android, etc. | OS X, Solaris, All Linux

3) What is BASH?

BASH is short for Bourne Again SHell. It was written by Brian Fox for the GNU Project as a free
replacement for the original Bourne Shell (written by Steve Bourne and represented by /bin/sh).
It combines all the features of the original Bourne Shell, plus additional functions that make it
easier and more convenient to use. It has since been adopted as the default shell for most
systems running Linux.

4) What is the Linux Kernel?

The Linux Kernel is low-level system software whose main role is to manage hardware resources
for the user. It also provides an interface for user-level interaction.

5) What is LILO?

LILO is a boot loader for Linux. It is used mainly to load the Linux operating
system into main memory so that it can begin its operations.

6) What is a swap space?

Swap space is a certain amount of disk space used by Linux to temporarily hold the memory of
running programs. It is used when RAM does not have enough space to hold all the programs
that are executing.

7) What is the advantage of open source?

Open source allows you to distribute your software, including its source code, freely to anyone
who is interested. People can then add features, debug and correct errors in the source code,
make it run better, and redistribute the enhanced code freely again. This eventually benefits
everyone in the community.

8 ) What are the basic components of Linux?

Just like any other typical operating system, Linux has all of these components: a kernel, shells
and GUIs, system utilities, and application programs. What makes Linux advantageous over other
operating systems is that every aspect comes with additional features and all the code for these
is freely downloadable.

9) Does it help for a Linux system to have multiple desktop environments installed?

In general, one desktop environment, like KDE or GNOME, is good enough to operate without
issues. It is all a matter of preference for the user, although the system allows switching from one
environment to another. Some programs will work in one environment and not in another, so that
can also be considered a factor in selecting which environment to use.

10) What is the basic difference between BASH and DOS?

The key differences between the BASH and DOS consoles lie in 3 areas:

- BASH commands are case sensitive while DOS commands are not;
- Under BASH, the / character is a directory separator and \ acts as an escape character. Under
DOS, / serves as a command argument delimiter and \ is the directory separator;
- DOS follows a convention for naming files, which is an 8-character file name followed by a dot
and 3 characters for the extension. BASH follows no such convention.

11) What is the importance of the GNU project?

This so-called free software movement offers several advantages, such as the freedom to run
programs for any purpose and the freedom to study and modify a program to suit your needs. It
also allows you to redistribute copies of software to other people, as well as the freedom to
improve software and have the improved version released to the public.

12) Describe the root account.

The root account is like a systems administrator account and allows you full
control of the system. Here you can create and maintain user accounts,
assigning different permissions for each account. It is the default account every
time you install Linux.

13. What are environmental variables?

Environmental variables are global settings that control the shell's function as
well as that of other Linux programs. Another common term for environmental
variables is global shell variables.
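As a small illustration (the variable name is arbitrary), a program can read the variables exported by the shell and set new ones for itself and its child processes; in Python they are exposed through os.environ:

import os

# Read a variable the login shell normally exports for every session.
print(os.environ.get("HOME"))

# Set a variable for this process and any child processes it starts.
os.environ["MY_APP_MODE"] = "production"
print(os.environ["MY_APP_MODE"])
# Note: this does not change the environment of the parent shell.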

Soft and Hard links in Unix/Linux


A link in UNIX is a pointer to a file. Like pointers in programming languages, links in UNIX point
to a file or a directory. Creating a link is a kind of shortcut to access a file, and links allow more
than one file name to refer to the same underlying file.

There are two types of links:


1. Soft Link or Symbolic links
2. Hard Links

Hard Links
 Each hard-linked file is assigned the same inode value as the original, therefore they
reference the same physical file location. Hard links are flexible and remain linked even if the
original or linked files are moved throughout the file system, although hard links are unable to
cross different file systems.
 The ls -l command shows all the links, with the link column showing the number of links.
 Hard links contain the actual file contents.
 Removing any one link just reduces the link count, but doesn’t affect the other links.
 Even if we change the filename of the original file, the hard links still work properly.
Soft Links

 A soft link is similar to the file shortcut feature used in Windows operating systems. Each
soft-linked file contains a separate inode value that points to the original file. As with hard links,
any change to the data in either file is reflected in the other. Soft links can cross different file
systems; however, if the original file is deleted or moved, the soft-linked file no longer works
correctly (it becomes a dangling link).
 The ls -l command shows all links with an 'l' as the first character of the permissions column,
and the link points to the original file.
 A soft link contains the path of the original file, not its contents.
 Removing a soft link doesn’t affect anything, but if the original file is removed, the link
becomes a "dangling" link that points to a nonexistent file.
 A soft link can link to a directory.
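A short, illustrative sketch (file names are arbitrary) that demonstrates the inode behaviour described above, using only Python’s standard library:

import os

# Create a small file, then a hard link and a soft (symbolic) link to it.
with open("original.txt", "w") as f:
    f.write("hello\n")

os.link("original.txt", "hard_link.txt")      # hard link: shares the same inode
os.symlink("original.txt", "soft_link.txt")   # soft link: own inode, stores the path

print(os.stat("original.txt").st_ino == os.stat("hard_link.txt").st_ino)   # True
print(os.lstat("soft_link.txt").st_ino == os.stat("original.txt").st_ino)  # False

# Removing the original leaves the hard link usable but dangles the soft link.
os.remove("original.txt")
print(open("hard_link.txt").read())           # still prints "hello"
print(os.path.exists("soft_link.txt"))        # False: the symlink now dangles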

Cron Jobs
Cron allows Linux and Unix users to run commands or scripts at a given date and time, and to
schedule scripts to be executed periodically. Cron is one of the most useful tools in Linux and
UNIX-like operating systems. It is usually used for sysadmin jobs such as backups or cleaning the
/tmp/ directory. For instance, a crontab entry like 0 2 * * * /path/to/backup.sh (the path here is
just an illustration) runs the given script at 2:00 AM every day.
