AWS Interview Questions


Q1. What is auto-scaling?

Ans. Auto-scaling is a feature of AWS which allows you to configure and
automatically provision and spin up new instances without the need for
your intervention.

Q2. What are the different types of cloud services?

Ans. Software as a Service (SaaS), Data as a Service (DaaS), Platform as a
Service (PaaS), and Infrastructure as a Service (IaaS).

Q3. What is Amazon S3?


Ans. Amazon S3 (Simple Storage Service) is an object storage service with a
simple web service interface to store and retrieve any amount of data
from anywhere on the web.

Q4. What is SimpleDB?

Ans. It is a structured data store that supports indexing and data queries,
and it works alongside both EC2 and S3.

Q5. What is an AMI?


Ans. AMI (Amazon Machine Image) is a snapshot of the root filesystem.

Q6. What is the type of architecture where half of the workload is
on the public cloud while the other half is on local, on-premises
infrastructure?

Ans. Hybrid cloud architecture.

Q7. Can I vertically scale an Amazon instance? How do you do it?

Ans. Yes. Spin up a new, larger instance than the one you are running,
then pause that instance and detach the root EBS volume from this server
and discard it. After that, stop the live instance and detach its root volume.
Note the unique device ID, attach that root volume to the new
server, and start it again. This way you will have scaled vertically.
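
On EBS-backed instances the same result can often be achieved more
simply by stopping the instance and changing its type. A minimal boto3
sketch of that alternative (the instance ID and target type below are
placeholders):

    import boto3

    ec2 = boto3.client("ec2")
    instance_id = "i-0123456789abcdef0"  # placeholder

    # Stop the instance and wait until it is fully stopped
    ec2.stop_instances(InstanceIds=[instance_id])
    ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

    # Change the instance type (vertical scaling), then start it again
    ec2.modify_instance_attribute(InstanceId=instance_id,
                                  InstanceType={"Value": "m5.xlarge"})
    ec2.start_instances(InstanceIds=[instance_id])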

Q8. How can you send a request to Amazon S3?

Ans. You can send requests by using the REST API or the AWS SDK
wrapper libraries that wrap the underlying Amazon S3 REST API.
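
For example, a minimal boto3 sketch (the bucket and key names are
placeholders; the SDK signs the underlying REST calls for you):

    import boto3

    s3 = boto3.client("s3")

    # Upload an object, then fetch it back over the S3 REST API
    s3.put_object(Bucket="my-example-bucket", Key="notes.txt", Body=b"hello")
    obj = s3.get_object(Bucket="my-example-bucket", Key="notes.txt")
    print(obj["Body"].read())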

Q9. How many buckets can be created in AWS by default?

Ans. By default, 100 buckets can be created.

Q10. Should encryption be used for S3?

Ans. Encryption should be considered for sensitive data, as S3 is a
proprietary technology.
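
As a sketch, server-side encryption can be requested per object through
the SDK (the bucket and key names are placeholders):

    import boto3

    s3 = boto3.client("s3")

    # Ask S3 to encrypt the object at rest with SSE-S3 (AES-256)
    s3.put_object(Bucket="my-example-bucket", Key="secret.txt",
                  Body=b"sensitive data", ServerSideEncryption="AES256")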

Q11. What are the various AMI design options?

Ans. Fully Baked AMI, JeOS (just enough operating system) AMI, and
Hybrid AMI.

Q12. What is Geo Restriction in CloudFront?

Ans. Geo restriction, also known as geoblocking, is used to prevent users
in specific geographic locations from accessing content that you’re
distributing through a CloudFront web distribution.

Q13. Explain what T2 instances are.

Ans. T2 instances are designed to provide a moderate baseline
performance and the capability to burst to higher performance as
required by the workload.

Q14. What is AWS Lambda?

Ans. AWS Lambda is a compute service that lets you run code in the
AWS Cloud without provisioning or managing servers.
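
A minimal sketch of a Python Lambda handler (the event shape shown is
illustrative):

    import json

    def handler(event, context):
        # AWS invokes this function on demand; there is no server to manage
        name = event.get("name", "world")
        return {"statusCode": 200,
                "body": json.dumps({"message": f"Hello, {name}!"})}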
Q15. What is a Serverless application in AWS?

Ans. The AWS Serverless Application Model (AWS SAM) extends AWS
CloudFormation to provide a simplified way of defining the Amazon API
Gateway APIs, AWS Lambda functions, and Amazon DynamoDB tables
needed by your serverless application.

Q16. What is the use of Amazon ElastiCache?


Ans. Amazon ElastiCache is a web service that makes it easy to deploy,
operate, and scale an in-memory data store or cache in the cloud.

Q17. Explain how the buffer is used in Amazon web services?

Ans. The buffer is used to make the system more robust to manage
traffic or load by synchronizing different components.

Q18. Differentiate between stopping and terminating an instance


Ans. When an instance is stopped, the instance performs a normal
shutdown and then transitions to a stopped state.
When an instance is terminated, the instance performs a normal
shutdown, then the attached Amazon EBS volumes are deleted unless
the volume’s deleteOnTermination attribute is set to false.
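
Both operations are single API calls; a minimal boto3 sketch (the
instance ID is a placeholder):

    import boto3

    ec2 = boto3.client("ec2")
    instance_id = "i-0123456789abcdef0"  # placeholder

    # Stop: EBS volumes stay attached; the instance can be started later
    ec2.stop_instances(InstanceIds=[instance_id])

    # Terminate: volumes with deleteOnTermination=true are deleted with it
    ec2.terminate_instances(InstanceIds=[instance_id])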

Q19. Is it possible to change the private IP addresses of an EC2
instance while it is running/stopped in a VPC?

Ans. The primary private IP address cannot be changed. Secondary
private addresses can be unassigned, assigned, or moved between
interfaces or instances at any point.
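
For example, a secondary private IP can be added through the API; a
minimal boto3 sketch (the network interface ID is a placeholder):

    import boto3

    ec2 = boto3.client("ec2")

    # Add one secondary private IP to an interface; the primary private
    # IP of the interface cannot be changed
    ec2.assign_private_ip_addresses(
        NetworkInterfaceId="eni-0123456789abcdef0",
        SecondaryPrivateIpAddressCount=1)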

Q20. Give one instance where you would prefer Provisioned IOPS
over Standard RDS storage?
Ans. When you have batch-oriented workloads.
These are some of the popular questions asked in AWS architect
interviews. Always be prepared to answer all types of questions —
technical skills, interpersonal, leadership or methodology. If you are
someone who has recently started your career in cloud computing, you
can always get certified in one of the technical courses like AWS
Architect to get the requisite knowledge and skills.

1. Compare AWS and OpenStack

Criteria                           AWS                          OpenStack

License                            Amazon proprietary           Open source

Operating system                   Whatever AMIs AWS provides   Whatever the cloud
                                                                administrator provides

Performing repeatable operations   Through templates            Through text files

2. What is AWS?

AWS (Amazon Web Services) is a platform that provides secure cloud
services, database storage, compute power, content delivery, and other
services to help businesses scale and grow.
3. What is the importance of buffer in Amazon Web Services?
An Elastic Load Balancer ensures that the incoming traffic is distributed
optimally across various AWS instances. A buffer synchronizes different
components and makes the arrangement more elastic to a burst of load
or traffic. Components are otherwise prone to receiving and processing
requests at unstable rates; the buffer creates equilibrium between the
various components and makes them work at the same rate to supply
more rapid services.
4. What is the way to secure data for carrying in the cloud?
Ensure that no one can intercept the data in the cloud while it is moving
from one point to another, and also that there is no leakage of the
security keys from any of the storage locations in the cloud. Segregating
your information from other companies’ information and then encrypting
it by means of approved methods is one of the options.
Amazon Web Services offers you a secure way of carrying data in the
cloud.
5. Name the several layers of Cloud Computing.
Here is the list of layers of the cloud computing

• PaaS – Platform as a Service
• IaaS – Infrastructure as a Service
• SaaS – Software as a Service

6. What are the components involved in Amazon Web Services?


There are 4 components involved, as below.
Amazon S3: an object store used to hold the key data consumed while
creating the cloud architecture, as well as the data produced as the
result for a given key.
Amazon EC2 instance: helpful to run a large distributed system on a
Hadoop cluster. Automatic parallelization and job scheduling can be
achieved by this component.
Amazon SQS: this component acts as a mediator between different
controllers, and is also used to buffer the requests received by other
components of Amazon.
Amazon SimpleDB: helps in storing the intermediate state logs and the
tasks executed by the consumers.
7. Distinguish between scalability and flexibility
Scalability is the ability of a system to increase its workload on its
current hardware resources to handle variability in demand. Flexibility is
the capability of a system to increase its workload on its current and
additional hardware resources, enabling the business to meet demand
without putting in infrastructure at all. AWS has several configuration
management solutions for AWS scalability, flexibility, availability and
management.

8. Name the various layers of the cloud architecture


There are 5 layers, listed below:

• CC – Cluster Controller
• SC – Storage Controller
• CLC – Cloud Controller
• Walrus
• NC – Node Controller

9. Define auto-scaling.
Auto-scaling is one of the remarkable features of AWS: it permits you to
configure and automatically provision and spin up fresh instances
without the need for your intervention. This is achieved by setting
thresholds and metrics to watch. If those thresholds are crossed, a fresh
instance of your choice will be configured, spun up, and rolled into the
load balancer pool.

10. Which automation gears can help with spinup services?


The API tools can be used for spinup services and also for the written
scripts. Those scripts could be coded in Perl, bash or other languages of
your preference. There is one more option: configuration management
and provisioning tools such as Puppet or its improved successor,
Opscode Chef. A tool called Scalr can also be used, and finally we can go
with a managed solution such as RightScale.

11. Is it possible to scale an Amazon instance vertically? How?


Yes. This is an incredible characteristic of cloud virtualization and AWS.
Spin up a new, larger instance than the one you are currently running.
Pause that instance and detach the root EBS volume from this server and
discard it. Next, stop your live instance and detach its root volume.
Note down the distinctive device ID, attach that root volume to your new
server, and start it again. This is the way to scale vertically in place.

12. How do the start, stop, and terminate processes work?
Starting and stopping an instance: If an instance gets stopped, it
performs a normal shutdown and then transitions to a stopped state. You
can start the instance again later, since all of its Amazon EBS volumes
remain attached. While an instance is in the stopped state, you are not
charged for additional instance hours.
Terminating an instance: If an instance gets terminated, it performs a
normal shutdown, and the attached EBS volumes are removed unless the
volume’s deleteOnTermination attribute is set to false. The instance itself
is removed and cannot be started again later.
13. What is the relation between an instance and AMI?
AMI stands for Amazon Machine Image; it is basically a template
consisting of a software configuration, for example an OS, applications,
and an application server. If you start an instance, a duplicate of the AMI
runs as a virtual server in the cloud.

Area                      AWS                              Azure
Security                  AWS Shield                       DDoS Protection Service
DB migration              DB Migration Service (preview)   Azure DB Migration
NoSQL                     DynamoDB                         Azure Cosmos DB
Content delivery network  CloudFront                       Azure Content Delivery Network
Container instances       EC2 Container Service (ECS)      Azure Container Service
Programmatic access       Command Line Interface (CLI)     Azure Command Line Interface (CLI)
Batch computing           AWS Batch                        Azure Batch

Q: What do you mean by classic link?

Amazon VPC classic link permits EC2 instances in the EC2 classic
platform to communicate with the instances that are present in a virtual
private cloud. The communication occurs with the help of private IP
addresses. In order to use classic link, you must enable it for a virtual
private cloud in your account. Then you will need to associate a security
group with an instance in EC2 classic. This security group is from the
VPC for which you enabled classic link in your account. Each and every
rule that exists for the VPC security group is applicable for the
communications between the instances in EC2 classic and those
instances in the VPC.
Q: What is the process to use classic link?
For the purpose of using classic link, you will need to enable at least
one virtual private cloud in your account for classic link. After doing this,
you can associate a security group from that VPC with the EC2 classic
instance that you prefer. This will make sure that your EC2 classic
instance is linked to the VPC. It will become a member of the chosen
security group in the VPC. It should be remembered that you cannot
connect your EC2 classic instance to more than one virtual private cloud
at the same time.
Q: Is it possible for an EC2 classic instance to become a member of a
virtual private cloud?
No, it is not possible for an EC2 classic instance to be a member of a VPC
though it can become a member of the security group of virtual private
cloud. The security group should be associated with the EC2 classic
instance.
Q: Is it possible for classic link settings on an EC2 classic instance to
persist through start or stop cycles?
It is not possible for a classic link connection to persist through the start
or stop cycles of the EC2 classic instance. After the EC2 classic instance
is stopped, it will need to be linked back to a virtual private cloud. But
the classic link will persist through instance reboot cycles.
Q: Is it possible to have more than two network interfaces to be
attached to EC2 instance?
The number of network interfaces that are to be attached with an EC2
instance will depend on the type of the instance.
Q: Can a network interface in one availability zone be attached with
an instance in another availability zone?
Network interfaces can only be attached to instances that are present in
the same availability zone.
Q: Can a network interface in one VPC be attached to an instance
that is present in another VPC?
Network interfaces can only be attached to instances that are in the
same virtual private cloud as the interface.
Q: Is it possible to use elastic network interfaces so that a single
instance can host multiple websites that require separate IP
addresses?

Yes, it is a possible scenario, but it is not the best-suited use case for
multiple interfaces. Instead, it is much more logical to assign an
additional private IP address to the instance and to associate EIPs with
the private IPs as required.
Q: Can the primary interface be detached from an EC2 instance?
No. You can attach and detach secondary interfaces on an EC2 instance,
but you cannot detach the eth0 (primary) interface.
Q: In order to access VPCs that you are peered with, can you make
use of AWS direct connect or hardware VPN connections?
No, this is not possible. Amazon VPC does not support edge-to-edge
routing.
Q: Is it possible to peer two VPCs with matching IP address ranges?
No, it is not possible to peer two VPCs with matching IP address ranges,
since peered VPCs must have non-overlapping IP ranges.
Q: In order to use peering connections, is it necessary to have an
Internet gateway?
No, you do not need an Internet gateway in order to use virtual private
cloud peering connections.
Q: The VPC peering traffic that is present with the region, is it
encrypted?
No, the VPC peering traffic within the region is not encrypted. The traffic
between instances in peered VPCs does remain isolated and private,
similar to how the traffic between two instances in the same VPC is
isolated and private.
Q: In case of peering connections, is there any limitation on
bandwidth?
There is no difference in bandwidth between instances in peered VPCs
and instances in the same VPC. A placement group can span peered
VPCs, but you will not get full bisection bandwidth between instances in
peered VPCs.
Q: Is it possible to modify the route tables of virtual private cloud? If
possible then how?

Yes, it is possible to modify the route tables of a VPC. You are allowed to
create route rules in order to specify which subnets are routed to the
Internet gateway, the virtual private gateway, or other instances.
Q: Is it possible to specify the subnet that will be used by a gateway
as its default?
Yes, it is possible to specify which gateway a subnet will use as its
default. You are entitled to make a default route for each and every
subnet. The default route can direct traffic to egress the VPC via the
Internet gateway, the virtual private gateway, or the NAT gateway.
Q: In order to control and manage Amazon VPC, is it possible to
make use of the AWS Management Console?
It is possible to use the AWS Management Console to manage and
control Amazon VPC objects, including subnets, virtual private clouds,
IPsec VPN connections, and Internet gateways. You can also make use of
a simple wizard in order to create a virtual private cloud.
Q: How many VPCs, elastic IP addresses, subnets, Internet
gateways, virtual private gateways, customer gateways and VPN
connections can be created?
The default limits are:
1. Five Amazon VPCs per AWS account per region.
2. Two hundred subnets per Amazon VPC.
3. Five Amazon VPC elastic IP addresses per AWS account per
region.
4. Five virtual private gateways per AWS account per region.
5. One Internet gateway per VPC.
6. Fifty customer gateways per AWS account per region.
7. Ten IPsec VPN connections per virtual private gateway.
Q: Is there a service level Agreement (SLA) for the Amazon VPC VPN
connection?
No, there is no service level agreement for the Amazon VPC VPN
connection.
Q: Mention the work of an Amazon VPC router.

An Amazon VPC router enables Amazon EC2 instances within a subnet
to communicate with Amazon EC2 instances in other subnets of the
same VPC. It also enables Internet gateways, subnets, and virtual private
gateways to communicate with each other. You will not get usage data
from the router, but you are entitled to obtain network usage statistics
from the instances using Amazon CloudWatch.
Q: Is the property of multicast or broadcast supported by Amazon
VPC?
No, Amazon VPC does not support multicast or broadcast.
Q: Mention the process in which a VPC access the Internet.
In order to give instances in the VPC the ability both to communicate
directly outbound to the Internet and to receive unsolicited inbound
traffic from the Internet, you can make use of public IP addresses, which
include elastic IP addresses.
Q: Mention the process in which instances without public IP
addresses access the Internet.
There are two ways in which instances without public IP addresses can
make use of the Internet.
Instances without public IP addresses can route their traffic through a
NAT instance or a NAT gateway in order to access the Internet. To
traverse the Internet, these instances use the public IP address of the
NAT gateway or NAT instance. Outbound communication is allowed by
the NAT instance or NAT gateway, but it does not permit machines on
the Internet to initiate a connection to the privately addressed instances.
For VPCs with a hardware VPN connection or a Direct Connect
connection, instances can route their Internet traffic through the virtual
private gateway to the existing data centre. From there, it can access the
Internet through the existing egress points and network security or
monitoring devices.
Q: Mention the process in which a hardware VPN connection works
with Amazon VPC.

The virtual private cloud is connected to the data centre with the help of
a hardware VPN connection. Internet protocol security (IPsec) VPN
connections are supported by Amazon. In order to maintain the integrity
and confidentiality of data in transit, the data transferred between the
VPC and the data centre is routed over an encrypted VPN connection. To
establish a hardware VPN connection, you do not need an Internet
gateway.
Q: How can one connect a VPC to corporate data centre?
Establishing a hardware VPN connection between an existing network
and Amazon VPC will permit you to interact with Amazon EC2 instances
that are present within the VPC as if they were already present within the
existing network. Network address translation is not performed by AWS
on Amazon EC2 instances within a VPC accessed through a hardware
VPN connection.
Q: Name the customer gateway devices that are used to connect to
Amazon VPC
Statically routed VPN connections and dynamically routed VPN
connections are the two types of VPN connections. The customer
gateway devices that support statically routed VPN connections must be
able to:
1. Establish IKE security associations using pre-shared keys.
2. Establish IPsec security associations in tunnel mode.
3. Utilize the AES 128-bit or 256-bit encryption function.
4. Perform packet fragmentation prior to encryption.
5. Utilize the SHA-1 or SHA-2 hashing function.
The customer gateway devices that support dynamically routed VPN
connections must be able to:
1. Establish border gateway protocol (BGP) peering.
2. Utilize IPsec dead peer detection.
3. Bind tunnels to logical interfaces (route-based VPN).
Q: Mention the VPCs for which the classic link cannot be enabled.
A VPC whose classless inter-domain routing (CIDR) block falls within the
10.0.0.0/8 space is one type of VPC for which you cannot enable classic
link. Another is a VPC that has a route table entry pointing to the
10.0.0.0/8 CIDR space.
Q: Is it possible for traffic from an EC2 classic instance to travel
through the Amazon VPC and then egress through the internet
gateway, virtual private gateway or to peer VPCs?
It is only possible to route the traffic from an EC2 classic instance to the
private IP addresses that are within the VPC. It cannot be routed to any
destination outside the VPC.
Q: Is the access control between an EC2 classic instance and other
instances in the EC2 classic platform affected by classic link?
The access control that is defined for an EC2 classic instance through its
existing security groups from the EC2 classic platform is not changed by
classic link.
Q: Name the tools that are available to help troubleshoot the
hardware VPN configuration.
The status of the VPN connection is displayed by the
DescribeVpnConnections API. It also includes the up or down state of
each VPN tunnel, and it shows corresponding error messages if either
tunnel is down.
Q. What is Amazon Machine Image (AMI)?
An Amazon Machine Image (AMI) contains software configuration
information such as the OS, application server, and applications. We
can even launch multiple instances of an AMI.
Q. What is Amazon Machine Image and what is the relation between
Instance and AMI?
Amazon Web Services provides several ways to access Amazon EC2, like
web-based interface, AWS Command Line Interface (CLI) and Amazon
Tools for Windows Powershell. First, you need to sign up for an AWS
account and you can access Amazon EC2.
Amazon EC2 provides a Query API. These requests are HTTP or HTTPS
requests that use the HTTP verbs GET or POST and a Query parameter
named Action.

1) Explain what is AWS?
AWS stands for Amazon Web Service; it is a collection of remote
computing services also known as cloud computing platform. This new
realm of cloud computing is also known as IaaS or Infrastructure as a
Service.
2) Mention what are the key components of AWS?
The key components of AWS are
• Route 53: A DNS web service
• Simple E-mail Service: It allows sending e-mail using RESTFUL API
call or via regular SMTP
• Identity and Access Management: It provides enhanced security
and identity management for your AWS account
• Simple Storage Device or (S3): It is a storage device and the most
widely used AWS service
• Elastic Compute Cloud (EC2): It provides on-demand computing
resources for hosting applications. It is very useful in case of
unpredictable workloads
• Elastic Block Store (EBS): It provides persistent storage volumes
that attach to EC2 to allow you to persist data past the lifespan of a
single EC2
• CloudWatch: To monitor AWS resources. It allows administrators
to view and collect key metrics. Also, one can set a notification alarm in
case of trouble.
3) Explain what is S3?
S3 stands for Simple Storage Service. You can use S3 interface to store
and retrieve any amount of data, at any time and from anywhere on the
web. For S3, the payment model is “pay as you go”.
4) Explain what is AMI?
AMI stands for Amazon Machine Image. It’s a template that provides the
information (an operating system, an application server and applications)
required to launch an instance, which is a copy of the AMI running as a

virtual server in the cloud. You can launch instances from as many
different AMIs as you need.
5) Mention what is the relation between an instance and AMI?
From a single AMI, you can launch multiple types of instances. An
instance type defines the hardware of the host computer used for your
instance. Each instance type provides different compute and memory
capabilities. Once you launch an instance, it looks like a traditional host,
and we can interact with it as we would with any computer.
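
Launching an instance from an AMI is a single call; a minimal boto3
sketch (the AMI ID is a placeholder):

    import boto3

    ec2 = boto3.client("ec2")

    # Launch one t2.micro instance from the given AMI
    ec2.run_instances(ImageId="ami-0123456789abcdef0",
                      InstanceType="t2.micro",
                      MinCount=1, MaxCount=1)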
6) What does an AMI include?
An AMI includes the following things
• A template for the root volume for the instance
• Launch permissions that decide which AWS accounts can use the AMI
to launch instances
• A block device mapping that determines the volumes to attach to
the instance when it is launched

7) How can you send request to Amazon S3?


Amazon S3 is a REST service; you can send requests by using the REST
API or the AWS SDK wrapper libraries that wrap the underlying Amazon
S3 REST API.
8) Mention what is the difference between Amazon S3 and EC2?
The difference between EC2 and Amazon S3 is that:

Amazon EC2:
• EC2 stands for Elastic Compute Cloud.
• It is a cloud web service used for hosting the applications you create.
• It is like a huge computer machine that can run either Linux or
Windows and can handle applications like PHP, Python, Apache, or any
database.
• It requires running a server.

Amazon S3:
• S3 stands for Simple Storage Service.
• It is a data storage system where any amount of data can be stored,
such as large binary files.
• It has a REST interface and uses secure HMAC-SHA1 authentication
keys.
• It does not require running a server.

9) How many buckets can you create in AWS by default?


By default, you can create up to 100 buckets in each of your AWS
accounts.
10) Explain can you vertically scale an Amazon instance? How?
Yes, you can vertically scale an Amazon instance. For that:
• Spin up a new larger instance than the one you are currently
running
• Pause that instance and detach the root EBS volume from the
server and discard it
• Then stop your live instance and detach its root volume
• Note the unique device ID and attach that root volume to your
new server
• And start it again
11) Explain what T2 instances are.
T2 instances are designed to provide moderate baseline performance
and the capability to burst to higher performance as required by
workload.
12) In VPC with private and public subnets, database servers should
ideally be launched into which subnet?
With private and public subnets in VPC, database servers should ideally
launch into private subnets.

13) Mention what are the security best practices for Amazon EC2?
For secure Amazon EC2 best practices, follow the following steps
• Use AWS identity and access management to control access to your
AWS resources

• Restrict access by allowing only trusted hosts or networks to access
ports on your instance
• Review the rules in your security groups regularly
• Only open up permissions that you require
• Disable password-based logins for instances launched from your AMI

14) Explain how the buffer is used in Amazon web services?


The buffer is used to make the system more robust to manage traffic or
load by synchronizing different components. Usually, components
receive and process requests in an unbalanced way. With the help of a
buffer, the components will be balanced and will work at the same speed
to provide faster services.
15) While connecting to your instance what are the possible
connection issues one might face?
The possible connection errors one might encounter while connecting
instances are
• Connection timed out
• User key not recognized by the server
• Host key not found, permission denied
• Unprotected private key file
• Server refused our key or No supported authentication method
available
• Error using MindTerm on Safari Browser
• Error using Mac OS X RDP Client

1. Question 1. What Is Aws?


Answer :
AWS (Amazon Web Services) is a platform that provides secure cloud
services, database storage, compute power, content delivery, and
other services to help businesses scale and grow.
2. Question 2. What Are The Key Components Of Aws?
Answer :
The fundamental elements of AWS are:
Route 53: A DNS web service

Simple E-mail Service: It permits sending e-mail using a RESTful
API call or through normal SMTP
Identity and Access Management: It gives heightened protection
and identity control for your AWS account
Simple Storage Device or (S3): It is a storage service and the most
widely utilized AWS service
Elastic Compute Cloud (EC2): It affords on-demand computing
resources for hosting applications. It is extremely valuable in the case
of variable workloads
Elastic Block Store (EBS): It presents persistent storage volumes that
connect to EC2 to enable you to keep data beyond the lifespan of a
particular EC2 instance
CloudWatch: To observe AWS resources, it permits managers to
inspect and obtain key metrics. Additionally, one can produce a
notification alert in the state of crisis.
3. Question 3. What Is The Importance Of Buffer In Amazon
Web Services?
Answer :
An Elastic Load Balancer ensures that the incoming traffic is
distributed optimally across various AWS instances. A buffer
synchronizes different components and makes the arrangement more
elastic to a burst of load or traffic. Components are otherwise prone
to receiving and processing requests at unstable rates; the buffer
creates equilibrium between the various components and makes them
work at the identical rate to supply more rapid services.

4. Question 4. What Is The Way To Secure Data For Carrying In The
Cloud?
Answer :
Ensure that no one can intercept the data in the cloud while it is
moving from one point to another, and also that there is no leakage of
the security keys from any of the storage locations in the cloud.
Segregating your information from other companies’ information and
then encrypting it by means of approved methods is one of the
options.
5. Question 5. Name The Several Layers Of Cloud Computing?
Answer :
Here is the list of layers of the cloud computing
o PaaS – Platform as a Service
o IaaS – Infrastructure as a Service
o SaaS – Software as a Service

6. Question 6. Explain Can You Vertically Scale An Amazon
Instance? How?
Answer :
Surely, you can vertically estimate on Amazon instance. During that
• Twist up a fresh massive instance than the one you are currently
governing
• Delay that instance and separate the source webs mass of server
and dispatch
• Next, quit your existing instance and separate its source quantity
• Note the different machine ID and connect that source mass to your
fresh server
• Also, begin it repeatedly Study AWS Training Online From Real Time
Experts
7. Question 7. What Are The Components Involved In Amazon
Web Services?
Answer :
There are 4 components involved, as below.
Amazon S3: an object store used to hold the key data consumed while
creating the cloud architecture, as well as the data produced as the
result for a given key.
Amazon EC2 instance: helpful to run a large distributed system on a
Hadoop cluster. Automatic parallelization and job scheduling can be
achieved by this component.
Amazon SQS: this component acts as a mediator between different
controllers, and is also used to buffer the requests received by other
components of Amazon.
Amazon SimpleDB: helps in storing the intermediate state logs and
the tasks executed by the consumers.
8. Question 8. What Is Lambda@edge In Aws?
Answer :
• In AWS, we can use Lambda@Edge utility to solve the problem of
low network latency for end users.
• In Lambda@Edge there is no need to provision or manage
servers. We can just upload our Node.js code to AWS Lambda and
create functions that will be triggered on CloudFront requests.
• When a request for content is received by CloudFront edge
location, the Lambda code is ready to execute.
• This is a very good option for scaling up the operations in
CloudFront without managing servers.
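
The text above mentions Node.js; Lambda@Edge also supports Python. A
minimal sketch of a viewer-request handler (the header added is purely
illustrative):

    def handler(event, context):
        # CloudFront passes the request in event["Records"][0]["cf"]
        request = event["Records"][0]["cf"]["request"]
        # Tag every request with a custom header at the edge location
        request["headers"]["x-edge-seen"] = [
            {"key": "X-Edge-Seen", "value": "true"}]
        return request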
9. Question 9. Distinguish Between Scalability And Flexibility?
Answer :

Scalability is the ability of a system to increase its workload on its
current hardware resources to handle variability in demand. Flexibility
is the capability of a system to increase its workload on its current
and additional hardware resources, enabling the business to meet
demand without putting in infrastructure at all. AWS has several
configuration management solutions for AWS scalability, flexibility,
availability and management.
10. Question 10. Name The Various Layers Of The Cloud
Architecture?
Answer :
There are 5 layers and are listed below
o CC- Cluster Controller
o SC- Storage Controller
o CLC- Cloud Controller
o Walrus
o NC- Node Controller
11. Question 11. What Are The Different Types Of Events
Triggered By Amazon Cloud Front?
Answer :
Different types of events triggered by Amazon CloudFront are as
follows:
Viewer Request: When an end user or a client program makes an
HTTP/HTTPS request to CloudFront, this event is triggered at the Edge
Location closer to the end user.
Viewer Response: When a CloudFront server is ready to respond to a
request, this event is triggered.
Origin Request: When CloudFront server does not have the
requested object in its cache, the request is forwarded to Origin
server. At this time this event is triggered.
Origin Response: When CloudFront server at an Edge location
receives the response from Origin server, this event is triggered.
12. Question 12. Which Automation Gears Can Help With Spinup
Services?
Answer :
The API tools can be used for spinup services and also for the written
scripts. Those scripts could be coded in Perl, bash or other languages
of your preference. There is one more option that is patterned
administration and stipulating tools such as a dummy or improved
descendant. A tool called Scalr can also be used and finally we can go
with a controlled explanation like a Rightscale.
13. Question 13. What Is An Ami ? How Do I Build One?
Answer :

AMI stands for Amazon Machine Image. It is effectively a snapshot of
the root filesystem. Commodity appliance servers have a BIOS that
points to the master boot record of the first block on a disk. A disk
image, though, can sit anywhere physically on a disk, so Linux can
boot from an arbitrary location on the EBS storage network.
Build a new AMI by first spinning up an instance from a trusted AMI,
then adding packages and components as needed. Be wary of putting
sensitive data onto an AMI. For instance, your access credentials
should be added to an instance after spinup. With a database, mount
an external volume that carries your MySQL data after spinup as well.
14. Question 14. What Are The Main Features Of Amazon Cloud
Front?
Answer :
Some of the main features of Amazon CloudFront are as follows:
• Device detection
• Protocol detection
• Geo targeting
• Cache behavior
• Cross-origin resource sharing
• Multiple origin servers
• HTTP cookies
• Query string parameters
• Custom SSL
15. Question 15. What Is The Relation Between An Instance And
Ami?
Answer :
AMI stands for Amazon Machine Image; it is basically a template
consisting of a software configuration, for example an OS,
applications, and an application server. If you start an instance, a
duplicate of the AMI runs as a virtual server in the cloud.
16. Question 16. What Is Amazon Ec2 Service?
Answer :
Amazon Elastic Compute Cloud (Amazon EC2) is a web service that
provides resizable (scalable) computing capacity in the cloud. You can
use Amazon EC2 to launch as many virtual servers as you need. In
Amazon EC2 you can configure security and networking as well as
manage storage. The Amazon EC2 service also helps in obtaining and
configuring capacity with minimal friction.
17. Question 17. What Are The Features Of The Amazon Ec2
Service?
Answer :
As the Amazon EC2 service is a cloud service so it has all the cloud
features. Amazon EC2 provides the following features:
• Virtual computing environment (known as instances)
• Pre-configured templates for your instances (known as Amazon
Machine Images – AMIs)

• Amazon Machine Images (AMIs) is a complete package that you
need for your server (including the operating system and additional
software)
• Amazon EC2 provides various configurations of CPU, memory,
storage and networking capacity for your instances (known as
instance type)
• Secure login information for your instances using key pairs (AWS
stores the public key and you can store the private key in a secure
place)
• Storage volumes for temporary data that are deleted when you stop or
terminate your instance (known as instance store volumes)
• Amazon EC2 provides persistent storage volumes (using Amazon
Elastic Block Store – EBS)
• A firewall that enables you to specify the protocols, ports, and
source IP ranges that can reach your instances using security groups
• Static IP addresses for dynamic cloud computing (known as Elastic
IP address)
• Amazon EC2 provides metadata (known as tags)
• Amazon EC2 provides virtual networks that are logically isolated
from the rest of the AWS cloud, and that you can optionally connect
to your own network (known as virtual private clouds – VPCs)
18. Question 18. What Is Amazon Machine Image And What Is
The Relation Between Instance And Ami?
Answer :
Amazon Web Services provides several ways to access Amazon EC2,
like web-based interface, AWS Command Line Interface (CLI) and
Amazon Tools for Windows Powershell. First, you need to sign up for
an AWS account and you can access Amazon EC2.
Amazon EC2 provides a Query API. These requests are HTTP or HTTPS
requests that use the HTTP verbs GET or POST and a Query parameter
named Action.
19. Question 19. What Is Amazon Machine Image (ami)?
Answer :
An Amazon Machine Image (AMI) is a template that contains a
software configuration (for example, an operating system, an
application server, and applications). From an AMI, we launch an
instance, which is a copy of the AMI running as a virtual server in the
cloud. We can even launch multiple instances of an AMI.

20. Question 20. What Is The Relation Between Instance And Ami?
Answer :
We can launch different types of instances from a single AMI. An
instance type essentially determines the hardware of the host
computer used for your instance. Each instance type offers different
compute and memory capabilities.
After we launch an instance, it looks like a traditional host, and we can
interact with it as we would do with any computer. We have complete
control of our instances; we can use sudo to run commands that
require root privileges.
21. Question 21. Explain Storage For Amazon Ec2 Instance.?
Answer :
Amazon EC2 provides many data storage options for your instances.
Each option has a unique combination of performance and durability.
These storages can be used independently or in combination to suit
your requirements.
There are mainly four types of storages provided by AWS:
Amazon EBS: It provides durable, block-level storage volumes that can
be attached to a running Amazon EC2 instance. An Amazon EBS volume
persists independently of the running life of an Amazon EC2 instance.
After an EBS volume is attached to an instance, you can use it like any
other physical hard drive. Amazon EBS also supports an encryption
feature.
Amazon EC2 Instance Store: Storage disk that is attached to the host
computer is referred to as instance store. The instance storage
provides temporary block-level storage for Amazon EC2 instances. The
data on an instance store volume persists only during the life of the
associated Amazon EC2 instance; if you stop or terminate an instance,
any data on instance store volumes is lost.
Amazon S3: Amazon S3 provides access to reliable and inexpensive
data storage infrastructure. It is designed to make web-scale
computing easier by enabling you to store and retrieve any amount of
data, at any time, from within Amazon EC2 or anywhere on the web.
Adding Storage: Every time you launch an instance from an AMI, a
root storage device is created for that instance. The root storage
device contains all the information necessary to boot the instance. You
can specify storage volumes in addition to the root device volume
when you create an AMI or launch an instance using block device
mapping.
22. Question 22. What Are The Security Best Practices For
Amazon Ec2?
Answer :
There are several best practices for secure Amazon EC2. Following are
few of them.

• Use AWS Identity and Access Management (IAM) to control access
to your AWS resources.
• Restrict access by only allowing trusted hosts or networks to access
ports on your instance.
• Review the rules in your security groups regularly, and ensure that
you apply the principle of least privilege: only open up permissions
that you require.
• Disable password-based logins for instances launched from your
AMI. Passwords can be found or cracked, and are a security risk.
23. Question 23. Explain Stopping, Starting, And Terminating An
Amazon Ec2 Instance?
Answer :
Stopping and Starting an instance: When an instance is stopped, the
instance performs a normal shutdown and then transitions to a
stopped state. All of its Amazon EBS volumes remain attached, and
you can start the instance again at a later time. You are not charged
for additional instance hours while the instance is in a stopped state.
Terminating an instance: When an instance is terminated, the
instance performs a normal shutdown, then the attached Amazon EBS
volumes are deleted unless the volume’s deleteOnTermination
attribute is set to false. The instance itself is also deleted, and you can’t
start the instance again at a later time.
24. Question 24. Explain Elastic Block Storage? What Type Of
Performance Can You Expect? How Do You Back It Up? How Do
You Improve Performance?
Answer :
EBS is a virtualized SAN or storage area network. That means it is
RAID storage to start with, so it’s redundant and fault tolerant. If disks
die in that RAID you don’t lose data. Great! It is also virtualized, so
you can provision and allocate storage, and attach it to your server
with various API calls. No calling the storage expert and asking him or
her to run specialized commands from the hardware vendor.
Performance on EBS can exhibit variability. That is, it can go above the
SLA performance level, then drop below it. The SLA provides you with
an average disk I/O rate you can expect. This can frustrate some folks,
especially performance experts who expect reliable and consistent disk
throughout on a server. Traditional physically hosted servers behave
that way. Virtual AWS instances do not.
Backup EBS volumes by using the snapshot facility via API call or via a
GUI interface like elasticfox.
Improve performance by using Linux software raid and striping across
four volumes.
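
Snapshot backups are a single API call; a minimal boto3 sketch (the
volume ID is a placeholder):

    import boto3

    ec2 = boto3.client("ec2")

    # Snapshot an EBS volume; snapshots are incremental and stored in S3
    ec2.create_snapshot(VolumeId="vol-0123456789abcdef0",
                        Description="nightly backup")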

25. Question 25. What Is S3? What Is It Used For? Should
Encryption Be Used?
Answer :
S3 stands for Simple Storage Service. You can think of it like FTP
storage, where you can move files to and from there, but not mount it
like a filesystem. AWS automatically puts your snapshots there, as well
as AMIs there. Encryption should be considered for sensitive data, as
S3 is a proprietary technology developed by Amazon themselves, and
as yet unproven vis-a-vis a security standpoint.
26. Question 26. What Is An Ami? How Do I Build One?
Answer :
AMI stands for Amazon Machine Image. It is effectively a snapshot of
the root filesystem. Commodity hardware servers have a BIOS that
points to the master boot record of the first block on a disk. A disk
image, though, can sit anywhere physically on a disk, so Linux can boot
from an arbitrary location on the EBS storage network.
Build a new AMI by first spinning up an instance from a trusted AMI,
then adding packages and components as required. Be wary of
putting sensitive data onto an AMI. For instance, your access
credentials should be added to an instance after spinup. With a
database, mount an outside volume that holds your MySQL data after
spinup as well.

27. Question 27. Can I Vertically Scale An Amazon Instance? How?
Answer :
Yes. This is an incredible feature of AWS and cloud virtualization. Spin
up a new larger instance than the one you are currently running.
Pause that instance and detach the root EBS volume from this server
and discard it. Then stop your live instance and detach its root volume.
Note down the unique device ID and attach that root volume to your
new server. And then start it again. Voila, you have scaled vertically
in-place!
28. Question 28. What Is Auto-scaling? How Does It Work?
Answer :
Autoscaling is a feature of AWS which allows you to configure and
automatically provision and spin up new instances without the need
for your intervention.
You do this by setting thresholds and metrics to monitor. When those
thresholds are crossed, a new instance of your choosing will be spun
up, configured, and rolled into the load balancer pool. Voila, you’ve
scaled horizontally without any operator intervention!
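
As a sketch, a target-tracking policy expresses exactly this
threshold-and-metric idea; a minimal boto3 example (the group name
and target value are placeholders):

    import boto3

    autoscaling = boto3.client("autoscaling")

    # Keep the group's average CPU near 60% by adding/removing instances
    autoscaling.put_scaling_policy(
        AutoScalingGroupName="my-asg",
        PolicyName="cpu-target-60",
        PolicyType="TargetTrackingScaling",
        TargetTrackingConfiguration={
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ASGAverageCPUUtilization"},
            "TargetValue": 60.0})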

29. Question 29. What Automation Tools Can I Use To Spin Up
Servers?
Answer :
The most obvious way is to roll your own scripts and use the AWS
API tools. Such scripts could be written in bash, Perl or another
language of your choice.
The next option is to use configuration management and
provisioning tools like Puppet, or better its successor Opscode
Chef. You might also look towards a tool like Scalr. Lastly, you can go
with a managed solution such as RightScale.
30. Question 30. What Is Configuration Management? Why
Would I Want To Use It With Cloud Provisioning Of Resources?
Answer :
Configuration management has been around for a long time in web
operations and systems administration. Yet the cultural popularity of
it has been limited. Most systems administrators configure machines
as software was developed before version control – that is manually
making changes on servers. Each server can then and usually is
slightly different. Troubleshooting though, is straightforward as you
login to the box and operate on it directly. Configuration
management brings a large automation tool in the picture, managing
servers like strings of a puppet. This forces standardization, best
practices, and reproducibility as all configs are versioned and
managed. It also introduces a new way of working which is the
biggest hurdle to its adoption.
Enter the cloud, and configuration management becomes even more
critical. That’s because virtual servers such as Amazon’s EC2 instances
are much less reliable than physical ones. You absolutely need a
mechanism to rebuild them as-is at any moment. This pushes best
practices like automation, reproducibility and disaster recovery into
center stage.
31. Question 31. Explain How You Would Simulate Perimeter
Security Using The Amazon Web Services Model?
Answer :
Traditional perimeter security that we’re already familiar with, using
firewalls and so forth, is not supported in the Amazon EC2 world. AWS
supports security groups. One can create a security group for a jump
box with ssh access – only port 22 open. From there, a webserver group
and a database group are created. The webserver group allows 80 and
443 from the world, but port 22 *only* from the jump box group.
Further, the database group allows port 3306 from the webserver
group and port 22 from the jump box group. Add any machines to the
webserver group and they can all hit the database. No one from the
world can, and no one can directly ssh to any of your boxes.
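
A minimal boto3 sketch of the jump box / webserver layout described
above (the VPC ID is a placeholder):

    import boto3

    ec2 = boto3.client("ec2")
    vpc_id = "vpc-0123456789abcdef0"  # placeholder

    jump = ec2.create_security_group(GroupName="jump", VpcId=vpc_id,
                                     Description="ssh jump box")["GroupId"]
    web = ec2.create_security_group(GroupName="web", VpcId=vpc_id,
                                    Description="web servers")["GroupId"]

    # The world may reach the web tier on 80 and 443
    for port in (80, 443):
        ec2.authorize_security_group_ingress(
            GroupId=web, IpProtocol="tcp", FromPort=port, ToPort=port,
            CidrIp="0.0.0.0/0")

    # ssh (22) into the web tier is allowed only from the jump box group
    ec2.authorize_security_group_ingress(
        GroupId=web,
        IpPermissions=[{"IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
                        "UserIdGroupPairs": [{"GroupId": jump}]}])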
32. Question 32. How To Use Amazon Sqs?
Answer :
Amazon SQS (Simple Queue Service) is a message-passing mechanism
used for communication between different components that are
connected with each other. It acts as a communicator between
various components of Amazon and keeps all the different functional
components together. This helps different components to be loosely
coupled, and provides an architecture that is more resilient to failure.
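
A minimal boto3 sketch of the send/receive cycle (the queue name is a
placeholder):

    import boto3

    sqs = boto3.client("sqs")
    queue_url = sqs.create_queue(QueueName="demo-queue")["QueueUrl"]

    # Producer and consumer are decoupled through the queue
    sqs.send_message(QueueUrl=queue_url, MessageBody="process video 42")
    msgs = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1)
    for m in msgs.get("Messages", []):
        print(m["Body"])
        sqs.delete_message(QueueUrl=queue_url,
                           ReceiptHandle=m["ReceiptHandle"])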

Section 1: What is Cloud Computing

1. I have some private servers on my premises, and I have also
distributed some of my workload on the public cloud. What is this
architecture called?

A. Virtual Private Network
B. Private Cloud
C. Virtual Private Cloud
D. Hybrid Cloud

Answer D.
Explanation: This type of architecture would be a hybrid cloud. Why?
Because we are using both the public cloud and your on-premises
servers, i.e. the private cloud. To make this hybrid architecture easy to
use, wouldn’t it be better if your private and public cloud were all on the
same network (virtually)? This is established by including your public
cloud servers in a virtual private cloud, and connecting this virtual cloud
with your on-premises servers using a VPN (Virtual Private Network).
Section 2: Amazon EC2 Interview Questions
For a detailed discussion on this topic, please refer to our EC2 AWS blog.

2. What does the following command do with respect to the Amazon
EC2 security groups?

ec2-create-group CreateSecurityGroup

A. Groups the user created security groups into a new group for easy
access.
B. Creates a new security group for use with your account.
C. Creates a new group inside the security group.
D. Creates a new rule inside the security group.

Answer B.
Explanation: A security group is just like a firewall; it controls the traffic
in and out of your instance. In AWS terms, the inbound and outbound
traffic. The command mentioned is pretty straightforward: it says create
security group, and does the same. Moving along, once your security
group is created, you can add different rules to it. For example, if you
have an RDS instance, to access it you have to add the public IP address
of the machine from which you want to access the instance to its
security group.
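
A modern SDK equivalent of that command, plus the RDS-style rule
described above, might look like this boto3 sketch (the IP address is a
placeholder):

    import boto3

    ec2 = boto3.client("ec2")

    sg = ec2.create_security_group(GroupName="CreateSecurityGroup",
                                   Description="example group")["GroupId"]

    # Allow one machine's public IP to reach a MySQL RDS instance (3306)
    ec2.authorize_security_group_ingress(GroupId=sg, IpProtocol="tcp",
                                         FromPort=3306, ToPort=3306,
                                         CidrIp="203.0.113.10/32")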

3. You have a video transcoding application. The videos are processed
according to a queue. If the processing of a video is interrupted in one
instance, it is resumed in another instance. Currently there is a huge
backlog of videos which needs to be processed; for this you need to
add more instances, but you need these instances only until your
backlog is reduced. Which of these would be an efficient way to do it?

You should be using an On-Demand instance for the same. Why? First of
all, the workload has to be processed now, meaning it is urgent. Secondly,
you don’t need them once your backlog is cleared, therefore Reserved
Instances are out of the picture, and since the work is urgent, you cannot
stop the work on your instance just because the spot price spiked,
therefore Spot Instances shall also not be used. Hence On-Demand
instances shall be the right choice in this case.
4. You have a distributed application that periodically processes large
volumes of data across multiple Amazon EC2 Instances. The application
is designed to recover gracefully from Amazon EC2 instance failures. You
are required to accomplish this task in the most cost effective way.

Which of the following will meet your requirements?

A. Spot Instances
B. Reserved instances
C. Dedicated instances
D. On-Demand instances
Answer: A
Explanation: Since the work we are addressing here is not continuous, a
reserved instance shall be idle at times, same goes with On Demand
instances. Also it does not make sense to launch an On Demand instance
whenever work comes up, since it is expensive. Hence Spot Instances will
be the right fit because of their low rates and no long term
commitments.

5. How is stopping and terminating an instance different from each
other?

Starting, stopping and terminating are the three states in an EC2
instance; let’s discuss them in detail:

• Stopping and Starting an instance: When an instance is stopped,
the instance performs a normal shutdown and then transitions to a
stopped state. All of its Amazon EBS volumes remain attached, and
you can start the instance again at a later time. You are not
charged for additional instance hours while the instance is in a
stopped state.
• Terminating an instance: When an instance is terminated, the
instance performs a normal shutdown, then the attached Amazon
EBS volumes are deleted unless the
volume’s deleteOnTermination attribute is set to false. The instance
itself is also deleted, and you can’t start the instance again at a
later time.

6. If I want my instance to run on single-tenant hardware, which
value do I have to set the instance’s tenancy attribute to?

A. Dedicated
B. Isolated
C. One
D. Reserved
Answer A.
Explanation: The Instance tenancy attribute should be set to Dedicated
Instance. The rest of the values are invalid.

7. When will you incur costs with an Elastic IP address (EIP)?

A. When an EIP is allocated.
B. When it is allocated and associated with a running instance.
C. When it is allocated and associated with a stopped instance.
D. Costs are incurred regardless of whether the EIP is associated with
a running instance.

Answer C.

Explanation: You are not charged if only one Elastic IP address is
attached to your running instance. But you do get charged in the
following conditions:

• When you use more than one Elastic IPs with your instance.
• When your Elastic IP is attached to a stopped instance.
• When your Elastic IP is not attached to any instance.

8. How is a Spot instance different from an On-Demand instance or
Reserved Instance?

First of all, let’s understand that Spot Instance, On-Demand instance and
Reserved Instances are all models for pricing. Moving along, spot
instances provide the ability for customers to purchase compute capacity
with no upfront commitment, at hourly rates usually lower than the On-
Demand rate in each region. Spot instances are just like bidding, the
bidding price is called Spot Price. The Spot Price fluctuates based on
supply and demand for instances, but customers will never pay more
than the maximum price they have specified. If the Spot Price moves
higher than a customer’s maximum price, the customer’s EC2 instance
will be shut down automatically. But the reverse is not true, if the Spot
prices come down again, your EC2 instance will not be launched
automatically, one has to do that manually. In Spot and On demand
instance, there is no commitment for the duration from the user side,
however in reserved instances one has to stick to the time period that he
has chosen.
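
As a sketch, a Spot request with a maximum price looks like this in
boto3 (the AMI ID and price are placeholders):

    import boto3

    ec2 = boto3.client("ec2")

    # The request is fulfilled only while the Spot price stays at or
    # below the specified maximum
    ec2.request_spot_instances(
        SpotPrice="0.05",
        InstanceCount=1,
        LaunchSpecification={"ImageId": "ami-0123456789abcdef0",
                             "InstanceType": "m5.large"})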

9. Are the Reserved Instances available for Multi-AZ Deployments?

A. Multi-AZ Deployments are only available for Cluster Compute
instance types
B. Available for all instance types
C. Only available for M3 instance types
D. Not available for Reserved Instances

Answer B.
Explanation: Reserved Instances is a pricing model, which is available for
all instance types in EC2.

10. How to use the processor state control feature available on the
c4.8xlarge instance?

The processor state control consists of 2 states:

• The C state – sleep state, varying from C0 to C6, with C6 being the
deepest sleep state for a processor
• The P state – performance state, with P0 being the highest and P15
being the lowest possible frequency

Now, why the C state and P state? Processors have cores, and these
cores need thermal headroom to boost their performance. Since all the
cores are on the processor, the temperature should be kept at an
optimal level so that all the cores can perform at their highest.
How will these states help with that? If a core is put into a sleep state,
it will reduce the overall temperature of the processor, and hence other
cores can perform better. The same can be synchronized across cores,
so that the processor can boost as many cores as it can by timely
putting other cores to sleep, and thus get an overall performance boost.
Concluding, the C and P state can be customized in some EC2 instances
like the c4.8xlarge instance and thus you can customize the processor
according to your workload.
How to do it? You can refer this tutorial for the same.

11. What kind of network performance parameters can you expect when you launch instances in a cluster placement group?

The network performance depends on the instance type and its network performance specification. If launched in a placement group, you can expect up to:

• 10 Gbps in a single flow,
• 20 Gbps in multiple flows, i.e. full duplex
• Network traffic outside the placement group is limited to 5 Gbps (full duplex).

12. To deploy a 4 node cluster of Hadoop in AWS which instance
type can be used?

First let’s understand what actually happens in a Hadoop cluster. A Hadoop cluster follows a master-slave concept: the master machine coordinates the cluster and schedules the processing, while the slave machines store the data and act as data nodes that run the actual tasks. Since all the storage happens at the slaves, higher-capacity hard disks are recommended for them, and since the master handles the coordination and job tracking, a higher RAM and a much better CPU are required there. Therefore, you can select the configuration of your machines depending on your workload. For example, a c4.8xlarge could be preferred for the master machine, whereas for the slave machines a storage-optimized instance such as i2.xlarge could be selected. If you don’t want to deal with configuring your instances and installing a Hadoop cluster manually, you can straight away launch an Amazon EMR (Elastic MapReduce) cluster which automatically configures the servers for you. You dump your data to be processed in S3, EMR picks it up from there, processes it, and dumps it back into S3.
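As a rough illustration, such a cluster could also be launched programmatically. Below is a minimal hedged sketch using boto3 (the Python AWS SDK); the cluster name, EMR release, instance types and counts, log bucket and role names are placeholder assumptions, not values prescribed by this document:

import boto3

# Hypothetical sketch: launch a small Hadoop cluster on EMR with distinct
# master and slave (core) instance types, assuming the default EMR roles exist.
emr = boto3.client("emr", region_name="us-east-1")

response = emr.run_job_flow(
    Name="hadoop-demo-cluster",              # placeholder name
    ReleaseLabel="emr-5.36.0",               # assumed EMR release
    Applications=[{"Name": "Hadoop"}],
    Instances={
        "MasterInstanceType": "c4.8xlarge",  # compute-heavy master
        "SlaveInstanceType": "i2.xlarge",    # storage-optimized slaves
        "InstanceCount": 4,                  # 1 master + 3 core nodes
        "KeepJobFlowAliveWhenNoSteps": True,
    },
    LogUri="s3://my-emr-logs/",              # placeholder bucket
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
)
print(response["JobFlowId"])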

13. Where do you think an AMI fits, when you are designing an
architecture for a solution?

AMIs (Amazon Machine Images) are like templates of virtual machines, and an instance is derived from an AMI. AWS offers pre-baked AMIs which you can choose from while launching an instance; some AMIs are not free and can be bought from the AWS Marketplace. You can also choose to create your own custom AMI, which helps you save space on AWS. For example, if you don’t need a particular set of software on your installation, you can customize your AMI to exclude it. This makes it cost-efficient, since you are removing the unwanted things.

14. How do you choose an Availability Zone?

Let’s understand this through an example: consider a company that has a user base in India as well as in the US.
Let us see how we will choose the region for this use case. The regions to choose between are Mumbai and North Virginia. First, compare the pricing: you have hourly prices, which can be converted to a per-month figure, and here North Virginia emerges as the winner. But pricing cannot be the only parameter to consider; performance should also be kept in mind, so let’s look at latency as well. Latency is the time a server takes to respond to your requests, i.e. the response time. North Virginia wins again!
Concluding, North Virginia should be chosen for this use case.

15. Is one Elastic IP address enough for every instance that I have
running?

Depends! Every instance comes with its own private and public address.
The private address is associated exclusively with the instance and is
returned to Amazon EC2 only when it is stopped or terminated.
Similarly, the public address is associated exclusively with the instance until it is stopped or terminated. However, it can be replaced by an Elastic IP address, which stays with the instance as long as the user doesn’t manually detach it. And if you are hosting multiple websites on your EC2 server, you may require more than one Elastic IP address.

16. What are the best practices for Security in Amazon EC2?

There are several best practices to secure Amazon EC2. A few of them
are given below:

• Use AWS Identity and Access Management (IAM) to control access to your AWS resources.
• Restrict access by allowing only trusted hosts or networks to access ports on your instance.
• Review the rules in your security groups regularly, and apply the principle of least privilege – only open up the permissions you require.
• Disable password-based logins for instances launched from your AMI. Passwords can be found or cracked, and are a security risk.

Section 3: Amazon Storage

17. You need to configure an Amazon S3 bucket to serve static assets for your public-facing web application. Which method will ensure that all objects uploaded to the bucket are set to public read?

A. Set permissions on the object to public read during upload.
B. Configure the bucket policy to set all objects to public read.
C. Use AWS Identity and Access Management roles to set the bucket to public read.
D. Amazon S3 objects default to public read, so no action is needed.

Answer B.
Explanation: Rather than making changes to every object, it’s better to set the policy for the whole bucket. IAM roles are meant for granting permissions to users and services, not for making objects public, and S3 objects are private by default, so options C and D are incorrect.
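For illustration, a minimal sketch of such a bucket policy applied with boto3 (the bucket name is a placeholder assumption):

import json
import boto3

s3 = boto3.client("s3")
bucket = "my-static-assets-bucket"  # placeholder name

# Bucket policy granting public read on every object in the bucket.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "PublicReadGetObject",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": f"arn:aws:s3:::{bucket}/*",
    }],
}
s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))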

18. A customer wants to leverage Amazon Simple Storage Service (S3) and Amazon Glacier as part of their backup and archive infrastructure. The customer plans to use third-party software to support this integration. Which approach will limit the access of the third party software to only the Amazon S3 bucket named “company-backup”?

A. A custom bucket policy limited to the Amazon S3 API in the Amazon Glacier archive “company-backup”
B. A custom bucket policy limited to the Amazon S3 API in “company-backup”
C. A custom IAM user policy limited to the Amazon S3 API for the Amazon Glacier archive “company-backup”.
D. A custom IAM user policy limited to the Amazon S3 API in “company-backup”.

Answer D.
Explanation: Taking a cue from the previous question, this use case calls for more granular permissions, hence an IAM user policy scoped to the bucket would be used here.
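A sketch of what such an IAM user policy might look like, attached with boto3 (the user name and policy name are placeholder assumptions):

import json
import boto3

iam = boto3.client("iam")

# Inline policy restricting the third-party software's IAM user to the
# S3 API on the "company-backup" bucket and its objects only.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "s3:*",
        "Resource": [
            "arn:aws:s3:::company-backup",
            "arn:aws:s3:::company-backup/*",
        ],
    }],
}
iam.put_user_policy(
    UserName="backup-software",        # placeholder user
    PolicyName="company-backup-only",  # placeholder policy name
    PolicyDocument=json.dumps(policy),
)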

19. Can S3 be used with EC2 instances, if yes, how?

Yes, it can be used for instances with root devices backed by local
instance storage. By using Amazon S3, developers have access to the
same highly scalable, reliable, fast, inexpensive data storage
infrastructure that Amazon uses to run its own global network of web
sites. In order to execute systems in the Amazon EC2 environment,
developers use the tools provided to load their Amazon Machine Images
(AMIs) into Amazon S3 and to move them between Amazon S3 and
Amazon EC2.
Another use case could be websites hosted on EC2 loading their static content from S3.
For a detailed discussion on S3, please refer to our S3 AWS blog.

20. A customer implemented AWS Storage Gateway with a gateway-cached volume at their main office. An event takes the link between the main and branch office offline. Which methods will enable the branch office to access their data?

A. Restore by implementing a lifecycle policy on the Amazon S3 bucket.
B. Make an Amazon Glacier Restore API call to load the files into
another Amazon S3 bucket within four to six hours.
C. Launch a new AWS Storage Gateway instance AMI in Amazon EC2,
and restore from a gateway snapshot.
D. Create an Amazon EBS volume from a gateway snapshot, and
mount it to an Amazon EC2 instance.

Answer C.
Explanation: The fastest way would be to launch a new storage gateway instance in Amazon EC2 and restore from a gateway snapshot. Since time is the key factor that drives every business, troubleshooting the broken link would take longer; instead, we can simply restore the previous working state of the storage gateway on a new instance.

21. When you need to move data over long distances using the
internet, for instance across countries or continents to your Amazon
S3 bucket, which method or service will you use?

A. Amazon Glacier
B. Amazon CloudFront
C. Amazon Transfer Acceleration
D. Amazon Snowball

Answer C.
Explanation: You would not use Snowball because, for now, the Snowball service does not support cross-region data transfer, and since we are transferring across countries, Snowball cannot be used. Transfer Acceleration is the right choice here: it speeds up your data transfer using optimized network paths and Amazon’s content delivery network, at up to 300% of normal data transfer speed.
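A brief sketch of enabling and using Transfer Acceleration with boto3 (the bucket and file names are placeholder assumptions):

import boto3
from botocore.config import Config

s3 = boto3.client("s3")
bucket = "my-global-uploads"  # placeholder name

# Enable Transfer Acceleration on the bucket (one-time configuration).
s3.put_bucket_accelerate_configuration(
    Bucket=bucket,
    AccelerateConfiguration={"Status": "Enabled"},
)

# Upload through the accelerated endpoint by opting in via client config.
s3_accel = boto3.client(
    "s3", config=Config(s3={"use_accelerate_endpoint": True})
)
s3_accel.upload_file("backup.tar.gz", bucket, "backups/backup.tar.gz")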

22. How can you speed up data transfer in Snowball?

Data transfer can be sped up in the following ways:

• By performing multiple copy operations at one time, i.e. if the workstation is powerful enough, you can initiate multiple cp commands, each from a different terminal, to the same Snowball device.
• Copying from multiple workstations to the same Snowball.
• Transferring large files, or creating batches of small files, which reduces the encryption overhead.
• Eliminating unnecessary hops, i.e. making a setup where the source machine(s) and the Snowball are the only machines active on the switch being used; this can hugely improve performance.

Section 4: AWS VPC

23. If you want to launch Amazon Elastic Compute Cloud (EC2)
instances and assign each instance a predetermined private IP
address you should:

A. Launch the instance from a private Amazon Machine Image (AMI).
B. Assign a group of sequential Elastic IP addresses to the instances.
C. Launch the instances in the Amazon Virtual Private Cloud (VPC).
D. Launch the instances in a Placement Group.

Answer C.
Explanation: In a VPC, you can specify the exact private IP address each instance should get at launch, from the subnet’s address range. A VPC is also the best way of connecting to your cloud resources (for example, EC2 instances) from your own datacenter: once you connect your datacenter to the VPC in which your instances are present, each instance is assigned a private IP address which can be accessed from your datacenter, as if the instances were on your own network.

24. Can I connect my corporate datacenter to the Amazon Cloud?

Yes, you can do this by establishing a VPN (Virtual Private Network) connection between your company’s network and your VPC (Virtual Private Cloud); this will allow you to interact with your EC2 instances as if they were within your existing network.

25. Is it possible to change the private IP addresses of an EC2 instance while it is running/stopped in a VPC?

The primary private IP address is attached to the instance throughout its lifetime and cannot be changed; however, secondary private addresses can be unassigned, assigned, or moved between interfaces or instances at any point.
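As a hedged illustration with boto3 (the network interface ID is a placeholder assumption):

import boto3

ec2 = boto3.client("ec2")

# Assign one extra secondary private IP to an existing network interface;
# it can later be unassigned or moved to another interface.
ec2.assign_private_ip_addresses(
    NetworkInterfaceId="eni-0123456789abcdef0",  # placeholder ENI
    SecondaryPrivateIpAddressCount=1,
)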

26. Why do you make subnets?

A. Because there is a shortage of networks
B. To efficiently utilize networks that have a large no. of hosts.
C. Because there is a shortage of hosts.
D. To efficiently utilize networks that have a small no. of hosts.

Answer B.

Explanation: If there is a network which has a large no. of hosts,
managing all these hosts can be a tedious job. Therefore we divide this
network into subnets (sub-networks) so that managing these hosts
becomes simpler.
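For instance, splitting one large network into smaller subnets can be sketched with Python’s standard ipaddress module (the CIDR ranges are illustrative assumptions):

import ipaddress

# Divide a large /16 network into /24 subnets of 254 usable hosts each,
# which are easier to manage than one flat network of ~65k hosts.
network = ipaddress.ip_network("10.0.0.0/16")
subnets = list(network.subnets(new_prefix=24))

print(len(subnets))   # 256 subnets
print(subnets[0])     # 10.0.0.0/24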

27. Which of the following is true?

A. You can attach multiple route tables to a subnet
B. You can attach multiple subnets to a route table
C. Both A and B
D. None of these.

Answer B.
Explanation: Route tables are used to route network packets; if a subnet had multiple route tables, there would be confusion about where a packet has to go. Therefore, a subnet is associated with only one route table. And since a route table can hold any no. of records, attaching multiple subnets to a route table is possible.

28. In CloudFront, what happens when content is NOT present at an Edge location and a request is made to it?

A. An Error “404 not found” is returned
B. CloudFront delivers the content directly from the origin server and stores it in the cache of the edge location
C. The request is kept on hold till content is delivered to the edge location
D. The request is routed to the next closest edge location

Answer B.
Explanation: CloudFront is a content delivery network which caches data at the edge location nearest to the user to reduce latency. If the data is not present at an edge location, the first request is served from the origin server, and the content is cached at the edge so that subsequent requests are served from the cache.

29. If I’m using Amazon CloudFront, can I use Direct Connect to
transfer objects from my own data center?

Yes. Amazon CloudFront supports custom origins, including origins outside of AWS. With AWS Direct Connect, you will be charged at the respective data transfer rates.

30. If my AWS Direct Connect fails, will I lose my connectivity?

If a backup AWS Direct Connect has been configured, in the event of a failure it will switch over to the second one. It is recommended to enable Bidirectional Forwarding Detection (BFD) when configuring your connections, to ensure faster detection and failover. On the other hand, if you have configured a backup IPsec VPN connection instead, all VPC traffic will fail over to the backup VPN connection automatically. Traffic to/from public resources such as Amazon S3 will be routed over the Internet. If you do not have a backup AWS Direct Connect link or an IPsec VPN link, then Amazon VPC traffic will be dropped in the event of a failure.
Section 5: Amazon Database

31. If I launch a standby RDS instance, will it be in the same Availability Zone as my primary?

A. Only for Oracle RDS types
B. Yes
C. Only if it is configured at launch
D. No

Answer D.
Explanation: No. The purpose of having a standby instance is to survive an infrastructure failure (if it happens), so the standby instance is kept in a different Availability Zone, which is a physically separate, independent infrastructure.

32. When would I prefer Provisioned IOPS over Standard RDS storage?

A. If you have batch-oriented workloads
B. If you use production online transaction processing (OLTP) workloads.
C. If you have workloads that are not sensitive to consistent performance
D. All of the above

Answer B.
Explanation: Provisioned IOPS deliver high, consistent IO rates, but they are expensive. Production OLTP workloads are IO-intensive and sensitive to consistent performance, which is exactly what Provisioned IOPS is designed for; batch-oriented workloads can usually tolerate the variable performance of standard storage.

33. How is Amazon RDS, DynamoDB and Redshift different?

• Amazon RDS is a database management service for relational


databases, it manages patching, upgrading, backing up of data
etc. of databases for you without your intervention. RDS is a Db
management service for structured data only.
• DynamoDB, on the other hand, is a NoSQL database service,
NoSQL deals with unstructured data.
• Redshift, is an entirely different service, it is a data warehouse
product and is used in data analysis.

34. If I am running my DB Instance as a Multi-AZ deployment, can I use the standby DB Instance for read or write operations along with the primary DB instance?

A. Yes
B. Only with MySQL based RDS
C. Only for Oracle RDS instances
D. No

Answer D.
Explanation: No, the standby DB instance cannot be used in parallel with the primary DB instance; it is solely for standby purposes and cannot be used unless the primary instance goes down.

35. Your company’s branch offices are all over the world, they use a
software with a multi-regional deployment on AWS, they use
MySQL 5.6 for data persistence.

The task is to run an hourly batch process and read data from every
region to compute cross-regional reports which will be distributed
to all the branches. This should be done in the shortest time
possible. How will you build the DB architecture in order to meet
the requirements?

A. For each regional deployment, use RDS MySQL with a master in the region and a read replica in the HQ region
B. For each regional deployment, use MySQL on EC2 with a master in
the region and send hourly EBS snapshots to the HQ region
C. For each regional deployment, use RDS MySQL with a master in
the region and send hourly RDS snapshots to the HQ region
D. For each regional deployment, use MySQL on EC2 with a master in
the region and use S3 to copy data files hourly to the HQ region

Answer A.
Explanation: For this, we will take an RDS instance as the master, because it will manage our database for us; and since we have to read from every region, we’ll put a read replica of this instance in every region where the data has to be read from. Option C is not correct since a read replica is more efficient than a snapshot: a read replica can be promoted to an independent DB instance if needed, whereas with a DB snapshot it becomes mandatory to launch a separate DB Instance.
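A hedged boto3 sketch of creating such a cross-region read replica (the identifiers, regions and source ARN are placeholder assumptions):

import boto3

# Create a read replica in the HQ region (eu-west-1 here) from a master
# RDS MySQL instance running in another region, referenced by its ARN.
rds_hq = boto3.client("rds", region_name="eu-west-1")

rds_hq.create_db_instance_read_replica(
    DBInstanceIdentifier="sales-replica-hq",  # placeholder replica name
    SourceDBInstanceIdentifier=(
        "arn:aws:rds:ap-south-1:123456789012:db:sales-master"  # placeholder
    ),
    SourceRegion="ap-south-1",
)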

36. Can I run more than one DB instance for Amazon RDS for free?

Yes. You can run more than one Single-AZ Micro database instance, that
too for free! However, any use exceeding 750 instance hours, across all
Amazon RDS Single-AZ Micro DB instances, across all eligible database
engines and regions, will be billed at standard Amazon RDS prices. For
example: if you run two Single-AZ Micro DB instances for 400 hours each
in a single month, you will accumulate 800 instance hours of usage, of
which 750 hours will be free. You will be billed for the remaining 50
hours at the standard Amazon RDS price.
For a detailed discussion on this topic, please refer to our RDS AWS blog.

37. Which AWS services will you use to collect and process e-
commerce data for near real-time analysis?

A. Amazon ElastiCache
B. Amazon DynamoDB
C. Amazon Redshift
D. Amazon Elastic MapReduce

Answer B,C.
Explanation: DynamoDB is a fully managed NoSQL database service, so it can be fed any type of unstructured data, including data from e-commerce websites; the analysis can then be done on it using Amazon Redshift. We are not using Elastic MapReduce since near real-time analysis is needed.

38. Can I retrieve only a specific element of the data, if I have nested JSON data in DynamoDB?

Yes. When using the GetItem, BatchGetItem, Query or Scan APIs, you can
define a Projection Expression to determine which attributes should be
retrieved from the table. Those attributes can include scalars, sets, or
elements of a JSON document.
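A small hedged sketch using boto3 (the table name, key and attribute paths are placeholder assumptions):

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("Users")  # placeholder table

# Retrieve only two elements of the item instead of the whole document:
# a scalar attribute and one nested element of a JSON document.
response = table.get_item(
    Key={"user_id": "u-123"},                      # placeholder key
    ProjectionExpression="username, address.city",
)
print(response.get("Item"))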

39. A company is deploying a new two-tier web application in AWS. The company has limited staff and requires high availability, and the application requires complex queries and table joins. Which configuration provides the solution for the company’s requirements?

A. MySQL Installed on two Amazon EC2 Instances in a single Availability Zone
B. Amazon RDS for MySQL with Multi-AZ
C. Amazon ElastiCache
D. Amazon DynamoDB

Answer B.
Explanation: The application requires complex queries and table joins, which calls for a relational database; DynamoDB does not support joins. Amazon RDS for MySQL with Multi-AZ provides the required high availability as a fully managed service, which suits a company with limited staff.

40. What happens to my backups and DB Snapshots if I delete my DB Instance?

When you delete a DB instance, you have the option of creating a final DB snapshot; if you do that, you can restore your database from that snapshot. RDS retains this user-created DB snapshot, along with all other manually created DB snapshots, after the instance is deleted. Automated backups are deleted, and only manually created DB snapshots are retained.

41. Which of the following use cases are suitable for Amazon
DynamoDB? Choose 2 answers

A. Managing web sessions.
B. Storing JSON documents.
C. Storing metadata for Amazon S3 objects.
D. Running relational joins and complex updates.

Answer A,C.
Explanation: DynamoDB suits key-value workloads such as managing web sessions and storing metadata for Amazon S3 objects. It does not support relational joins, and complex multi-item updates are a poor fit, so option D is unsuitable; and if all your JSON documents share the same fields, e.g. [id, name, age], a relational database can serve them just as well.

42. How can I load my data to Amazon Redshift from different data
sources like Amazon RDS, Amazon DynamoDB and Amazon EC2?

You can load the data in the following two ways:

• You can use the COPY command to load data in parallel directly to
Amazon Redshift from Amazon EMR, Amazon DynamoDB, or any
SSH-enabled host.
• AWS Data Pipeline provides a high performance, reliable, fault
tolerant solution to load data from a variety of AWS data sources.
You can use AWS Data Pipeline to specify the data source, desired
data transformations, and then execute a pre-written import script
to load your data into Amazon Redshift.
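For example, the COPY path might look like the hedged sketch below, issued from Python via psycopg2 (the cluster endpoint, credentials, table names and IAM role ARN are placeholder assumptions):

import psycopg2

# Connect to the Redshift cluster (endpoint and credentials are placeholders).
conn = psycopg2.connect(
    host="my-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439, dbname="analytics", user="admin", password="...",
)

# COPY loads data in parallel from DynamoDB into a Redshift table;
# READRATIO caps how much of the table's provisioned read throughput is used.
with conn, conn.cursor() as cur:
    cur.execute("""
        COPY page_views
        FROM 'dynamodb://PageViews'
        IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole'
        READRATIO 50;
    """)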

43. Your application has to retrieve data from your users’ mobiles every 5 minutes, and the data is stored in DynamoDB. Later, every day at a particular time, the data is extracted into S3 on a per-user basis, and your application is then used to visualize the data to the user. You are asked to optimize the architecture of the backend system to lower cost. What would you recommend?

A. Create a new Amazon DynamoDB table each day and drop the one for the previous day after its data is on Amazon S3.
B. Introduce an Amazon SQS queue to buffer writes to the Amazon
DynamoDB table and reduce provisioned write throughput.
C. Introduce Amazon Elasticache to cache reads from the Amazon
DynamoDB table and reduce provisioned read throughput.
D. Write data directly into an Amazon Redshift cluster replacing both
Amazon DynamoDB and Amazon S3.

Answer C.
Explanation: Since our workload requires the data to be extracted and analyzed, one would normally provision higher read throughput to optimize the process, but that is expensive. Using ElastiCache instead to cache the results in memory reduces the provisioned read throughput, and hence the cost, without affecting performance.

44. You are running a website on EC2 instances deployed across multiple Availability Zones with a Multi-AZ RDS MySQL Extra Large DB Instance. The site performs a high number of small reads and writes per second and relies on an eventual consistency model. After comprehensive tests you discover that there is read contention on RDS MySQL. Which are the best approaches to meet these requirements? (Choose 2 answers)

A. Deploy ElastiCache in-memory cache running in each availability zone
B. Implement sharding to distribute load to multiple RDS MySQL
instances
C. Increase the RDS MySQL Instance size and Implement provisioned
IOPS
D. Add an RDS MySQL read replica in each availability zone

Answer A,C.

Explanation: Since the site does a lot of reads and writes, provisioned IO may become expensive, but we need high performance as well; therefore, frequently read data can be cached using ElastiCache, which serves it from memory and takes load off the database. As for RDS, since read contention is happening, the instance size should be increased and Provisioned IOPS introduced to increase performance.
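As an illustration, the cache-aside pattern against an ElastiCache Redis endpoint might look like this hedged sketch (the endpoint, key naming and query function are placeholder assumptions; query_rds_for_profile is a hypothetical helper that reads from RDS):

import json
import redis

# ElastiCache Redis endpoint (placeholder hostname).
cache = redis.Redis(host="my-cache.abc123.cache.amazonaws.com", port=6379)

def get_user_profile(user_id):
    key = f"profile:{user_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)                 # cache hit: no DB read
    profile = query_rds_for_profile(user_id)      # hypothetical DB helper
    cache.setex(key, 300, json.dumps(profile))    # cache for 5 minutes
    return profile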

45. A startup is running a pilot deployment of around 100 sensors to measure street noise and air quality in urban areas for 3 months. It was noted that every month around 4GB of sensor data is generated. The company uses a load balanced, auto scaled layer of EC2 instances and an RDS database with 500 GB standard storage. The pilot was a success and now they want to deploy at least 100K sensors, which need to be supported by the backend. You need to store the data for at least 2 years to analyze it. Which setup of the following would you prefer?

A. Add an SQS queue to the ingestion layer to buffer writes to the RDS instance
B. Ingest data into a DynamoDB table and move old data to a
Redshift cluster
C. Replace the RDS instance with a 6 node Redshift cluster with 96TB
of storage
D. Keep the current architecture but upgrade RDS storage to 3TB and
10K provisioned IOPS

Answer C.
Explanation: A Redshift cluster would be preferred because it is easy to scale and the work is done in parallel across the nodes, which suits a bigger workload like this use case. Since 100 sensors generate 4 GB of data per month, two years of data comes to about 96 GB; scaling the fleet from 100 to 100K sensors multiplies that by 1,000, i.e. approximately 96 TB. Hence option C is the right answer.
Section 6: AWS Auto Scaling, AWS Load Balancer

46. Suppose you have an application where you have to render images and also do some general computing. Which of the following services will best fit your need?

A. Classic Load Balancer
B. Application Load Balancer
C. Both of them
D. None of these

Answer B.
Explanation: You will choose an Application Load Balancer, since it supports path-based routing, which means it can take decisions based on the URL: requests that need image rendering can be routed to one set of instances, and general computing requests to another.

47. What is the difference between Scalability and Elasticity?

Scalability is the ability of a system to increase its hardware resources to handle an increase in demand. It can be done by increasing the hardware specifications or by increasing the number of processing nodes.
Elasticity is the ability of a system to handle an increase in workload by adding hardware resources when demand increases (same as scaling), but also rolling back the scaled resources when they are no longer needed. This is particularly helpful in cloud environments, where a pay-per-use model is followed.

48. How will you change the instance type for instances which are running in your application tier and are using Auto Scaling? From which of the following areas will you change it?

A. Auto Scaling policy configuration
B. Auto Scaling group
C. Auto Scaling tags configuration
D. Auto Scaling launch configuration

Answer D.
Explanation: Auto Scaling tags configuration is used to attach metadata to your instances; to change the instance type, you have to use the Auto Scaling launch configuration.
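A hedged boto3 sketch of swapping in a new launch configuration with a different instance type (the names and AMI ID are placeholder assumptions):

import boto3

autoscaling = boto3.client("autoscaling")

# Launch configurations are immutable, so create a new one with the
# desired instance type...
autoscaling.create_launch_configuration(
    LaunchConfigurationName="app-tier-v2",   # placeholder name
    ImageId="ami-0123456789abcdef0",         # placeholder AMI
    InstanceType="m5.large",                 # the new instance type
)

# ...then point the Auto Scaling group at it; instances launched from
# now on use the new type.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="app-tier-asg",     # placeholder group
    LaunchConfigurationName="app-tier-v2",
)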

49. You have a content management system running on an Amazon
EC2 instance that is approaching 100% CPU utilization. Which
option will reduce load on the Amazon EC2 instance?

A. Create a load balancer, and register the Amazon EC2 instance with it
B. Create a CloudFront distribution, and configure the Amazon EC2
instance as the origin
C. Create an Auto Scaling group from the instance using the
CreateAutoScalingGroup action
D. Create a launch configuration from the instance using the
CreateLaunchConfigurationAction

Answer B.
Explanation: Registering the single instance with a load balancer (option A) does not reduce its load, since all traffic still reaches the same instance. A CloudFront distribution with the EC2 instance as the origin caches the content at edge locations, so repeated requests are served from the cache instead of from the instance, reducing its load. Options C and D only create an Auto Scaling group or a launch configuration; on their own, neither redistributes the existing traffic.

50. When should I use a Classic Load Balancer and when should I use
an Application load balancer?

A Classic Load Balancer is ideal for simple load balancing of traffic across
multiple EC2 instances, while an Application Load Balancer is ideal for
microservices or container-based architectures where there is a need to
route traffic to multiple services or load balance across multiple ports on
the same EC2 instance.

51. What does Connection draining do?

A. Terminates instances which are not in use.
B. Re-routes traffic from instances which are to be updated or have failed a health check.
C. Re-routes traffic from instances which have more workload to
instances which have less workload.
D. Drains all the connections from an instance, with one click.

Answer B.
Explanation: Connection draining is an ELB feature that applies when an instance fails a health check or has to be patched with a software update: ELB stops sending new requests to that instance and re-routes traffic to the other instances, while allowing in-flight requests to complete.

52. When an instance is unhealthy, it is terminated and replaced with a new one. Which of the following services does that?

A. Sticky Sessions
B. Fault Tolerance
C. Connection Draining
D. Monitoring

Answer B.
Explanation: When ELB detects that an instance is unhealthy, it starts routing incoming traffic to the other healthy instances. If all the instances in an Availability Zone become unhealthy, and you have instances in another Availability Zone, traffic is directed to them; once the original instances become healthy again, traffic is routed back to them.

53. What are lifecycle hooks used for in AutoScaling?

A. They are used to do health checks on instances
B. They are used to put an additional wait time to a scale in or scale out event.
C. They are used to shorten the wait time to a scale in or scale out
event
D. None of these

Answer B.
Explanation: Lifecycle hooks are used to put a wait time before any lifecycle action, i.e. launching or terminating an instance, happens. The purpose of this wait time can be anything from extracting log files before terminating an instance to installing the necessary software on an instance before launching it.
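A hedged boto3 sketch of adding such a hook (the group and hook names are placeholder assumptions):

import boto3

autoscaling = boto3.client("autoscaling")

# Pause terminating instances for up to 5 minutes, e.g. so that log
# files can be copied off before the instance disappears.
autoscaling.put_lifecycle_hook(
    LifecycleHookName="drain-logs-before-terminate",  # placeholder name
    AutoScalingGroupName="app-tier-asg",              # placeholder group
    LifecycleTransition="autoscaling:EC2_INSTANCE_TERMINATING",
    HeartbeatTimeout=300,
    DefaultResult="CONTINUE",
)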

54. A user has setup an Auto Scaling group. Due to some issue the
group has failed to launch a single instance for more than 24 hours.
What will happen to Auto Scaling in this condition?

A. Auto Scaling will keep trying to launch the instance for 72 hours
B. Auto Scaling will suspend the scaling process
C. Auto Scaling will start an instance in a separate region
D. The Auto Scaling group will be terminated automatically

Answer B.
Explanation: Auto Scaling allows you to suspend and then resume one
or more of the Auto Scaling processes in your Auto Scaling group. This
can be very useful when you want to investigate a configuration problem
or other issue with your web application, and then make changes to your
application, without triggering the Auto Scaling process.
Section 7: CloudTrail, Route 53

55. You have an EC2 Security Group with several running EC2 instances.
You changed the Security Group rules to allow inbound traffic on a new
port and protocol, and then launched several new instances in the same
Security Group. The new rules apply:

A. Immediately to all instances in the security group.
B. Immediately to the new instances only.
C. Immediately to the new instances, but old instances must be
stopped and restarted before the new rules apply.
D. To all instances, but it may take several minutes for old instances
to see the changes.

Answer A.
Explanation: Any rule specified in an EC2 Security Group applies immediately to all the instances in the group, irrespective of whether they were launched before or after the rule was added.

56. To create a mirror image of your environment in another region for disaster recovery, which of the following AWS resources do not need to be recreated in the second region? (Choose 2 answers)

A. Route 53 Record Sets
B. Elastic IP Addresses (EIP)
C. EC2 Key Pairs
D. Launch configurations
E. Security Groups

Answer A.
Explanation: Route 53 record sets are global assets, valid across regions, so there is no need to recreate them; the other listed resources are all region-specific and must be recreated in the second region.

57. A customer wants to capture all client connection information from his load balancer at an interval of 5 minutes. Which of the following options should he choose for his application?

A. Enable AWS CloudTrail for the load balancer.
B. Enable access logs on the load balancer.
C. Install the Amazon CloudWatch Logs agent on the load balancer.
D. Enable Amazon CloudWatch metrics on the load balancer.

Answer B.
Explanation: ELB access logs capture detailed information about each request or connection sent to the load balancer, including the client’s address, and can be published to S3 at 5-minute intervals, which matches the requirement exactly. CloudTrail records API calls made against the load balancer, not client connections.
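A hedged boto3 sketch of enabling Classic Load Balancer access logs at a 5-minute emit interval (the load balancer and bucket names are placeholder assumptions):

import boto3

elb = boto3.client("elb")

# Publish access logs to S3 every 5 minutes.
elb.modify_load_balancer_attributes(
    LoadBalancerName="web-clb",  # placeholder load balancer
    LoadBalancerAttributes={
        "AccessLog": {
            "Enabled": True,
            "S3BucketName": "my-elb-access-logs",  # placeholder bucket
            "EmitInterval": 5,
        }
    },
)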

58. A customer wants to track access to their Amazon Simple Storage Service (S3) buckets and also use this information for their internal security and access audits. Which of the following will meet the customer requirement?

A. Enable AWS CloudTrail to audit all Amazon S3 bucket access.
B. Enable server access logging for all required Amazon S3 buckets.
C. Enable the Requester Pays option to track access via AWS Billing
D. Enable Amazon S3 event notifications for Put and Post.

Answer B.
Explanation: S3 server access logging provides detailed records of the requests made to a bucket, which AWS describes as useful precisely for security and access audits. CloudTrail records API activity at the bucket level rather than every object access, so it does not fully meet the requirement.
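A hedged boto3 sketch (the bucket names are placeholder assumptions; the target bucket must grant the S3 log delivery service permission to write):

import boto3

s3 = boto3.client("s3")

# Turn on server access logging for a source bucket, delivering logs
# into a separate target bucket under a prefix.
s3.put_bucket_logging(
    Bucket="company-backup",  # bucket being audited
    BucketLoggingStatus={
        "LoggingEnabled": {
            "TargetBucket": "company-backup-logs",  # placeholder target
            "TargetPrefix": "access-logs/",
        }
    },
)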

59. Which of the following are true regarding AWS CloudTrail?
(Choose 2 answers)

A. CloudTrail is enabled globally
B. CloudTrail is enabled on a per-region and service basis
C. Logs can be delivered to a single Amazon S3 bucket for
aggregation.
D. CloudTrail is enabled for all available services within a region.

Answer B,C.
Explanation: CloudTrail is not enabled for all services, nor is it enabled globally by default; it is enabled on a per-region and per-service basis, so option B is correct. The logs can also be delivered to a single S3 bucket for aggregation, hence C is also correct.

60. What happens if CloudTrail is turned on for my account but my Amazon S3 bucket is not configured with the correct policy?

CloudTrail files are delivered according to S3 bucket policies. If the bucket is not configured or is misconfigured, CloudTrail might not be able to deliver the log files.

61. How do I transfer my existing domain name registration to Amazon Route 53 without disrupting my existing web traffic?

You will need to get a list of the DNS record data for your domain name first; it is generally available in the form of a “zone file” that you can get from your existing DNS provider. Once you receive the DNS record data, you can use Route 53’s Management Console or simple web-services interface to create a hosted zone that will store the DNS records for your domain name, and follow its transfer process. This includes steps such as updating the nameservers for your domain name to the ones associated with your hosted zone. To complete the process, contact the registrar with whom you registered your domain name and follow their transfer process. As soon as your registrar propagates the new name server delegations, your DNS queries will start to get answered.
Section 8: AWS SQS, AWS SNS, AWS SES, AWS ElasticBeanstalk

62. Which of the following services would you not use to deploy an app?

A. Elastic Beanstalk
B. Lambda
C. Opsworks
D. CloudFormation

Answer B.
Explanation: Lambda is used for running serverless functions triggered by events; serverless means you do not have to worry about the computing resources running in the background. It runs individual functions rather than deploying and managing whole applications, which is what Elastic Beanstalk, OpsWorks and CloudFormation are for.

63. How does Elastic Beanstalk apply updates?

A. By having a duplicate ready with updates before swapping.
B. By updating on the instance while it is running
C. By taking the instance down in the maintenance window
D. Updates should be installed manually

Answer A.
Explanation: Elastic Beanstalk prepares a duplicate copy of the instance before updating the original instance, and routes your traffic to the duplicate instance, so that in case your updated application fails, it can switch back to the original instance and no downtime is experienced by the users of your application.

64. How is AWS Elastic Beanstalk different than AWS OpsWorks?

AWS Elastic Beanstalk is an application management platform, while OpsWorks is a configuration management platform. Beanstalk is an easy-to-use service for deploying and scaling web applications developed with Java, .NET, PHP, Node.js, Python, Ruby, Go and Docker. Customers upload their code and Elastic Beanstalk automatically handles the deployment; the application is ready to use without any infrastructure or resource configuration.

In contrast, AWS OpsWorks is an integrated configuration management
platform for IT administrators or DevOps engineers who want a high
degree of customization and control over operations.

65. What happens if my application stops responding to requests in Beanstalk?

AWS Beanstalk applications have a system in place for avoiding failures in the underlying infrastructure. If an Amazon EC2 instance fails for any reason, Beanstalk will use Auto Scaling to automatically launch a new instance. Beanstalk can also detect if your application is not responding on the custom link even though the infrastructure appears healthy; in that case, it is logged as an environmental event (e.g. a bad version was deployed) so you can take appropriate action.
Section 9: AWS OpsWorks, AWS KMS

66. How is AWS OpsWorks different than AWS CloudFormation?

OpsWorks and CloudFormation both support application modelling, deployment, configuration, management and related activities. Both support a wide variety of architectural patterns, from simple web applications to highly complex applications. AWS OpsWorks and AWS CloudFormation differ in abstraction level and areas of focus.
AWS CloudFormation is a building-block service which enables customers to manage almost any AWS resource via a JSON-based domain-specific language. It provides foundational capabilities for the full breadth of AWS, without prescribing a particular model for development and operations. Customers define templates and use them to provision and manage AWS resources, operating systems and application code.
In contrast, AWS OpsWorks is a higher level service that focuses on
providing highly productive and reliable DevOps experiences for IT
administrators and ops-minded developers. To do this, AWS OpsWorks
employs a configuration management model based on concepts such as
stacks and layers, and provides integrated experiences for key activities
like deployment, monitoring, auto-scaling, and automation. Compared
to AWS CloudFormation, AWS OpsWorks supports a narrower range of application-oriented AWS resource types, including Amazon EC2 instances, Amazon EBS volumes, Elastic IPs, and Amazon CloudWatch metrics.

67. I created a key in the Oregon region to encrypt my data in the North Virginia region for security purposes. I added two users to the key and an external AWS account. When I tried to encrypt an object in S3, the key that I had just created was not listed. What could be the reason?

A. External AWS accounts are not supported.
B. AWS S3 cannot be integrated with KMS.
C. The key should be in the same region.
D. New keys take some time to reflect in the list.

Answer C.
Explanation: The key created and the data to be encrypted should be in
the same region. Hence the approach taken here to secure the data is
incorrect.

68. A company needs to monitor the read and write IOPS for their
AWS MySQL RDS instance and send real-time alerts to their
operations team. Which AWS services can accomplish this?

A. Amazon Simple Email Service
B. Amazon CloudWatch
C. Amazon Simple Queue Service
D. Amazon Route 53

Answer B.
Explanation: Amazon CloudWatch is a cloud monitoring tool, and hence it is the right service for the mentioned use case. The other options listed here are used for other purposes, for example Route 53 is used for DNS services; therefore CloudWatch, with an alarm that notifies the operations team, is the apt choice.
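A hedged boto3 sketch of such an alarm on ReadIOPS, publishing to an SNS topic the operations team subscribes to (the instance identifier, threshold and topic ARN are placeholder assumptions):

import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm when average ReadIOPS on the RDS instance exceeds a threshold;
# the alarm action notifies the operations team via SNS.
cloudwatch.put_metric_alarm(
    AlarmName="rds-read-iops-high",                          # placeholder
    Namespace="AWS/RDS",
    MetricName="ReadIOPS",
    Dimensions=[{"Name": "DBInstanceIdentifier", "Value": "prod-mysql"}],
    Statistic="Average",
    Period=60,
    EvaluationPeriods=1,
    Threshold=1000.0,                                        # assumed limit
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
)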

69. What happens when one of the resources in a stack cannot be created successfully in AWS OpsWorks?

When an event like this occurs, the “automatic rollback on error” feature kicks in, causing all the AWS resources which were created successfully up to the point where the error occurred to be deleted. This is helpful since it does not leave behind any erroneous data, and it ensures that stacks are either created fully or not created at all. It is useful in events where you may accidentally exceed your limit on the no. of Elastic IP addresses, or you may not have access to an EC2 AMI that you are trying to run, etc.

70. What automation tools can you use to spin up servers?

Any of the following tools can be used:

• Roll your own scripts and use the AWS API tools. Such scripts could be written in bash, Perl or another language of your choice.
• Use a configuration management and provisioning tool like Puppet or Opscode Chef. You can also use a tool like Scalr.
• Use a managed solution such as RightScale.

