AWS Certified Cloud Practitioner Exam
Question #1 Topic 1
A company is planning to run a global marketing application in the AWS Cloud. The application will feature videos that can be viewed by users.
The company must ensure that all users can view these videos with low latency.
Which AWS service should the company use to meet this requirement?
D. Amazon CloudFront
Correct Answer: D
CloudFront has a vast network of edge locations located globally, which reduces the distance between users and the content they are
trying to access. When a user requests a video, CloudFront delivers it from the edge location nearest to the user, resulting in reduced
latency.
upvoted 1 times
CloudFront is a content delivery network (CDN) that speeds up the delivery of static and dynamic web content, as well as videos.
upvoted 2 times
Which pillar of the AWS Well-Architected Framework refers to the ability of a system to recover from infrastructure or service disruptions and
dynamically acquire computing resources to meet demand?
A. Security
B. Reliability
C. Performance efficiency
D. Cost optimization
Correct Answer: B
The reliability pillar focuses on workloads performing their intended functions and how to recover quickly from failure to meet demands.
Key topics include distributed system design, recovery planning, and adapting to changing requirements.
https://aws.amazon.com/architecture/well-architected/
upvoted 16 times
The performance efficiency pillar focuses on structured and streamlined allocation of IT and computing resources. Key topics include
selecting resource types and sizes optimized for workload requirements, monitoring performance, and maintaining efficiency as
business needs evolve.
https://aws.amazon.com/architecture/well-architected/
upvoted 4 times
Which of the following are benefits of migrating to the AWS Cloud? (Choose two.)
A. Operational resilience
C. Business agility
D. Business excellence
Correct Answer: AC
A. Operational resilience: The AWS Cloud is designed to be highly available and scalable, which can help organizations improve their
operational resilience and reduce the impact of failures or disruptions.
C. Business agility: Migrating to the AWS Cloud can help organizations to increase their business agility by allowing them to quickly and
easily deploy new applications and services, scale their infrastructure up or down as needed, and experiment with new technologies.
upvoted 7 times
C. Business agility: AWS enables organizations to quickly and easily provision computing resources as needed, allowing for scalability and
flexibility. This agility allows businesses to respond faster to market changes, experiment with new ideas, and launch products or services
more rapidly.
upvoted 1 times
Business Agility
upvoted 1 times
A company is planning to replace its physical on-premises compute servers with AWS serverless compute services. The company wants to be able
to take advantage of advanced technologies quickly after the migration.
Which pillar of the AWS Well-Architected Framework does this plan represent?
A. Security
B. Performance efficiency
C. Operational excellence
D. Reliability
Correct Answer: B
Operational Excellence: The operational excellence pillar includes how your organization supports your business objectives, your ability to
run workloads effectively, gain insight into their operations, and to continuously improve supporting processes and procedures to deliver
business value.
I understand why you could think it is C, but I believe the correct answer is B (key part: as demand changes and technologies evolve)
upvoted 29 times
But "serverless architectures" are mentioned under the Performance Efficiency pillar:
Design Principles
There are five design principles for performance efficiency in the cloud:
Go global in minutes: Deploying your workload in multiple AWS Regions around the world permits you to provide lower latency and a
better experience for your customers at minimal cost.
Use serverless architectures: Serverless architectures remove the need for you to run and maintain physical servers for traditional
compute activities. For example, serverless storage services can act as static websites (removing the need for web servers) and event
services can host code. This removes the operational burden of managing physical servers, and can lower transactional costs because
managed services operate at cloud scale.
upvoted 1 times
Scale workloads up or down as needed. This will help the company to save money on compute resources when they are not needed, and
to ensure that they have enough resources to handle spikes in demand.
Take advantage of advanced technologies quickly. AWS serverless compute services are constantly being updated with new features and
capabilities. By moving to serverless, the company will be able to take advantage of these new features as soon as they are available.
Improve operational efficiency. Serverless compute services are designed to be easy to use and manage. This will help the company to
reduce the amount of time and effort that is required to manage their infrastructure.
upvoted 1 times
The Performance Efficiency pillar of the AWS Well-Architected Framework focuses on using computing resources efficiently to meet
system requirements and to maintain that efficiency as demand changes and technologies evolve. By adopting serverless compute
services, the company can take advantage of advanced technologies like auto-scaling, pay-per-use pricing models, and managed services,
which can help improve the efficiency and performance of their applications.
upvoted 2 times
A large company has multiple departments. Each department has its own AWS account. Each department has purchased Amazon EC2 Reserved
Instances.
Some departments do not use all the Reserved Instances that they purchased, and other departments need more Reserved Instances than they
purchased.
The company needs to manage the AWS accounts for all the departments so that the departments can share the Reserved Instances.
Which AWS service or tool should the company use to meet these requirements?
B. Cost Explorer
D. AWS Organizations
Correct Answer: D
Reference:
https://aws.amazon.com/ru/organizations/
I take the exam tomorrow and will see whether this test set, along with the basic AWS training, is enough.
With AWS Organizations, you can create a "Consolidated Billing" family, which allows you to consolidate billing for all the linked accounts.
This enables the company to pool the Reserved Instances from multiple departments and efficiently utilize them across the organization.
By sharing Reserved Instances through AWS Organizations, departments that do not fully utilize their Reserved Instances can allocate
those resources to other departments that need more Reserved Instances, optimizing the use of these cost-saving commitments.
upvoted 1 times
yokitoki 4 days, 9 hours ago
I am pretty certain it is AWS Organizations, but selecting Cost Explorer can confuse many. I take my Exam on Tuesday, so we shall see.
upvoted 1 times
Using AWS Organizations, the company can create an organization with multiple member accounts for each department. The organization
can then establish a consolidated payment method, making it easier to manage and pay for the AWS services used by each department.
upvoted 3 times
AWS Organizations is a service that helps centrally manage and govern multiple AWS accounts. It provides features and tools to simplify
the administration, billing, and resource sharing across accounts.
With consolidated billing in AWS Organizations, Reserved Instance discounts are shared automatically across the accounts in the
organization. Departments with unused Reserved Instances effectively make those discounts available to other departments running
matching instances, optimizing the usage of Reserved Instances across the organization and reducing costs.
upvoted 3 times
Which component of the AWS global infrastructure is made up of one or more discrete data centers that have redundant power, networking, and
connectivity?
A. AWS Region
B. Availability Zone
C. Edge location
D. AWS Outposts
Correct Answer: B
The purpose of Availability Zones is to provide fault tolerance and high availability for applications and services hosted in the AWS cloud.
By deploying resources across multiple Availability Zones, organizations can ensure that their systems remain operational even if an
individual Availability Zone experiences an outage.
upvoted 1 times
The purpose of Availability Zones is to provide fault tolerance and high availability for applications and services hosted in the AWS cloud.
By deploying resources across multiple Availability Zones, organizations can ensure that their systems remain operational even if an
individual Availability Zone experiences an outage.
upvoted 1 times
An Availability Zone consists of one or more discrete data centers within a Region, with independent power, cooling, and networking,
connected to the other Availability Zones in that Region through high-bandwidth, low-latency links. By using multiple Availability
Zones, customers can design and operate applications and services that are highly available, fault-tolerant, and scalable.
upvoted 3 times
Which duties are the responsibility of a company that is using AWS Lambda? (Choose two.)
Correct Answer: AD
D. Writing and updating of code: The company is responsible for developing and maintaining the code that runs within the Lambda
functions. This involves writing the code, testing it, and making any necessary updates or modifications over time.
upvoted 1 times
Which AWS services or features provide disaster recovery solutions for Amazon EC2 instances? (Choose two.)
D. AWS Shield
E. Amazon GuardDuty
Correct Answer: BC
C. Amazon Elastic Block Store (Amazon EBS) snapshots: Amazon EBS is a block storage service for use with Amazon EC2 instances. EBS
snapshots are point-in-time copies of an EBS volume that can be used to create a new volume or restore an existing volume. EBS
snapshots can be used to recover data in the event of an instance failure, or to create a new instance in a different region or Availability
Zone.
B. Amazon Machine Images (AMIs): An AMI is a pre-configured virtual machine image that contains the operating system, application
software, and any other required components needed to launch an instance. AMIs can be used to create new instances in the same or a
different region, which can be useful for disaster recovery purposes.
upvoted 6 times
C. Amazon Elastic Block Store (Amazon EBS) snapshots: Amazon EBS allows the creation of snapshots, which are point-in-time copies of
EBS volumes. By taking regular snapshots of EBS volumes attached to EC2 instances, companies can create backups that can be used to
restore data or launch new instances in the event of a disaster. Snapshots are stored durably and can be easily restored when needed.
upvoted 1 times
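As a hedged sketch of how such snapshot backups might be automated with the AWS SDK for Python (boto3, the volume ID, and the helper name are assumptions for illustration, not from the question):

```python
# Sketch: creating an EBS snapshot for disaster recovery (boto3 assumed).
# Any client object exposing create_snapshot works, which keeps this testable.

def backup_volume(ec2, volume_id, reason):
    """Request a point-in-time snapshot of an EBS volume and return its ID."""
    resp = ec2.create_snapshot(
        VolumeId=volume_id,                 # e.g. "vol-0123example" (placeholder)
        Description=f"DR backup: {reason}",
    )
    return resp["SnapshotId"]

# Usage (needs AWS credentials):
#   import boto3
#   snap_id = backup_volume(boto3.client("ec2"), "vol-0123example", "nightly")
```

The snapshot is stored durably in Amazon S3 behind the scenes and can later be used to create a new volume, including in another Availability Zone.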
A company is migrating to the AWS Cloud instead of running its infrastructure on premises.
Which of the following are advantages of this migration? (Choose two.)
Correct Answer: BD
A user is comparing purchase options for an application that runs on Amazon EC2 and Amazon RDS. The application cannot sustain any
interruption. The application experiences a predictable amount of usage, including some seasonal spikes that last only a few weeks at a time. It is
not possible to modify the application.
Which purchase option meets these requirements MOST cost-effectively?
A. Review the AWS Marketplace and buy Partial Upfront Reserved Instances to cover the predicted and seasonal load.
B. Buy Reserved Instances for the predicted amount of usage throughout the year. Allow any seasonal usage to run on Spot Instances.
C. Buy Reserved Instances for the predicted amount of usage throughout the year. Allow any seasonal usage to run at an On-Demand rate.
D. Buy Reserved Instances to cover all potential usage that results from the seasonal usage.
Correct Answer: C
The application cannot sustain any interruption, so Spot Instances are unsuitable: AWS can reclaim Spot capacity with only a
two-minute warning. Reserved Instances cover the predictable year-round baseline at a discount, and the short seasonal spikes run
at the On-Demand rate, which costs more per hour but guarantees uninterrupted capacity.
upvoted 3 times
A company wants to review its monthly costs of using Amazon EC2 and Amazon RDS for the past year.
Which AWS service or tool provides this information?
B. Cost Explorer
C. Amazon Forecast
D. Amazon CloudWatch
Correct Answer: B
With Cost Explorer, users can view their monthly costs and usage for specific AWS services, including Amazon EC2 and Amazon RDS. It
offers pre-built reports, customizable filters, and visualizations to help analyze spending patterns, identify cost drivers, and track expenses
over time.
upvoted 3 times
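A minimal sketch of the same monthly review done programmatically through the Cost Explorer API (boto3 is assumed; the service-name strings Cost Explorer uses for EC2 compute and RDS are assumptions to verify against your own billing data):

```python
# Sketch: pulling monthly costs for specific services from Cost Explorer.
# Any client object exposing get_cost_and_usage works, which keeps this testable.

def monthly_costs(ce, start, end, services):
    """Return {month-start: unblended cost amount} for the given services."""
    resp = ce.get_cost_and_usage(
        TimePeriod={"Start": start, "End": end},   # dates like "2023-01-01"
        Granularity="MONTHLY",
        Metrics=["UnblendedCost"],
        Filter={"Dimensions": {"Key": "SERVICE", "Values": services}},
    )
    return {r["TimePeriod"]["Start"]: r["Total"]["UnblendedCost"]["Amount"]
            for r in resp["ResultsByTime"]}

# Usage (needs AWS credentials and Cost Explorer enabled):
#   import boto3
#   costs = monthly_costs(boto3.client("ce"), "2023-01-01", "2024-01-01",
#                         ["Amazon Elastic Compute Cloud - Compute",
#                          "Amazon Relational Database Service"])
```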
A company wants to migrate a critical application to AWS. The application has a short runtime. The application is invoked by changes in data or by
shifts in system state. The company needs a compute solution that maximizes operational efficiency and minimizes the cost of running the
application.
Which AWS solution should the company use to meet these requirements?
B. AWS Lambda
Correct Answer: B
1. Run code without provisioning or managing infrastructure. Simply write and upload code as a .zip file or container image.
2. Automatically respond to code execution requests at any scale, from a dozen events per day to hundreds of thousands per second.
3. Save costs by paying only for the compute time you use, metered per millisecond, instead of provisioning infrastructure upfront
for peak capacity.
upvoted 20 times
All other solutions are EC2-based and would cost more than the Lambda solution.
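A minimal sketch of what such an event-driven, short-runtime function could look like (the "detail" event shape below is illustrative, not a specific AWS event source):

```python
# Sketch: a short-runtime Lambda handler invoked by a data-change event.
# The "detail" key is an illustrative event shape, not a fixed AWS format.

def lambda_handler(event, context):
    """Process one change notification and return a small summary."""
    detail = event.get("detail", {})
    return {
        "status": "processed",
        "changed_keys": sorted(detail.keys()),
    }
```

Because Lambda bills per invocation and per millisecond of execution, a handler like this costs nothing while idle, which is what makes it cost-effective for short, event-triggered runtimes.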
Which AWS service or feature allows users to connect with and deploy AWS services programmatically?
B. AWS Cloud9
C. AWS CodePipeline
Correct Answer: D
Using the AWS SDKs, developers can write code to authenticate with AWS, make API calls, manage resources, and deploy and configure
AWS services. This enables programmatically automating tasks, provisioning resources, managing infrastructure, and interacting with
AWS services without the need for manual intervention.
upvoted 3 times
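As a hedged sketch of what "connecting with AWS programmatically" looks like through an SDK (boto3 and the helper name are assumptions; the EC2 client is injected so the logic is testable without credentials):

```python
# Sketch: querying AWS resources programmatically with an AWS SDK.
# Any client object exposing describe_instances works, which keeps this testable.

def running_instance_ids(ec2):
    """Return the IDs of all running EC2 instances visible to the caller."""
    resp = ec2.describe_instances(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    )
    return [inst["InstanceId"]
            for res in resp["Reservations"]
            for inst in res["Instances"]]

# Usage (needs AWS credentials):
#   import boto3
#   ids = running_instance_ids(boto3.client("ec2"))
```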
Option A, AWS Management Console is a web-based graphical user interface (GUI) that allows users to access and manage AWS services
using a web browser.
Option B, AWS Cloud9 is an integrated development environment (IDE) that provides a cloud-based environment for writing, running, and
debugging code.
Option C, AWS CodePipeline is a continuous integration and delivery service that allows users to automate the building, testing, and
deployment of applications to AWS services.
upvoted 4 times
A company is launching an ecommerce application that must always be available. The application will run on Amazon EC2 instances continuously
for the next
12 months.
What is the MOST cost-effective instance purchasing option that meets these requirements?
A. Spot Instances
B. Savings Plans
C. Dedicated Hosts
D. On-Demand Instances
Correct Answer: B
Which AWS service or feature can a company use to determine which business unit is using specific AWS resources?
B. Key pairs
C. Amazon Inspector
Correct Answer: A
When using cost allocation tags, companies can track and analyze their AWS costs based on the assigned tags. This allows them to
determine the cost distribution among different business units or any other relevant categorization. Cost allocation tags provide granular
visibility into resource usage and expenditure, enabling accurate cost attribution and analysis.
upvoted 4 times
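As a sketch of how such tagging might be applied with the AWS SDK for Python (boto3 is assumed; the "BusinessUnit" tag key is an example convention, and the key must also be activated as a cost allocation tag in the Billing console before it appears in cost reports):

```python
# Sketch: tagging EC2 resources so cost allocation reports can attribute spend.
# Any client object exposing create_tags works, which keeps this testable.

def tag_for_business_unit(ec2, resource_ids, unit):
    """Apply a BusinessUnit tag to the given EC2 resource IDs."""
    ec2.create_tags(
        Resources=resource_ids,
        Tags=[{"Key": "BusinessUnit", "Value": unit}],  # example tag convention
    )
    return len(resource_ids)

# Usage (needs AWS credentials):
#   import boto3
#   tag_for_business_unit(boto3.client("ec2"), ["i-0123example"], "marketing")
```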
A company wants to migrate its workloads to AWS, but it lacks expertise in AWS Cloud computing.
Which AWS service or feature will help the company with its migration?
C. AWS Artifacts
Correct Answer: D
N9 8 months ago
APN Consulting partners
An APN Consulting Partner helps an AWS customer implement and manage an AWS cloud deployment. These types of partners
include system integrators, managed services providers, and other consultancies and agencies. An APN Technology Partner
provides software tools and services that are hosted on or integrated with AWS.
upvoted 10 times
https://d1.awsstatic.com/partner-network/APN_Consulting-Benefits_Brochure-Digital.pdf
upvoted 34 times
The AWS Partner Network (APN) is a global community of partners that leverages programs, expertise, and resources to build, market,
and sell customer offerings.
upvoted 1 times
AWS Managed Services is the answer, and it helps companies with the migration.
https://aws.amazon.com/managed-services/
upvoted 1 times
By engaging an AWS Consulting Partner, the company can benefit from their knowledge and experience to plan and execute a successful
migration strategy. Consulting Partners can help assess the existing infrastructure, design an architecture optimized for AWS, perform the
migration, and provide ongoing support and management.
AWS Consulting Partners are part of the AWS Partner Network (APN), and they undergo certification and validation processes to
demonstrate their expertise and capabilities in AWS services and solutions. The APN includes a range of consulting partners, system
integrators, and managed service providers with various specialties and industry focus.
upvoted 1 times
Reference:
https://aws.amazon.com/managed-services/
upvoted 3 times
Question #18 Topic 1
Which AWS service or tool should a company use to centrally request and track service limit increases?
A. AWS Config
B. Service Quotas
D. AWS Budgets
Correct Answer: B
By using Service Quotas, a company can centrally manage its service limits and request increases when needed. It provides visibility into
the current limits, usage, and available quota for each service. The service includes an interface to submit requests for limit increases,
track the status of the requests, and receive notifications on updates.
upvoted 4 times
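A hedged sketch of submitting such a request through the Service Quotas API (boto3 is assumed, and the quota code below is a placeholder; real codes come from list_service_quotas):

```python
# Sketch: requesting a service quota increase via Service Quotas.
# Any client exposing request_service_quota_increase works, keeping this testable.

def request_increase(sq, service_code, quota_code, desired):
    """Submit a quota increase request and return its request ID."""
    resp = sq.request_service_quota_increase(
        ServiceCode=service_code,          # e.g. "ec2"
        QuotaCode=quota_code,              # placeholder, e.g. "L-EXAMPLE1"
        DesiredValue=desired,
    )
    return resp["RequestedQuota"]["Id"]

# Usage (needs AWS credentials):
#   import boto3
#   request_increase(boto3.client("service-quotas"), "ec2", "L-EXAMPLE1", 64.0)
```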
N9 8 months ago
Selected Answer: B
https://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html
upvoted 2 times
Correct Answer: B
Through AWS Artifact, customers can access AWS's ISO certifications, such as ISO 27001, which specifies requirements for establishing,
implementing, maintaining, and continuously improving an information security management system within the context of AWS services.
upvoted 2 times
Correct Answer: B
https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/control-access-billing.html#ControllingAccessWebsite-Activate
upvoted 1 times
"The root user is the superuser within your AWS account. AMS monitors root usage. We recommend that you use root only for the few
tasks that require it, for example: changing your account settings, activating AWS Identity and Access Management (IAM) access to billing
and cost management, changing your root password, and enabling multi-factor authentication (MFA)."
upvoted 3 times
Correct Answer: B
AWS Data Pipeline helps you easily create complex data processing workloads that are fault tolerant, repeatable, and highly available
ref: https://aws.amazon.com/datapipeline/
upvoted 2 times
AWS Lambda is a serverless compute service that allows you to run code without provisioning or managing servers. It can be triggered by
events, such as messages in an Amazon SQS queue, and execute functions in response. Lambda scales automatically based on the
incoming request rate, ensuring efficient processing of multiple requests.
By combining Amazon SQS and AWS Lambda, the company can achieve asynchronous message processing, reliable message delivery, and
automatic scaling to handle the high volume of requests from different users efficiently.
upvoted 3 times
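The SQS-to-Lambda pattern described above can be sketched as a handler that walks the standard SQS event batch (the "order_id" field inside each message body is an illustrative assumption):

```python
# Sketch: a Lambda handler for SQS-triggered invocations. Lambda passes queued
# messages in the event's "Records" list; each record's "body" is the message.
import json

def lambda_handler(event, context):
    """Process each queued message; Lambda deletes them from SQS on success."""
    processed = []
    for record in event.get("Records", []):
        message = json.loads(record["body"])
        processed.append(message.get("order_id"))   # "order_id" is illustrative
    return {"processed": processed}
```

Lambda polls the queue and scales the number of concurrent invocations with the message volume, which is what absorbs the request spikes.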
C. A VPC must span at least two edge locations in each AWS Region.
Correct Answer: D
When creating a VPC, users choose the AWS Region where it will reside. Within that Region, a VPC can span multiple Availability Zones.
Availability Zones are physically separate locations within a Region, each made up of one or more discrete data centers with redundant
power, networking, and connectivity. By spanning multiple Availability Zones, a VPC can provide high availability and fault tolerance
for the resources deployed within it.
upvoted 1 times
Which of the following are components of an AWS Site-to-Site VPN connection? (Choose two.)
C. NAT gateway
D. Customer gateway
E. Internet gateway
Correct Answer: BD
A NAT gateway lets instances in a private subnet reach the internet; it is not part of a VPN connection, so not C.
D. Customer gateway: A customer gateway is the customer's side of the VPN connection. It represents the physical device or software
application located in the customer's on-premises network.
upvoted 3 times
An AWS Site-to-Site VPN connection is a way to securely connect your on-premises network to your VPC and leverage the resources in your
VPC. The components of an AWS Site-to-Site VPN connection are:
B. Virtual private gateway: A virtual private gateway is the Amazon VPC side of a VPN connection. It acts as the termination point for VPN
connections.
D. Customer gateway: A customer gateway is the on-premises side of a VPN connection. It is a physical or software appliance that is
connected to your on-premises network and is responsible for establishing the VPN connection to your VPC.
A. AWS Storage Gateway, C. NAT gateway, and E. Internet gateway are not components of an AWS Site-to-Site VPN connection. They are
different AWS services that provide different functionality.
upvoted 5 times
D. Customer gateway: The customer gateway is a physical or software appliance that is owned by the customer and connected to the
customer's on-premises network. It serves as the entry point for the VPN connection and is responsible for terminating the VPN
connection and forwarding traffic between the on-premises network and the AWS Virtual Private Cloud (VPC).
B. Virtual private gateway: The virtual private gateway is a VPC component that serves as the entry point for VPN connections to the VPC.
It is responsible for routing traffic between the VPC and the customer gateway.
upvoted 5 times
A company needs to establish a connection between two VPCs. The VPCs are located in two different AWS Regions. The company wants to use
the existing infrastructure of the VPCs for this connection.
Which AWS service or feature can be used to establish this connection?
B. VPC peering
D. VPC endpoints
Correct Answer: B
Reference:
https://docs.aws.amazon.com/vpc/latest/peering/what-is-vpc-peering.html
You can create a VPC peering connection between your own VPCs, or with a VPC in another AWS account. The VPCs can be in different
Regions (also known as an inter-Region VPC peering connection).
upvoted 1 times
B. VPC peering: connects two VPCs directly, including across Regions, so B is correct.
C. AWS Direct Connect: a dedicated network connection between an on-premises site and AWS, not between two VPCs, so not C.
D. VPC endpoint: a private connection between a VPC and a supported AWS service, so not D.
It's simple
Documentation: https://docs.aws.amazon.com/devicefarm/latest/developerguide/amazon-vpc-cross-region.html
upvoted 1 times
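A sketch of requesting such an inter-Region peering connection through the EC2 API (boto3 is assumed; the VPC IDs and Region are placeholders, and the owner of the peer VPC must still accept the request):

```python
# Sketch: requesting an inter-Region VPC peering connection.
# Any client exposing create_vpc_peering_connection works, keeping this testable.

def peer_vpcs(ec2, vpc_id, peer_vpc_id, peer_region):
    """Create a peering request from vpc_id to a VPC in another Region."""
    resp = ec2.create_vpc_peering_connection(
        VpcId=vpc_id,                      # e.g. "vpc-0123example" (placeholder)
        PeerVpcId=peer_vpc_id,
        PeerRegion=peer_region,            # e.g. "eu-west-1"
    )
    return resp["VpcPeeringConnection"]["VpcPeeringConnectionId"]

# After acceptance, each side adds routes to the peering connection so traffic
# flows over the existing VPC infrastructure with no gateway or VPN hardware.
```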
Question #25 Topic 1
According to the AWS shared responsibility model, what responsibility does a customer have when using Amazon RDS to host a database?
Correct Answer: A
When using Amazon RDS, AWS manages the underlying infrastructure, including hardware, networking, and database software patching
for the managed database service. However, the customer is responsible for implementing encryption-at-rest strategies to protect the
data stored within the RDS instance.
Managing connections to the database is typically handled by the application, and is the customer's responsibility outside of the
database service itself.
upvoted 1 times
Amazon RDS encrypts your databases using keys you manage with the AWS Key Management Service (KMS). On a database instance
running with Amazon RDS encryption, data stored at rest in the underlying storage is encrypted, as are its automated backups, read
replicas, and snapshots. Amazon RDS encryption uses the industry standard AES-256 encryption algorithm to encrypt your data on the
server that hosts your Amazon RDS instance.
Designing an encryption strategy means building it from scratch, including choosing the best-fit encryption algorithm; selecting an
existing option is different from designing a new one.
Reference:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.Encryption.html
upvoted 1 times
While Amazon RDS manages the underlying infrastructure and operational tasks such as hardware provisioning, database software
installation, and backups, the customer is responsible for designing and implementing encryption-at-rest strategies to protect the data
stored in the database. This includes choosing and configuring the appropriate encryption options provided by Amazon RDS, such as
encrypting the database storage using AWS Key Management Service (KMS) or using Transparent Data Encryption (TDE) for specific
database engines.
upvoted 2 times
Reference:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.Encryption.html#:~:text=To%20encrypt%20a%20new%20DB%20instance%2C%20choose%20Enable%20encryption%20on%20the%20Amazon%20RDS%20console.%20For%20information%20on%20creating%20a%20DB%20instance%2C%20see%20Creating%20an%20Amazon%20RDS%20DB%20instance.
upvoted 2 times
What are some advantages of using Amazon EC2 instances to host applications in the AWS Cloud instead of on premises? (Choose two.)
B. EC2 integrates with Amazon VPC, AWS CloudTrail, and AWS Identity and Access Management (IAM).
Correct Answer: BD
I do think that native integrations between EC2 and the various AWS services like cloudtrail/VPC/IAM is an advantage over on-prem
solutions which tend to be more piecemeal. While I agree that VPC and IAM there may be equivalent to on-prem solutions, you
cannot track API calls on your on-prem like you would via CloudTrail.
As for Option E of automatic cost optimization for storage cost. I do not believe that instance store scales up (vertical scaling)
automatically. ASGs only spin up more instances of the same type, and EBS does not scale on its own; you pay for what is provisioned
whether you use it or not.
upvoted 15 times
D. EC2 has a flexible, pay-as-you-go pricing model: Amazon EC2 offers a flexible pricing model based on usage. Users can choose from a
variety of instance types and sizes, paying only for the resources consumed on an hourly or per-second basis. This pay-as-you-go model
eliminates the need for upfront infrastructure investment and allows for cost optimization by scaling resources up or down based on
application demands.
upvoted 2 times
Tachy0n 4 weeks ago
Selected Answer: BD
EC2 does not have automatic storage cost optimization. It is up to the user to manage and optimize storage costs for EC2 instances.
upvoted 1 times
B. EC2 integrates with Amazon VPC, AWS CloudTrail, and AWS Identity and Access Management (IAM), which makes it easier to configure
and manage resources in the cloud.
D. EC2 has a flexible, pay-as-you-go pricing model that allows customers to pay only for what they use, which can result in cost savings
compared to on-premises infrastructure.
Option A is incorrect because while EC2 provides options for automating operating system patch management, it is not included by
default. Option C is incorrect because EC2 does not have a 100% service level agreement (SLA); the SLA varies depending on the type of
EC2 instance used. Option E is incorrect because EC2 does not have automatic storage cost optimization; customers need to manage and
optimize their own storage usage.
upvoted 2 times
Question #27 Topic 1
A user needs to determine whether an Amazon EC2 instance's security groups were modified in the last month.
How can the user see if a change was made?
B. Use AWS Identity and Access Management (IAM) to see which user or role changed the security group.
Correct Answer: C
CloudTrail records information such as the identity of the user or role making the changes, the time of the change, and the details of the
modification. This allows users to gain visibility into the actions performed on their resources and supports security analysis, resource
change tracking, and compliance auditing.
upvoted 2 times
AWS CloudTrail provides a history of events for an AWS account, including API calls made to EC2 and changes made to security groups. By
using CloudTrail, the user can determine whether a change was made to the security groups of an EC2 instance in the last month.
CloudTrail logs can be searched and filtered to identify security group-related events, including changes to the rules, additions or removal
of security groups, and modifications to existing security groups.
Option A is not the recommended way to see if the security group was changed, as EC2 provides limited access to its own logs and events.
Option B is not directly relevant, as IAM is not used to track changes to security groups.
Option D is also not the recommended way to see if the security group was changed, as CloudWatch is primarily used for monitoring and
alerting on performance metrics, rather than tracking configuration changes.
upvoted 4 times
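A hedged sketch of that CloudTrail search with the AWS SDK for Python (boto3, the helper name, and the security group ID are assumptions; the client is injected so the logic is testable without credentials):

```python
# Sketch: querying CloudTrail for events that touched a security group.
# Any client object exposing lookup_events works, which keeps this testable.
from datetime import datetime, timedelta

def security_group_changes(cloudtrail, group_id, days=30):
    """Return (event name, event time) pairs for events on `group_id`."""
    end = datetime.utcnow()
    start = end - timedelta(days=days)
    resp = cloudtrail.lookup_events(
        LookupAttributes=[{"AttributeKey": "ResourceName",
                           "AttributeValue": group_id}],  # e.g. "sg-0123example"
        StartTime=start, EndTime=end,
    )
    return [(e["EventName"], e["EventTime"]) for e in resp["Events"]]

# Usage (needs AWS credentials):
#   import boto3
#   changes = security_group_changes(boto3.client("cloudtrail"), "sg-0123example")
```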
Which AWS service will help protect applications running on AWS from DDoS attacks?
A. Amazon GuardDuty
B. AWS WAF
C. AWS Shield
D. Amazon Inspector
Correct Answer: C
Scalability: AWS Shield is designed to scale and handle high-volume DDoS attacks, leveraging AWS's global network infrastructure.
upvoted 1 times
Which AWS service or feature acts as a firewall for Amazon EC2 instances?
A. Network ACL
C. Amazon VPC
D. Security group
Correct Answer: D
How does the AWS Cloud pricing model differ from the traditional on-premises storage pricing model?
Correct Answer: B
AWS replaces upfront capital expenditure (CapEx) with pay-as-you-go pricing. Both on-premises and AWS involve operational expenditure
(OpEx), but with AWS that OpEx is covered by the usage fees you pay.
upvoted 1 times
A company has a single Amazon EC2 instance. The company wants to adopt a highly available architecture.
What can the company do to meet this requirement?
Correct Answer: B
C - A dedicated instance will not solve HA but might help to address compliance issues.
So B.
upvoted 3 times
A company's on-premises application deployment cycle was 3-4 weeks. After migrating to the AWS Cloud, the company can deploy the application
in 2-3 days.
Which benefit has this company experienced by moving to the AWS Cloud?
A. Elasticity
B. Flexibility
C. Agility
D. Resilience
Correct Answer: C
Which of the following are included in AWS Enterprise Support? (Choose two.)
Correct Answer: AD
You will also get access to a Technical Account Manager (TAM) who will provide consultative architectural and operational guidance
delivered in the context of your apps and use-cases to help you achieve the greatest value from AWS.
upvoted 2 times
D. Support of third-party software integration to AWS: AWS Enterprise Support includes support for integrating third-party software with
AWS services. This support helps customers with troubleshooting, guidance, and best practices for integrating and optimizing their
applications and software on the AWS platform.
upvoted 2 times
LadyRose 1 month, 1 week ago
Selected Answer: AD
E is wrong because response time is 15 mins
upvoted 2 times
Explanation:
AWS Enterprise Support includes a designated Technical Account Manager (TAM) and a 15-minute response time for business-critical issues. It also includes support of third-party software integration to AWS. AWS Professional Services and partner-led support are not included in AWS Enterprise Support.
upvoted 1 times
A global media company uses AWS Organizations to manage multiple AWS accounts.
Which AWS service or feature can the company use to limit the access to AWS services for member accounts?
Correct Answer: C
AWS Organizations helps to manage multiple AWS accounts in a centralized manner. SCPs are a feature of AWS Organizations that
allow an organization to set rules that govern the use of AWS services across all accounts in the organization. SCPs can be used to
restrict the use of specific AWS services or to impose additional conditions or requirements on the use of those services. SCPs are
applied at the organizational unit (OU) level, so organizations can create different policies for different groups of accounts within their
AWS Organization.
AWS Identity and Access Management (IAM) is a service that enables you to manage access to AWS services and resources securely.
IAM is used to create and manage users, groups, and permissions. It can be used in conjunction with SCPs to further restrict access to
AWS services
upvoted 5 times
Answer is B
upvoted 6 times
By defining SCPs, the company can limit access to specific AWS services for member accounts. SCPs can be used to allow or deny
permissions for services, actions, or resources at the organizational level. This provides a centralized way to enforce security and
compliance policies across the entire organization.
upvoted 1 times
Create a Service Control Policy: The company can define a custom SCP using the AWS Identity and Access Management (IAM) policy
language. The policy can specify the services or actions that are allowed or denied for member accounts.
Attach the SCP to OUs or accounts: The created SCP can be attached to specific OUs or individual accounts within the AWS Organizations
hierarchy. When an SCP is attached to an OU, it automatically applies to all accounts within that OU, including any existing or future
accounts. Alternatively, SCPs can be attached directly to individual accounts.
Control access permissions: The SCP defines the permissions for the member accounts. It can limit access to specific AWS services or
actions by using allow or deny statements. By default, new member accounts inherit the permissions defined by the organization's root
SCP, and additional SCPs can be layered to further refine access control.
upvoted 1 times
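The policy language described above can be sketched as a minimal deny-based SCP. The JSON below is illustrative only: the two denied service names are hypothetical examples chosen for the sketch, not a recommendation.

```python
import json

# A minimal, illustrative service control policy (SCP). It denies all
# actions for two example services organization-wide; everything not
# explicitly denied remains governed by the accounts' IAM policies.
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyUnapprovedServices",
            "Effect": "Deny",
            "Action": ["redshift:*", "sagemaker:*"],  # hypothetical choices
            "Resource": "*",
        }
    ],
}

print(json.dumps(scp, indent=2))
```

Attached at an OU, a policy shaped like this would apply to every member account in that OU; remember that SCPs only set the maximum available permissions and never grant access by themselves.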
The global media company can use Service Control Policies (SCPs) to limit access to AWS services for member accounts within their AWS
Organization. SCPs allow the company to set permission guardrails at the organization level to control which AWS services and features
can be accessed by member accounts. SCPs provide a way to centrally manage permissions and restrict the maximum available
permissions for the member accounts within the organization.
AWS Identity and Access Management (IAM) is used to manage user access to AWS resources and is typically used at the account level.
Organizational Units (OUs) are used to group and organize member accounts within an organization, and Access Control Lists (ACLs) are
used to control access to network resources. However, neither IAM, OUs, nor ACLs provide the ability to limit access to AWS services at the
organization level like SCPs.
upvoted 1 times
Service control policies (SCPs) are a type of organization policy that allow an AWS account administrator to set permissions that specify
which AWS services and features can be used by member accounts within an organization. SCPs can be used to restrict access to specific
services or features at the organizational unit (OU) or account level. By using SCPs, the global media company can restrict the usage of
AWS services and features that are not required for its member accounts.
upvoted 1 times
Question #35 Topic 1
A company wants to limit its employees' AWS access to a portfolio of predefined AWS resources.
Which AWS solution should the company use to meet this requirement?
A. AWS Config
D. AWS AppSync
Correct Answer: C
By using AWS Service Catalog, the company can define a portfolio of approved AWS resources, such as EC2 instances, RDS databases, or
S3 buckets, along with their specific configurations and policies. Employees can then discover and launch these predefined resources from
the Service Catalog, ensuring that they are accessing only the authorized resources.
upvoted 1 times
How it works
AWS Service Catalog lets you centrally manage deployed IT services, applications, resources, and metadata to achieve consistent
governance of your infrastructure as code (IaC) templates. With AWS Service Catalog, you can meet your compliance requirements while
making sure your customers can quickly deploy the approved IT services they need.
upvoted 4 times
An online company was running a workload on premises and was struggling to launch new products and features. After migrating the workload to
AWS, the company can quickly launch products and features and can scale its infrastructure as required.
Which AWS Cloud value proposition does this scenario describe?
A. Business agility
B. High availability
C. Security
D. Centralized auditing
Correct Answer: A
By migrating the workload to AWS, the company is able to quickly launch new products and features, indicating that they are able to
innovate and respond to changing market conditions faster than before. Additionally, the ability to scale infrastructure as required allows
the company to adjust to changing demand for their products or services. This agility is a key benefit of cloud computing and one of the
primary reasons that many companies choose to migrate to AWS
upvoted 1 times
Which of the following are advantages of the AWS Cloud? (Choose two.)
Correct Answer: BC
https://docs.aws.amazon.com/whitepapers/latest/aws-overview/six-advantages-of-cloud-computing.html
upvoted 6 times
C. High economies of scale: AWS operates at a large scale, serving millions of customers globally. This scale allows AWS to achieve cost
efficiencies and pass on the benefits to customers. By leveraging AWS services, users can access enterprise-grade infrastructure and
services without the need for significant upfront investment in hardware or infrastructure.
upvoted 1 times
B-
C-
E - with AWS's pay-as-you-go model, no capital (fixed) expenses. Only operational expenses
N9 8 months ago
A is not true for guest OS on EC2
upvoted 1 times
AWS has the ability to achieve lower pay-as-you-go pricing by aggregating usage across hundreds of thousands of users.
This describes which advantage of the AWS Cloud?
Correct Answer: C
This advantage allows AWS to provide cost-effective services and resources to users, enabling them to benefit from enterprise-grade
infrastructure without the need for significant upfront investment. Customers can access a wide range of services and pay only for the
resources they consume, resulting in cost savings and increased value.
upvoted 1 times
C-
C is the answer
upvoted 2 times
A company has a database server that is always running. The company hosts the server on Amazon EC2 instances. The instance sizes are
suitable for the workload. The workload will run for 1 year.
Which EC2 instance purchasing option will meet these requirements MOST cost-effectively?
B. On-Demand Instances
C. Spot Instances
Correct Answer: A
Since the workload is expected to run continuously for a year, the company can confidently make the upfront payment and reserve the
instances for that duration. This provides cost predictability and significant savings compared to using On-Demand Instances throughout
the year.
upvoted 1 times
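The savings argument above can be made concrete with back-of-the-envelope arithmetic. The hourly rates below are hypothetical, chosen only to illustrate the comparison; real prices vary by instance type, Region, and payment option.

```python
# Hypothetical rates for a workload running continuously for one year.
hours_per_year = 24 * 365            # 8,760 hours of always-on usage
on_demand_rate = 0.10                # $/hour, assumed
reserved_effective_rate = 0.06       # $/hour, assuming a ~40% RI discount

on_demand_cost = hours_per_year * on_demand_rate
reserved_cost = hours_per_year * reserved_effective_rate
savings = on_demand_cost - reserved_cost

print(f"On-Demand: ${on_demand_cost:,.2f}")
print(f"Reserved:  ${reserved_cost:,.2f}")
print(f"Savings:   ${savings:,.2f}")
```

The key point the question tests: for a steady, always-on, one-year workload, the predictable usage makes the Reserved Instance commitment cheaper than On-Demand, and Spot is unsuitable because the instance must never be interrupted.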
D - no. Convertible Reserved Instances (CRIs) allow customers to change the instance type, Availability Zone, and so on, but they carry a lower discount rate than Standard Reserved Instances (SRIs).
A is the answer
upvoted 2 times
A. Amazon Aurora
B. Amazon RDS
C. Amazon Redshift
E. Amazon DynamoDB
Correct Answer: BE
E. Amazon DynamoDB: Amazon DynamoDB is a fully managed NoSQL database service that offers fast and predictable performance at
any scale. It provides low-latency access to data and automatically scales to handle millions of requests per second. DynamoDB is suitable
for high-performance, low-latency applications and can handle demanding workloads with ease.
upvoted 1 times
STOPITALREADY 4 weeks ago
Selected Answer: DE
D & E are the correct answers
upvoted 2 times
Which tasks are the responsibility of AWS, according to the AWS shared responsibility model? (Choose two.)
Correct Answer: BD
D. Maintain the physical security of edge locations: AWS is responsible for the physical security of its global infrastructure, which includes
the security and protection of edge locations.
upvoted 1 times
B-
C - no. customers
D-
E - no. customers
B&D
upvoted 1 times
Which of the following are features of network ACLs as they are used in the AWS Cloud? (Choose two.)
D. They process rules in order, starting with the lowest numbered rule, when deciding whether to allow traffic.
Correct Answer: AD
Network ACLs are stateless, which means that responses to allowed inbound traffic are subject to the rules for outbound traffic (and vice
versa).
Rules are evaluated starting with the lowest numbered rule. As soon as a rule matches traffic, it's applied regardless of any higher-numbered rule that might contradict it.
upvoted 32 times
D. They process rules in order, starting with the lowest numbered rule when deciding whether to allow traffic: Network ACLs evaluate
rules sequentially and process them in order, starting with the lowest numbered rule. Once a matching rule is found, processing stops,
and the decision to allow or deny traffic is made based on that rule. No further rules are evaluated.
upvoted 3 times
A) They are stateless: Network ACLs (Access Control Lists) in AWS are stateless. They evaluate each network packet independently and
don't track the state of the traffic flow. Therefore, any changes to the traffic flow require explicit rules for each direction of traffic.
D) They process rules in order, starting with the lowest numbered rule, when deciding whether to allow traffic: AWS Network ACLs process
the rules in sequential order starting with the lowest numbered rule to the highest numbered rule to decide whether to allow traffic or not
upvoted 3 times
D-
A&D
upvoted 2 times
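The lowest-numbered-rule-first behavior described in the comments above can be sketched as a small evaluation loop. The rule numbers, ports, and actions here are illustrative, not real NACL entries.

```python
def evaluate_nacl(rules, packet):
    """Return the action of the first matching rule, lowest number first.

    Network ACLs evaluate rules in ascending rule-number order; the first
    rule that matches decides, and later rules are ignored. Traffic that
    matches no rule hits the implicit final '*' rule and is denied.
    """
    for number, action, matches in sorted(rules, key=lambda r: r[0]):
        if matches(packet):
            return action
    return "DENY"  # the implicit '*' rule

# Illustrative entries: (rule number, action, match predicate)
rules = [
    (200, "DENY",  lambda pkt: pkt["port"] == 443),
    (100, "ALLOW", lambda pkt: pkt["port"] == 443),
    (300, "ALLOW", lambda pkt: pkt["port"] == 22),
]

print(evaluate_nacl(rules, {"port": 443}))  # rule 100 matches first -> ALLOW
print(evaluate_nacl(rules, {"port": 80}))   # no rule matches -> DENY
```

Even though rule 200 would deny port 443, rule 100 is evaluated first and wins, which is exactly why rule numbering matters when writing NACL entries.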
A company has designed its AWS Cloud infrastructure to run its workloads effectively. The company also has protocols in place to continuously
improve supporting processes.
Which pillar of the AWS Well-Architected Framework does this scenario represent?
A. Security
B. Performance efficiency
C. Cost optimization
D. Operational excellence
Correct Answer: D
Which AWS service or feature can be used to create a private connection between an on-premises workload and an AWS Cloud workload?
A. Amazon Route 53
B. Amazon Macie
D. AWS PrivateLink
Correct Answer: D
Image: https://d1.awsstatic.com/products/privatelink/product-page-diagram_AWS-PrivateLink.fc899b8ebd46fa0b3537d9be5b2e82de328c63b8.png
upvoted 2 times
By using AWS Direct Connect, organizations can establish a private and dedicated network link with high bandwidth and low latency. This
allows for consistent network performance and enhanced security for transmitting data between on-premises workloads and AWS Cloud
workloads.
upvoted 1 times
Amazon Route 53 is a highly available and scalable cloud Domain Name System (DNS) web service.
Amazon Macie is a security service that uses machine learning to automatically discover, classify, and protect sensitive data.
AWS PrivateLink is a technology designed to access services over the AWS private network. It does not establish a direct connection between on-premises workloads and the AWS Cloud.
upvoted 1 times
AWS PrivateLink is a service that enables you to access AWS services in a highly available and scalable manner, securely through a private
connection. It allows you to create private endpoints within your Virtual Private Cloud (VPC) that are accessible from your on-premises
network, establishing a private and direct connection between your on-premises workload and an AWS Cloud workload.
upvoted 1 times
It's important to note that setting up and managing AWS Direct Connect may require coordination with a network service provider or a
Direct Connect partner. They can assist you in the physical connectivity, provisioning, and configuration aspects of the private connection.
upvoted 1 times
Pilar604 1 month, 2 weeks ago
Selected Answer: C
Direct connection
upvoted 1 times
A company needs to graphically visualize AWS billing and usage over time. The company also needs information about its AWS monthly costs.
Which AWS Billing and Cost Management tool provides this data in a graphical format?
A. AWS Bills
B. Cost Explorer
D. AWS Budgets
Correct Answer: B
With Cost Explorer, users can view their AWS billing and usage data in a graphical format, allowing them to gain insights into their
monthly costs and resource consumption. It offers pre-built reports, customizable filters, and visualizations that help track and analyze
spending across different dimensions like services, regions, tags, and more.
upvoted 1 times
https://www.cloudzero.com/blog/aws-budgets-vs-cost-explorer#:~:text=AWS%20cost%20tool.-,What%20Are%20The%20Differences%20Between%20AWS%20Budgets%20And%20Cost%20Explorer,costs%20and%20forecast%20future%20spending.
upvoted 4 times
A company wants to run production workloads on AWS. The company needs concierge service, a designated AWS technical account manager
(TAM), and technical support that is available 24 hours a day, 7 days a week.
Which AWS Support plan will meet these requirements?
Correct Answer: B
In addition to the TAM, AWS Enterprise Support offers concierge service, which provides assistance with billing and account-related
inquiries, service limit increases, and general operational support.
AWS Enterprise Support also includes 24/7 technical support, ensuring that assistance is available around the clock for critical issues or
emergencies.
upvoted 2 times
Which architecture design principle describes the need to isolate failures between dependent components in the AWS Cloud?
Correct Answer: D
By loosely coupling components, failures in one component are isolated and do not propagate to other components. This approach
enhances the overall fault tolerance and resilience of the system. If one component fails, it does not bring down or impact other
components, allowing the system to continue functioning or degrade gracefully.
Loose coupling is achieved by using well-defined interfaces, decoupling communication mechanisms, and employing distributed system
patterns such as message queues, event-driven architectures, and microservices.
upvoted 1 times
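The message-queue decoupling mentioned above can be sketched in a few lines: the producer only talks to the queue, so a slow or failed consumer cannot propagate a failure back to it. The in-process queue here stands in for a managed service such as Amazon SQS.

```python
import queue

# The producer only enqueues; it never calls the consumer directly, so a
# consumer failure is isolated and does not propagate to the producer.
orders = queue.Queue()

def producer(order_id):
    orders.put(order_id)  # fire-and-forget: no direct dependency

def consumer():
    processed = []
    while not orders.empty():
        processed.append(orders.get())
    return processed

for i in range(3):
    producer(i)

print(consumer())  # -> [0, 1, 2]
```

If the consumer goes down, messages simply wait in the queue and the producer keeps working, which is the failure isolation this design principle describes.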
D is the answer
upvoted 3 times
jg_85 5 months, 3 weeks ago
Selected Answer: D
D of course
upvoted 1 times
B. Amazon S3
C. Amazon RDS
E. Amazon DynamoDB
Correct Answer: CE
E. Amazon DynamoDB: Amazon DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance
with seamless scalability. It offers automatic scaling, built-in security, and automated backups, making it an ideal choice for applications
that require low-latency data access and flexible scaling.
upvoted 1 times
A company is using the AWS Free Tier for several AWS services for an application.
What will happen if the Free Tier usage period expires or if the application use exceeds the Free Tier usage limits?
A. The company will be charged the standard pay-as-you-go service rates for the usage that exceeds the Free Tier usage.
B. AWS Support will contact the company to set up standard service charges.
C. The company will be charged for the services it consumed during the Free Tier period, plus additional charges for service consumption
after the Free Tier period.
D. The company's AWS account will be frozen and can be restarted after a payment plan is established.
Correct Answer: A
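The pay-as-you-go behavior in option A can be sketched as simple overage arithmetic. The 5 GB limit and $0.023/GB rate below are hypothetical placeholders for illustration, not authoritative Free Tier figures.

```python
def monthly_charge(usage, free_limit, rate_per_unit):
    """Charge only for usage above the Free Tier limit (pay-as-you-go)."""
    billable = max(0, usage - free_limit)
    return billable * rate_per_unit

# Hypothetical numbers: 5 GB free storage, $0.023/GB beyond it.
print(monthly_charge(usage=3, free_limit=5, rate_per_unit=0.023))   # -> 0.0
print(monthly_charge(usage=20, free_limit=5, rate_per_unit=0.023))  # charged only for the 15 GB overage
```

Usage within the limit costs nothing; anything beyond it is billed at the standard rate, with no account freeze and no retroactive charge for the free period.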
A company recently deployed an Amazon RDS instance in its VPC. The company needs to implement a stateful firewall to limit traffic to the private
corporate network.
Which AWS service or feature should the company use to limit network traffic directly to its RDS instance?
A. Network ACLs
B. Security groups
C. AWS WAF
D. Amazon GuardDuty
Correct Answer: B
To limit network traffic to the Amazon RDS instance, the company can configure the associated security group to only allow inbound
connections from the private corporate network. By specifying the appropriate rules, the company can restrict access to the RDS instance
to only the necessary IP addresses or IP ranges.
upvoted 1 times
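The inbound restriction described above can be sketched as the structure a security group ingress rule takes. The CIDR block and port below are assumptions for illustration: a hypothetical corporate network range and the default MySQL port.

```python
# Illustrative ingress rule for an RDS instance's security group: allow
# the database port only from a (hypothetical) corporate CIDR. Because
# security groups are stateful, the response traffic is allowed
# automatically without a matching outbound rule.
ingress_rule = {
    "IpProtocol": "tcp",
    "FromPort": 3306,   # assumed MySQL engine; other engines use other ports
    "ToPort": 3306,
    "IpRanges": [
        {"CidrIp": "10.0.0.0/16", "Description": "corporate network (hypothetical)"}
    ],
}

print(ingress_rule)
```

Any connection attempt from outside that CIDR is dropped by default, since security groups implicitly deny all inbound traffic that no rule allows.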
On the other hand, Network ACLs (NACLs) are stateless and operate at the subnet level. AWS WAF is a web application firewall that helps
protect your web applications or APIs. Amazon GuardDuty is a threat detection service that continuously monitors for malicious activity
and unauthorized behavior.
upvoted 2 times
Which AWS service uses machine learning to help discover, monitor, and protect sensitive data that is stored in Amazon S3 buckets?
A. AWS Shield
B. Amazon Macie
D. Amazon Cognito
Correct Answer: B
Amazon Macie is a data security and data privacy service that uses machine learning (ML) and pattern matching to discover and protect
your sensitive data.
upvoted 11 times
Macie uses machine learning models to identify and classify data patterns, enabling the detection of sensitive data even if it is stored in an
unstructured format or embedded within larger files. It provides automated alerts and generates reports to assist with data privacy and
compliance efforts.
upvoted 1 times
B-
Answer: B
upvoted 4 times
Saif93 6 months, 2 weeks ago
Selected Answer: B
B is the answer.
upvoted 1 times
A company wants to improve the overall availability and performance of its applications that are hosted on AWS.
Which AWS service should the company use?
A. Amazon Connect
B. Amazon Lightsail
Correct Answer: C
By using AWS Global Accelerator, the company can benefit from improved availability and performance by leveraging the global AWS
infrastructure. It intelligently routes traffic across multiple regions and edge locations, allowing users to access applications with reduced
latency and improved response times.
upvoted 1 times
Answer: C
upvoted 4 times
DuboisNicolasDuclair 6 months, 1 week ago
Selected Answer: C
B is for deploying applications when you have no AWS expertise. The correct answer is C.
upvoted 2 times
AWS Global Accelerator is a networking service that improves the performance of your users’ traffic by up to 60% using Amazon Web
Services’ global network infrastructure. When the internet is congested, AWS Global Accelerator optimizes the path to your application to
keep packet loss, jitter, and latency consistently low.
upvoted 4 times
Which AWS service or feature identifies whether an Amazon S3 bucket or an IAM role has been shared with an external entity?
D. AWS Organizations
Correct Answer: C
With IAM Access Analyzer, users can quickly identify whether their S3 bucket or IAM role has been shared with external entities such as
other AWS accounts. It provides detailed findings that highlight any potential issues with access permissions and recommends actions to
remediate them.
upvoted 2 times
C-
answer: C
upvoted 2 times
A company does not want to rely on elaborate forecasting to determine its usage of compute resources. Instead, the company wants to pay only
for the resources that it uses. The company also needs the ability to increase or decrease its resource usage to meet business requirements.
Which pillar of the AWS Well-Architected Framework aligns with these requirements?
A. Operational excellence
B. Security
C. Reliability
D. Cost optimization
Correct Answer: D
In this scenario, the company's goal of paying only for the resources it uses aligns with cost optimization by avoiding unnecessary
expenses. By leveraging AWS services such as pay-as-you-go pricing models, the company can dynamically scale resources up or down
based on business requirements, optimizing costs by aligning them with actual usage.
Additionally, the company's desire to avoid elaborate forecasting aligns with cost optimization as it allows the company to be agile and
responsive to changing needs without the need for long-term commitments or complex capacity planning.
upvoted 2 times
The AWS Well-Architected Framework is a collection of best practices and guidelines designed to help customers build secure, high-performing, resilient, and efficient infrastructure for their applications. It consists of five pillars: operational excellence, security, reliability,
performance efficiency, and cost optimization.
The requirements in the question align with the cost optimization pillar of the AWS Well-Architected Framework. The cost optimization
pillar focuses on using AWS resources efficiently and cost-effectively by selecting the right resource types and sizes, leveraging elasticity to
scale resources up and down based on demand, and using automation to optimize resource utilization.
By paying only for the resources that it uses and being able to increase or decrease its resource usage to meet business requirements, the
company is effectively using the cost optimization pillar to optimize its AWS usage.
upvoted 3 times
Guru4Cloud 4 months ago
Selected Answer: D
The requirements mentioned align with the "Cost Optimization" pillar of the AWS Well-Architected Framework. This pillar focuses on
designing systems that deliver business value at the lowest possible price point, by optimizing costs in multiple dimensions such as
elasticity, expenditure awareness, and resource utilization. By paying only for the resources that it uses and having the ability to increase
or decrease resource usage, the company can achieve cost optimization.
upvoted 2 times
The company's objective is to pay only for the resources it uses, which is a cost optimization strategy. The ability to increase or decrease
its resource usage to meet business requirements is also an essential aspect of cost optimization. The cost optimization pillar is all about
finding ways to optimize costs without sacrificing performance, security, or reliability. By paying only for the resources that are used, the
company can save money and reduce waste.
upvoted 2 times
D-
Answer: D
upvoted 1 times
Cost optimization is one of the five pillars of the AWS Well-Architected Framework. It focuses on designing and operating workloads to
deliver business value at the lowest possible price. To achieve this, AWS offers a range of services that allow companies to pay only for the
resources they use, and the ability to increase or decrease resources as needed to meet business requirements.
In this case, the company does not want to rely on elaborate forecasting to determine its usage of compute resources and wants to pay
only for the resources it uses, which aligns with the principles of cost optimization. By leveraging AWS's on-demand and auto-scaling
capabilities, the company can easily adjust its resource usage to meet business requirements while keeping costs under control.
upvoted 2 times
A company wants to launch its workload on AWS and requires the system to automatically recover from failure.
Which pillar of the AWS Well-Architected Framework includes this requirement?
A. Cost optimization
B. Operational excellence
C. Performance efficiency
D. Reliability
Correct Answer: D
In this scenario, the company's requirement for automatic recovery from failure reflects the focus on building a reliable system that can
withstand disruptions and maintain availability. By designing the workload to be resilient and implementing fault-tolerant architectures,
the system can automatically recover and continue functioning even in the event of failures or disruptions.
upvoted 1 times
The reliability pillar includes the ability of a system to recover from infrastructure or service disruptions, dynamically acquire computing
resources to meet demand, and mitigate disruptions such as misconfigurations or transient network issues.
https://docs.aws.amazon.com/wellarchitected/latest/high-performance-computing-lens/reliability-pillar.html
upvoted 2 times
Design Principles
There are five design principles for reliability in the cloud: automatically recover from failure, test recovery procedures, scale horizontally to increase aggregate workload availability, stop guessing capacity, and manage change through automation.
A large enterprise with multiple VPCs in several AWS Regions around the world needs to connect and centrally manage network connectivity
between its VPCs.
Which AWS service or feature meets these requirements?
D. VPC endpoints
Correct Answer: B
With AWS Transit Gateway, the large enterprise can create a central hub, attach multiple VPCs to it, and peer transit gateways across different AWS Regions. This allows for centralized management and control of network traffic between VPCs, simplifying the network architecture and reducing administrative overhead.
upvoted 2 times
Which AWS service supports the creation of visual reports from AWS Cost and Usage Report data?
A. Amazon Athena
B. Amazon QuickSight
C. Amazon CloudWatch
D. AWS Organizations
Correct Answer: B
https://aws.amazon.com/premiumsupport/knowledge-center/quicksight-cost-usage-report/
upvoted 24 times
With Amazon QuickSight, users can connect to their AWS Cost and Usage Report data and create visual reports to analyze and track their
AWS costs. They can build charts, graphs, and other visualizations to understand cost trends, identify cost drivers, and compare spending
across different dimensions such as services, accounts, regions, and more.
upvoted 2 times
Amazon QuickSight is an AWS service that supports the creation of visual reports from AWS Cost and Usage Report data. It allows you to
easily analyze and visualize your cost and usage data using various charts, graphs, and dashboards. With QuickSight, you can gain insights
into your AWS spending patterns and optimize your costs effectively.
upvoted 1 times
Amazon QuickSight is a business intelligence service that allows users to create and publish interactive dashboards, reports, and
visualizations using AWS data sources. It supports a variety of data sources, including AWS services such as Amazon RDS, Amazon
Redshift, Amazon S3, and AWS Cost and Usage Report.
upvoted 2 times
Which AWS service should be used to monitor Amazon EC2 instances for CPU and network utilization?
A. Amazon Inspector
B. AWS CloudTrail
C. Amazon CloudWatch
D. AWS Config
Correct Answer: C
By setting up CloudWatch metrics, users can collect data on CPU usage and network traffic at regular intervals, allowing them to monitor
resource utilization and identify any performance bottlenecks or anomalies. CloudWatch provides real-time monitoring, customizable
dashboards, and the ability to set alarms and receive notifications based on predefined thresholds.
upvoted 1 times
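The threshold-and-alarm idea above can be sketched with a toy evaluation function. This is a deliberate simplification of the real CloudWatch alarm model (which involves periods, evaluation ranges, and missing-data handling); the 80% threshold and three-datapoint window are assumptions.

```python
def alarm_state(cpu_samples, threshold=80.0, datapoints_to_alarm=3):
    """Toy model of a CloudWatch alarm: ALARM when the most recent N
    consecutive datapoints all breach the threshold, otherwise OK."""
    recent = cpu_samples[-datapoints_to_alarm:]
    if len(recent) == datapoints_to_alarm and all(v > threshold for v in recent):
        return "ALARM"
    return "OK"

# CPU utilization samples collected at regular intervals (illustrative).
print(alarm_state([40, 55, 90, 95, 99]))  # last three breach 80% -> ALARM
print(alarm_state([40, 95, 60, 90, 99]))  # 60 breaks the streak -> OK
```

Requiring several consecutive breaching datapoints, as the real service can, avoids alarming on a single transient spike.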
A company is preparing to launch a new web store that is expected to receive high traffic for an upcoming event. The web store runs only on AWS,
and the company has an AWS Enterprise Support plan.
Which AWS resource will provide guidance about how the company should scale its architecture and operational support during the event?
Correct Answer: B
In this scenario, with the anticipated high traffic for the upcoming event, the TAM can provide guidance on scaling the architecture to
handle the increased load. They can offer recommendations on utilizing AWS services such as Auto Scaling, Elastic Load Balancing, and
caching mechanisms to ensure the web store can handle the expected traffic surge.
Additionally, the TAM can provide operational support, helping the company optimize its AWS resources, troubleshoot any issues, and
ensure the smooth operation of the web store during the event.
upvoted 1 times
The TAM is an AWS resource that comes with an AWS Enterprise Support plan. They act as a single point of contact within AWS and provide
personalized technical guidance, architectural advice, and best practices to help customers optimize their AWS usage. They can assist in
designing scalable and reliable architectures, identifying potential bottlenecks, and recommending optimizations to handle high traffic
events like the upcoming one for the web store.
Option B: The designated AWS technical account manager (TAM) is the correct answer.
upvoted 1 times
As part of an AWS Enterprise Support plan, customers are assigned a designated Technical Account Manager (TAM) who serves as a
trusted advisor and a single point of contact within AWS. The TAM works closely with the customer to understand their business needs,
provide architectural guidance, and offer proactive recommendations for optimizing their AWS infrastructure.
upvoted 2 times
In this scenario, the designated AWS technical account manager (TAM) will provide guidance about how the company should scale its
architecture and operational support during the event. A TAM is a resource available to customers with an AWS Enterprise Support plan.
They act as a trusted advisor and provide personalized support and guidance to help customers achieve their business goals and
effectively utilize AWS services.
upvoted 2 times
Reference:
https://aws.amazon.com/premiumsupport/programs/iem/
upvoted 1 times
The AWS Enterprise Support plan provides customers with access to an AWS technical account manager (TAM). TAMs are experienced
technical advisors who provide personalized guidance and support to help customers optimize their AWS infrastructure and applications.
They can assist with a range of activities, such as architecture design, cost optimization, performance tuning, and security planning.
In this scenario, the company is preparing to launch a new web store that is expected to receive high traffic for an upcoming event. The
company runs only on AWS, and they have an AWS Enterprise Support plan. In this case, the designated AWS technical account manager
(TAM) would be the best resource to provide guidance about how the company should scale its architecture and operational support
during the event. The TAM can help the company assess their current architecture, identify areas of improvement, and provide
recommendations on how to optimize the infrastructure for the expected traffic.
upvoted 4 times
Reference:
https://aws.amazon.com/premiumsupport/programs/iem/
upvoted 1 times
Question #60 Topic 1
A user wants to deploy a service to the AWS Cloud by using infrastructure-as-code (IaC) principles.
Which AWS service can be used to meet this requirement?
B. AWS CloudFormation
C. AWS CodeCommit
D. AWS Config
Correct Answer: B
"AWS CloudFormation is designed to allow resource lifecycles to be managed repeatably, predictably, and safely, while allowing for
automatic rollbacks, automated state management, and management of resources across accounts and regions."
https://aws.amazon.com/cloudformation/faqs/
upvoted 1 times
With AWS CloudFormation, users can define their desired infrastructure configuration in a template file written in YAML or JSON format.
This template describes the desired state of the AWS resources, including EC2 instances, load balancers, databases, and more. By
deploying the CloudFormation stack, the infrastructure is automatically provisioned based on the template, ensuring consistency and
reducing manual configuration.
CloudFormation templates can be version-controlled, shared, and reused, making it easier to manage and maintain infrastructure
configurations. They also support advanced features such as parameterization, conditional resource creation, and orchestration of multi-tier applications.
upvoted 1 times
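To make the template structure described above concrete, here is a minimal sketch that builds a CloudFormation-style template as a Python dict and serializes it to JSON. The logical ID "WebBucket" and the parameter name "EnvName" are invented for the example; real templates are usually written directly in YAML or JSON and deployed as a stack.

```python
import json

# Illustrative only: a minimal CloudFormation-shaped template as a dict.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Parameters": {
        # Parameterization lets one template serve many environments.
        "EnvName": {"Type": "String", "Default": "dev"},
    },
    "Resources": {
        # An S3 bucket is one of the simplest resources to declare;
        # CloudFormation provisions it when the stack is deployed.
        "WebBucket": {"Type": "AWS::S3::Bucket"},
    },
}

print(json.dumps(template, indent=2))
```

Because the template is plain data, it can be version-controlled and reviewed like any other code artifact, which is the core of the IaC principle the question tests.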
AWS CloudFormation is a service that allows you to provision and manage AWS resources using infrastructure-as-code (IaC) principles.
It enables you to define your infrastructure and application resources in a template file, which is written in either YAML or JSON format.
This template can be version-controlled, reviewed, and treated as a code artifact.
upvoted 3 times
A company that has multiple business units wants to centrally manage and govern its AWS Cloud environments. The company wants to automate
the creation of
AWS accounts, apply service control policies (SCPs), and simplify billing processes.
Which AWS service or tool should the company use to meet these requirements?
A. AWS Organizations
B. Cost Explorer
C. AWS Budgets
Correct Answer: A
"AWS Organizations is an account management service that enables you to consolidate multiple AWS accounts into an organization that
you create and centrally manage. AWS Organizations includes account management and consolidated billing capabilities that enable you
to better meet the budgetary, security, and compliance needs of your business. As an administrator of an organization, you can create
accounts in your organization and invite existing accounts to join the organization."
https://docs.aws.amazon.com/organizations/latest/userguide/orgs_introduction.html
upvoted 1 times
With AWS Organizations, the company can set up and manage multiple AWS accounts for its business units. It enables the automation of
account creation through the use of AWS Organizations API or AWS Service Catalog, making it easier to provision new accounts as needed.
Additionally, AWS Organizations allows the application of service control policies (SCPs) at the organization, organizational unit (OU), or
account level. SCPs help establish fine-grained permissions and control over the services and actions that can be performed within the
accounts, ensuring security and governance across the organization.
upvoted 3 times
Which IT controls do AWS and the customer share, according to the AWS shared responsibility model? (Choose two.)
B. Patch management
D. Zone security
Correct Answer: BC
Patch Management – AWS is responsible for patching and fixing flaws within the infrastructure, but customers are responsible for
patching their guest OS and applications.
Configuration Management – AWS maintains the configuration of its infrastructure devices, but a customer is responsible for configuring
their own guest operating systems, databases, and applications.
Awareness & Training - AWS trains AWS employees, but a customer must train their own employees.
upvoted 36 times
B. Patch management: AWS is responsible for patching and maintaining the underlying infrastructure and host operating systems of their
services. This ensures that the infrastructure is protected from known vulnerabilities. On the other hand, customers are responsible for
patching their own applications, operating systems, and virtual machines (EC2 instances) that they deploy on AWS.
upvoted 1 times
B. Patch management: AWS is responsible for patching and securing the underlying infrastructure and host operating system. However,
customers are responsible for managing the patches and updates for their own applications, virtual machines, or containers running on
the AWS infrastructure
upvoted 1 times
A. Physical and environmental controls: AWS is responsible for providing and maintaining the physical infrastructure that underlies the
cloud services, such as the data centers, networking, and power and cooling systems. However, customers are responsible for ensuring
the security and compliance of their own physical and environmental controls, such as securing their own facilities, managing access
controls, and monitoring for physical threats.
B. Patch management: AWS is responsible for patching and maintaining the underlying infrastructure of their services, but customers are
responsible for patching and maintaining the operating systems, applications, and other software that they deploy on top of AWS services.
upvoted 1 times
B - AWS is responsible for the infrastructure and the customer is responsible for patching OS/platforms/applications
B and E
upvoted 3 times
https://aws.amazon.com/compliance/shared-responsibility-model/
upvoted 1 times
Selected Answer: BD
Vote for B and D
upvoted 1 times
A company is launching an application in the AWS Cloud. The application will use Amazon S3 storage. A large team of researchers will have
shared access to the data. The company must be able to recover data that is accidentally overwritten or deleted.
Which S3 feature should the company turn on to meet this requirement?
B. S3 Versioning
C. S3 Lifecycle rules
Correct Answer: B
"Versioning in Amazon S3 is a means of keeping multiple variants of an object in the same bucket. You can use the S3 Versioning feature
to preserve, retrieve, and restore every version of every object stored in your buckets. With versioning you can recover more easily from
both unintended user actions and application failures."
https://docs.aws.amazon.com/AmazonS3/latest/userguide/Versioning.html
upvoted 1 times
By enabling S3 Versioning, the company ensures that every modification made to an object in the S3 bucket is preserved as a separate
version. This provides an added layer of protection against accidental deletions or overwrites, as previous versions can be restored or
accessed as necessary.
upvoted 1 times
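The recovery behavior the comments describe can be sketched as a toy model of S3 Versioning (this is plain Python, not the AWS API; the bucket class and simulated version IDs are invented for illustration):

```python
# Toy sketch of how S3 Versioning preserves every write.
class VersionedBucket:
    def __init__(self):
        self._versions = {}  # key -> list of object bodies, oldest first

    def put(self, key, body):
        # Every put appends a new version instead of overwriting.
        self._versions.setdefault(key, []).append(body)
        return len(self._versions[key]) - 1  # simulated version ID

    def get(self, key, version=None):
        history = self._versions[key]
        return history[-1] if version is None else history[version]

bucket = VersionedBucket()
v0 = bucket.put("report.csv", "original data")
bucket.put("report.csv", "accidental overwrite")

# The latest version is the overwrite, but the original is recoverable.
print(bucket.get("report.csv"))       # -> accidental overwrite
print(bucket.get("report.csv", v0))   # -> original data
```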
A manufacturing company has a critical application that runs at a remote site that has a slow internet connection. The company wants to migrate
the workload to
AWS. The application is sensitive to latency and interruptions in connectivity. The company wants a solution that can host this application with
minimum latency.
Which AWS service or feature should the company use to meet these requirements?
A. Availability Zones
C. AWS Wavelength
D. AWS Outposts
Correct Answer: B
AWS Local Zones are a new type of AWS infrastructure designed to run workloads that require single-digit millisecond latency, like video
rendering and graphics intensive, virtual desktop applications. Not every customer wants to operate their own on-premises data center,
while others may be interested in getting rid of their local data center entirely. Local Zones allow customers to gain all the benefits of
having compute and storage resources closer to end-users, without the need to own and operate their own data center infrastructure.
(D) AWS Outposts would be the best fit here, since the client only wants to migrate the workload to AWS, whereas (B) AWS Local Zones
suit customers who want to get rid of hosting their own on-premises data center entirely.
upvoted 38 times
AWS Wavelength is designed specifically to reduce the latency between devices and applications hosted on AWS by placing AWS
compute and storage services at the edge of the 5G network. This means that the application can be hosted closer to the user or
device, which reduces the time it takes for data to travel between the application and the user or device. By reducing latency, AWS
Wavelength provides a more responsive user experience for latency-sensitive applications.
In this scenario, the manufacturing company has a critical application that is sensitive to latency and interruptions in connectivity.
Hosting the application on AWS Wavelength would ensure that the application is hosted closer to the user or device, reducing the time
it takes for data to travel between the application and the user or device. This would help to minimize latency and interruptions in
connectivity, which is critical for the performance of the application.
upvoted 5 times
"Run applications that require single-digit millisecond latency or local data processing by bringing AWS infrastructure closer to your end
users and business centers."
https://aws.amazon.com/about-aws/global-infrastructure/localzones/
upvoted 1 times
A company wants to migrate its applications from its on-premises data center to a VPC in the AWS Cloud. These applications will need to access
on-premises resources.
Which actions will meet these requirements? (Choose two.)
A. Use AWS Service Catalog to identify a list of on-premises resources that can be migrated.
B. Create a VPN connection between an on-premises device and a virtual private gateway in the VPC.
C. Use an Amazon CloudFront distribution and configure it to accelerate content delivery close to the on-premises resources.
D. Set up an AWS Direct Connect connection between the on-premises data center and AWS.
E. Use Amazon CloudFront to restrict access to static web content provided through the on-premises web servers.
Correct Answer: BD
B. Create a VPN connection between an on-premises device and a virtual private gateway in the VPC.
This will allow the applications running in the VPC to securely access on-premises resources through the VPN connection.
D. Set up an AWS Direct Connect connection between the on-premises data center and AWS.
AWS Direct Connect provides a dedicated network connection between the on-premises data center and AWS, bypassing the public
internet. This allows for a more reliable and consistent connection between the VPC and the on-premises resources.
upvoted 4 times
D. Set up an AWS Direct Connect connection between the on-premises data center and AWS.
AWS Direct Connect provides a dedicated network connection between the on-premises data center and AWS, bypassing the public
internet. This enables a private and dedicated network connection, with higher bandwidth and lower latency compared to VPN
connections. It allows for reliable and consistent access to on-premises resources from within the VPC.
upvoted 2 times
These two options allow for the secure and reliable connection of an on-premises data center with the AWS Cloud. A VPN connection uses
the internet to establish a secure, private network connection, while AWS Direct Connect bypasses the public internet altogether and
provides a dedicated, private connection between the data center and AWS. Both can be used for hybrid cloud scenarios where
applications in the AWS cloud need to communicate with on-premises resources.
upvoted 1 times
A company wants to use the AWS Cloud to provide secure access to desktop applications that are running in a fully managed environment.
Which AWS service should the company use to meet this requirement?
A. Amazon S3
C. AWS AppSync
D. AWS Outposts
Correct Answer: B
"As an application streaming / SaaS conversion service, AppStream 2.0 lets you move your desktop applications to AWS without rewriting
them. It’s easy to install your applications on AppStream 2.0, set launch configurations, and make your applications available to users. "
https://aws.amazon.com/appstream2/faqs/?nc=sn&loc=7
upvoted 1 times
AppStream 2.0 provides a secure and controlled environment for running desktop applications by ensuring that the application code and
data never leave the AWS infrastructure. Users can access the applications through a web browser or client application on their device,
and all processing happens on the AWS servers, with only the user interface being streamed to the device.
upvoted 1 times
Amazon WorkSpaces is a fully managed desktop computing service in the cloud that allows users to access their desktop applications
from anywhere, using any device. It provides a secure and scalable solution for desktop virtualization, enabling organizations to easily
provision and manage desktops for their users, without having to invest in expensive hardware and infrastructure.
Amazon WorkSpaces provides a range of features for secure access, including multi-factor authentication, network isolation, encryption,
and integration with Active Directory. With Amazon WorkSpaces, the company can provide its users with a secure, high-performance
desktop experience, while also reducing the cost and complexity of managing desktop infrastructure.
upvoted 2 times
AWS Outposts is the service that meets the company's requirement of migrating the workload to AWS while minimizing latency and
maintaining the performance of the application. AWS Outposts is a fully managed service that allows you to run AWS infrastructure and
services on-premises, in your own data center, or at the edge of your network. It is designed to be used in environments where low-
latency, high-throughput, or high-bandwidth applications are critical, and where internet connectivity is limited or unreliable.
upvoted 2 times
Secure, reliable, and scalable access to applications and non-persistent desktops from any location
upvoted 3 times
A company wants to implement threat detection on its AWS infrastructure. However, the company does not want to deploy additional software.
Which AWS service should the company use to meet these requirements?
A. Amazon VPC
B. Amazon EC2
C. Amazon GuardDuty
Correct Answer: C
1. Continuously monitor your AWS accounts, instances, container workloads, users, and storage for potential threats.
2. Expose threats quickly using anomaly detection, machine learning, behavioral modeling, and threat intelligence feeds from AWS and
leading third-parties.
"Amazon GuardDuty is a threat detection service that continuously monitors your AWS accounts and workloads for malicious activity and
delivers detailed security findings for visibility and remediation."
https://aws.amazon.com/guardduty/
upvoted 1 times
GuardDuty operates at the account level, analyzing data from various AWS services to detect common attack patterns, including
reconnaissance activities, compromised instances, and data exfiltration attempts. It provides real-time alerts and findings, helping
organizations quickly identify and respond to potential threats.
upvoted 1 times
A. Amazon Aurora
C. Amazon Connect
D. AWS Outposts
Correct Answer: B
Reference:
https://aws.amazon.com/global-accelerator/
"AWS Global Accelerator and Amazon CloudFront are separate services that use the AWS global network and its edge locations around the
world. CloudFront improves performance for both cacheable content (such as images and videos) and dynamic content (such as API
acceleration and dynamic site delivery). Global Accelerator improves performance for a wide range of applications over TCP or UDP by
proxying packets at the edge to applications running in one or more AWS Regions. "
https://aws.amazon.com/global-accelerator/faqs/
upvoted 1 times
Edge locations are small, distributed data centers strategically located around the world. They are part of the AWS global network and are
used to cache and deliver content closer to end-users, reducing latency and improving the user experience. These edge locations are
deployed in major cities and regions worldwide.
upvoted 2 times
A: AWS Global Accelerator and Amazon CloudFront are separate services that use the AWS global network and its edge locations around
the world.
upvoted 4 times
Using Global Accelerator, your users' traffic is moved off the internet and onto Amazon’s private global network through 90+ global edge
locations, then directed to your application origins.
upvoted 2 times
https://aws.amazon.com/about-aws/global-infrastructure/
upvoted 1 times
A. AWS Fargate
D. Amazon EC2
Correct Answer: A
If you use ECS without Fargate, you still need to choose and provision the EC2 instances for your clusters, and manage their scaling.
In contrast, if you choose AWS Fargate to run your ECS tasks, AWS manages the underlying EC2 instances for you. You don't need to
provision, patch, monitor, or manage these servers, and you don't need to worry about scaling the underlying infrastructure to meet
your workloads.
upvoted 1 times
Yes, the question says "service," not "services," and the need here is to run an application in a Docker container (the container image
already exists).
Amazon Elastic Container Service --> Run highly secure, reliable, and scalable containers.
Launch containers on AWS at scale without worrying about the underlying infrastructure.
1. Containers
- ECR - Easily store, manage, and deploy container images
- EKS - The most trusted way to run Kubernetes
- Elastic Container Service - Highly secure, reliable, and scalable way to run containers
Answer is clearly A - Fargate. Not sure why there is even any confusion in the first place.
upvoted 1 times
https://aws.amazon.com/fargate/
Deploy and manage your applications, not infrastructure. Fargate removes the operational overhead of scaling, patching, securing, and
managing servers.
upvoted 18 times
When using AWS Fargate, you define your containerized application using services such as Amazon Elastic Container Service (ECS) or
Amazon Elastic Kubernetes Service (EKS). You provide the container image, CPU and memory requirements, networking, and other
configuration details. Fargate then handles the provisioning and management of the underlying infrastructure needed to run your
containers, such as the container hosts.
upvoted 2 times
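The configuration details the comment above mentions (image, CPU, memory, networking) can be sketched as the shape of a Fargate task definition. The family name, image, and sizes below are invented examples; a real definition is registered through ECS (for instance with boto3's `register_task_definition`):

```python
# Illustrative sketch of the settings a Fargate task definition carries.
task_definition = {
    "family": "web-app",                  # hypothetical task family name
    "requiresCompatibilities": ["FARGATE"],
    "networkMode": "awsvpc",              # required for Fargate tasks
    "cpu": "256",                         # 0.25 vCPU
    "memory": "512",                      # MiB
    "containerDefinitions": [
        {
            "name": "web",
            "image": "nginx:latest",
            "portMappings": [{"containerPort": 80, "protocol": "tcp"}],
        }
    ],
}

# Fargate only accepts certain CPU sizes; a quick local sanity check:
valid_cpu_values = {"256", "512", "1024", "2048", "4096"}
assert task_definition["cpu"] in valid_cpu_values
```

The point of the question is that everything below this declaration (hosts, patching, scaling the cluster) is AWS's problem when Fargate is the launch type.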
Selected Answer: A
With Fargate, you no longer have to provision, configure, or scale clusters of virtual machines to run containers. This removes the need to
choose server types, decide when to scale your clusters, or optimize cluster packing.
upvoted 2 times
ECS is the orchestration tool for Docker containers, but it needs to run on something. Either EC2 (managed by the customer) or Fargate
(managed by AWS). Since the question asks for a fully-managed solution, the answer is A.
upvoted 3 times
reference: "Amazon ECS eliminates the need for you to install and operate your own container orchestration software, manage and scale a
cluster of virtual machines (VMs), or schedule containers on those VMs."
https://docs.aws.amazon.com/whitepapers/latest/aws-overview/containers.html
upvoted 1 times
Which AWS service or feature checks access policies and offers actionable recommendations to help users set secure and functional policies?
D. Amazon GuardDuty
Correct Answer: B
AWS IAM Access Analyzer helps identify resources in your organization and accounts that are shared with an external entity. IAM Access
Analyzer validates IAM policies against policy grammar and best practices. IAM Access Analyzer generates IAM policies based on access
activity in your AWS CloudTrail logs.
AWS Trusted Advisor provides recommendations to help you follow AWS best practices for security, cost optimization, performance
improvement, and fault tolerance.
Amazon GuardDuty is a threat detection service that continuously monitors for malicious activity and unauthorized behavior to protect
your AWS accounts and workloads.
upvoted 8 times
https://docs.aws.amazon.com/IAM/latest/UserGuide/access-analyzer-policy-validation.html
upvoted 1 times
IAM Access Analyzer uses automated reasoning to analyze policies, including resource-based policies and IAM policies. It checks for any
potential vulnerabilities, unintended access, or over-permissive access permissions that might be present in the policies. It can help
identify issues such as overly permissive access, wildcard permissions, and other security risks.
With IAM Access Analyzer, users can review the policy findings, understand the impact of the identified issues, and take appropriate
actions to correct and secure their access policies. It helps users ensure that their policies align with security best practices and functional
requirements.
upvoted 2 times
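A toy policy check in the spirit of the validation described above can be sketched locally. This is not the IAM Access Analyzer API; it is a hand-rolled linter over IAM-policy-shaped JSON that flags the "overly permissive" patterns the comment lists, such as wildcard actions and resources:

```python
# Local illustration of policy validation: scan for overly broad grants.
def find_broad_statements(policy):
    findings = []
    for i, stmt in enumerate(policy.get("Statement", [])):
        actions = stmt.get("Action", [])
        if isinstance(actions, str):
            actions = [actions]
        if stmt.get("Effect") == "Allow" and "*" in actions:
            findings.append(f"Statement {i}: allows all actions ('*')")
        if stmt.get("Effect") == "Allow" and stmt.get("Resource") == "*":
            findings.append(f"Statement {i}: applies to all resources")
    return findings

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": "s3:GetObject",
         "Resource": "arn:aws:s3:::demo-bucket/*"},   # scoped: no finding
        {"Effect": "Allow", "Action": "*", "Resource": "*"},  # two findings
    ],
}

for finding in find_broad_statements(policy):
    print(finding)
```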
AWS IAM Access Analyzer helps identify any unintended access to AWS resources. It checks policies for resources such as Amazon S3
buckets and IAM roles to ensure that only authorized users and services have access to them. It offers actionable recommendations to
help users set secure and functional policies.
upvoted 2 times
AWS IAM Access Analyzer helps identify resources that can be accessed publicly or from other accounts and provides actionable
recommendations to help set secure and functional policies. IAM Access Analyzer uses automated reasoning, which applies mathematical
analysis and inference to determine the possible implications of resource policies. IAM Access Analyzer also provides a detailed report that
helps identify which policies need to be updated.
upvoted 2 times
https://aws.amazon.com/pt/premiumsupport/technology/trusted-advisor/
upvoted 3 times
A company has a fleet of cargo ships. The cargo ships have sensors that collect data at sea, where there is intermittent or no internet connectivity.
The company needs to collect, format, and process the data at sea and move the data to AWS later.
Which AWS service should the company use to meet these requirements?
B. Amazon Lightsail
Correct Answer: C
In the given scenario, the cargo ships can use Snowball Edge devices on board to collect and process the data from the sensors. Snowball
Edge devices have built-in storage and computing capabilities that can be used to transform and format the data while at sea. The data
can be stored on the Snowball Edge devices until the ships reach a location with internet connectivity.
upvoted 2 times
AWS IoT Greengrass allows the company to install a software agent on a device (such as a cargo ship) that can run AWS Lambda functions,
interact with other AWS services, and communicate with other devices on the local network, even when the device is offline or has limited
connectivity. The agent can also perform data processing and filtering locally, reducing the amount of data that needs to be sent to the
cloud.
Answer : A
upvoted 1 times
AWS Snowball Edge is a petabyte-scale data transfer and edge computing device that can be used to move large amounts of data in and
out of AWS, as well as perform local processing and data storage. Snowball Edge can be used in environments where there is intermittent
or no internet connectivity to collect, format, and process data at sea and then move the data to AWS when a connection is available.
upvoted 2 times
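The store-and-forward pattern that Snowball Edge enables can be sketched as follows. The class and the "format at the edge" step are invented for illustration, not an AWS API: readings are processed and buffered locally while at sea, then flushed when connectivity (or the device's return shipment) is available:

```python
# Sketch of edge buffering: process locally, upload when connected.
class EdgeBuffer:
    def __init__(self):
        self.local = []      # data held on the ship's device
        self.uploaded = []   # data that has reached the cloud

    def record(self, reading):
        # Format/process at the edge before storing.
        self.local.append({"value": reading, "unit": "celsius"})

    def flush(self, connected):
        if not connected:
            return 0
        moved = len(self.local)
        self.uploaded.extend(self.local)
        self.local.clear()
        return moved

buffer = EdgeBuffer()
for temp in (18.5, 19.1, 18.9):
    buffer.record(temp)

print(buffer.flush(connected=False))  # at sea: nothing leaves the ship -> 0
print(buffer.flush(connected=True))   # in port: all 3 readings move on -> 3
```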
C - not designed for data collection, processing, and transfer in low-connectivity environments.
D - a data transport solution that can collect and process data, and is designed for situations where there is limited or no internet
connectivity.
Answer: D
upvoted 2 times
Selected Answer: D
Snow family compute optimized solutions facilitates edge computing
upvoted 1 times
Question #72 Topic 1
A retail company needs to build a highly available architecture for a new ecommerce platform. The company is using only AWS services that
replicate data across multiple Availability Zones.
Which AWS services should the company use to meet this requirement? (Choose two.)
A. Amazon EC2
C. Amazon Aurora
D. Amazon DynamoDB
E. Amazon Redshift
Correct Answer: CD
Reference:
https://aws.amazon.com/rds/features/multi-az/
See: https://aws.amazon.com/blogs/architecture/creating-a-multi-region-application-with-aws-services-part-2-data-and-replication/
upvoted 2 times
Selected Answer: CD
C. Amazon Aurora: Amazon Aurora is a relational database service that is designed for high availability and durability. It replicates data
across multiple Availability Zones within a region, providing automatic failover and minimizing downtime in the event of an outage.
D. Amazon DynamoDB: Amazon DynamoDB is a NoSQL database service that is designed for high availability and scalability. It
automatically replicates data across multiple Availability Zones within a region, ensuring data durability and enabling continuous
availability.
upvoted 2 times
Amazon Aurora is a relational database service that automatically scales and replicates data across multiple Availability Zones for high
availability and durability.
Amazon DynamoDB is a NoSQL database service that provides consistent performance at any scale and automatically replicates data
across multiple AWS regions and Availability Zones.
While EC2, EBS, and Redshift can be configured to operate across multiple Availability Zones, they don't automatically replicate data across
them, so Aurora and DynamoDB are the most suitable options for this requirement.
upvoted 3 times
Amazon Aurora replicates the data with six copies across three Availability Zones, while Amazon DynamoDB uses multiple replicas in at
least three Availability Zones.
upvoted 3 times
C. Amazon Aurora - Amazon Aurora is a relational database service that is designed to be highly available and durable. It replicates data
across multiple Availability Zones in a region, providing automatic failover and minimizing downtime in the event of an Availability Zone
outage.
D. Amazon DynamoDB - Amazon DynamoDB is a fully managed NoSQL database service that is designed for high availability and
scalability. It replicates data across multiple Availability Zones in a region, providing automatic failover and minimal downtime in the event
of an Availability Zone outage.
upvoted 1 times
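The multi-AZ replication behavior the comments describe can be modeled in a few lines. This is a toy sketch, not how Aurora or DynamoDB are implemented (real Aurora keeps six copies across three AZs; here each AZ holds one copy): every write goes to all zones, so a read still succeeds after a single-AZ outage:

```python
# Toy model of multi-AZ replication and failover reads.
class MultiAZStore:
    def __init__(self, zones):
        self.replicas = {zone: {} for zone in zones}
        self.healthy = set(zones)

    def write(self, key, value):
        # Synchronously copy the write to every Availability Zone.
        for zone in self.replicas:
            self.replicas[zone][key] = value

    def fail_zone(self, zone):
        self.healthy.discard(zone)

    def read(self, key):
        for zone in self.healthy:  # serve from any healthy replica
            return self.replicas[zone][key]
        raise RuntimeError("no healthy Availability Zone")

store = MultiAZStore(["us-east-1a", "us-east-1b", "us-east-1c"])
store.write("order-42", "shipped")
store.fail_zone("us-east-1a")         # simulate an AZ outage
print(store.read("order-42"))         # -> shipped
```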
Question #73 Topic 1
Which characteristic of the AWS Cloud helps users eliminate underutilized CPU capacity?
A. Agility
B. Elasticity
C. Reliability
D. Durability
Correct Answer: B
With elasticity in the AWS Cloud, users can scale up their CPU capacity during periods of high demand to ensure optimal performance and
responsiveness. Conversely, during periods of low demand, they can scale down the CPU capacity to eliminate underutilized resources
and reduce costs.
By leveraging the elastic nature of the AWS Cloud, users can effectively eliminate underutilized CPU capacity and only pay for the
resources they actually need, optimizing cost-efficiency while maintaining performance.
upvoted 1 times
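The scale-up/scale-down arithmetic behind elasticity can be sketched directly (the numbers and per-instance capacity are invented for illustration; in practice an Auto Scaling policy makes this decision):

```python
import math

# Size the fleet to demand so CPU capacity is neither idle nor short.
def desired_instances(requests_per_sec, capacity_per_instance=100, minimum=1):
    return max(minimum, math.ceil(requests_per_sec / capacity_per_instance))

print(desired_instances(950))  # peak traffic  -> 10 instances
print(desired_instances(40))   # quiet period  -> 1 instance, no idle CPUs
```

With fixed on-premises hardware the fleet would have to stay at peak size permanently, which is exactly the underutilized capacity the question asks about.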
Service control policies (SCPs) manage permissions for which of the following?
A. Availability Zones
B. AWS Regions
C. AWS Organizations
D. Edge locations
Correct Answer: C
Reference:
https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps.html
organization policy
"An AWS Service Control Policy (SCP) is a set of rules you can create to control access to your AWS resources within the AWS accounts in
your AWS Organization."
https://towardsthecloud.com/aws-scp-service-control-policies
upvoted 1 times
SCPs are used within AWS Organizations to set fine-grained permissions and control access to services and resources within the
organization. SCPs enable you to define and enforce permissions at the root level or at specific organizational units (OUs) within your
organization's hierarchy. They allow you to control what actions can be performed by accounts within the organization, including access
to specific services, regions, or actions.
upvoted 1 times
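The interaction described above can be sketched as a simple intersection: an action is effectively allowed only if the account's IAM policy allows it and no SCP filters it out. SCPs set the outer boundary; they never grant permissions by themselves. The action names and sets below are invented for illustration:

```python
# Sketch of SCP + IAM evaluation for accounts in an AWS Organization.
def effectively_allowed(action, iam_allowed, scp_allowed):
    # Effective permissions = IAM allows AND SCP boundary permits.
    return action in iam_allowed and action in scp_allowed

iam_allowed = {"s3:GetObject", "ec2:RunInstances"}  # account's IAM policy
scp_allowed = {"s3:GetObject"}                      # the OU's SCP boundary

print(effectively_allowed("s3:GetObject", iam_allowed, scp_allowed))      # True
print(effectively_allowed("ec2:RunInstances", iam_allowed, scp_allowed))  # False
```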
A. Amazon GuardDuty
B. AWS Shield
Correct Answer: D
Reference:
https://aws.amazon.com/blogs/security/how-to-protect-data-at-rest-with-amazon-ec2-instance-store-encryption/
With AWS KMS, you can encrypt your data at rest by using AWS managed keys or by creating and importing your own keys. You can
integrate KMS with various AWS services, such as Amazon S3, Amazon EBS, Amazon RDS, and others, to enable automatic encryption of
data at rest.
upvoted 1 times
D - yes
Answer: D
upvoted 1 times
AWS Key Management Service (AWS KMS) is a fully managed service that can be used to encrypt data at rest. It is a secure and scalable
way to manage the encryption keys that are used to encrypt your data, and it provides a range of features and tools to help you manage
and secure your encryption keys.
With AWS KMS, you can create and manage symmetric and asymmetric keys, and use them to encrypt and decrypt data at rest. You can
also use KMS to create and manage custom key policies, and to control access to your keys based on your security and compliance
requirements.
AWS KMS is a flexible and scalable solution that can be used to encrypt data at rest in a variety of scenarios, including data storage, data
backup and recovery, and data archiving. It is a cost-effective way to ensure that your data is secure and compliant, and it is easy to use
and manage.
upvoted 4 times
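The envelope-encryption pattern KMS is built around can be illustrated locally: a data key encrypts the data, and the master key encrypts (wraps) the data key. In the sketch below XOR stands in for a real cipher purely so the example runs; it is not secure, and real KMS uses AES with the master key never leaving the service:

```python
import secrets

# Toy envelope encryption: data key encrypts data, master key wraps data key.
def xor(data, key):
    # Placeholder "cipher" for illustration only -- NOT cryptographically safe.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

master_key = secrets.token_bytes(32)  # in AWS this stays inside KMS
data_key = secrets.token_bytes(32)    # per-object data key

plaintext = b"customer record"
ciphertext = xor(plaintext, data_key)     # data encrypted with the data key
wrapped_key = xor(data_key, master_key)   # data key wrapped by the master key
# Store ciphertext + wrapped_key together; discard the plaintext data key.

# To decrypt: unwrap the data key with the master key, then decrypt the data.
recovered_key = xor(wrapped_key, master_key)
print(xor(ciphertext, recovered_key).decode())  # -> customer record
```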
Which characteristics are advantages of using the AWS Cloud? (Choose two.)
D. Enhanced security
Correct Answer: BD
Reference:
https://intellipaat.com/blog/aws-benefits-and-drawbacks/
D. Enhanced security: AWS provides a secure infrastructure and offers a wide range of security services and features to help users protect
their data and applications. These include encryption options, identity and access management controls, network security, monitoring
tools, and compliance certifications. AWS follows security best practices and offers a shared responsibility model, where both AWS and the
customer have responsibilities for security.
upvoted 1 times
B. Compute capacity that is adjusted on demand: With AWS, users can easily scale up or down the compute capacity as per their business
needs. This provides flexibility and cost savings as users only pay for what they use.
D. Enhanced security: AWS offers a high level of security, compliance, and data protection. AWS has built a wide range of security tools
and services to protect the data, network, and infrastructure of users.
D - yes
A user is storing objects in Amazon S3. The user needs to restrict access to the objects to meet compliance obligations.
What should the user do to meet this requirement?
Correct Answer: D
Option B, tagging the objects in the S3 bucket, is also not a suitable option as tagging is used to categorize objects for management
purposes and does not provide access control.
Option C, using security groups, is not applicable to Amazon S3 as it is a network-level security feature that controls inbound and
outbound traffic to and from Amazon EC2 instances.
Option D, using network ACLs, is also not a suitable option for Amazon S3. Network ACLs are used to control traffic at the subnet level and
do not provide access control to individual objects in an S3 bucket.
Therefore, the correct answer is to use Amazon S3 bucket policies or access control lists (ACLs).
upvoted 5 times
"Object tags enable fine-grained access control of permissions. For example, you could grant a user permissions to read-only objects with
specific tags."
https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-tagging.html
upvoted 1 times
With Amazon S3, access control can be achieved through various mechanisms, such as bucket policies, access control lists (ACLs), and IAM
policies. By leveraging object tags, the user can define more granular access control policies using IAM policies and resource-based
policies.
For example, the user can create an IAM policy that allows read access to objects with a specific tag value (e.g., compliance=yes) and deny
access to objects without that tag. This ensures that only authorized users or systems with the appropriate tag can access the objects.
upvoted 2 times
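The tag-based approach described above can be sketched as a bucket policy. The following is a hypothetical example (the bucket name, account ID, and principal are placeholders, not from the question); `s3:ExistingObjectTag/<key>` is the condition key S3 evaluates against an object's tags:

```python
import json

# Sketch of a bucket policy that allows GetObject only when the object
# carries the tag compliance=yes. Bucket name and principal ARN are
# placeholder values for illustration.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowReadOfComplianceTaggedObjects",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::123456789012:user/example-user"},
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::example-bucket/*",
            "Condition": {
                "StringEquals": {"s3:ExistingObjectTag/compliance": "yes"}
            },
        }
    ],
}

print(json.dumps(policy, indent=2))
```

Attached to the bucket, this statement grants read access only to objects tagged compliance=yes; access to objects without that tag is not granted by it.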
khanda 3 weeks, 2 days ago
Selected Answer: D
Correct answer is D:
https://repost.aws/knowledge-center/secure-s3-resources
upvoted 1 times
A company wants to convert video files and audio files from their source format into a format that will play on smartphones, tablets, and web
browsers.
Which AWS service will meet these requirements?
B. Amazon Comprehend
C. AWS Glue
D. Amazon Rekognition
Correct Answer: A
"Amazon Elastic Transcoder is media transcoding in the cloud. It is designed to be a highly scalable, easy to use and a cost effective way
for developers and businesses to convert (or “transcode”) media files from their source format into versions that will playback on devices
like smartphones, tablets and PCs."
https://aws.amazon.com/elastictranscoder/#:~:text=Amazon%20Elastic%20Transcoder%20is,smartphones%2C%20tablets%20and%20PCs.
upvoted 1 times
With Amazon Elastic Transcoder, you can define transcoding presets to specify the desired output formats, codecs, bitrates, resolutions,
and other parameters for the transcoded files. You can also configure the service to automatically generate thumbnails or add
watermarks to the converted media files.
By using Elastic Transcoder, you can easily integrate media transcoding into your workflows and applications, ensuring that your video
and audio content is accessible and playable across a wide range of devices and platforms.
upvoted 2 times
AWS Elemental MediaConvert is a file-based video transcoding service that enables the user to easily create video-on-demand (VOD)
content for broadcast and multiscreen delivery at scale. It can be used to convert audio and video files from their source format into a
format that is optimized for playback on various devices, including smartphones, tablets, and web browsers.
With AWS Elemental MediaConvert, the company can easily create custom transcoding presets that can be used to optimize the video and
audio quality for different devices and network conditions. It also provides advanced features such as closed captions, audio
normalization, and watermarking.
upvoted 1 times
Which of the following are benefits of Amazon EC2 Auto Scaling? (Choose two.)
E. Cross-Region Replication
Correct Answer: AC
C. Optimized performance and costs: With Amazon EC2 Auto Scaling, you can set scaling policies to dynamically adjust the number of EC2
instances based on metrics such as CPU utilization, network traffic, or other custom metrics. This allows you to optimize your application's
performance by scaling resources up during high-demand periods and scaling down during low-demand periods, helping to minimize
costs.
Option B, reduced network latency, is not a direct benefit of Amazon EC2 Auto Scaling. While Auto Scaling can help ensure the availability
and performance of applications, it does not directly address network latency.
upvoted 1 times
Therefore, the correct options are A and C. EC2 Auto Scaling helps in automatically adjusting the capacity of EC2 instances in response to
changes in demand. This ensures that the application is always available and can handle varying levels of traffic. It also helps in optimizing
the performance and costs by automatically scaling up and down the instances as per the demand, thus avoiding over-provisioning or
under-provisioning of resources. However, EC2 Auto Scaling does not provide automated snapshots of data or cross-region replication.
upvoted 2 times
A. Improved health and availability of applications: EC2 Auto Scaling enables you to automatically launch or terminate EC2 instances based
on demand. By automatically scaling your infrastructure in response to changing demand, you can ensure that your applications are
always available and responsive to users.
C. Optimized performance and costs: EC2 Auto Scaling enables you to optimize the performance and costs of your infrastructure by
automatically scaling up or down based on demand. By scaling up during periods of high demand and scaling down during periods of low
demand, you can ensure that you are only paying for the resources you need, while still providing a responsive user experience.
Reduced network latency, automated snapshots of data, and cross-region replication are not benefits of Amazon EC2 Auto Scaling.
upvoted 3 times
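The scale-out/scale-in behavior described above can be illustrated with a toy decision function. This is not AWS's actual target-tracking algorithm, just a sketch of the idea that capacity is sized so average utilization moves toward a target, within the group's minimum and maximum size:

```python
import math

def desired_capacity(current_instances: int, cpu_utilization: float,
                     target_utilization: float = 50.0,
                     min_size: int = 1, max_size: int = 10) -> int:
    """Toy target-tracking-style decision: size the fleet so average CPU
    moves toward the target, clamped to the group's min/max size.
    All numbers are illustrative."""
    raw = math.ceil(current_instances * cpu_utilization / target_utilization)
    return max(min_size, min(max_size, raw))

# High demand: 4 instances at 90% average CPU -> scale out to 8
print(desired_capacity(4, 90.0))  # 8
# Low demand: 4 instances at 10% average CPU -> scale in to the minimum
print(desired_capacity(4, 10.0))  # 1
```

Scaling out during spikes keeps the application healthy (option A); scaling in during quiet periods keeps costs down (option C).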
A company has several departments. Each department has its own AWS accounts for its applications. The company wants all AWS costs on a
single invoice to simplify payment, but the company wants to know the costs that each department is incurring.
Which AWS tool or feature will provide this functionality?
B. Consolidated billing
C. Savings Plans
D. AWS Budgets
Correct Answer: B
While the billing is consolidated, you can still view the costs incurred by each individual account or department. AWS provides detailed
cost and usage reports through the AWS Cost and Usage Reports (CUR). These reports can be configured to include cost allocation tags
that you can set up for each account or department. By using cost allocation tags, you can track and analyze costs based on specific tags,
such as department, project, or application.
upvoted 1 times
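The idea of one invoice plus per-department visibility can be sketched in a few lines. The account IDs, department tags, and costs below are made up; in practice this breakdown comes from the AWS Cost and Usage Reports with cost allocation tags enabled:

```python
from collections import defaultdict

def consolidate(line_items):
    """Toy sketch of consolidated billing: roll per-account charges into
    one invoice total while keeping a per-department breakdown via cost
    allocation tags. Rows are (account_id, department_tag, cost)."""
    per_department = defaultdict(float)
    for account_id, department, cost in line_items:
        per_department[department] += cost
    invoice_total = sum(per_department.values())
    return invoice_total, dict(per_department)

items = [
    ("111111111111", "marketing", 120.0),
    ("222222222222", "engineering", 340.0),
    ("333333333333", "marketing", 80.0),
]
total, breakdown = consolidate(items)
print(total)       # 540.0
print(breakdown)   # {'marketing': 200.0, 'engineering': 340.0}
```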
Consolidated billing enables the company to consolidate payment for multiple AWS accounts or multiple departments into a single
payment, making it easier to track and pay AWS costs. However, each AWS account remains separate, and each department can view its
own usage and cost data.
upvoted 1 times
To get all AWS costs on a single invoice while still being able to track the costs that each department is incurring, the company can use
consolidated billing.
Consolidated billing is a feature of AWS Organizations that allows a single AWS account to pay the bills for multiple AWS accounts. This can
be useful for companies that have multiple AWS accounts, as it allows them to see all of their costs on a single invoice, while still being
able to track the costs of each department separately.
upvoted 1 times
sumanshu 10 months, 2 weeks ago
Vote for B
upvoted 1 times
A company runs its workloads on premises. The company wants to forecast the cost of running a large application on AWS.
Which AWS service or tool can the company use to obtain this information?
B. AWS Budgets
D. Cost Explorer
Correct Answer: D
Reference:
https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/ce-forecast.html
https://docs.aws.amazon.com/pricing-calculator/latest/userguide/what-is-pricing-calculator.html
upvoted 1 times
With the AWS Cost Explorer, you can generate custom reports, view cost and usage data at various levels of granularity (such as by
service, region, or tag), and explore different cost dimensions. It also provides visualization options to help you understand your cost
trends and patterns over time.
By using the AWS Cost Explorer, the company can input its specific workload requirements, estimate the usage of various AWS services,
and obtain a forecasted cost for running the application on AWS.
A company wants to eliminate the need to guess infrastructure capacity before deployments. The company also wants to spend its budget on
cloud resources only as the company uses the resources.
Which advantage of the AWS Cloud matches the company's requirements?
A. Reliability
B. Global reach
C. Economies of scale
D. Pay-as-you-go pricing
Correct Answer: D
Reference:
https://docs.aws.amazon.com/whitepapers/latest/aws-overview/six-advantages-of-cloud-computing.html
Pay-as-you-go pricing allows the company to only pay for the resources they use, without having to guess how much infrastructure
capacity they will need in advance. This means that the company can use cloud resources as needed and can scale up or down depending
on demand, without having to worry about overprovisioning or underprovisioning.
upvoted 1 times
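A toy cost comparison makes the advantage concrete. The hourly rate and hour counts below are illustrative, not real AWS prices:

```python
def pay_as_you_go_cost(hours_used: float, hourly_rate: float) -> float:
    """Pay only for the hours actually consumed."""
    return hours_used * hourly_rate

def provisioned_cost(capacity_hours: float, hourly_rate: float) -> float:
    """Pre-purchased capacity is billed whether it is used or not."""
    return capacity_hours * hourly_rate

rate = 0.25  # illustrative dollars per hour, not a real AWS price
# 200 hours actually used vs 744 hours of guessed peak capacity for a month
print(pay_as_you_go_cost(200, rate))  # 50.0
print(provisioned_cost(744, rate))    # 186.0 (includes spend on idle capacity)
```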
C. https://aws.amazon.com/pricing/
D. https://docs.aws.amazon.com/whitepapers/latest/aws-overview/six-advantages-of-cloud-computing.html
upvoted 1 times
Which AWS service supports a hybrid architecture that gives users the ability to extend AWS infrastructure, AWS services, APIs, and tools to data
centers, co-location environments, or on-premises facilities?
A. AWS Snowmobile
C. AWS Outposts
D. AWS Fargate
Correct Answer: C
Reference:
https://aws.amazon.com/outposts/
AWS Outposts is a service that supports a hybrid architecture that gives users the ability to extend AWS infrastructure, AWS services, APIs,
and tools to data centers, co-location environments, or on-premises facilities.
upvoted 9 times
A company has a physical tape library to store data backups. The tape library is running out of space. The company needs to extend the tape
library's capacity to the AWS Cloud.
Which AWS service should the company use to meet this requirement?
B. Amazon S3
Correct Answer: D
AWS Storage Gateway is a service that can be used to extend the tape library's capacity to the AWS Cloud.
AWS Storage Gateway is a hybrid storage service that allows users to connect their on-premises data centers to the AWS Cloud. It provides
a range of storage options, including file-based, block-based, and tape-based storage, which can be used to store data backups and other
types of data.
upvoted 10 times
"AWS Storage Gateway is a set of hybrid cloud storage services that provide on-premises access to virtually unlimited cloud storage."
https://aws.amazon.com/storagegateway/
upvoted 1 times
An online retail company has seasonal sales spikes several times a year, primarily around holidays. Demand is lower at other times. The company
finds it difficult to predict the increasing infrastructure demand for each season.
Which advantages of moving to the AWS Cloud would MOST benefit the company? (Choose two.)
A. Global footprint
B. Elasticity
E. Pay-as-you-go pricing
Correct Answer: BE
Reference:
https://docs.aws.amazon.com/whitepapers/latest/aws-overview/six-advantages-of-cloud-computing.html
B. Elasticity: The company can take advantage of the elasticity of the AWS Cloud to easily scale its infrastructure up or down as demand
fluctuates during the seasonal sales spikes. This can help avoid overprovisioning during low demand periods, which can result in
unnecessary costs.
E. Pay-as-you-go pricing: The company can benefit from the pay-as-you-go pricing model of the AWS Cloud, which allows it to only pay for
the resources it uses. This can help reduce costs during low demand periods and avoid the need to make large upfront investments in
infrastructure that may not be fully utilized.
upvoted 2 times
Which AWS service can be used to turn text into lifelike speech?
A. Amazon Polly
B. Amazon Kendra
C. Amazon Rekognition
D. Amazon Connect
Correct Answer: A
Reference:
https://aws.amazon.com/polly/#:~:text=Amazon%20Polly%20is%20a%20service,synthesize%20natural%20sounding%20human%20speech
Amazon Polly is a service that can be used to turn text into lifelike speech. Amazon Polly uses advanced deep learning technologies to
synthesize speech that sounds natural and lifelike, allowing users to convert written content into spoken language.
upvoted 6 times
"Amazon Polly uses deep learning technologies to synthesize natural-sounding human speech, so you can convert articles to speech. With
dozens of lifelike voices across a broad set of languages, use Amazon Polly to build speech-activated applications."
https://aws.amazon.com/polly/#:~:text=Amazon%20Polly%20is%20a%20service,synthesize%20natural%20sounding%20human%20speech
upvoted 1 times
Which AWS service or tool can be used to capture information about inbound and outbound traffic in an Amazon VPC?
B. Amazon Inspector
D. NAT gateway
Correct Answer: A
"VPC Flow Logs is a feature that enables you to capture information about the IP traffic going to and from network interfaces in your VPC.
Flow log data can be published to the following locations: Amazon CloudWatch Logs, Amazon S3, or Amazon Kinesis Data Firehose. After
you create a flow log, you can retrieve and view the flow log records in the log group, bucket, or delivery stream that you configured."
https://docs.aws.amazon.com/vpc/latest/userguide/flow-logs.html
upvoted 1 times
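A default-format (version 2) flow log record is a single space-delimited line. A minimal parser, using a made-up sample record (field names follow the VPC Flow Logs documentation, with hyphens turned into underscores for Python):

```python
# Default (version 2) flow log record fields, per the VPC Flow Logs docs.
FIELDS = ["version", "account_id", "interface_id", "srcaddr", "dstaddr",
          "srcport", "dstport", "protocol", "packets", "bytes",
          "start", "end", "action", "log_status"]

def parse_flow_log(record: str) -> dict:
    """Split a space-delimited flow log record into named fields."""
    return dict(zip(FIELDS, record.split()))

# Made-up sample record: TCP (protocol 6) traffic to port 443, accepted.
sample = ("2 123456789012 eni-0a1b2c3d 10.0.1.5 10.0.2.7 "
          "49761 443 6 10 8400 1620000000 1620000060 ACCEPT OK")
rec = parse_flow_log(sample)
print(rec["srcaddr"], rec["dstport"], rec["action"])  # 10.0.1.5 443 ACCEPT
```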
A company wants to ensure that two Amazon EC2 instances are in separate data centers with minimal communication latency between the data
centers.
How can the company meet this requirement?
A. Place the EC2 instances in two separate AWS Regions connected with a VPC peering connection.
B. Place the EC2 instances in two separate Availability Zones within the same AWS Region.
C. Place one EC2 instance on premises and the other in an AWS Region. Then connect them by using an AWS VPN connection.
Correct Answer: B
A cluster placement group is a logical grouping of instances within a SINGLE AVAILABILITY ZONE that benefits from low network latency,
high network throughput.
"You can use placement groups to influence the placement of a group of interdependent instances to meet the needs of your workload.
Depending on the type of workload, you can create a placement group using one of the following placement strategies:
Cluster – packs instances close together inside an Availability Zone. This strategy enables workloads to achieve the low-latency network
performance necessary for tightly-coupled node-to-node communication that is typical of high-performance computing (HPC)
applications."
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html
upvoted 1 times
In which situations should a company create an IAM user instead of an IAM role? (Choose two.)
A. When an application that runs on Amazon EC2 instances requires access to other AWS services
C. When the company creates an application that runs on a mobile phone that makes requests to AWS
E. When users are authenticated in the corporate network and want to be able to use AWS without having to sign in a second time
Correct Answer: BD
A - IAM Role
B - IAM User
C - IAM Role
D - IAM User
E - Integrated SSO
Explanation:
There are several situations in which you might want to create an IAM user instead of an IAM role:
When you want to grant access to an individual person, rather than to an AWS resource or service.
When you want to give someone the ability to access the AWS Management Console.
When you want to use multi-factor authentication (MFA) to secure access to your AWS resources.
When you want to give someone the ability to use the AWS API or command line interface (CLI) to access your resources.
On the other hand, there are situations in which you might want to create an IAM role instead of an IAM user:
When you want to grant permissions to an AWS resource or service, rather than to an individual person.
When you want to grant temporary access to your resources.
When you want to grant access to resources in another AWS account.
It's important to carefully consider the specific needs of your use case when deciding whether to create an IAM user or an IAM role.
upvoted 19 times
Option C is incorrect because IAM roles can be used to provide access to AWS services for mobile applications through AWS Security Token
Service (STS) APIs.
Option E is also incorrect because IAM roles can be used with identity federation to enable users who are authenticated in the corporate
network to access AWS resources without needing to sign in a second time.
upvoted 1 times
In situations where users are authenticated in the corporate network and want to be able to use AWS without having to sign in a second
time (option E), it is also appropriate to create an IAM user. This can be done using AWS Single Sign-On (AWS SSO), which allows users to
access AWS accounts and resources by using their corporate credentials.
upvoted 2 times
https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users.html
An AWS Identity and Access Management (IAM) user is an entity that you create in AWS to represent the person or application that uses it
to interact with AWS. A user in AWS consists of a name and credentials.
upvoted 3 times
Which AWS services should a company use to read and write data that changes frequently? (Choose two.)
A. Amazon S3 Glacier
B. Amazon RDS
C. AWS Snowball
D. Amazon Redshift
Correct Answer: BE
E. Amazon Elastic File System (Amazon EFS): Amazon EFS is a scalable file storage service that provides shared file storage for multiple
Amazon EC2 instances. It is suitable for use cases where multiple instances need concurrent read and write access to the same data.
Amazon EFS allows you to handle data that changes frequently and enables multiple users to access and modify the data simultaneously.
Therefore, the correct choices are B. Amazon RDS and E. Amazon Elastic File System (Amazon EFS).
upvoted 3 times
ESAJRR 1 month ago
Selected Answer: BE
B. Amazon RDS
E. Amazon Elastic File System (Amazon EFS)
upvoted 2 times
Amazon RDS is a managed relational database service that supports multiple database engines, including Amazon Aurora, PostgreSQL,
MySQL, MariaDB, Oracle Database, and SQL Server. It is designed for applications that require frequent updates to the data.
Amazon EFS is a fully managed file system that supports the NFSv4 protocol. It is designed for applications that require shared access to
files and require frequent updates to the data.
upvoted 4 times
B. Amazon RDS: Amazon RDS is a managed relational database service that provides a scalable and highly available database platform for
applications. It supports various popular database engines such as MySQL, PostgreSQL, Oracle, and SQL Server, and allows users to easily
create, operate, and scale a relational database in the cloud. Amazon RDS is well-suited for applications that require frequent read and
write operations to the database.
E. Amazon Elastic File System (Amazon EFS): Amazon EFS is a fully managed, scalable, and highly available file storage service for use with
Amazon EC2 instances. It provides a simple, scalable, and reliable way to share data across multiple EC2 instances, and supports multiple
file systems, file locking, and file permissions. Amazon EFS is well-suited for applications that require shared access to frequently changing
data, such as content management systems, web serving, and Big Data applications.
upvoted 2 times
E. Amazon Elastic File System (Amazon EFS): Amazon EFS is a fully managed, scalable, and highly available file storage service that
provides simple and scalable file storage for use with Amazon EC2 instances. It is designed for workloads that require frequent reads and
writes and provides low-latency performance for data-intensive applications.
upvoted 3 times
et_learner 4 months, 2 weeks ago
Selected Answer: BE
Option A is incorrect because Amazon S3 Glacier is an archival storage service designed for data that is infrequently accessed and for
which retrieval times of several hours are acceptable.
Option C is incorrect because AWS Snowball is a data transfer service that is designed to transfer large amounts of data into and out of
AWS.
Option D is incorrect because Amazon Redshift is a data warehousing service that is designed for large-scale data analytics, and it may not
be the best choice for frequently changing data.
upvoted 3 times
E. Amazon Elastic File System (Amazon EFS): Amazon EFS is a fully managed, scalable, and highly available file storage service that can be
accessed by multiple Amazon EC2 instances and on-premises servers at the same time. EFS provides a common data source that can be
used to share data across multiple instances or servers, making it ideal for scenarios where data changes frequently.
upvoted 2 times
C. AWS KMS
D. AWS Config
Correct Answer: C
By utilizing AWS KMS, you can create and manage encryption keys to protect your EBS volumes. This helps ensure that the data stored on
the EBS volumes remains secure and encrypted. AWS KMS integrates seamlessly with Amazon EBS and provides a robust key
management infrastructure.
upvoted 1 times
Which AWS services make use of global edge locations? (Choose two.)
A. AWS Fargate
B. Amazon CloudFront
D. AWS Wavelength
E. Amazon VPC
Correct Answer: BC
Reference:
https://www.lastweekinaws.com/blog/what-is-an-edge-location-in-aws-a-simple-explanation/
C. AWS Global Accelerator: AWS Global Accelerator is a networking service that utilizes the AWS global network infrastructure to improve
the availability and performance of applications. It uses a network of global edge locations to route user traffic to the nearest edge
location, reducing latency and improving the responsiveness of applications.
upvoted 1 times
https://store.bitslovers.com/p/mindmap-aws-vpc-everything-that-you-need-to-know-about-in-a-single-mindmap/
upvoted 3 times
Using Global Accelerator, your users' traffic is moved off the internet and onto Amazon’s private global network through 90+ global edge
locations, then directed to your application origins. AWS Global Accelerator is quick to setup and increases traffic performance by up to
60%.
upvoted 3 times
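The routing idea behind edge locations (for both CloudFront and Global Accelerator) is simply "serve each user from the lowest-latency location." A toy sketch with invented locations and latency numbers:

```python
def nearest_edge(user_latencies: dict) -> str:
    """Toy illustration of edge routing: pick the edge location with the
    lowest measured latency for this user. Locations and latencies are
    made-up values, not real AWS measurements."""
    return min(user_latencies, key=user_latencies.get)

# Hypothetical round-trip latencies (ms) from one user to three edges.
latencies_ms = {"frankfurt": 18, "virginia": 95, "tokyo": 210}
print(nearest_edge(latencies_ms))  # frankfurt
```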
A company is operating several factories where it builds products. The company needs the ability to process data, store data, and run applications
with local system interdependencies that require low latency.
Which AWS service should the company use to meet these requirements?
B. AWS Lambda
C. AWS Outposts
Correct Answer: C
B (Lambda) is serverless computing in the AWS Cloud, so it cannot meet the on-premises, low-latency requirement.
Answer: C
upvoted 1 times
AWS Outposts is a fully-managed service that extends AWS infrastructure, services, APIs, and tools to virtually any datacenter, co-location
space, or on-premises facility for a truly consistent hybrid experience. With AWS Outposts, the company can run AWS services locally,
including compute, storage, database, and analytics, while seamlessly connecting to AWS services in the cloud.
This means that the company can process data, store data, and run applications with local system interdependencies that require low
latency using AWS Outposts, which is located within their own datacenters or facilities. With AWS Outposts, the company can avoid the
latency and data transfer costs that come with using cloud services located far away from their factories.
upvoted 4 times
Which of the following is a recommended design principle for AWS Cloud architecture?
B. Build a single application component that can handle all the application functionality.
Correct Answer: C
By segmenting workloads, you can isolate different parts of your application and make them more modular, which can lead to greater
flexibility, scalability, and fault tolerance. It also allows you to use different technologies and tools for different parts of the application,
which can make it easier to optimize each part for its specific requirements.
upvoted 3 times
Reference : https://www.botmetric.com/blog/aws-cloud-architecture-design-principles
upvoted 1 times
A company is designing its AWS workloads so that components can be updated regularly and so that changes can be made in small, reversible
increments.
Which pillar of the AWS Well-Architected Framework does this design support?
A. Security
B. Performance efficiency
C. Operational excellence
D. Reliability
Correct Answer: C
There are five design principles for operational excellence in the cloud:
Perform operations as code
Make frequent, small, reversible changes
Refine operations procedures frequently
Anticipate failure
Learn from all operational failures
upvoted 14 times
Which of the following acts as an instance-level firewall to control inbound and outbound access?
B. Security groups
Correct Answer: B
Note that security groups do control outbound access as well as inbound: each security group has separate inbound and outbound rule sets (by default, all outbound traffic is allowed), so the question's wording is accurate.
upvoted 3 times
A company has a workload that will run continuously for 1 year. The workload cannot tolerate service interruptions.
Which Amazon EC2 purchasing option will be MOST cost-effective?
C. Dedicated Instances
D. On-Demand Instances
Correct Answer: A
https://aws.amazon.com/ec2/pricing/reserved-instances/pricing/
upvoted 13 times
A. AWS Shield
B. Amazon Inspector
C. Amazon GuardDuty
D. Amazon Detective
Correct Answer: A
Answer: A
upvoted 3 times
Using AWS Config to record, audit, and evaluate changes to AWS resources to enable traceability is an example of which AWS Well-Architected
Framework pillar?
A. Security
B. Operational excellence
C. Performance efficiency
D. Cost optimization
Correct Answer: A
Reference:
https://d1.awsstatic.com/whitepapers/architecture/AWS_Well-Architected_Framework.pdf
(12)
AWS Config is a service that enables you to assess, audit, and evaluate the configurations of your AWS resources over time. It records
changes to resource configurations and provides a detailed view of how the configurations are aligned with best practices and desired
configurations. By using AWS Config, you can track and maintain traceability of changes to your resources, helping to enforce security
controls, detect unauthorized changes, and ensure compliance with security requirements.
upvoted 1 times
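Conceptually, AWS Config keeps a timestamped history of configuration changes rather than only the current state, which is what makes changes traceable. A toy sketch of that idea (this is an illustration, not the AWS Config API):

```python
from datetime import datetime, timezone

history = []

def record_change(resource_id: str, config: dict) -> None:
    """Append a timestamped snapshot of each configuration change so the
    full change history stays available for audit, not just the
    current state. Resource IDs and configs below are made up."""
    history.append({
        "resource": resource_id,
        "config": dict(config),
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    })

record_change("sg-123", {"ingress": ["22/tcp"]})
record_change("sg-123", {"ingress": ["22/tcp", "80/tcp"]})
print(len(history))  # 2 snapshots: the change itself is preserved
```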
The question is specifically mentioning "AWS Config" at the first sentence, take a walk to the AWS Config main page and you will find
some of the related use cases is related with the Operations activities: like continually assess, monitor, and record resource configuration
changes to simplify change management and to simplify operational troubleshooting by correlating configuration changes to particular
events in your account.
Reference:
https://aws.amazon.com/config/
upvoted 1 times
https://aws.amazon.com/blogs/apn/the-6-pillars-of-the-aws-well-architected-framework/
upvoted 2 times
https://docs.aws.amazon.com/pdfs/wellarchitected/latest/operational-excellence-pillar/wellarchitected-operational-excellence-pillar.pdf
upvoted 1 times
Which AWS tool or feature acts as a VPC firewall at the subnet level?
A. Security group
B. Network ACL
C. Traffic Mirroring
D. Internet gateway
Correct Answer: B
A. AWS Config
C. AWS Batch
Correct Answer: B
Amazon Simple Queue Service (Amazon SQS) is a fully managed message queuing service that makes it easy to decouple and scale
microservices, distributed systems, and serverless applications. Amazon SQS moves data between distributed application components
and helps you decouple these components.
upvoted 11 times
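The decoupling idea can be shown with a toy in-memory queue: the producer and the consumer share only the queue, never call each other directly, and can run at different speeds. This is an illustration of the pattern, not the SQS API:

```python
from collections import deque

class SimpleQueue:
    """Toy in-memory stand-in for a message queue. Producer and consumer
    are decoupled: each talks only to the queue."""
    def __init__(self):
        self._messages = deque()

    def send_message(self, body: str) -> None:
        self._messages.append(body)

    def receive_message(self):
        return self._messages.popleft() if self._messages else None

queue = SimpleQueue()
# Producer side: enqueue work without knowing who will process it.
queue.send_message("order-1001")
queue.send_message("order-1002")
# Consumer side: drain messages at its own pace.
print(queue.receive_message())  # order-1001
print(queue.receive_message())  # order-1002
```

With SQS the queue is durable and fully managed, so the two components can also fail and scale independently.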
A. Warm standby
B. Multisite
D. Pilot light
Correct Answer: C
Ordered from lowest cost and slowest recovery to highest cost and fastest recovery:
C. Backup and restore → D. Pilot light → A. Warm standby → B. Multisite
https://docs.aws.amazon.com/whitepapers/latest/disaster-recovery-workloads-on-aws/disaster-recovery-options-in-the-cloud.html
upvoted 1 times
Which type of AWS storage is ephemeral and is deleted when an Amazon EC2 instance is stopped or terminated?
D. Amazon S3
Correct Answer: B
When you stop or terminate an instance, every block of storage in the instance store is reset. Therefore, your data cannot be accessed through
the instance store of another instance.
Reference:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/InstanceStorage.html
Instance store volumes are physically attached to the host server where the EC2 instance is running. They provide high I/O performance
and low-latency access, making them suitable for applications that require temporary storage, caching, or scratch space. However, it is
important to note that the data stored in instance store volumes cannot be easily recovered once the EC2 instance is stopped or
terminated.
upvoted 1 times
Amazon EC2 instance store provides temporary block-level storage for Amazon EC2 instances. The data on an instance store volume
persists only during the life of the associated Amazon EC2 instance. If the instance is stopped or terminated, any data on instance store
volumes is lost. In contrast, Amazon Elastic Block Store (Amazon EBS) provides persistent block-level storage volumes for use with
Amazon EC2 instances.
upvoted 1 times
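A toy model of the difference: stopping the instance wipes the instance store but leaves EBS-backed data intact. This is purely illustrative, not how EC2 is implemented:

```python
class Instance:
    """Toy model: instance store is ephemeral and is wiped on stop or
    terminate, while an attached EBS volume keeps its data."""
    def __init__(self):
        self.instance_store = {}  # ephemeral block storage
        self.ebs_volume = {}      # persistent block storage

    def stop(self):
        self.instance_store.clear()  # ephemeral data is lost on stop

i = Instance()
i.instance_store["scratch"] = "tmp-data"
i.ebs_volume["db"] = "durable-data"
i.stop()
print(i.instance_store)  # {}
print(i.ebs_volume)      # {'db': 'durable-data'}
```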
A. The root user is the only user that can be configured with multi-factor authentication (MFA).
B. The root user is the only user that can access the AWS Management Console.
C. The root user is the first sign-in identity that is available when an AWS account is created.
Correct Answer: C
Reference:
https://docs.aws.amazon.com/IAM/latest/UserGuide/id_root-user.html
The root user is the only user that can perform certain tasks, such as creating and managing IAM users, billing and payment information,
and closing an AWS account. However, the root user is not the only user that can access the AWS Management Console. IAM users with
the appropriate permissions can also access the console.
It is important to secure the root user account with a strong password and enable multi-factor authentication (MFA) to prevent
unauthorized access to the account. Therefore, the root user is not the only user that can be configured with MFA.
upvoted 3 times
nixonlaw 3 months ago
Selected Answer: C
C. not to mention it.
upvoted 1 times
Explanation: The AWS account root user is created automatically when the AWS account is created. This user has full access to all AWS
services and resources in the account. The root user is the first identity that is available to sign in to the AWS account, and it cannot be
deleted. However, it is recommended that the root user not be used for everyday tasks in order to improve security. Multi-factor
authentication (MFA) can be configured for the root user, but it is also recommended to create additional IAM users with appropriate
permissions for daily operations. The root user can access the AWS Management Console, but other IAM users can also be granted access.
The root user's password can be changed for security purposes.
upvoted 1 times
C - This is true
D - I can change my root user password. In fact, I recently had to change it due to a policy change in AWS
upvoted 1 times
The AWS account root user is the initial user identity created when an AWS account is created. It has complete access and control over all
resources in the AWS account, and it cannot be deleted. However, it is recommended to create IAM (Identity and Access Management)
users and groups and use them instead of the root user to manage AWS resources.
Option A is incorrect because MFA can be enabled for any IAM user, including the root user.
Option B is incorrect because IAM users can also access the AWS Management Console if they have been granted the necessary
permissions.
Option D is incorrect because the root user can change its password just like any other IAM user.
upvoted 1 times
A company hosts an application on an Amazon EC2 instance. The EC2 instance needs to access several AWS resources, including Amazon S3 and
Amazon
DynamoDB.
What is the MOST operationally efficient solution to delegate permissions?
A. Create an IAM role with the required permissions. Attach the role to the EC2 instance.
B. Create an IAM user and use its access key and secret access key in the application.
C. Create an IAM user and use its access key and secret access key to create a CLI profile in the EC2 instance
D. Create an IAM role with the required permissions. Attach the role to the administrative IAM user.
Correct Answer: A
Reference:
https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2.html
By creating an IAM role with the necessary permissions and attaching it to the EC2 instance, the EC2 instance can assume the role and
access the required AWS resources. This approach eliminates the need for managing access keys within the EC2 instance and provides a
more secure and scalable solution.
upvoted 2 times
This is the most operationally efficient solution to delegate permissions. When an IAM role is attached to an EC2 instance, the instance can
use the permissions associated with the role to access AWS resources, such as Amazon S3 and Amazon DynamoDB, without the need for
an access key and secret access key.
B and C are not the most operationally efficient solutions because they involve creating an IAM user and using access keys, which can be
less secure than using IAM roles, and require more management overhead to rotate access keys.
D is not the most operationally efficient solution because it involves attaching an IAM role to an administrative IAM user, which is not
recommended for security best practices. It is recommended to follow the principle of least privilege and assign permissions only to the
necessary resources and roles.
upvoted 3 times
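The difference is visible in the credentials themselves: an IAM user's long-term access key ID begins with AKIA, while the temporary credentials an instance role obtains through AWS STS begin with ASIA and expire automatically. A minimal sketch (the key IDs below are AWS's documented example values, not real credentials):

```python
def is_long_term_key(access_key_id: str) -> bool:
    """Long-term IAM user access keys start with 'AKIA'; temporary
    STS credentials (such as those an EC2 instance role provides)
    start with 'ASIA' and rotate automatically."""
    return access_key_id.startswith("AKIA")

# An IAM user's key must be rotated by hand if it leaks:
print(is_long_term_key("AKIAIOSFODNN7EXAMPLE"))  # True
# Role credentials are short-lived, so there is nothing to rotate:
print(is_long_term_key("ASIAIOSFODNN7EXAMPLE"))  # False
```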
A. Amazon Alexa
B. AWS Regions
C. Amazon Lightsail
D. AWS Organizations
Correct Answer: B
Reference:
https://aws.amazon.com/about-aws/global-infrastructure/
D. To load balance traffic from the internet across Amazon EC2 instances
Correct Answer: B
The internet gateway provides a way for instances within the VPC to send and receive traffic directly to and from the internet without
requiring network address translation (NAT). It serves as a bridge between the VPC and the public internet, allowing resources within the
VPC to have public IP addresses and be accessible from the internet.
upvoted 1 times
Which AWS service allows users to download security and compliance reports about the AWS infrastructure on demand?
A. Amazon GuardDuty
C. AWS Artifact
D. AWS Shield
Correct Answer: C
AWS Artifact centralizes the distribution of these reports, making them easily accessible to customers who require evidence of AWS
compliance with various industry standards and regulations. Users can access AWS Artifact through the AWS Management Console to
search for and download the desired reports.
upvoted 1 times
A pharmaceutical company operates its infrastructure in a single AWS Region. The company has thousands of VPCs in various AWS accounts
that it wants to interconnect.
Which AWS service or feature should the company use to help simplify management and reduce operational costs?
A. VPC endpoint
C. AWS Transit Gateway
D. VPC peering
Correct Answer: C
Reference:
https://d1.awsstatic.com/whitepapers/building-a-scalable-and-secure-multi-vpc-aws-network-infrastructure.pdf
AWS Transit Gateway acts as a hub that allows for the centralization of network traffic between VPCs and on-premises networks. With AWS
Transit Gateway, the pharmaceutical company can establish a single connection to the transit gateway and use it to efficiently route traffic
between all the interconnected VPCs.
By using AWS Transit Gateway, the company can simplify its network architecture, reduce the number of VPN connections or Direct
Connect links required, and manage its network routing policies in a centralized manner. This simplification and consolidation of network
connectivity can lead to operational cost savings and improved management efficiency.
upvoted 1 times
AWS Transit Gateway is a service that enables customers to connect their VPCs and on-premises networks to a single gateway, making it
easier to manage connectivity across their entire infrastructure. It simplifies network architecture by allowing customers to build a hub-
and-spoke model where VPCs can communicate with each other through a central hub, rather than through multiple VPC peering
connections.
With AWS Transit Gateway, the pharmaceutical company can manage thousands of VPCs across multiple accounts as a single logical unit.
It simplifies the management of network connectivity and reduces operational costs by eliminating the need for multiple VPC peering
connections, VPN connections, and NAT gateways.
upvoted 1 times
Scale
VPC Peering: Up to 125 active Peers/VPC.
Transit Gateway: Up to 5,000 Attachments per Region.
upvoted 4 times
The AWS Transit Gateway is a fully managed service that allows customers to connect thousands of VPCs, AWS accounts, and on-premises
networks together using a single gateway. It simplifies the management and reduces operational costs by providing a centralized hub
that can be used to manage connectivity between the different networks.
VPC endpoints allow for private communication between VPCs and AWS services, but they are not used for interconnecting VPCs in
different accounts.
AWS Direct Connect is used for creating a dedicated network connection between an on-premises data center and an AWS Region, and it
is not used for interconnecting VPCs in different accounts.
VPC peering allows for private communication between VPCs in the same or different accounts, but it does not scale well for managing
large numbers of VPCs.
upvoted 4 times
Selected Answer: C
The company should use AWS Transit Gateway to help simplify management and reduce operational costs. AWS Transit Gateway is a
service that enables customers to connect thousands of VPCs and on-premises networks using a single gateway. With Transit Gateway,
the company can interconnect thousands of VPCs in various AWS accounts across a single Region without the need to create and manage
multiple VPC peering connections or VPN connections. This helps to reduce operational costs and simplify network management.
upvoted 1 times
With AWS Transit Gateway, the company can create a hub-and-spoke network topology that allows it to centrally manage connectivity
between its VPCs, VPNs, and on-premises networks. It also helps to reduce operational costs by eliminating the need for complex peering
relationships between VPCs, and simplifying network routing and security.
Additionally, AWS Transit Gateway integrates with other AWS services, such as AWS Direct Connect, AWS PrivateLink, and AWS Global
Accelerator, to provide a comprehensive networking solution for the company's infrastructure.
upvoted 1 times
VPC Peering is point to point makes the management so complex and expensive to maintain though its cheaper solution in terms of cost
involved.
At the end, if we don't have simplified management solution in place for corporates, any resource management will be very expensive and
unmanageable which we don't want.
upvoted 1 times
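The management burden can be quantified: a full mesh of VPC peering connections grows quadratically with the number of VPCs, while Transit Gateway attachments grow linearly. A quick illustration:

```python
def full_mesh_peering(vpc_count: int) -> int:
    """Peering connections needed so every VPC can reach every other
    VPC directly: n choose 2."""
    return vpc_count * (vpc_count - 1) // 2

def transit_gateway_attachments(vpc_count: int) -> int:
    """With a hub-and-spoke Transit Gateway, each VPC needs exactly
    one attachment to the hub."""
    return vpc_count

# For 2,000 VPCs the difference is dramatic:
print(full_mesh_peering(2000))            # 1999000
print(transit_gateway_attachments(2000))  # 2000
```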
A company is planning an infrastructure deployment to the AWS Cloud. Before the deployment, the company wants a cost estimate for running the
infrastructure.
Which AWS service or feature can provide this information?
A. Cost Explorer
D. AWS Pricing Calculator
Correct Answer: D
The AWS Pricing Calculator allows users to estimate the costs of using various AWS services based on their anticipated usage. It provides a
web-based interface where users can select the desired AWS services, configure the specifications and quantities of resources needed,
and define usage patterns.
By inputting the details of the infrastructure deployment, such as the types and sizes of EC2 instances, storage volumes, networking
resources, and other AWS services, the AWS Pricing Calculator can generate an estimated monthly cost for running the infrastructure in
the AWS Cloud.
This cost estimate can help the company plan its budget, evaluate different deployment options, and make informed decisions about the
infrastructure design.
upvoted 1 times
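The estimate the calculator produces is essentially compute hours plus storage (and other line items) multiplied by unit rates. A minimal sketch of that arithmetic; the rates below are placeholders, not real AWS prices, which the AWS Pricing Calculator supplies per Region:

```python
def monthly_estimate(instance_count: int, hourly_rate: float,
                     storage_gb: float, gb_month_rate: float,
                     hours_per_month: int = 730) -> float:
    """Rough monthly cost: compute hours plus storage.
    All rates are hypothetical placeholders for illustration."""
    compute = instance_count * hours_per_month * hourly_rate
    storage = storage_gb * gb_month_rate
    return round(compute + storage, 2)

# Two instances at a hypothetical $0.10/hour plus 500 GB at $0.08/GB-month:
print(monthly_estimate(2, 0.10, 500, 0.08))  # 186.0
```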
Which AWS service or tool helps to centrally manage billing and allow controlled access to resources across AWS accounts?
B. AWS Organizations
C. Cost Explorer
D. AWS Budgets
Correct Answer: B
AWS Organizations provides a way to centrally manage and govern multiple AWS accounts within an organization. It enables you to create
and manage groups of accounts, called organizational units (OUs), to align with your company's structure. With AWS Organizations, you
can apply policies across those accounts, control access to resources, and consolidate billing.
upvoted 1 times
https://aws.amazon.com/aws-cost-management/aws-cost-explorer/
AWS Cost Explorer has an easy-to-use interface that lets you visualize, understand, and manage your AWS costs and usage over time. Get
started quickly by creating custom reports that analyze cost and usage data. Analyze your data at a high level (for example, total costs and
usage across all accounts)
upvoted 1 times
Which of the following are Amazon Virtual Private Cloud (Amazon VPC) resources?
D. Groups; roles
Correct Answer: B
Internet gateways are another important resource in Amazon VPC. They serve as the entry and exit points for traffic between your VPC
and the internet. They enable communication between your VPC resources and external networks, such as the public internet.
upvoted 1 times
A company needs to identify the last time that a specific user accessed the AWS Management Console.
Which AWS service will provide this information?
A. Amazon Cognito
B. AWS CloudTrail
C. Amazon Inspector
D. Amazon GuardDuty
Correct Answer: B
By examining the CloudTrail logs, you can identify the last time a specific user accessed the AWS Management Console. The logs will
provide information about console sign-in events, including the user, timestamp, and source IP address.
upvoted 1 times
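Conceptually, finding the last console sign-in means filtering CloudTrail records for ConsoleLogin events by user and taking the latest timestamp. A sketch over simplified, fabricated records (real code would page through CloudTrail's LookupEvents API):

```python
from datetime import datetime

# Simplified, made-up stand-ins for CloudTrail log records.
EVENTS = [
    {"eventName": "ConsoleLogin", "userName": "alice", "eventTime": "2023-05-01T09:12:00"},
    {"eventName": "ConsoleLogin", "userName": "bob",   "eventTime": "2023-05-02T08:00:00"},
    {"eventName": "ConsoleLogin", "userName": "alice", "eventTime": "2023-05-03T17:45:00"},
    {"eventName": "RunInstances", "userName": "alice", "eventTime": "2023-05-04T10:00:00"},
]

def last_console_login(records, user):
    """Most recent ConsoleLogin timestamp for the given user, or None."""
    times = [
        datetime.fromisoformat(r["eventTime"])
        for r in records
        if r["eventName"] == "ConsoleLogin" and r["userName"] == user
    ]
    return max(times, default=None)

print(last_console_login(EVENTS, "alice"))  # 2023-05-03 17:45:00
```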
A company launched an Amazon EC2 instance with the latest Amazon Linux 2 Amazon Machine Image (AMI).
Which actions can a system administrator take to connect to the EC2 instance? (Choose two.)
Correct Answer: AC
D. AWS Systems Manager Session Manager provides interactive shell access to EC2 instances directly from the AWS Management Console
or the AWS CLI. It does not require opening inbound SSH ports or managing SSH keys.
upvoted 1 times
D. Use AWS Systems Manager Session Manager: This is a fully-managed, secure, and auditable way to access your instances using the
AWS Systems Manager console or AWS CLI. With Session Manager, you can tunnel your SSH (Secure Shell) and SCP (Secure Copy)
connections to your instances, without requiring inbound connections or the use of bastion hosts or VPNs.
upvoted 4 times
D - yes
A company wants to perform sentiment analysis on customer service email messages that it receives. The company wants to identify whether the
customer service engagement was positive or negative.
Which AWS service should the company use to perform this analysis?
A. Amazon Textract
B. Amazon Translate
C. Amazon Comprehend
D. Amazon Rekognition
Correct Answer: C
The AWS service that should be used to perform sentiment analysis on customer service email messages is Amazon Comprehend.
Amazon Comprehend is a natural language processing (NLP) service that uses machine learning to find insights and relationships in text.
It can be used to perform sentiment analysis, entity recognition, topic modeling, and language detection.
upvoted 1 times
Amazon Textract is a service that is used for extracting text and data from scanned documents, but it is not used for sentiment analysis.
Amazon Translate is a service that is used for translating text from one language to another, but it is not used for sentiment analysis.
Amazon Rekognition is a service that is used for image and video analysis, such as object and scene detection, facial analysis, and celebrity
recognition, but it is not used for sentiment analysis of text.
upvoted 4 times
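A hedged sketch of how this could look in code. The helper reproduces the "pick the label with the highest confidence" idea behind the Sentiment field Comprehend returns; the analyze function shows the boto3 call shape but is illustrative only and needs AWS credentials to run:

```python
def dominant_sentiment(scores: dict) -> str:
    """Return the label with the highest confidence, upper-cased to
    match Comprehend's Sentiment values
    (POSITIVE, NEGATIVE, NEUTRAL, MIXED)."""
    return max(scores, key=scores.get).upper()

def analyze(text: str) -> str:
    """Call Amazon Comprehend (illustrative; requires AWS credentials)."""
    import boto3  # imported here so the sketch loads without boto3 installed
    client = boto3.client("comprehend")
    resp = client.detect_sentiment(Text=text, LanguageCode="en")
    return resp["Sentiment"]

# The SentimentScore shape Comprehend returns, with fabricated numbers:
scores = {"Positive": 0.91, "Negative": 0.02, "Neutral": 0.06, "Mixed": 0.01}
print(dominant_sentiment(scores))  # POSITIVE
```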
A. 100MB
B. 5 GB
C. 5 TB
D. Unlimited
Correct Answer: D
D - yes
upvoted 2 times
A company is migrating to Amazon S3. The company needs to transfer 60 TB of data from an on-premises data center to AWS within 10 days.
Which AWS service should the company use to accomplish this migration?
A. Amazon S3 Glacier
C. AWS Snowball
Correct Answer: C
AWS Snowball transfers large datasets into AWS by shipping physical storage appliances, so the migration is not limited by network bandwidth; this makes it the appropriate choice for moving 60 TB within 10 days.
Amazon S3 Glacier (S3 Glacier) is a secure and durable service for low-cost data archiving and long-term backup.
With S3 Glacier, you can store your data cost effectively for months, years, or even decades. S3 Glacier helps you offload the administrative
burdens of operating and scaling storage to AWS, so you don't have to worry about capacity planning, hardware provisioning, data
replication, hardware failure detection and recovery, or time-consuming hardware migration
AWS Database Migration Service (AWS DMS) is a cloud service that makes it easy to migrate relational databases, data warehouses, NoSQL
databases, and other types of data stores. You can use AWS DMS to migrate your data into the AWS Cloud or between combinations of
cloud and on-premises setups.
upvoted 9 times
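The arithmetic explains why a plain network transfer is impractical here: moving 60 TB in 10 days would require a large, fully saturated sustained link. A quick check (using decimal terabytes and ignoring protocol overhead):

```python
def required_mbps(terabytes: float, days: float) -> float:
    """Sustained bandwidth in megabits per second needed to move the
    given volume in the given time (decimal units, no overhead)."""
    bits = terabytes * 10**12 * 8
    seconds = days * 86_400
    return bits / seconds / 10**6

# 60 TB in 10 days needs roughly a fully saturated 556 Mbps link,
# before any protocol overhead or retransmission:
print(round(required_mbps(60, 10)))  # 556
```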
A. In-memory
B. Relational
C. Key-value
D. Graph
Correct Answer: C
B. Discounts can be applied on a quarterly basis by submitting cases in the AWS Management Console.
C. Transitioning objects from Amazon S3 to Amazon S3 Glacier in separate AWS accounts will be less expensive.
D. Having multiple accounts reduces the risks associated with malicious activity targeted at a single account.
E. Amazon QuickSight offers access to a cost tool that provides application-specific recommendations for environments running in multiple
accounts.
Correct Answer: AD
D. Having multiple accounts reduces the risks associated with malicious activity targeted at a single account: If a single AWS account is
compromised, all the resources and workloads within that account may be at risk. By using multiple accounts, the impact of a security
breach or malicious activity can be limited to a specific account, reducing the overall risk.
upvoted 1 times
A. It allows for administrative isolation between different workloads, making it easier to manage permissions and access control for
different teams and projects.
D. Having multiple accounts reduces the risks associated with malicious activity targeted at a single account. If a security breach occurs in
one account, it will not affect other accounts, and the organization can contain the impact of the breach.
B, C, and E are not valid advantages of reconfiguring a single AWS account into multiple accounts.
B is not a valid advantage because discounts are applied to the entire organization's AWS usage and are not limited to a single account.
C is not valid because transitioning objects between S3 and Glacier would not be less expensive across multiple accounts.
E is not valid because QuickSight offers cost tools for any AWS account, regardless of whether it is a single or multiple accounts.
upvoted 4 times
Selected Answer: AD
AD is right; C seems unrelated.
upvoted 1 times
A. It allows for administrative isolation between different workloads. By creating separate AWS accounts for different workloads, it is
possible to have better control over administrative access and permissions. This can help to reduce the risk of accidental or intentional
changes to the workloads and increase security.
D. Having multiple accounts reduces the risks associated with malicious activity targeted at a single account. If a single AWS account is
compromised, all the workloads running in that account are at risk. By using multiple accounts, the risk is distributed, and the impact of a
security breach is minimized.
upvoted 1 times
A retail company has recently migrated its website to AWS. The company wants to ensure that it is protected from SQL injection attacks. The
website uses an
Application Load Balancer to distribute traffic to multiple Amazon EC2 instances.
Which AWS service or feature can be used to create a custom rule that blocks SQL injection attacks?
A. Security groups
B. AWS WAF
C. Network ACLs
D. AWS Shield
Correct Answer: B
By configuring AWS WAF with custom rules, you can define conditions and actions to be taken when SQL injection attempts are detected.
These rules can be applied at the Application Load Balancer level, allowing you to protect your web application across multiple Amazon
EC2 instances behind the load balancer.
upvoted 1 times
B - yes
https://docs.aws.amazon.com/waf/latest/developerguide/waf-rule-statement-type-sqli-match.html
upvoted 4 times
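To make the idea concrete, a WAF SQL injection rule inspects request values (query strings, headers, body) for attack signatures. The toy matcher below only illustrates that concept; AWS WAF's real SQLi inspection is far more thorough than this handful of hand-picked patterns:

```python
import re

# Illustrative only: a few classic injection signatures, nothing like
# the full inspection AWS WAF's SQLi match statement performs.
SQLI_SIGNATURES = re.compile(
    r"(?i)('\s*or\s*'?\d*'?\s*=|\bunion\s+select\b|--|;\s*drop\s+table)"
)

def looks_like_sqli(value: str) -> bool:
    """True when a request parameter matches a known injection signature."""
    return bool(SQLI_SIGNATURES.search(value))

print(looks_like_sqli("1' OR 1=1 --"))  # True
print(looks_like_sqli("hello world"))   # False
```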
Which AWS service provides a feature that can be used to proactively monitor and plan for the service quotas of AWS resources?
A. AWS CloudTrail
D. Amazon CloudWatch
Correct Answer: D
There is no question about being alerted (using CloudWatch). Trusted Advisor does proactive monitoring, and that's how you plan.
upvoted 3 times
Within the Service Limits category, AWS Trusted Advisor provides information and recommendations on your resource usage and service
quotas. It helps you monitor your current resource usage and plan for future needs by notifying you when you are approaching or
exceeding your service quotas.
upvoted 2 times
AWS Service Quotas allows you to view and manage your service quotas, which are the predefined limits set by AWS on the resources you
can use within your AWS account. With AWS Service Quotas, you can monitor your current resource usage, track your quotas, and request
increases if needed. It helps you understand your resource usage patterns and plan for scaling your applications or workloads
accordingly.
upvoted 1 times
AWS Trusted Advisor provides a feature that can be used to proactively monitor and plan for the service quotas of AWS resources. Trusted
Advisor offers recommendations to optimize AWS infrastructure across various categories, including service limits and quotas.
upvoted 1 times
Within AWS Trusted Advisor's Quotas category, you can view the current usage and available quota for different AWS services. It helps you
proactively monitor resource usage and identify potential issues related to resource limits or quotas. Trusted Advisor provides
recommendations to adjust quotas if needed, allowing you to plan and manage your AWS resource usage more effectively.
upvoted 2 times
Which of the following is an advantage that users experience when they move on-premises workloads to the AWS Cloud?
Correct Answer: A
Moving on-premises workloads to the AWS Cloud eliminates the need to maintain expensive data centers, hardware, and related
infrastructure. This can help organizations reduce their overall IT costs and improve their operational efficiency. AWS provides a wide
range of cloud-based services, including compute, storage, and database services, that can be used to run on-premises workloads on the
cloud. By moving workloads to AWS, organizations can take advantage of the flexibility, scalability, and cost-effectiveness of cloud
computing.
upvoted 1 times
Which design principle is included in the operational excellence pillar of the AWS Well-Architected Framework?
B. Anticipate failure.
D. Optimize costs.
Correct Answer: B
B. Anticipate failure.
This principle recommends designing systems that can withstand failure and that can be quickly recovered from any failure that may
occur. It also emphasizes the importance of testing and experimenting to improve system resilience, and the implementation of
automated responses to potential failures to minimize downtime and ensure system continuity.
upvoted 2 times
https://docs.aws.amazon.com/wellarchitected/latest/operational-excellence-pillar/design-principles.html
upvoted 2 times
Which AWS services offer gateway VPC endpoints that can be used to avoid sending traffic over the internet? (Choose two.)
C. AWS CodeBuild
D. Amazon S3
E. Amazon DynamoDB
Correct Answer: DE
VPC endpoints enable you to privately connect your VPC to services hosted on AWS without requiring an Internet gateway, a NAT device,
VPN, or firewall proxies. Endpoints are horizontally scalable and highly available virtual devices that allow communication between
instances in your VPC and AWS services. Amazon VPC offers two different types of endpoints: gateway type endpoints and interface type
endpoints.
Gateway type endpoints are available only for AWS services including S3 and DynamoDB. These endpoints will add an entry to your route
table you selected and route the traffic to the supported services through Amazon’s private network.
Interface type endpoints provide private connectivity to services powered by PrivateLink, being AWS services, your own services or SaaS
solutions, and supports connectivity over Direct Connect. More AWS and SaaS solutions will be supported by these endpoints in the future.
Please refer to VPC Pricing for the price of interface type endpoints.
upvoted 22 times
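The distinction above reduces to a small lookup: only S3 and DynamoDB have gateway endpoints; other supported services are reached through interface endpoints. As a sketch:

```python
# Only these two services offer gateway VPC endpoints; other supported
# services are reached through interface endpoints (AWS PrivateLink).
GATEWAY_ENDPOINT_SERVICES = {"s3", "dynamodb"}

def endpoint_type(service: str) -> str:
    """Return which kind of VPC endpoint a service name maps to."""
    return "Gateway" if service.lower() in GATEWAY_ENDPOINT_SERVICES else "Interface"

print(endpoint_type("S3"))        # Gateway
print(endpoint_type("DynamoDB"))  # Gateway
print(endpoint_type("SQS"))       # Interface
```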
A gateway VPC endpoint allows private connectivity to supported AWS services within a VPC without requiring an internet gateway, NAT
device, VPN connection, or AWS Direct Connect connection. It enables you to access AWS services privately from your VPC using private IP
addresses, eliminating the need to traverse the public internet.
upvoted 1 times
Selected Answer: DE
The AWS services that offer gateway VPC endpoints are:
D. Amazon S3: With a gateway VPC endpoint for Amazon S3, you can access S3 buckets from your VPC without traversing the internet.
E. Amazon DynamoDB: You can create a VPC endpoint for DynamoDB to access it from your VPC without going over the internet.
upvoted 1 times
D. Amazon S3
E. Amazon DynamoDB
Amazon S3 and Amazon DynamoDB both offer gateway VPC endpoints that allow you to access the services over a private network
connection within your VPC, without the need to go over the internet. This can help improve security, reduce latency, and lower data
transfer costs.
Amazon SNS, Amazon SQS, and AWS CodeBuild do not offer gateway VPC endpoints. However, you can still use these services securely
within your VPC by using VPC endpoints for AWS services or by setting up a VPC peering connection.
upvoted 1 times
Gateway type endpoints are available only for AWS services including S3 and DynamoDB. These endpoints will add an entry to your route
table you selected and route the traffic to the supported services through Amazon’s private network.
Reference -
https://aws.amazon.com/vpc/faqs/#:~:text=Gateway%20type%20endpoints%20are%20available,services%20through%20Amazon's%20private%20network.
upvoted 6 times
Which of the following is the customer responsible for updating and patching, according to the AWS shared responsibility model?
Correct Answer: B
https://docs.aws.amazon.com/workspaces/latest/adminguide/update-management.html
upvoted 6 times
Who has the responsibility to patch the host operating system of an Amazon EC2 instance, according to the AWS shared responsibility model?
D. AWS only
Correct Answer: D
https://aws.amazon.com/compliance/shared-responsibility-model/#:~:text=Security%20and%20Compliance,and%20security%20patches)
upvoted 3 times
In the case of Amazon EC2 instances, AWS is responsible for patching and maintaining the underlying host operating system. AWS
ensures that the infrastructure and host operating system are up to date with the latest security patches and updates. This responsibility
includes managing the hypervisor and the host operating system layer.
As a customer, you are responsible for managing the security and patching of the guest operating system and any applications or
software running on your EC2 instances. This includes configuring and maintaining security settings, applying updates and patches to the
guest operating system, and implementing appropriate security measures within your EC2 instances.
To summarize, according to the AWS shared responsibility model, AWS is responsible for patching and maintaining the host operating
system of EC2 instances, while the customer is responsible for patching and maintaining the guest operating system and any software
running on those instances.
upvoted 2 times
Patch Management – AWS is responsible for patching and fixing flaws within the infrastructure, but customers are responsible for
patching their guest OS and applications.
https://aws.amazon.com/compliance/shared-responsibility-model/#:~:text=Patch%20Management%20%E2%80%93%20AWS%20is%20responsible,their%20guest%20OS%20and%20applications.
upvoted 2 times
A company is using an Amazon RDS DB instance for an application that is deployed in the AWS Cloud. The company needs regular patching of the
operating system of the server where the DB instance runs.
What is the company's responsibility in this situation, according to the AWS shared responsibility model?
A. Open a support case to obtain administrative access to the server so that the company can patch the DB instance operating system.
B. Open a support case and request that AWS patch the DB instance operating system.
C. Use administrative access to the server, and apply the operating system patches during the regular maintenance window that is defined for
the DB instance.
D. Establish a regular maintenance window that tells AWS when to patch the DB instance operating system.
Correct Answer: D
The 30-minute maintenance window is selected at random from an 8-hour block of time per region. If you don't specify a maintenance
window when you create the DB instance, RDS assigns a 30-minute maintenance window on a randomly selected day of the week.
upvoted 1 times
The company does not have direct administrative access to the server hosting the DB instance and should not attempt to patch the
operating system themselves. Instead, they can rely on AWS to perform the necessary patching as part of their managed service for
Amazon RDS.
upvoted 1 times
Selected Answer: D
You will never get administrative access to the servers used by Amazon RDS.
upvoted 2 times
Some maintenance items require that Amazon RDS take your DB instance offline for a short time. Maintenance items that require a
resource to be offline include required operating system or database patching. Required patching is automatically scheduled only for
patches that are related to security and instance reliability. Such patching occurs infrequently (typically once every few months) and
seldom requires more than a fraction of your maintenance window.
Reference:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_UpgradeDBInstance.Maintenance.html
upvoted 3 times
Why is an AWS Well-Architected review a critical part of the cloud design process?
B. A Well-Architected review helps identify design gaps and helps evaluate design decisions and related documents.
C. A Well-Architected review is an audit mechanism that is a part of requirements for service level agreements.
D. A Well-Architected review eliminates the need for ongoing auditing and compliance tests.
Correct Answer: B
By conducting a Well-Architected review, organizations can gain insights into potential design gaps, identify areas for improvement, and
ensure that their architecture aligns with AWS best practices. This review process helps validate the architecture's adherence to AWS
guidelines and provides recommendations for optimization, risk mitigation, and cost savings.
The review is not mandatory for workloads to run on AWS, nor does it replace ongoing auditing and compliance requirements. Instead, it
serves as a valuable tool in the cloud design process, assisting organizations in creating robust, well-architected solutions on AWS.
upvoted 1 times
A company implements an Amazon EC2 Auto Scaling policy along with an Application Load Balancer to automatically recover unhealthy
applications that run on
Amazon EC2 instances.
Which pillar of the AWS Well-Architected Framework does this action cover?
A. Security
B. Performance efficiency
C. Operational excellence
D. Reliability
Correct Answer: D
Reference:
https://docs.aws.amazon.com/wellarchitected/latest/reliability-pillar/wellarchitected-reliability-pillar.pdf
Even though the question mentions scaling, it didn't specifically state vertical or horizontal. Vertical scaling is an example of Performance
Efficiency and horizontal scaling is an example of Reliability. The question said scaling is used to recover unhealthy applications, so
horizontal scaling is implied. Horizontal scaling and an Application Load Balancer fall under Reliability.
upvoted 4 times
Which AWS Cloud benefit is shown by an architecture's ability to withstand failures with minimal downtime?
A. Agility
B. Elasticity
C. Scalability
D. High availability
Correct Answer: D
D - yes
upvoted 1 times
Under the AWS shared responsibility model, which task is the customer's responsibility when managing AWS Lambda functions?
Correct Answer: A
AWS Lambda is a serverless computing service where the underlying infrastructure and server management are handled by AWS. The
shared responsibility model dictates that AWS is responsible for the operational aspects of the Lambda service, such as scaling, patching
the underlying infrastructure, and managing the runtime environment.
However, the customer is responsible for the configuration and management of their specific Lambda functions. This includes creating
versions of Lambda functions, which allows for safe and controlled updates to the function's code and configuration. Creating versions
enables the customer to manage and deploy changes to their Lambda functions without affecting the production environment.
upvoted 3 times
AWS Lambda is a serverless compute service, so customers don't have to worry about maintaining servers or operating systems (B),
scaling resources according to demand (C), or updating the runtime environment (D). Those tasks are managed by AWS.
However, customers are responsible for the code they run in Lambda functions, which includes version management. So creating versions
of Lambda functions (A) is part of the customer's responsibility. This allows customers to manage and invoke different versions of a
function, which can be critical for development, testing, and deployment workflows.
upvoted 2 times
AWS manages the underlying infrastructure, including server and operating systems, as well as scaling Lambda resources according to
demand.
upvoted 1 times
However, of the given options, the task that is closest to the customer's responsibility when managing AWS Lambda functions is option D,
updating the Lambda runtime environment. AWS is responsible for maintaining the underlying infrastructure and runtime environment,
including security updates and patching, but customers are responsible for updating the Lambda runtime version for their functions to
take advantage of new features or performance improvements.
upvoted 2 times
No, customers cannot update the Lambda runtime environment directly. Lambda runtime environments are managed by AWS and are
designed to provide a secure and isolated execution environment for Lambda functions.
When you create a Lambda function, you can choose a runtime environment from a list of supported runtimes, such as Node.js, Python,
Java, and C#. AWS manages the runtime environment and ensures that it is up to date with security patches and other updates.
upvoted 1 times
B - no, AWS
C - no, AWS
D - no, AWS
upvoted 1 times
The AWS shared responsibility model defines which security and compliance tasks are the responsibility of AWS and which tasks are the
responsibility of the customer. Under this model, AWS is responsible for the security of the underlying infrastructure that supports the
AWS services, while the customer is responsible for the security of the applications and data they run on the AWS services.
For AWS Lambda, AWS is responsible for managing the underlying compute infrastructure, including server and operating systems,
scaling of resources, and updating the Lambda runtime environment. However, the customer is responsible for creating and managing
the code and configurations of their Lambda functions.
upvoted 1 times
D. A dedicated AWS staff member who reviews the user's application architecture
Correct Answer: A
https://aws.amazon.com/premiumsupport/plans/enterprise/
upvoted 2 times
A company needs to generate reports that can break down cloud costs by product, by company-defined tags, and by hour, day, and month.
Which AWS tool should the company use to meet these requirements?
Correct Answer: D
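The report data behind this question is essentially a set of cost line items that can be summed under different grouping keys (product, tag, hour, day, month). A minimal sketch of that kind of grouping over invented line items — the field names and values here are illustrative, not the actual Cost and Usage Report columns:

```python
from collections import defaultdict

# Illustrative cost line items: (timestamp, product, tag, cost_usd)
line_items = [
    ("2023-05-01T10:00", "AmazonEC2", "team:web",  1.20),
    ("2023-05-01T11:00", "AmazonEC2", "team:web",  1.10),
    ("2023-05-01T10:00", "AmazonS3",  "team:data", 0.30),
    ("2023-05-02T09:00", "AmazonEC2", "team:web",  1.25),
]

def group_costs(items, key_fn):
    """Sum costs under an arbitrary grouping key (product, tag, day...)."""
    totals = defaultdict(float)
    for ts, product, tag, cost in items:
        totals[key_fn(ts, product, tag)] += cost
    return dict(totals)

by_day = group_costs(line_items, lambda ts, p, t: ts[:10])   # day granularity
by_tag = group_costs(line_items, lambda ts, p, t: t)         # company tag
print(by_day)
print(by_tag)
```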
A company has a serverless application that includes an Amazon API Gateway API, an AWS Lambda function, and an Amazon DynamoDB
database.
Which AWS service can the company use to trace user requests as they move through the application's components?
A. AWS CloudTrail
B. Amazon CloudWatch
C. Amazon Inspector
D. AWS X-Ray
Correct Answer: D
CloudTrail - audit
Cloudwatch - monitor
Inspector - vulnerability management
X-Ray - tracing
upvoted 102 times
AWS X-Ray allows you to trace user requests as they travel through your serverless application's components, including Amazon API
Gateway APIs, AWS Lambda functions, and Amazon DynamoDB databases. With X-Ray, you can visualize and debug the requests, identify
performance bottlenecks, and analyze errors.
* AWS CloudTrail is a service that records AWS API calls and events for audit and compliance purposes. Amazon CloudWatch is a
monitoring service for AWS resources and the applications you run on them. Amazon Inspector is a security assessment service that helps
you test the security of your applications.
upvoted 3 times
AWS X-Ray is a service that helps developers analyze and debug production, distributed applications, such as those built using a
microservices architecture. With X-Ray, you can understand how your application and its underlying services are performing to identify
and troubleshoot the root cause of performance issues and errors. It allows you to trace requests as they move through your application
and visualize the components and services that are involved in processing those requests
upvoted 2 times
AWS X-Ray is a service that allows developers to analyze and debug distributed applications, such as those built using Amazon API
Gateway, AWS Lambda, and Amazon DynamoDB. X-Ray provides a visual representation of the application's components and their
interactions, allowing developers to trace user requests as they move through the application and identify performance issues or errors.
By using X-Ray, the company can view a complete end-to-end trace of user requests, including the time taken by each component, the
number of requests processed, and any errors or exceptions encountered. X-Ray also provides insights into how resources are used by the
application, which can help the company optimize performance and reduce costs.
upvoted 2 times
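X-Ray's core idea — attaching a timed segment to a request at each component it crosses — can be mimicked in a few lines. This is only a conceptual sketch in plain Python, not the X-Ray SDK; the segment names mirror the services in the question:

```python
import time
from functools import wraps

TRACE = []  # collected (segment name, duration) pairs for the current request

def traced(segment_name):
    """Record a timed segment each time the wrapped component runs."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                TRACE.append((segment_name, time.perf_counter() - start))
        return wrapper
    return decorator

@traced("api-gateway")
def handle_request(item_id):
    return lambda_handler(item_id)

@traced("lambda")
def lambda_handler(item_id):
    return dynamodb_get(item_id)

@traced("dynamodb")
def dynamodb_get(item_id):
    return {"id": item_id}

handle_request("42")
for name, elapsed in TRACE:
    print(f"{name}: {elapsed * 1000:.3f} ms")
```

The innermost segment finishes first, so the trace lists dynamodb, then lambda, then api-gateway — exactly the per-component timing breakdown a tracing service surfaces.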
AWS X-Ray is a distributed tracing system that helps developers analyze and debug distributed applications, such as those built using AWS
Lambda, Amazon API Gateway, and Amazon DynamoDB. With X-Ray, the company can visualize the flow of requests through their
application and identify performance bottlenecks or errors.
upvoted 2 times
A. Amazon DynamoDB
B. Amazon RDS
C. Amazon Redshift
D. Amazon ElastiCache
Correct Answer: C
A. Amazon S3
D. AWS WAF
Correct Answer: B
"You can interact with IAM through the web-based IAM console, the AWS Command Line Interface, or the AWS API or SDKs. IAM is offered
at no additional charge. "
upvoted 3 times
A company needs to design an AWS disaster recovery plan to cover multiple geographic areas.
Which action will meet this requirement?
Correct Answer: C
"In addition to data, you must also back up the configuration and infrastructure necessary to redeploy your workload and meet your
Recovery Time Objective (RTO). AWS CloudFormation provides Infrastructure as Code (IaC), and enables you to define all of the AWS
resources in your workload so you can reliably deploy and redeploy to multiple AWS accounts and AWS Regions. "
upvoted 8 times
Configuring the architecture across multiple AWS Regions will ensure that if a disaster occurs in one region, the application can failover to
another region. This approach provides geographic diversity and redundancy for disaster recovery purposes.
upvoted 2 times
Therefore, option C (Configure the architecture across multiple AWS Regions) is the most suitable action to meet the requirement of
designing an AWS disaster recovery plan to cover multiple geographic areas.
upvoted 2 times

A - does not address DR
C - yes
AWS Regions are geographically separated locations, each with multiple Availability Zones (AZs), that are designed to be isolated from
each other. Deploying an application across multiple Regions can help provide disaster recovery and business continuity in the event of a
natural disaster or other regional disruption. By deploying resources in multiple Regions, the company can ensure that if one Region
becomes unavailable, the application can still be accessed from another Region.
upvoted 3 times
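The failover behavior described above can be sketched as health-check-driven Region selection. The Region names are real AWS Regions, but the health map and routing logic are an invented stand-in for DNS-failover-style routing, not an AWS API:

```python
def pick_region(preferred_order, health):
    """Return the first healthy Region in priority order, mimicking
    health-check-based failover routing across Regions."""
    for region in preferred_order:
        if health.get(region, False):
            return region
    raise RuntimeError("no healthy Region available")

regions = ["us-east-1", "eu-west-1", "ap-southeast-2"]

# Normal operation: the primary Region serves traffic.
print(pick_region(regions, {"us-east-1": True, "eu-west-1": True}))   # us-east-1

# Regional disruption: traffic fails over to the next healthy Region.
print(pick_region(regions, {"us-east-1": False, "eu-west-1": True}))  # eu-west-1
```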
Seems like a lot of people don't understand the basic functionality of cloud and geographic redundancy if they answered anything else
upvoted 2 times
Multi-AZ strategy
Every AWS Region consists of multiple Availability Zones (AZs). Each AZ consists of one or more data centers, located in a separate and
distinct geographic location. This significantly reduces the risk of a single event impacting more than one AZ. Therefore, if you're
designing a DR strategy to withstand events such as power outages, flooding, and other localized disruptions, then using a Multi-AZ
DR strategy within an AWS Region can provide the protection you need.
Multi-Region strategy
AWS provides multiple resources to enable a multi-Region approach for your workload. This provides business assurance against events
of sufficient scope that can impact multiple data centers across separate and distinct locations. For most examples in this blog post, we
use a multi-Region approach to demonstrate DR strategies. But, you can also use these for Multi-AZ strategies or hybrid (on-premises
workload/cloud recovery) strategies.
upvoted 2 times
Question #138 Topic 1
Which of the following is a benefit of moving from an on-premises data center to the AWS Cloud?
B. Compute costs can be viewed in the AWS Billing and Cost Management console.
D. Users can optimize costs by permanently running enough instances at peak load.
Correct Answer: A
B - basic feature
C - not true
In which ways does the AWS Cloud offer lower total cost of ownership (TCO) of computing resources than on-premises data centers? (Choose
two.)
Correct Answer: AC
D. AWS uses economies of scale to continually reduce prices: AWS operates at a large scale, serving millions of customers, which allows
them to benefit from economies of scale. As a result, AWS continually reduces prices for its services, providing cost savings for customers.
upvoted 2 times
Explanation:
AWS offers lower total cost of ownership (TCO) than on-premises data centers in several ways. First, AWS replaces upfront capital
expenditures with pay-as-you-go costs. Instead of purchasing and maintaining hardware, software, and facilities, users pay only for the
computing resources they actually use. Second, AWS uses economies of scale to continually reduce prices, resulting in lower costs for
users over time. This is possible because AWS has a large global infrastructure, which enables it to negotiate better prices for hardware
and software, and to spread the costs of its operations over a large number of customers. By contrast, on-premises data centers require
significant upfront investments, and users must bear the costs of maintaining and upgrading their equipment and facilities over time.
upvoted 3 times
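The capex-versus-pay-as-you-go argument above can be made concrete with toy numbers. All figures below are invented for illustration and are not real AWS prices:

```python
# On-premises: large upfront purchase amortized whether used or not.
upfront_hardware = 100_000          # capex, illustrative
onprem_monthly_ops = 2_000          # power, space, staff share, illustrative

# Cloud: pay only for hours actually consumed.
hourly_rate = 0.50                  # illustrative instance price
hours_used_per_month = 400          # variable workload, idle the rest of the time

months = 36
onprem_tco = upfront_hardware + onprem_monthly_ops * months
cloud_tco = hourly_rate * hours_used_per_month * months

print(f"on-prem 3-year TCO: ${onprem_tco:,}")    # $172,000
print(f"cloud 3-year TCO:   ${cloud_tco:,.0f}")  # $7,200
```

The gap is largest exactly when utilization is low or unpredictable, which is the scenario the comments describe; a workload running flat-out 24/7 would narrow it considerably.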
A. AWS replaces upfront capital expenditures with pay-as-you-go costs, allowing users to pay only for the computing resources they
actually use, without having to make an upfront capital investment in hardware and software.
D. AWS uses economies of scale to continually reduce prices, passing on the savings to customers. This allows customers to benefit from
lower costs over time, without having to negotiate lower prices or renegotiate contracts.
upvoted 2 times
Selected Answer: AD
C not true - staff still needed to admin AWS
upvoted 3 times
C - not true
D - yes
E - no
upvoted 2 times
A. AWS replaces upfront capital expenditures with pay-as-you-go costs: With AWS, users only pay for what they use, without the need for
upfront investments in hardware, facilities, and maintenance. This pay-as-you-go model can result in significant cost savings, especially for
organizations that have unpredictable or variable workloads.
D. AWS uses economies of scale to continually reduce prices: AWS operates at a large scale, and as a result, they can offer computing
resources at lower prices than on-premises data centers. Additionally, AWS regularly reduces prices for its services as it gains more
customers and achieves greater economies of scale.
upvoted 2 times
D is also a valid response; however, in my opinion A and C have more merit and value to the customer. The key word is 'continually',
which made me think a bit...
upvoted 1 times
A. Amazon GuardDuty
C. Amazon Cognito
Correct Answer: A
Selected Answer: A
GuardDuty
Correct Answer: B
AWS performs automatic patching of Amazon EC2 instances as part of its shared responsibility model for managing security in the cloud.
AWS manages the underlying infrastructure and provides patches and updates for the infrastructure components, including the
hypervisor, network, and storage layers. However, customers are responsible for patching the operating system, applications, and
software that they install on their EC2 instances.
upvoted 1 times
Correct Answer: A
"All user data stored in Amazon DynamoDB is fully encrypted at rest. DynamoDB encryption at rest provides enhanced security by
encrypting all your data at rest using encryption keys stored in AWS Key Management Service (AWS KMS). This functionality helps reduce
the operational burden and complexity involved in protecting sensitive data. With encryption at rest, you can build security-sensitive
applications that meet strict encryption compliance and regulatory requirements."
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/EncryptionAtRest.html
upvoted 1 times
AWS automatically provides and manages TLS (Transport Layer Security) certificates for users' websites through the AWS Certificate
Manager (ACM) service. ACM eliminates the need for users to generate, manage, and renew their own SSL/TLS certificates. It simplifies
the process of obtaining and deploying certificates for securing websites and other AWS resources. Users can request and import SSL/TLS
certificates for their domain names directly from ACM, and AWS takes care of the certificate provisioning, renewal, and integration with
other AWS services like Elastic Load Balancer (ELB), CloudFront, and API Gateway.
upvoted 1 times
Amazon Web Services (AWS) automatically patches the underlying infrastructure that runs customer workloads, including Amazon Elastic
Compute Cloud (EC2) instances, to address known security vulnerabilities and protect the AWS infrastructure. Customers are responsible
for patching the operating systems, platforms, and application layers of their EC2 instances, but AWS provides automated patching tools
to simplify the process.
AWS also provides encryption capabilities for customer data, but it's the customer's responsibility to implement and manage encryption
keys and policies for their data. The customer is also responsible for encrypting their own network traffic and creating TLS certificates for
their websites.
upvoted 3 times
AWS owned CMK – Default encryption type. The key is owned by DynamoDB (no additional charge).
AWS managed CMK – The key is stored in your account and is managed by AWS KMS (AWS KMS charges apply).
Customer managed CMK – The key is stored in your account and is created, owned, and managed by you. You have full control over the
CMK (AWS KMS charges apply).
All DynamoDB tables are encrypted. There is no option to enable or disable encryption for new or existing tables. By default, all tables are
encrypted under an AWS owned customer master key (CMK) in the DynamoDB service account.
https://catalog.us-east-1.prod.workshops.aws/workshops/aad9ff1e-b607-45bc-893f-121ea5224f24/en-US/ddb#:~:text=All%20user%20data%20stored%20in,Management%20Service%20(AWS%20KMS).
upvoted 2 times
B. AWS automatically patches EC2 instances.
upvoted 1 times
Question #143 Topic 1
Which AWS service or tool can a company use to visualize, understand, and manage AWS spending and usage over time?
B. Amazon CloudWatch
C. Cost Explorer
D. AWS Budgets
Correct Answer: C
"AWS Cost Explorer has an easy-to-use interface that lets you visualize, understand, and manage your AWS costs and usage over time. Get
started quickly by creating custom reports that analyze cost and usage data. Analyze your data at a high level (for example, total costs and
usage across all accounts), or dive deeper into your cost and usage data to identify trends, pinpoint cost drivers, and detect anomalies."
https://aws.amazon.com/aws-cost-management/aws-cost-explorer/
upvoted 1 times
Cost Explorer is a cost management tool provided by AWS that allows users to visualize, understand, and manage AWS spending and
usage over time. It provides various features and functionalities to help users analyze their AWS costs and usage, such as cost and usage
reports, custom cost allocation tags, and budgeting tools
upvoted 1 times
A company wants to deploy some of its resources in the AWS Cloud. To meet regulatory requirements, the data must remain local and on
premises. There must be low latency between AWS and the company resources.
Which AWS service or feature can be used to meet these requirements?
B. Availability Zones
C. AWS Outposts
Correct Answer: A
Reference:
https://d1.awsstatic.com/whitepapers/hybrid-cloud-with-aws.pdf
(18)
AWS Outposts also provides low latency between your on-premises resources and AWS. This is because AWS Outposts uses the same
high-speed network that AWS uses to connect its data centers around the world.
upvoted 1 times
AWS Outposts is a fully managed service that extends AWS infrastructure, services, APIs, and tools to customer premises. With AWS
Outposts, customers can run AWS services locally, ensuring that their data remains on premises while still having access to low-latency
connectivity with AWS services in the cloud.
AWS Local Zones and Availability Zones are both cloud data centers that provide high availability and fault tolerance for AWS resources.
They do not provide the capability to run AWS services locally.
AWS Wavelength Zones provide ultra-low latency connectivity between 5G devices and AWS services, but they are not designed to run
AWS services on customer premises.
upvoted 1 times
AWS Outposts allows customers to run AWS infrastructure and services on-premises for a consistent hybrid experience. With AWS
Outposts, customers can have low-latency access to the same AWS services, APIs, and tools that they use in AWS Regions, while
maintaining data residency and meeting regulatory requirements. This service enables customers to run compute, storage, and database
services locally, and seamlessly connect to AWS services in the cloud.
upvoted 1 times
C - yes
Unlike Outposts, which you deploy within your datacenter or a co-location of your choice, Local Zones are owned, managed, and operated
by AWS. Local Zones eliminate the need for you to manage power, connectivity, and capacity using the exact same set of APIs and tools
that you are already using for an AWS Region.
Ref link: https://aws.amazon.com/blogs/compute/aws-local-zones-and-aws-outposts-choosing-the-right-technology-for-your-edge-workload/#:~:text=Unlike%20Outposts%2C%20which%20you%20deploy,using%20for%20an%20AWS%20Region.
upvoted 2 times
Selected Answer: C
C is the answer.
upvoted 1 times
D. Create an AWS Direct Connect connection between the company and AWS.
Correct Answer: B
Reference:
https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/infrastructure-security.html
Which AWS service is a highly available and scalable DNS web service?
A. Amazon VPC
B. Amazon CloudFront
C. Amazon Route 53
D. Amazon Connect
Correct Answer: C
Reference:
https://aws.amazon.com/route53/
Which of the following is an AWS best practice for managing an AWS account root user?
Correct Answer: B
Reference:
https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html
Source: https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html
upvoted 5 times
source: https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html
upvoted 1 times
Enabling multi-factor authentication (MFA) for the root user is an AWS best practice for managing an AWS account. MFA adds an
additional layer of security to an account by requiring a user to provide a second form of authentication in addition to their password,
such as a code generated by an authenticator app or a hardware token. This makes it much more difficult for unauthorized users to
access the account, even if they have obtained the root user's password.
upvoted 1 times
Option A is not a best practice as keeping the root user password with the security team may increase the risk of the password being
compromised.
Option C is not recommended as creating access keys for the root user can lead to the key being accidentally exposed and used to gain
unauthorized access to the account.
Option D is not recommended as using consistent passwords for compliance purposes can increase the risk of the password being
compromised. It is recommended to use strong, unique passwords and rotate them periodically
upvoted 1 times
It is also recommended to avoid using the root user for everyday tasks and instead create IAM users with the necessary permissions.
Access keys should not be used for the root user, as they provide long-term access to the AWS account and should be used with caution.
Keeping the password consistent for compliance purposes is not necessarily a security best practice, and can make it easier for attackers
to gain unauthorized access if the password is known or compromised.
upvoted 3 times
A company wants to improve its security and audit posture by limiting Amazon EC2 inbound access.
What should the company use to access instances remotely instead of opening inbound SSH ports and managing SSH keys?
D. Network ACLs
Correct Answer: B
Reference:
https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager.html
Answer is B.
upvoted 6 times
By using Session Manager, you can centrally manage access to instances, enforce fine-grained permissions using IAM policies, and record
all session activity in CloudTrail for auditing and compliance purposes. It provides a secure and convenient way to access your EC2
instances without exposing them to inbound SSH traffic.
upvoted 1 times
After selecting an Amazon EC2 Dedicated Host reservation, which pricing option would provide the largest discount?
A. No upfront payment
Correct Answer: D
Reference:
https://aws.amazon.com/ec2/pricing/reserved-instances/pricing/
Among these options, the largest discount is provided when choosing the all upfront payment option. With the all upfront payment
option, you pay for the entire reservation upfront, which gives you the highest discount compared to the other options.
upvoted 1 times
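The ranking of reservation payment options can be shown with illustrative discount rates. The percentages below are made up for the example; actual rates vary by instance type, term, and Region, but All Upfront consistently carries the deepest discount:

```python
on_demand_hourly = 1.00            # illustrative baseline price
hours_per_year = 8760

# Hypothetical discounts off On-Demand, deepest for All Upfront.
discounts = {
    "No Upfront":      0.30,
    "Partial Upfront": 0.35,
    "All Upfront":     0.40,
}

costs = {
    option: on_demand_hourly * (1 - d) * hours_per_year
    for option, d in discounts.items()
}
cheapest = min(costs, key=costs.get)
print(cheapest)  # All Upfront
```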
A company has refined its workload to use specific AWS services to improve efficiency and reduce cost.
Which best practice for cost governance does this example show?
A. Resource controls
B. Cost allocation
C. Architecture optimization
D. Tagging enforcement
Correct Answer: B
Reference:
https://d1.awsstatic.com/whitepapers/architecture/AWS-Cost-Optimization-Pillar.pdf
The example provided shows the best practice of architecture optimization for cost governance. Architecture optimization is the process
of refining an organization's workloads to use specific AWS services or configurations to improve efficiency and reduce costs. By choosing
the most cost-effective services and configurations, an organization can optimize its architecture to minimize costs while still meeting its
performance and functionality requirements.
upvoted 1 times
The example given shows the best practice for cost governance of C. Architecture optimization.
By refining its workload to use specific AWS services to improve efficiency and reduce cost, the company is optimizing its architecture to
minimize unnecessary resource usage and maximize cost-effectiveness. This is a key aspect of cost governance, as it helps to ensure that
resources are being used efficiently and cost-effectively.
Resource controls refer to setting limits on resource usage to prevent over-provisioning and unnecessary costs. Cost allocation involves
assigning costs to specific users, teams, or departments for accountability and cost optimization. Tagging enforcement involves the use of
tags to label and categorize resources for better cost management.
While all of these practices are important for cost governance, architecture optimization is a foundational best practice that underlies
many of the other practices.
upvoted 2 times
Option A, resource controls, involves setting limits on the resources that can be used by an individual, team, or application to manage
costs. This is not related to the example provided.
Option B, cost allocation, involves tracking costs across multiple accounts, teams, or applications to assign costs to the appropriate entity
for accountability and financial reporting. This is not related to the example provided.
Option D, tagging enforcement, involves enforcing a consistent set of tags on AWS resources to make it easier to search, filter, and track
costs. This is not related to the example provided.
Refining a workload to use specific AWS services to improve efficiency and reduce cost is an example of the best practice of architecture
optimization for cost governance. Architecture optimization involves designing and configuring AWS resources and services to minimize
costs while meeting performance and availability requirements.
upvoted 1 times
Resource controls, cost allocation, and tagging enforcement are also important best practices for cost governance, but they are not
directly related to the example provided. Resource controls help to ensure that resources are used efficiently and effectively, cost
allocation allows you to assign costs to specific projects or departments, and tagging enforcement helps to ensure that resources are
properly tagged for cost tracking and reporting.
upvoted 2 times
Selected Answer: C
It's cost optimization, not cost allocation. So it's C.
upvoted 2 times
Question #151 Topic 1
A company would like to host its MySQL databases on AWS and maintain full control over the operating system, database installation, and
configuration.
Which AWS service should the company use to host the databases?
A. Amazon RDS
B. Amazon EC2
C. Amazon DynamoDB
D. Amazon Aurora
Correct Answer: A
Reference:
https://d1.awsstatic.com/whitepapers/best-practices-for-running-oracle-database-on-aws.pdf?did=wp_card&trk=wp_card
(6)
Amazon EC2 (Elastic Compute Cloud) is a web service that provides resizable compute capacity in the cloud, allowing users to create and
run virtual machines (instances) on Amazon's infrastructure. With Amazon EC2, users have full control over the operating system,
database installation, and configuration. This means that the company can install and configure MySQL as they see fit, giving them
complete control over their databases.
upvoted 1 times
Who is responsible for this page? The answers have no consistency.
B - Amazon EC2 is absolutely correct.
upvoted 3 times
Amazon EC2 provides full administrative access to the underlying operating system and hardware, giving users complete control over the
configuration and management of their environment. This allows users to install and configure the specific software versions and
components required for their database environment, including MySQL.
Amazon RDS, on the other hand, is a managed database service that provides automated database administration, backup and recovery,
and scaling capabilities. While Amazon RDS supports MySQL databases, users do not have access to the underlying operating system and
hardware, and therefore cannot have full control over the installation and configuration of their databases.
upvoted 1 times
To maintain full control over the operating system, database installation, and configuration, a company should use Amazon EC2 to host
MySQL databases on AWS. Amazon EC2 provides scalable, customizable compute capacity that can be used to run applications and
services, including databases.
upvoted 1 times
How does the AWS global infrastructure offer high availability and fault tolerance to its users?
A. The AWS infrastructure is made up of multiple AWS Regions within various Availability Zones located in areas that have low flood risk, and
are interconnected with low-latency networks and redundant power supplies.
B. The AWS infrastructure consists of subnets containing various Availability Zones with multiple data centers located in the same geographic
location.
C. AWS allows users to choose AWS Regions and data centers so that users can select the closest data centers in different Regions.
D. The AWS infrastructure consists of isolated AWS Regions with independent Availability Zones that are connected with low-latency
networking and redundant power supplies.
Correct Answer: D
AWS achieves high availability and fault tolerance for its users by building its infrastructure on top of independent AWS Regions, which are
made up of separate Availability Zones. Each Availability Zone is physically isolated and located in a separate geographic location with
redundant power, networking, and connectivity to other zones within the same region. This isolation and redundancy ensure that even if
one Availability Zone fails, other zones within the same region continue to operate normally
upvoted 1 times
A company is using Amazon EC2 Auto Scaling to scale its Amazon EC2 instances.
Which benefit of the AWS Cloud does this example illustrate?
A. High availability
B. Elasticity
C. Reliability
D. Global reach
Correct Answer: A
This provides several benefits, including the ability to meet fluctuating demand while maintaining optimal performance and cost
efficiency. During periods of high demand, Amazon EC2 Auto Scaling can automatically add more instances to handle the increased
workload, ensuring high availability and preventing performance degradation. Conversely, during periods of low demand, it can scale
down the number of instances, reducing costs and optimizing resource utilization.
upvoted 2 times
The example of using Amazon EC2 Auto Scaling to scale Amazon EC2 instances illustrates the benefit of elasticity in the AWS Cloud.
Elasticity refers to the ability of an organization to quickly and easily scale up or down its resources in response to changing demand. In
this case, the company is able to use Amazon EC2 Auto Scaling to automatically adjust the number of EC2 instances based on demand,
ensuring that it always has enough resources to handle incoming traffic without paying for excess capacity during periods of low demand.
upvoted 3 times
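Target tracking, one common EC2 Auto Scaling policy type, adjusts capacity roughly in proportion to how far the observed metric is from its target. A simplified version of that calculation (the formula is a simplification that ignores warm-up, cooldowns, and min/max limits beyond a floor of one instance):

```python
import math

def desired_capacity(current_capacity, metric_value, target_value):
    """Simplified target-tracking math: scale capacity in proportion
    to how far the observed metric is from its target."""
    return max(1, math.ceil(current_capacity * metric_value / target_value))

# 4 instances at 75% average CPU against a 50% target -> scale out to 6.
print(desired_capacity(4, 75.0, 50.0))  # 6

# Load drops to 20% average CPU -> scale in to 2.
print(desired_capacity(4, 20.0, 50.0))  # 2
```

Both directions of the adjustment are what the comments mean by elasticity: capacity follows demand up and back down without manual intervention.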
C - no.
Using Amazon EC2 Auto Scaling to scale Amazon EC2 instances illustrates the benefit of elasticity in the AWS Cloud. Elasticity refers to the
ability to automatically provision and de-provision compute resources as needed to handle changes in workload demand.
With Amazon EC2 Auto Scaling, the number of Amazon EC2 instances can be scaled up or down automatically based on the actual demand
for resources. This ensures that the application can handle changes in traffic and workload without downtime or manual intervention.
High availability refers to the ability of an application or service to remain available even in the event of failures. Reliability refers to the
ability of an application or service to function as expected over time. Global reach refers to the ability of an application or service to be
accessed from anywhere in the world. While these are all important benefits of the AWS Cloud, the example given specifically illustrates
elasticity.
upvoted 2 times
Elasticity refers to the ability of an infrastructure to automatically and dynamically scale up or down in response to changes in demand or
traffic. Amazon EC2 Auto Scaling is a service that enables you to automatically scale your EC2 instances based on demand, ensuring that
you have the appropriate capacity to handle traffic spikes and surges.
By using Amazon EC2 Auto Scaling, the company can easily add or remove EC2 instances to match the current demand for their
application, without having to manually provision or de-provision resources. This can lead to significant cost savings, as the company only
pays for the resources they need at any given time.
While high availability, reliability, and global reach are also important benefits of the AWS Cloud, they are not directly illustrated by this
example of using Amazon EC2 Auto Scaling.
upvoted 1 times
Question #154 Topic 1
Which AWS service or feature is used to send both text and email messages from distributed applications?
Correct Answer: D
Reference:
https://aws.amazon.com/getting-started/hands-on/send-messages-distributed-
applications/#:~:text=Send%20Messages%20Between%20Distributed%
20Applications%20with%20Amazon%20Simple%20Queue%20Service%20(SQS)
Amazon Simple Notification Service (Amazon SNS) is a fully managed messaging service for both application-to-application (A2A) and
application-to-person (A2P) communication.
The A2A pub/sub functionality provides topics for high-throughput, push-based, many-to-many messaging between distributed systems,
microservices, and event-driven serverless applications. Using Amazon SNS topics, your publisher systems can fanout messages to a large
number of subscriber systems including Amazon SQS queues, AWS Lambda functions and HTTPS endpoints, for parallel processing, and
Amazon Kinesis Data Firehose. The A2P functionality enables you to send messages to users at scale via SMS, mobile push, and email.
upvoted 29 times
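The fan-out behaviour described above can be modelled in a few lines. This is a toy sketch of the pub/sub idea, not the real SNS API; the endpoints and message are made up:

```python
from collections import defaultdict

class Topic:
    """Minimal model of SNS-style pub/sub fan-out: one published message
    is pushed to every subscribed endpoint (illustrative only)."""
    def __init__(self):
        self.subscribers = defaultdict(list)  # protocol -> endpoints

    def subscribe(self, protocol: str, endpoint: str):
        self.subscribers[protocol].append(endpoint)

    def publish(self, message: str) -> list:
        deliveries = []
        for protocol, endpoints in self.subscribers.items():
            for endpoint in endpoints:
                deliveries.append((protocol, endpoint, message))
        return deliveries

alerts = Topic()
alerts.subscribe("email", "ops@example.com")  # A2P email subscriber
alerts.subscribe("sms", "+15550100")          # A2P text subscriber
print(len(alerts.publish("deployment finished")))  # 2
```

One publish, two deliveries: that is the A2P behaviour the question is testing, and the same topic could fan out to SQS queues or Lambda functions for A2A use.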
With Amazon SNS, applications can send messages to a variety of endpoints, including email addresses, mobile devices (via SMS), AWS
Lambda functions, HTTP endpoints, and more. It supports both text-based messages and email messages, making it a versatile solution
for sending notifications, alerts, and other types of messages to users or systems.
Amazon SQS is a message queue service that allows you to decouple microservices and distributed systems. SQS is a good option for
storing messages that need to be processed later. However, it is not a good option for sending text or email messages.
upvoted 2 times
Amazon SNS is a fully managed pub/sub messaging service that enables you to send messages from one application to another or to a
large number of subscribers. With Amazon SNS, you can send text messages (SMS), email messages, and push notifications to mobile
devices, as well as other distributed services and applications. Amazon SNS supports multiple protocols, including HTTP/HTTPS, Email,
SMS, Lambda, and more, making it easy to integrate with your existing applications and services.
upvoted 1 times
Amazon Simple Notification Service (Amazon SNS) is a fully managed pub/sub messaging service that enables distributed applications to
send messages to one or many recipients using either email or text messages (SMS). With Amazon SNS, developers can send messages to
large numbers of recipients or devices, with automatic scaling to handle load spikes.
upvoted 1 times
Amazon SNS is a fully managed messaging service that enables you to send messages or notifications to a variety of endpoints, including
email, SMS (text), mobile push notifications, and more. With Amazon SNS, you can send messages to multiple recipients or subscribers
simultaneously, and you can also filter messages based on recipient preferences or topics.
While Amazon Simple Email Service (Amazon SES) can also be used to send emails from distributed applications, it does not support text
(SMS) messages or other types of notifications.
upvoted 1 times
"Distribute application-to-person (A2P) notifications to your customers with SMS texts, push notifications, and email."
upvoted 1 times
A user is able to set up a master payer account to view consolidated billing reports through:
A. AWS Budgets.
B. Amazon Macie.
C. Amazon QuickSight.
D. AWS Organizations.
Correct Answer: D
Reference:
https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/consolidated-billing.html
"You can track the charges across multiple accounts and download the combined cost and usage data."
https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/consolidated-billing.html
upvoted 7 times
When using AWS Organizations, a master payer account is set up to consolidate the billing and obtain consolidated billing reports for all
the linked accounts. The master payer account is responsible for paying for the usage across all the linked accounts and can view and
analyze the consolidated billing reports to gain insights into the overall spending and resource usage of the organization.
upvoted 1 times
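Consolidated billing boils down to the payer account aggregating linked-account charges into one bill while keeping the per-account breakdown. A toy sketch (the account names and dollar figures are made up):

```python
def consolidate(linked_charges: dict) -> tuple:
    """Sum per-account charges into one payer-account total, keeping the
    per-account breakdown sorted by spend for reporting (illustrative)."""
    total = round(sum(linked_charges.values()), 2)
    breakdown = dict(sorted(linked_charges.items(), key=lambda kv: -kv[1]))
    return total, breakdown

total, breakdown = consolidate({"dev": 120.50, "prod": 980.25, "test": 45.00})
print(total)  # 1145.75
```

The payer account pays the single total; the breakdown is what the combined cost and usage data lets you drill into.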
According to the AWS shared responsibility model, which task is the customer's responsibility?
Correct Answer: D
Reference:
https://aws.amazon.com/compliance/shared-responsibility-
model/#:~:text=Customers%20are%20responsible%20for%20managing,also%20extends%
20to%20IT%20controls
AWS is responsible for the security and maintenance of the underlying infrastructure, such as the physical servers, networking, and
storage. However, the customer is responsible for the configuration, management, and maintenance of their applications, operating
systems, and data that they run on the EC2 instances.
This includes tasks such as updating the guest operating system, installing security patches, and managing the software and applications
running on the instances.
upvoted 1 times
A company wants to migrate a small website and database quickly from on-premises infrastructure to the AWS Cloud. The company has limited
operational knowledge to perform the migration.
Which AWS service supports this use case?
A. Amazon EC2
B. Amazon Lightsail
C. Amazon S3
D. AWS Lambda
Correct Answer: B
With Lightsail, the company can easily create an instance to host their website and database, and the service takes care of the underlying
infrastructure, including networking, storage, and server management. It provides a streamlined migration process and offers a simplified
experience for users who may not have extensive knowledge of AWS services.
upvoted 1 times
Amazon Lightsail is a simplified, easy-to-use service that provides a preconfigured environment to deploy web applications quickly. It is
designed to help customers who have limited AWS experience to quickly launch and manage a simple application on the cloud.
Amazon Lightsail offers a preconfigured virtual machine (VM) image with a web server, and also includes a built-in database. This makes it
easy to quickly migrate a small website and database from on-premises infrastructure to the AWS Cloud with minimal operational
knowledge.
upvoted 2 times
B - yes
C - storage service
Amazon Lightsail is a simplified and user-friendly service that enables customers to quickly deploy a website and database to the AWS
Cloud without requiring advanced technical knowledge. With Lightsail, customers can launch a pre-configured virtual machine instance
with a choice of operating system, web application platform, and database engine, and easily migrate their data and applications to the
cloud.
upvoted 1 times
A company is moving multiple applications to a single AWS account. The company wants to monitor the AWS Cloud costs incurred by each
application.
What can the company do to meet this requirement?
Correct Answer: C
To implement this, the company can create and apply appropriate cost allocation tags to the resources used by each application. Once the
tags are in place, they can use AWS Cost Explorer or the AWS Cost and Usage Reports to view cost breakdowns based on the assigned
tags. This will provide the necessary visibility and insights into the costs incurred by each application within the single AWS account.
upvoted 2 times
Cost allocation tags allow the company to categorize resources in the AWS environment with metadata that reflects the organization's
structure and needs. Each tag consists of a key-value pair that can be associated with AWS resources such as EC2 instances, EBS volumes,
and RDS instances.
By applying cost allocation tags to AWS resources, the company can gain a better understanding of which resources are used by which
application, and monitor the costs of each application in Cost Explorer. The company can also use the tags to filter and group the cost and
usage data in reports and create custom cost allocation reports.
upvoted 2 times
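The grouping that Cost Explorer performs on an activated tag key can be sketched in a few lines. The resources, tag key, and costs below are made up for illustration:

```python
from collections import defaultdict

def costs_by_tag(resources: list, tag_key: str) -> dict:
    """Group resource costs by a cost-allocation tag value, the way
    Cost Explorer groups by an activated tag key (illustrative data)."""
    totals = defaultdict(float)
    for res in resources:
        app = res["tags"].get(tag_key, "(untagged)")
        totals[app] += res["cost"]
    return dict(totals)

resources = [
    {"id": "i-1",   "cost": 40.0, "tags": {"app": "checkout"}},
    {"id": "i-2",   "cost": 25.0, "tags": {"app": "search"}},
    {"id": "vol-1", "cost": 5.0,  "tags": {}},  # untagged spend stays visible
]
print(costs_by_tag(resources, "app"))
```

The untagged bucket is worth noticing: anything without the tag still shows up, which is why applying the tags consistently matters.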
D - yes
upvoted 1 times
Limitations on resources that can be tagged: Not all AWS resources support tagging, and the tags supported by different services may
have varying limitations. It's important to review the documentation of each service to understand which resources can be tagged and
how.
Limitations on the granularity of the tags: The granularity of the tags can be limited by the service that the resource belongs to. For
example, you may not be able to tag an EC2 instance by its individual usage time, and instead, the tag will apply to the entire instance.
This limitation can make it difficult to track and optimize costs for specific application components.
upvoted 1 times
Which design principle is achieved by following the reliability pillar of the AWS Well-Architected Framework?
A. Vertical scaling
Correct Answer: C
Reference:
https://aws.amazon.com/blogs/apn/the-5-pillars-of-the-aws-well-architected-framework/
As part of this pillar, it is important to test recovery procedures regularly to validate their effectiveness and ensure they function as
expected during actual failure scenarios. By conducting regular testing and simulations, organizations can identify and address any issues
or gaps in their recovery procedures, improving the overall reliability of their workload.
upvoted 1 times
Reliabilty
Test recovery procedures - Use automation to simulate different failures or to recreate
scenarios that led to failures before
upvoted 4 times
A user needs to quickly deploy a non-relational database on AWS. The user does not want to manage the underlying hardware or the database
software.
Which AWS service can be used to accomplish this?
A. Amazon RDS
B. Amazon DynamoDB
C. Amazon Aurora
D. Amazon Redshift
Correct Answer: B
Reference:
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/SQLtoNoSQL.html
Amazon DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability.
It is designed to handle large amounts of data with high read and write throughput, making it ideal for use cases such as gaming, ad tech,
IoT, and mobile applications. With Amazon DynamoDB, the user doesn't have to worry about managing the underlying hardware or
database software, as AWS takes care of the infrastructure and maintenance.
upvoted 1 times
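The key-value access pattern that makes DynamoDB "NoSQL" can be modelled in a few lines. This is a toy in-memory sketch, not the DynamoDB API; the table and item shapes are made up:

```python
class KeyValueTable:
    """Toy model of a NoSQL key-value table: items are written and read
    by partition key, with no SQL and no server to manage (illustrative)."""
    def __init__(self, partition_key: str):
        self.partition_key = partition_key
        self.items = {}

    def put_item(self, item: dict):
        self.items[item[self.partition_key]] = item

    def get_item(self, key):
        return self.items.get(key)  # None if the key is absent

users = KeyValueTable(partition_key="user_id")
users.put_item({"user_id": "u1", "name": "Ana", "score": 42})
print(users.get_item("u1")["name"])  # Ana
```

In the managed service, the hardware, replication, and scaling behind this lookup are AWS's problem, which is exactly what the question asks for.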
A development team wants to publish and manage web services that provide REST APIs.
Which AWS service will meet this requirement?
C. Amazon CloudFront
Correct Answer: B
"Amazon API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs
at any scale. APIs act as the "front door" for applications to access data, business logic, or functionality from your backend services. Using
API Gateway, you can create RESTful APIs and WebSocket APIs that enable real-time two-way communication applications. API Gateway
supports containerized and serverless workloads, as well as web applications."
https://aws.amazon.com/api-gateway/
upvoted 1 times
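The "front door" role described in the quote can be sketched as a routing table from (method, path) to backend handlers. This is a toy model of the idea, not the API Gateway service itself; the routes are made up:

```python
class RestApi:
    """Minimal sketch of publishing REST routes behind a gateway:
    (method, path) pairs map to backend handlers (illustrative only)."""
    def __init__(self):
        self.routes = {}

    def add_route(self, method: str, path: str, handler):
        self.routes[(method.upper(), path)] = handler

    def handle(self, method: str, path: str):
        handler = self.routes.get((method.upper(), path))
        if handler is None:
            return 404, "Not Found"
        return 200, handler()

api = RestApi()
api.add_route("GET", "/orders", lambda: ["order-1", "order-2"])
print(api.handle("GET", "/orders"))      # (200, ['order-1', 'order-2'])
print(api.handle("POST", "/orders")[0])  # 404
```

The managed service layers throttling, authorization, and monitoring on top of this dispatch, which is why it, and not a CDN, is the fit here.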
A company has a social media platform in which users upload and share photos with other users. The company wants to identify and remove
inappropriate photos. The company has no machine learning (ML) scientists and must build this detection capability with no ML expertise.
Which AWS service should the company use to build this capability?
A. Amazon SageMaker
B. Amazon Textract
C. Amazon Rekognition
D. Amazon Comprehend
Correct Answer: C
"Detect inappropriate content, Quickly and accurately identify unsafe or inappropriate content across image and video assets based on
general or business-specific standards and practices."
https://aws.amazon.com/rekognition/
upvoted 1 times
Therefore, option C is correct: The company should use Amazon Rekognition to build this capability
upvoted 1 times
Amazon Rekognition is a fully managed image and video analysis service that can detect and recognize objects, people, text, scenes, and
activities in images and videos. It can also detect inappropriate content, such as explicit or suggestive adult content, violence, and gore.
upvoted 1 times
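Consuming moderation results typically comes down to filtering detected labels against a confidence threshold. A sketch of that post-processing step, with made-up label names and scores (not real service output):

```python
def is_inappropriate(labels: list, min_confidence: float = 80.0) -> bool:
    """Flag an image when any moderation label meets the confidence
    threshold; tightening the threshold trades recall for precision
    (illustrative data, not actual detection output)."""
    return any(label["confidence"] >= min_confidence for label in labels)

detected = [{"name": "Suggestive", "confidence": 63.0},
            {"name": "Violence",   "confidence": 91.5}]
print(is_inappropriate(detected))        # True
print(is_inappropriate(detected, 95.0))  # False
```

The threshold is where "business-specific standards" come in: the detection is managed, but the cutoff is the customer's policy decision.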
Which responsibility belongs to AWS when a company hosts its databases on Amazon EC2 instances?
A. Database backups
Correct Answer: D
Therefore, among the options provided, the responsibility that belongs to AWS when a company hosts its databases on Amazon EC2
instances is option D, operating system installations. AWS provisions and manages the underlying operating system for EC2 instances,
ensuring that it is installed and ready for customer use.
upvoted 1 times
When a company hosts its databases on Amazon EC2 instances, the responsibility for database backups, database software patches,
operating system patches, and operating system installations would fall under the responsibility of the customer. AWS provides tools and
services that can help customers manage and automate these tasks, but ultimately it is the customer's responsibility to ensure that their
databases are backed up and secure, and that any necessary software patches or updates are applied in a timely manner.
upvoted 2 times
Am I crazy, or should the answer be C? I know the keyword is guest OS, but answer D can't be right. The OS comes with the AMI, so there is no need
to install it.
"Customers that deploy an Amazon EC2 instance are responsible for management of the guest operating system (including updates and
security patches), any application software or utilities installed by the customer on the instances, and the configuration of the AWS-
provided firewall (called a security group) on each instance. "
upvoted 2 times
A company wants to use Amazon S3 to store its legacy data. The data is rarely accessed. However, the data is critical and cannot be recreated.
The data needs to be available for retrieval within seconds.
Which S3 storage class meets these requirements MOST cost-effectively?
A. S3 Standard
D. S3 Glacier
Correct Answer: C
Here:
https://aws.amazon.com/about-aws/whats-new/2018/04/announcing-s3-one-zone-infrequent-access-a-new-amazon-s3-storage-class/
upvoted 1 times
S3 Standard-IA is for data that is accessed less frequently, but requires rapid access when needed. S3 Standard-IA offers the high
durability, high throughput, and low latency of S3 Standard, with a low per GB storage price and per GB retrieval charge. This combination
of low cost and high performance make S3 Standard-IA ideal for long-term storage, backups, and as a data store for disaster recovery
files. S3 Storage Classes can be configured at the object level and a single bucket can contain objects stored across S3 Standard, S3
Intelligent-Tiering, S3 Standard-IA, and S3 One Zone-IA. You can also use S3 Lifecycle policies to automatically transition objects between
storage classes without any application changes.
upvoted 5 times
B is incorrect because the data is critical and cannot be recreated, so it needs storage replicated across multiple AZs for high availability.
upvoted 1 times
S3 Standard-IA: S3 Standard-IA is suitable for scenarios where data access patterns are infrequent but where high durability and
availability are required. It is a good choice for backup and restore data, disaster recovery, and long-term storage of rarely accessed data.
S3 One Zone-IA: S3 One Zone-IA is recommended for scenarios where data can be easily reproduced or has lower value, and the lower
cost is prioritized over the risk of data loss due to the loss of a single Availability Zone. It is suitable for scenarios such as secondary
backups, replicated data, or easily reproducible data.
upvoted 3 times
It has to be C: S3 Standard-IA is stored redundantly across multiple Availability Zones, retrieves as fast as the Standard tier (well within seconds), and its infrequent-access pricing keeps storage costs down.
upvoted 2 times
S3 Standard-IA is designed for data that is accessed less frequently, but still requires immediate access when needed. This storage class
offers the same low latency and high throughput performance as S3 Standard, but with a lower storage cost. It charges a lower fee for
storage, but charges a retrieval fee when the data is accessed.
upvoted 2 times
C - yes
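The cost argument the comments are making can be put in numbers with a simple storage-plus-retrieval model. The per-GB rates below are made-up illustrations, not published S3 prices:

```python
def monthly_cost(gb_stored: float, gb_retrieved: float,
                 storage_rate: float, retrieval_rate: float) -> float:
    """Storage-plus-retrieval cost model: IA classes trade a lower
    per-GB storage rate for a per-GB retrieval fee (illustrative rates)."""
    return round(gb_stored * storage_rate + gb_retrieved * retrieval_rate, 2)

# Rarely accessed data: only 10 GB of 1000 GB is read per month, so the
# retrieval fee barely matters and the lower storage rate dominates.
standard    = monthly_cost(1000, 10, storage_rate=0.023,  retrieval_rate=0.0)
standard_ia = monthly_cost(1000, 10, storage_rate=0.0125, retrieval_rate=0.01)
print(standard, standard_ia)  # 23.0 12.6
```

With rare access the IA class wins clearly; if the data were read heavily, the retrieval fees would erode that advantage, which is why access pattern drives the class choice.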
An online retail company wants to migrate its on-premises workload to AWS. The company needs to automatically handle a seasonal workload
increase in a cost-effective manner.
Which AWS Cloud features will help the company meet this requirement? (Choose two.)
B. Pay-as-you-go pricing
E. Centralized logging
Correct Answer: BD
Cross-Region workload deployment does not fit the question (nothing in it is about going global).
There is nothing about checking any statuses or collecting logs, so there is no need for centralized logging.
upvoted 7 times
D. Auto Scaling policies: Auto Scaling allows the company to automatically adjust the number of resources based on demand. By setting
up Auto Scaling policies, the company can define rules to automatically add or remove resources to match the workload. This ensures that
the company can handle the seasonal workload increase efficiently while minimizing costs during periods of lower demand.
upvoted 1 times
Which AWS service helps developers use loose coupling and reliable messaging between microservices?
C. Amazon CloudFront
Correct Answer: D
Elastic Load Balancing automatically distributes incoming application traffic; it doesn’t help with developer work in this context.
Amazon SNS is push-based pub/sub for notifications; it does not buffer messages for reliable, decoupled delivery the way a queue does.
Amazon CloudFront is a content delivery network (CDN) service built for securely delivering content to customers. It is not used for loose
coupling nor microservices.
upvoted 9 times
"Amazon SQS provides a simple and reliable way for customers to decouple and connect components (microservices) together using
queues."
https://aws.amazon.com/sqs/#:~:text=Amazon%20SQS%20provides%20a%20simple%20and%20reliable%20way%20for%20customers%20t
o%20decouple%20and%20connect%20components%20(microservices)%20together%20using%20queues.
upvoted 1 times
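The decoupling the quote describes can be shown with a toy point-to-point queue. This sketches the pattern only, not the SQS API; the message bodies are made up:

```python
from collections import deque

class MessageQueue:
    """Toy point-to-point queue: producers enqueue and keep going; a
    consumer drains messages later, so neither side blocks on the other
    (illustrative only)."""
    def __init__(self):
        self._messages = deque()

    def send(self, body: str):
        self._messages.append(body)

    def receive(self):
        return self._messages.popleft() if self._messages else None

orders = MessageQueue()
orders.send("order-1")   # producer continues even if the consumer
orders.send("order-2")   # is down or slow -- the queue holds the work
print(orders.receive())  # order-1
print(orders.receive())  # order-2
print(orders.receive())  # None -> queue drained
```

Each message is consumed exactly once by one worker, which is the queued, reliable delivery that distinguishes this from push-based pub/sub.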
A company needs to build an application that uses AWS services. The application will be delivered to residents in European countries. The
company must abide by regional regulatory requirements.
Which AWS service or program should the company use to determine which AWS services meet the regional requirements?
B. AWS Shield
D. AWS Artifact
Correct Answer: C
From AWS skillbuilder: "If you run software that deals with consumer data in the EU, you would need to make sure that you're in
compliance with GDPR, or if you run healthcare applications in the US you will need to design your architectures to meet HIPAA
compliance requirements. "
"One place you can access these documents is through a service called AWS Artifact. With AWS Artifact, you can gain access to compliance
reports done by third parties who have validated a wide range of compliance standards. Check out the AWS Compliance Center in order to
find compliance information all in one place. It will show you compliance enabling services as well as documentation like the AWS Risk and
Security Whitepaper, which you should read to ensure that you understand security and compliance with AWS. "
upvoted 7 times
AWS Artifact provides access to various compliance and regulatory documents, including the AWS Service Organization Controls (SOC)
reports, Payment Card Industry (PCI) Attestations of Compliance, and other compliance reports. These reports detail the security and
compliance measures implemented by AWS services in different regions.
By accessing AWS Artifact, the company can review and verify that the AWS services they plan to use meet the specific regional regulatory
requirements for European countries. This ensures that the company remains compliant with the necessary regulations while building
and delivering their application.
Additionally, AWS Artifact offers downloadable compliance reports and other relevant documents, providing the company with the
necessary documentation to demonstrate adherence to regional regulatory requirements during audits or assessments.
upvoted 1 times
The AWS Compliance Program provides a framework for customers to navigate regulatory requirements by providing information on how
AWS services and features can help address compliance needs. AWS compliance programs help customers meet compliance requirements
for industry-specific standards, such as HIPAA, PCI DSS, and others.
In this scenario, the company needs to ensure that it meets regional regulatory requirements while building an application that uses AWS
services. The AWS Compliance Program can help the company identify which AWS services and features can help meet regulatory
requirements for European countries. The program provides resources such as whitepapers, reports, and certifications that can help
organizations understand how AWS services can be used to support their compliance objectives.
upvoted 3 times
D - no, because it is a service that provides on-demand access to AWS compliance reports
upvoted 2 times
The AWS Compliance Program includes information about the compliance status of various AWS services and regions, as well as details
about AWS's security and compliance controls. This information can help the company understand which AWS services are appropriate for
use in their application while meeting the regulatory requirements of the region.
Option D, AWS Artifact, is a service that provides on-demand access to AWS compliance reports and other documentation that customers
can use to meet their compliance and regulatory requirements.
upvoted 2 times
Artifact simply provides the compliance reports so that you can ensure your own practices are compliant. Artifact doesn't specify which AWS
services are compliant.
upvoted 1 times
Question #169 Topic 1
A company needs to implement identity management for a fleet of mobile apps that are running in the AWS Cloud.
Which AWS service will meet this requirement?
A. Amazon Cognito
C. AWS Shield
D. AWS WAF
Correct Answer: A
AWS Security Hub is a cloud security posture management service that automates best practice checks, aggregates alerts, and supports
automated remediation. Not relevant.
AWS Shield and AWS WAF are for threat protection (Shield for DDoS, WAF for SQL injections), not relevant to the question.
upvoted 15 times
"Amazon Cognito helps you implement customer identity and access management (CIAM) into your web and mobile applications. You can
quickly add user authentication and access control to your applications in minutes."
https://aws.amazon.com/cognito/
upvoted 1 times
With Amazon Cognito, you can easily add user authentication and authorization to your mobile apps running in the AWS Cloud. It
supports various identity providers, such as social media platforms and enterprise identity systems, allowing your app users to sign in
using their existing credentials. You can also create and manage user pools to handle user registration and sign-up flows.
Furthermore, Amazon Cognito provides features for handling user data synchronization across devices, enabling users to seamlessly
access their data across multiple devices. It also integrates with other AWS services, such as AWS Lambda and Amazon API Gateway, to
enable secure access to backend resources.
upvoted 1 times
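The sign-up/sign-in flow a user pool handles can be sketched as a tiny in-memory directory. This is a toy model of the pattern, not the Cognito API; the hashing scheme and token format are made up for illustration:

```python
import hashlib
import secrets

class UserPool:
    """Toy user directory: sign-up stores a salted password hash, sign-in
    verifies it and returns an opaque session token (illustrative only)."""
    def __init__(self):
        self.users = {}

    def sign_up(self, username: str, password: str):
        salt = secrets.token_hex(8)
        digest = hashlib.sha256((salt + password).encode()).hexdigest()
        self.users[username] = (salt, digest)

    def sign_in(self, username: str, password: str):
        record = self.users.get(username)
        if record is None:
            return None
        salt, digest = record
        if hashlib.sha256((salt + password).encode()).hexdigest() != digest:
            return None
        return secrets.token_hex(16)  # opaque session token

pool = UserPool()
pool.sign_up("ana", "s3cret")
print(pool.sign_in("ana", "s3cret") is not None)  # True
print(pool.sign_in("ana", "wrong"))               # None
```

The managed service also covers federation with social and enterprise identity providers and cross-device sync, none of which this sketch attempts.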
A company needs an Amazon EC2 instance for a rightsized database server that must run constantly for 1 year.
Which EC2 instance purchasing option will meet these requirements MOST cost-effectively?
C. On-Demand Instance
D. Spot Instance
Correct Answer: A
"Standard Reserved Instances typically provide the highest discount levels. One-year Standard Reserved Instances provide a similar
discount to three-year Convertible Reserved Instances."
https://aws.amazon.com/blogs/aws/ec2-reserved-instance-update-convertible-ris-and-regional-
benefit/#:~:text=Convertible%20Reserved%20Instances%20%2DConvertible%20RIs,Reserved%20Instance%20at%20any%20time.
upvoted 1 times
The "Standard" Reserved Instance option provides the highest discount among the Reserved Instance types. It offers a fixed discount
over the On-Demand instance pricing for the term of the reservation, which in this case would be 1 year.
Convertible Reserved Instances provide flexibility to change the instance type within the same instance family, but they usually have a
slightly higher price compared to Standard Reserved Instances.
upvoted 2 times
https://docs.aws.amazon.com/whitepapers/latest/cost-optimization-reservation-models/standard-vs.-convertible-offering-classes.html
https://aws.amazon.com/blogs/aws/ec2-reserved-instance-update-convertible-ris-and-regional-
benefit/#:~:text=Convertible%20Reserved%20Instances%20%2DConvertible%20RIs,Reserved%20Instance%20at%20any%20time.
upvoted 2 times
So, A is correct.
upvoted 3 times
Standard Reserved Instances typically provide the highest discount levels. One-year Standard Reserved Instances provide a similar
discount to three-year Convertible Reserved Instances.
upvoted 3 times
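The "runs constantly for 1 year" condition is what makes the Reserved Instance win, and a two-line cost comparison shows why. The hourly rates below are made up to illustrate the shape of the discount, not actual EC2 prices:

```python
HOURS_PER_YEAR = 24 * 365

def yearly_cost(hourly_rate: float, hours: int = HOURS_PER_YEAR) -> float:
    """Cost of running one instance for the given hours (illustrative rates)."""
    return round(hourly_rate * hours, 2)

# Made-up rates: for an instance that never stops, the 1-year Standard
# RI's discounted rate beats On-Demand over the full term.
on_demand   = yearly_cost(0.10)
standard_ri = yearly_cost(0.06)
print(on_demand, standard_ri)  # 876.0 525.6
```

Spot would be cheaper still per hour but can be interrupted, which rules it out for a database that must run constantly.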
A company has multiple applications and is now building a new multi-tier application. The company will host the new application on Amazon EC2
instances. The company wants the network routing and traffic between the various applications to follow the security principle of least privilege.
Which AWS service or feature should the company use to enforce this principle?
A. Security groups
B. AWS Shield
Correct Answer: A
AWS Direct Connect is a cloud service that links your network directly to AWS to deliver consistent, low-latency performance.
upvoted 17 times
"A security group acts as a virtual firewall for your EC2 instances to control incoming and outgoing traffic. Inbound rules control the
incoming traffic to your instance, and outbound rules control the outgoing traffic from your instance. When you launch an instance, you
can specify one or more security groups. If you don't specify a security group, Amazon EC2 uses the default security group for the VPC.
You can add rules to each security group that allow traffic to or from its associated instances. You can modify the rules for a security
group at any time. New and modified rules are automatically applied to all instances that are associated with the security group. When
Amazon EC2 decides whether to allow traffic to reach an instance, it evaluates all of the rules from all of the security groups that are
associated with the instance."
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-security-groups.html
upvoted 1 times
Security groups provide granular control over traffic at the instance level and can be easily configured and managed through the AWS
Management Console, CLI, or SDKs. You can define specific rules to allow or deny traffic based on various criteria such as IP addresses,
port numbers, and protocols.
upvoted 1 times
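Least privilege in a security group comes from its evaluation model: traffic is allowed only if some rule matches, and everything unmatched is implicitly denied. A simplified sketch of that logic (CIDR matching is reduced to exact string comparison for illustration):

```python
def allow_inbound(rules: list, protocol: str, port: int, source: str) -> bool:
    """Security-group-style evaluation: allow if any rule matches the
    protocol, port range, and source; otherwise implicitly deny.
    Real groups match sources by CIDR or group ID; this sketch uses
    exact strings."""
    return any(rule["protocol"] == protocol and
               rule["from_port"] <= port <= rule["to_port"] and
               rule["source"] == source
               for rule in rules)

# Web tier only exposes HTTPS; nothing else is reachable.
web_tier_rules = [{"protocol": "tcp", "from_port": 443, "to_port": 443,
                   "source": "0.0.0.0/0"}]
print(allow_inbound(web_tier_rules, "tcp", 443, "0.0.0.0/0"))  # True
print(allow_inbound(web_tier_rules, "tcp", 22, "0.0.0.0/0"))   # False
```

For the multi-tier case in the question, each tier's group would reference only the tier in front of it as the allowed source, which is the least-privilege routing the question asks about.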
A company's web application requires AWS credentials and authorizations to use an AWS service.
Which IAM entity should the company use as best practice?
A. IAM role
B. IAM user
C. IAM group
Correct Answer: A
"We recommend using IAM roles for human users and workloads that access your AWS resources so that they use temporary credentials.
However, for scenarios in which you need an IAM user or root user in your account, require MFA for additional security. With MFA, users
have a device that generates a response to an authentication challenge. Each user's credentials and device-generated response are
required to complete the sign-in process. "
https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#enable-mfa-for-privileged-users
upvoted 1 times
By assigning an IAM role to the web application, the application can assume the role and obtain temporary security credentials. These
credentials can then be used to make authorized API requests to AWS services on behalf of the application, without the need to directly
embed long-term access keys or IAM user credentials in the application code.
IAM roles provide an added layer of security by allowing you to define fine-grained permissions and policies for the role. This ensures that
the web application only has access to the necessary AWS services and resources required for its functionality, reducing the risk of
unauthorized access or misuse.
upvoted 1 times
IAM roles are a secure way to grant permissions to an entity that needs to access AWS resources. In this case, the web application needs
to access AWS services using AWS credentials and authorizations. By using an IAM role, the web application can assume the role and gain
temporary security credentials to access the AWS services. This eliminates the need to store and manage long-term access keys or secret
keys within the application code, reducing the risk of accidental exposure or misuse.
IAM roles can be assigned policies that define the specific permissions required by the web application to access the necessary AWS
services. This allows the company to grant the least privilege required for the web application to function correctly.
upvoted 2 times
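The key property of role-based access is that credentials are short-lived rather than embedded long-term keys. A toy sketch of that lifecycle (the token format and TTL are fabricated; this is not the STS API):

```python
import time

def issue_temp_credentials(role_name: str, ttl_seconds: int = 3600) -> dict:
    """Sketch of role-style temporary credentials: tied to a role and
    valid only until an expiry time (all values are fabricated)."""
    return {"role": role_name,
            "token": f"temp-{role_name}",
            "expires_at": time.time() + ttl_seconds}

def credentials_valid(creds: dict) -> bool:
    return time.time() < creds["expires_at"]

creds = issue_temp_credentials("web-app-role", ttl_seconds=3600)
print(credentials_valid(creds))  # True now; False once the hour is up
```

Because the credentials expire on their own, a leaked token is far less damaging than a leaked long-term access key baked into application code.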
Nav_een_Anand02 5 months, 3 weeks ago
Selected Answer: D
D is the answer
upvoted 2 times
A company is creating a document that defines the operating system patch routine for all the company's systems.
Which AWS resources should the company include in this document? (Choose two.)
Correct Answer: AD
So the correct answer is AE, as we patch ECS by replacing the AMI with a newer version.
upvoted 8 times
"Amazon ECS Anywhere provides support for registering an external instance such as an on-premises server or virtual machine (VM), to
your Amazon ECS cluster."
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-anywhere.html#ecs-anywhere-supported-os
upvoted 1 times
D. Amazon RDS instances: Amazon RDS provides managed database services, and the instances running the databases also require
regular patching of the underlying operating system. The document should include information on how to patch RDS instances following
best practices.
upvoted 2 times
TAN_THE_MAN 2 weeks, 4 days ago
ec2,rds
upvoted 1 times
D. Amazon RDS instances: Amazon RDS (Relational Database Service) provides managed database services. If the company is using RDS
instances for their databases, it is important to include them in the patch routine document to ensure that the database operating
systems receive regular updates and security patches.
upvoted 2 times
Amazon EC2 instances and Amazon RDS instances are both infrastructure resources that require regular patching of the underlying
operating system.
upvoted 2 times
Both Amazon EC2 instances and Amazon RDS instances require regular operating system patching to ensure security and stability. AWS
Lambda functions, AWS Fargate tasks, and Amazon Elastic Container Service instances do not require operating system patching because
they are managed services.
upvoted 2 times
Amazon EC2 instances and Amazon RDS instances are Infrastructure-as-a-Service (IaaS) offerings that allow customers to run their own
operating systems on virtual machines in the cloud. Therefore, it is important to include these resources in the document defining the
operating system patch routine.
AWS Lambda functions, AWS Fargate tasks, and Amazon Elastic Container Service (Amazon ECS) instances are not related to operating
system patching as they are Platform-as-a-Service (PaaS) or serverless offerings that do not require customers to manage the underlying
operating systems.
upvoted 3 times
Which AWS service or feature gives a company the ability to control incoming traffic and outgoing traffic for Amazon EC2 instances?
A. Security groups
B. Amazon Route 53
D. Amazon VPC
Correct Answer: A
"A security group acts as a virtual firewall for your EC2 instances to control incoming and outgoing traffic. Inbound rules control the
incoming traffic to your instance, and outbound rules control the outgoing traffic from your instance. When you launch an instance, you
can specify one or more security groups. If you don't specify a security group, Amazon EC2 uses the default security group for the VPC.
You can add rules to each security group that allow traffic to or from its associated instances. You can modify the rules for a security
group at any time. New and modified rules are automatically applied to all instances that are associated with the security group. When
Amazon EC2 decides whether to allow traffic to reach an instance, it evaluates all of the rules from all of the security groups that are
associated with the instance."
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-security-groups.html
upvoted 1 times
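The evaluation behavior quoted above can be sketched in plain Python (an illustrative model, not an AWS API; the rule format is invented): security group rules are allow-only, all rules from all associated groups are evaluated, and traffic is denied unless some rule matches.

```python
from ipaddress import ip_address, ip_network

def is_traffic_allowed(rules, protocol, port, source_ip):
    """Allow traffic if ANY rule from ANY associated group matches."""
    for rule in rules:  # rules aggregated from all associated security groups
        if rule["protocol"] != protocol:
            continue
        if not rule["from_port"] <= port <= rule["to_port"]:
            continue
        if ip_address(source_ip) in ip_network(rule["cidr"]):
            return True   # security group rules are allow-only
    return False          # no matching rule: implicit deny

rules = [
    {"protocol": "tcp", "from_port": 443, "to_port": 443, "cidr": "0.0.0.0/0"},
    {"protocol": "tcp", "from_port": 22, "to_port": 22, "cidr": "203.0.113.0/24"},
]
print(is_traffic_allowed(rules, "tcp", 443, "198.51.100.7"))  # HTTPS from anywhere: True
print(is_traffic_allowed(rules, "tcp", 22, "198.51.100.7"))   # SSH from outside the CIDR: False
```

Security groups are also stateful, which this sketch does not model: return traffic for an allowed request is permitted automatically.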
A company is starting to build its infrastructure in the AWS Cloud. The company wants access to technical support during business hours. The
company also wants general architectural guidance as teams build and test new applications.
Which AWS Support plan will meet these requirements at the LOWEST cost?
Correct Answer: B
In addition to enhanced technical support and architectural guidance, Developer Support provides access to documentation and forums,
AWS Trusted Advisor, and AWS Personal Health Dashboard.
upvoted 12 times
AWS Business Support provides 24/7 phone, email, and chat access to Cloud Support engineers, the full set of AWS Trusted Advisor checks,
AWS Infrastructure Event Management (for an additional fee), and the AWS Personal Health Dashboard, along with contextual architectural
guidance for specific use cases.
AWS Basic Support (Option A) provides access to documentation, forums, and core AWS Trusted Advisor checks, but it does not include
technical support or architectural guidance.
AWS Developer Support (Option B) provides email access to Cloud Support associates during business hours and general architectural
guidance, which is why it meets these requirements at the lowest cost.
AWS Enterprise Support (Option D) provides the most comprehensive support for large enterprises but is more expensive than the other
options and likely unnecessary for a company just starting to build its infrastructure in the AWS Cloud.
upvoted 2 times
B - no. provides technical support but does not provide architectural guidance
C - yes
A company is migrating its public website to AWS. The company wants to host the domain name for the website on AWS.
Which AWS service should the company use to meet this requirement?
A. AWS Lambda
B. Amazon Route 53
C. Amazon CloudFront
Correct Answer: B
In the case of hosting the domain name for the company's website on AWS, Amazon Route 53 can be used to register the domain name,
configure the DNS settings, and point the domain to the appropriate AWS resources, such as the website hosted on Amazon S3, Amazon
EC2 instances, or load balancers.
upvoted 2 times
A company needs to evaluate its AWS environment and provide best practice recommendations in five categories: cost, performance, service
limits, fault tolerance, and security.
Which AWS service can the company use to meet these requirements?
A. AWS Shield
B. AWS WAF
Correct Answer: C
https://aws.amazon.com/premiumsupport/technology/trusted-advisor/#:~:text=Advisor%20(11%3A45)-,Benefits,-Checks%20from%20Trusted
upvoted 1 times
Trusted Advisor offers checks and recommendations in various areas, such as cost optimization (e.g., identifying idle resources,
suggesting reserved instances), performance improvement (e.g., analyzing service limits, identifying bottlenecks), security (e.g., checking
for security vulnerabilities, access control issues), fault tolerance (e.g., evaluating backup configurations, identifying single points of
failure), and more.
upvoted 1 times
Which AWS service provides the capability to view end-to-end performance metrics and troubleshoot distributed applications?
A. AWS Cloud9
B. AWS CodeStar
D. AWS X-Ray
Correct Answer: D
"AWS X-Ray helps developers analyze and debug production, distributed applications, such as those built using a microservices
architecture. With X-Ray, you can understand how your application and its underlying services are performing to identify and troubleshoot
the root cause of performance issues and errors. X-Ray provides an end-to-end view of requests as they travel through your application,
and shows a map of your application’s underlying components."
https://aws.amazon.com/xray/faqs/
upvoted 1 times
With AWS X-Ray, you can trace requests as they flow through different AWS resources and services, including AWS Lambda functions,
Amazon EC2 instances, Amazon ECS containers, and more. It captures information about each step of the request journey, including
response times, errors, and the dependencies between different components.
upvoted 1 times
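The end-to-end view described above can be illustrated with a plain-Python stand-in (this is not the X-Ray SDK; all names are invented): record a timed segment for each step of a request so the request path and per-step latencies can be inspected afterwards.

```python
import time
from contextlib import contextmanager

trace = []  # collected segments, innermost completed first

@contextmanager
def segment(name):
    """Record how long a named step of the request takes."""
    start = time.perf_counter()
    try:
        yield
    finally:
        trace.append({"name": name, "ms": (time.perf_counter() - start) * 1e3})

with segment("handler"):            # the overall request
    with segment("database-call"):  # a downstream dependency
        time.sleep(0.01)            # simulated latency

print([s["name"] for s in trace])   # ['database-call', 'handler']
```

X-Ray does the equivalent across process and service boundaries, correlating segments with a trace ID so the service map and latencies span the whole distributed application.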
Which cloud computing benefit does AWS demonstrate with its ability to offer lower variable costs as a result of high purchase volumes?
A. Pay-as-you-go pricing
B. High availability
C. Global reach
D. Economies of scale
Correct Answer: D
https://docs.aws.amazon.com/whitepapers/latest/aws-overview/six-advantages-of-cloud-computing.html#:~:text=Benefit%20from%20massive,you%2Dgo%20prices.
upvoted 1 times
With AWS's pay-as-you-go model, customers pay only for the resources they consume on an as-needed basis, without upfront or long-term
commitments. The lower prices themselves, however, come from economies of scale: because AWS aggregates usage from hundreds of
thousands of customers, its high purchase volumes reduce its per-unit costs, and those savings are passed on as lower pay-as-you-go prices.
upvoted 2 times
AWS (Amazon Web Services) can offer lower variable costs due to its ability to achieve economies of scale, which means that as the
volume of purchases increases, the cost per unit decreases. This is because AWS is able to negotiate lower prices from suppliers, spread
fixed costs across a larger customer base, and optimize its operations for efficiency. As a result, customers who use AWS services can
benefit from lower costs, especially for large-scale deployments.
upvoted 2 times
AWS has the ability to offer lower variable costs as a result of high purchase volumes, which demonstrates the benefit of economies of
scale. Economies of scale refer to the reduction in unit costs that a company can achieve by increasing the scale of production. In the case
of AWS, the company is able to negotiate lower prices for hardware and other resources by buying in large volumes, and it can pass those
savings on to its customers in the form of lower prices for its cloud services.
upvoted 4 times
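The effect described above can be made concrete with a toy calculation (all numbers are invented for illustration): amortizing a fixed cost over more units, plus a volume discount on the purchase price, both push the cost per unit down as volume grows.

```python
import math

def unit_cost(units, fixed_cost=1_000_000.0, base_price=10.0):
    """Cost per unit: amortized fixed cost plus a volume-discounted price."""
    # assume a 5% price discount for every 10x increase in volume (invented)
    discount = 0.95 ** math.log10(max(units, 1))
    return fixed_cost / units + base_price * discount

for volume in (10_000, 1_000_000, 100_000_000):
    # unit cost falls steadily as purchase volume rises
    print(volume, round(unit_cost(volume), 2))
```

At low volume the fixed cost dominates; at high volume both terms shrink, which is the economies-of-scale effect the question is testing.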
Which AWS service provides threat detection by monitoring for malicious activities and unauthorized actions to protect AWS accounts, workloads,
and data that is stored in Amazon S3?
A. AWS Shield
C. Amazon GuardDuty
D. Amazon Inspector
Correct Answer: C
https://aws.amazon.com/guardduty/faqs/
upvoted 1 times
Specifically, Amazon GuardDuty is designed to protect AWS accounts, workloads, and data stored in Amazon S3 from various types of
threats, including unauthorized access, compromised instances, and data exfiltration attempts. It uses machine learning algorithms and
threat intelligence to identify patterns and anomalies that may indicate malicious activity.
When GuardDuty detects a potential threat, it generates findings and alerts, which can be viewed in the AWS Management Console or
integrated with other AWS services for automated responses. This helps organizations quickly identify and respond to security incidents,
improving their overall security posture.
upvoted 1 times
Which AWS service can a company use to store and manage Docker images?
A. Amazon DynamoDB
Correct Answer: C
1. https://aws.amazon.com/docker/#:~:text=AWS%20provides%20Amazon%20Elastic%20Container,and%20quickly%20retrieving%20Docker%20images.
2. https://aws.amazon.com/ecr/
upvoted 1 times
Question #182 Topic 1
A company needs an automated security assessment report that will identify unintended network access to Amazon EC2 instances. The report
also must identify operating system vulnerabilities on those instances.
Which AWS service or feature should the company use to meet this requirement?
B. Security groups
C. Amazon Macie
D. Amazon Inspector
Correct Answer: D
"Amazon Inspector is an automated vulnerability management service that continually scans Amazon Elastic Compute Cloud (EC2), AWS
Lambda functions, and container workloads for software vulnerabilities and unintended network exposure."
https://aws.amazon.com/inspector/faqs/?nc=sn&loc=6
upvoted 1 times
Amazon Inspector uses a set of predefined rules to assess the security state of your resources. It analyzes the network activity, operating
system configurations, and application behavior to identify potential security issues. The assessment report provides detailed findings and
recommendations for remediation.
upvoted 1 times
From: https://www.linkedin.com/pulse/aws-trusted-advisor-vs-inspector-what-difference-netcom-learning?trk=organization-update-content_share-article
AWS Inspector is a vulnerability management tool that automatically analyzes AWS workloads to find accidental network exposure and
software vulnerabilities. It provides a detailed list of security issues and recommendations to fix them. AWS Inspector helps users unite
their vulnerability management services for Amazon EC2 and ECR into a single managed solution. Users are also provided with an
accurate Inspector risk score helping them prioritize their vulnerable resources.
upvoted 3 times
Question #183 Topic 1
A global company is building a simple time-tracking mobile app. The app needs to operate globally and must store collected data in a database.
Data must be accessible from the AWS Region that is closest to the user.
What should the company do to meet these data storage requirements with the LEAST amount of operational overhead?
Correct Answer: C
Global tables build on the global Amazon DynamoDB footprint to provide you with a fully managed, multi-Region, and multi-active
database that delivers fast, local, read and write performance for massively scaled, global applications. Global tables replicate your
DynamoDB tables automatically across your choice of AWS Regions.
upvoted 12 times
By using Amazon DynamoDB global tables, the company can achieve low-latency data access and high availability without the need to
manage separate databases in multiple AWS Regions or set up complex replication mechanisms. DynamoDB takes care of the replication
and data consistency across Regions, automatically handling failovers and ensuring that the data is available in the desired AWS Region.
upvoted 1 times
Amazon DynamoDB global tables allow you to create a fully managed, multi-region, and multi-master database table that can provide
low-latency read and write performance for applications that have a global footprint. It automatically replicates your data across multiple
AWS Regions and enables automatic failover in the event of a regional disruption.
With DynamoDB global tables, the company can avoid the need to manage multiple databases in different regions or set up cross-Region
replication manually. DynamoDB takes care of all the underlying infrastructure, replication, and scaling, and ensures that the data is
always available from the AWS Region that is closest to the user.
upvoted 2 times
Which of the following are economic advantages of the AWS Cloud? (Choose two.)
Correct Answer: DE
E. Faster product launches: The AWS Cloud provides a wide range of pre-configured services and resources that can be quickly
provisioned and scaled as needed. This allows companies to accelerate their product development and deployment cycles, leading to
faster time-to-market and potential cost savings.
upvoted 1 times
A. Increased workforce productivity: The scalability and accessibility of the cloud allow teams to provision resources quickly, leading to
increased productivity and efficiency in their work.
E. Faster product launches: With AWS Cloud's capabilities, organizations can accelerate product launches by leveraging scalable resources,
automation, and various services offered by AWS.
upvoted 2 times
starsea 2 months, 2 weeks ago
Selected Answer: DE
TCO reduction is one of the biggest reasons for moving to the cloud.
I am kind of torn between increased workforce productivity and faster product launches, since both make sense and are referenced at
https://aws.amazon.com/economics/
upvoted 1 times
Simplified total cost of ownership (TCO) accounting is an economic advantage of the AWS Cloud because AWS offers pay-as-you-go
pricing, which allows customers to pay only for the resources they use and avoid upfront capital expenditures. This can simplify TCO
accounting by providing more predictable and manageable costs.
Faster product launches are another economic advantage of the AWS Cloud because AWS offers a range of services that enable customers
to quickly deploy, scale, and manage their applications. This can help businesses accelerate time-to-market for new products and services,
which can lead to increased revenue and a competitive advantage in the marketplace.
upvoted 1 times
Increased workforce productivity (A): By leveraging the various services and features offered by AWS, organizations can automate tasks,
streamline processes, and improve overall productivity of their workforce.
upvoted 2 times
E. Faster product launches: AWS Cloud provides a variety of tools and services that help in quickly developing, testing, and deploying
applications. This enables faster time-to-market and reduces the cost of product launches.
Option A, Increased workforce productivity, is not an economic advantage of the AWS Cloud, as it does not directly translate to cost
savings or revenue generation. Option B, Decreased need to encrypt user data, is incorrect, as data encryption is always necessary for
security purposes and is not affected by the choice of cloud provider. Option C, Manual compliance audits, is also incorrect, as AWS
provides a variety of compliance and security certifications, which can save time and resources needed for manual audits.
upvoted 2 times
A. Increased workforce productivity: The AWS Cloud provides a range of services and tools that enable teams to work collaboratively,
streamline processes, and automate tasks, resulting in increased productivity.
D. Simplified total cost of ownership (TCO) accounting: The AWS Cloud provides a pay-as-you-go pricing model, which means that you only
pay for the services and resources you use. This helps simplify TCO accounting and reduce costs.
upvoted 1 times
A. Increased workforce productivity: AWS provides managed services that allow developers to focus on building applications rather than
managing infrastructure. This increases productivity and reduces time to market.
D. Simplified total cost of ownership (TCO) accounting: AWS offers a pay-as-you-go pricing model, which allows organizations to pay only
for the resources they use. This reduces upfront costs and makes it easier to forecast and manage expenses.
upvoted 1 times
Capricorn27 4 months ago
The economic advantages of the AWS Cloud are:
A. Increased workforce productivity: AWS services enable teams to focus on their core business competencies rather than managing
infrastructure. This can lead to increased productivity and efficiency.
D. Simplified total cost of ownership (TCO) accounting: AWS offers a pay-as-you-go pricing model that eliminates the need for large
upfront capital expenditures. This helps simplify TCO accounting and makes it easier to forecast costs.
Option B is incorrect because user data should always be encrypted, regardless of the cloud provider.
Option C is incorrect because AWS provides automated compliance checks and certifications, reducing the need for manual compliance
audits.
Option E is not a direct economic advantage but rather a business agility advantage, as AWS services can help accelerate product launches
and time-to-market.
upvoted 2 times
D - yes. pay-as-you-go model eliminates upfront investments and reduces the overall cost of ownership.
Economic advantages of the AWS Cloud are typically related to cost savings or revenue generation, such as simplified TCO accounting,
faster product launches, reduced infrastructure management costs, or improved resource utilization. These economic advantages can
help organizations save money, increase revenue, or improve their overall financial performance.
upvoted 2 times
D. Simplified total cost of ownership (TCO) accounting: AWS Cloud provides a pay-as-you-go pricing model that allows companies to pay
only for the resources they use. This makes it easier for companies to calculate and forecast their expenses, as they do not need to invest
in expensive hardware or maintain infrastructure.
E. Faster product launches: AWS Cloud provides a variety of services that can help companies rapidly develop, test, and deploy their
applications. This results in faster time-to-market, which can provide a competitive advantage and increased revenue.
upvoted 1 times
Which controls does the customer fully inherit from AWS in the AWS shared responsibility model?
Correct Answer: A
For the audience, this is what AWS is teaching us, not always direct logic. Pick C to pass the exam.
upvoted 4 times
hoangngoclee 3 months, 2 weeks ago
https://myrestraining.com/blog/aws/aws-certified-cloud-practitioner/aws-inherited-and-shared-controls/
Inherited Controls – Controls which a customer fully inherits from AWS: Physical and Environmental controls
Shared Controls – Controls which apply to both the infrastructure layer and customer layers, but in completely separate contexts or
perspectives:
Patch Management – AWS is responsible for patching and fixing flaws within the infrastructure, but customers are responsible for
patching their guest OS and applications.
Configuration Management – AWS maintains the configuration of its infrastructure devices, but a customer is responsible for configuring
their own guest operating systems, databases, and applications.
Awareness & Training – AWS trains AWS employees, but a customer must train their own employees.
upvoted 2 times
The customer is responsible for patch management, awareness and training, configuration management, and other security controls
related to their data and applications running on AWS services.
upvoted 2 times
AWS assumes ownership of the physical infrastructure that makes up the cloud. The controls a customer fully inherits from AWS are the
physical and environmental controls; configuration management is a shared control, not a fully inherited one.
upvoted 4 times
https://aws.amazon.com/compliance/shared-responsibility-model/
upvoted 4 times
Which task is a customer's responsibility, according to the AWS shared responsibility model?
Correct Answer: A
"The customer assumes responsibility and management of the guest operating system (including updates and security patches), other
associated application software as well as the configuration of the AWS provided security group firewall."
https://aws.amazon.com/compliance/shared-responsibility-model/#:~:text=The%20customer%20assumes%20responsibility%20and%20management%20of%20the%20guest%20operating%20system%20(including%20updates%20and%20security%20patches)%2C%20other%20associated%20application%20software%20as%20well%20as%20the%20configuration%20of%20the%20AWS%20provided%20security%20group%20firewall.
upvoted 1 times
A company needs to deliver new website features quickly in an iterative manner to minimize the time to market.
Which AWS Cloud concept does this requirement represent?
A. Reliability
B. Elasticity
C. Agility
D. High availability
Correct Answer: C
A company wants to increase its ability to recover its infrastructure in the case of a natural disaster.
Which pillar of the AWS Well-Architected Framework does this ability represent?
A. Cost optimization
B. Performance efficiency
C. Reliability
D. Security
Correct Answer: C
https://wa.aws.amazon.com/wellarchitected/2020-07-02T19-33-23/wat.pillar.reliability.en.html#:~:text=Automatically%20recover%20from,before%20they%20occur.
upvoted 1 times
In the context of disaster recovery, AWS offers various services and features such as multi-region deployment, automated backups, and
replication capabilities to help organizations ensure the availability and recoverability of their infrastructure and data in the event of a
natural disaster.
upvoted 2 times
N9 8 months ago
Selected Answer: C
keyword - recover
upvoted 1 times
myan2492 9 months ago
C. reliability
upvoted 2 times
A. AWS Organizations
B. AWS Config
C. Amazon CloudWatch
D. AWS CloudTrail
Correct Answer: D
Which AWS service, feature, or tool uses machine learning to continuously monitor cost and usage for unusual cloud spending?
B. AWS Budgets
C. Amazon CloudWatch
Correct Answer: D
A company deployed an application on an Amazon EC2 instance. The application ran as expected for 6 months. In the past week, users have
reported latency issues. A system administrator found that the CPU utilization was at 100% during business hours. The company wants a scalable
solution to meet demand.
Which AWS service or feature should the company use to handle the load for its application during periods of high demand?
C. Amazon Route 53
D. An Elastic IP address
Correct Answer: B (Auto Scaling groups)
In this case, since the application is experiencing latency issues due to high CPU utilization, using Auto Scaling groups will allow the
company to automatically add more instances to handle the increased demand and distribute the workload across multiple instances.
This will help improve the application's performance and ensure it can handle the load during peak periods.
upvoted 1 times
An Auto Scaling group starts by launching enough instances to meet its desired capacity. It maintains this number of instances by
performing periodic health checks on the instances in the group. The Auto Scaling group continues to maintain a fixed number of
instances even if an instance becomes unhealthy. If an instance becomes unhealthy, the group terminates the unhealthy instance and
launches another instance to replace it.
upvoted 2 times
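The maintenance loop described above can be sketched as a small simulation (plain Python, not the AWS API; all names are invented): each health-check cycle terminates unhealthy instances and launches replacements until the desired capacity is restored.

```python
import itertools

_ids = itertools.count(1)

def launch_instance():
    """Stand-in for launching an EC2 instance into the group."""
    return {"id": f"i-{next(_ids):04d}", "healthy": True}

def reconcile(group, desired):
    """One health-check cycle: drop unhealthy instances, top up capacity."""
    group = [inst for inst in group if inst["healthy"]]  # terminate unhealthy
    while len(group) < desired:                          # launch replacements
        group.append(launch_instance())
    return group

group = [launch_instance() for _ in range(3)]
group[1]["healthy"] = False            # one instance fails its health check
group = reconcile(group, desired=3)
print(len(group), all(i["healthy"] for i in group))
```

Scaling policies work the same way but adjust the desired capacity itself, for example raising it when average CPU utilization stays high.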
A company wants to migrate to AWS and use the same security software it uses on premises. The security software vendor offers its security
software as a service on AWS.
Where can the company purchase the security solution?
D. AWS Marketplace
Correct Answer: D
A company is generating large sets of critical data in its on-premises data center. The company needs to securely transfer the data to AWS for
processing. These transfers must occur daily over a dedicated connection.
Which AWS service should the company use to meet these requirements?
A. AWS Backup
B. AWS DataSync
D. AWS Snowball
Correct Answer: B
However, the question specifies that the transfers must occur daily over a dedicated connection. AWS DataSync does not itself provide a
dedicated connection; by default it transfers data over the internet, although it can also run over AWS Direct Connect. The internet is a
reliable and secure way to transfer data, but it is not a dedicated connection.
AWS Direct Connect, on the other hand, provides a dedicated private connection between an on-premises data center and AWS. With AWS
Direct Connect, you can establish a private, high-speed, and secure connection to AWS, which is ideal for transferring large volumes of
data securely and quickly.
upvoted 6 times
AWS Direct Connect is a dedicated network connection service that provides a high-speed, dedicated network connection between on-
premises infrastructure and AWS. It is ideal for transferring large amounts of data frequently, which requires a dedicated, private
connection to maintain the security and reliability of the data transfer.
upvoted 3 times
0x0045 3 months ago
Selected Answer: C
another vague question. from https://aws.amazon.com/datasync/faqs/
Q: How do I use AWS DataSync for recurring transfers between on-premises and AWS for ongoing workflows?
A: You can use AWS DataSync for ongoing transfers from on-premises systems into or out of AWS for processing.
but I can't find any supporting data about DataSync actually needing a dedicated link.
I do see this:
A: You can use AWS DataSync with your Direct Connect link to access public service endpoints
so neither is fully correct or incorrect based on the question as stated. I would hope any actual cert questions on the topic are not as
vague. I picked C because of the word "dedicated"
upvoted 3 times
//You can use DataSync to copy data over AWS Direct Connect or internet links to AWS for one-time data migrations, recurring data
processing workflows, and automated replication for data protection and recovery.//
upvoted 1 times
C - yes
https://aws.amazon.com/cloud-data-migration/
//You can use DataSync to copy data over AWS Direct Connect or internet links to AWS for one-time data migrations, recurring data
processing workflows, and automated replication for data protection and recovery.//
upvoted 1 times
AWS Backup is a fully managed backup service that centralizes and automates the backup of data across AWS services and on-premises
applications.
AWS DataSync is a data transfer service that simplifies and automates moving data between on-premises storage and AWS services.
AWS Snowball is a petabyte-scale data transport solution that uses devices designed to be secure to transfer large amounts of data into
and out of AWS.
upvoted 4 times
A company wants to run production workloads on AWS. The company wants access to technical support from engineers 24 hours a day, 7 days a
week. The company also wants access to the AWS Health API and contextual architectural guidance for business use cases. The company has a
strong IT support team and does not need concierge support.
Which AWS Support plan will meet these requirements at the LOWEST cost?
Correct Answer: C
The cost of AWS Business Support is lower than AWS Enterprise Support, which provides additional features such as concierge support.
AWS Developer Support is designed for individual developers and may not provide the level of support required by a company running
production workloads.
upvoted 1 times
AWS Business Support provides 24/7 phone, email, and chat access to Cloud Support engineers, the full set of AWS Trusted Advisor checks,
and programmatic access to the AWS Health API, which are the features that the company needs. Because the company has a strong IT
support team, it does not need the concierge support that comes with Enterprise Support. Additionally, Business Support provides
contextual architectural guidance for specific business use cases, which can be useful for optimizing the company's use of AWS.
upvoted 1 times
Guru4Cloud 3 months, 4 weeks ago
Selected Answer: C
Based on the given requirements, the AWS Support plan that will meet these requirements at the LOWEST cost is option C, AWS Business
Support.
AWS Business Support provides 24/7 access to AWS Trusted Advisor, AWS Personal Health Dashboard, and AWS Support engineers via
email, chat, and phone. This plan also includes architectural guidance for specific use cases and AWS Health API access
upvoted 1 times
https://aws.amazon.com/premiumsupport/plans/business/
upvoted 1 times
Which of the following is a managed AWS service that is used specifically for extract, transform, and load (ETL) data?
A. Amazon Athena
B. AWS Glue
C. Amazon S3
Correct Answer: B
AWS Glue is also serverless, but more of an ecosystem of tools to allow you to easily do schema discovery and ETL with auto-generated
scripts that can be modified either visually or via editing the script. The most commonly known components of Glue are Glue Metastore
and Glue ETL. Glue Metastore is a serverless hive compatible metastore which can be used in lieu of your own managed Hive. Glue ETL on
the other hand is a Spark service which allows customers to run Spark jobs without worrying about the configuration, manageability and
operationalization of the underlying Spark infrastructure. There are other services such as Glue Data Wrangler which we will keep outside
the scope of this discussion.
upvoted 3 times
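To make the ETL pattern concrete, here is a hand-rolled extract-transform-load pass in plain Python (this is not the Glue API; the sample data is invented). AWS Glue automates these same steps at scale with generated Spark scripts and a managed metastore.

```python
import csv
import io
import json

raw = "name,amount\nalice,10\nbob,oops\ncarol,25\n"

def etl(source):
    rows = csv.DictReader(io.StringIO(source))        # extract
    cleaned = []
    for row in rows:                                  # transform
        try:
            cleaned.append({"name": row["name"], "amount": int(row["amount"])})
        except ValueError:
            continue                                  # drop malformed records
    return json.dumps(cleaned)                        # load (here: serialize)

print(etl(raw))  # the malformed "bob" row is dropped
```

In Glue terms, schema discovery (what DictReader hard-codes here) is done by crawlers populating the Data Catalog, and the transform runs as a managed Spark job instead of a Python loop.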
Which of the following actions are controlled with AWS Identity and Access Management (IAM)? (Choose two.)
Correct Answer: AC
Which of the following are shared controls that apply to both AWS and the customer, according to the AWS shared responsibility model? (Choose
two.)
Correct Answer: AC
Shared Controls – Controls which apply to both the infrastructure layer and customer layers, but in completely separate contexts or
perspectives. In a shared control, AWS provides the requirements for the infrastructure and the customer must provide their own control
implementation within their use of AWS services. Examples include:
1. Patch Management – AWS is responsible for patching and fixing flaws within the infrastructure, but customers are responsible for
patching their guest OS and applications.
2. Configuration Management – AWS maintains the configuration of its infrastructure devices, but a customer is responsible for
configuring their own guest operating systems, databases, and applications.
3. Awareness & Training – AWS trains AWS employees, but a customer must train their own employees.
upvoted 9 times
C. Employee awareness and training: Both AWS and the customer share the responsibility of ensuring that employees are aware of and
trained in security best practices and compliance requirements.
upvoted 1 times
A. Resource configuration management: AWS is responsible for the security of the cloud, while customers are responsible for the security
of their content and applications in the cloud. Customers are responsible for configuring their resources securely, including storage,
compute, and network configurations.
C. Employee awareness and training: AWS is responsible for ensuring that its employees are trained and aware of security risks and
procedures related to their work. Customers are responsible for ensuring that their employees are trained and aware of security risks and
procedures related to their use of AWS services.
upvoted 1 times
What information is found on an AWS Identity and Access Management (IAM) credential report? (Choose two.)
A. The date and time when an IAM user's password was last used to sign in to the AWS Management Console.
C. The User-Agent browser identifier for each IAM user currently logged in.
D. Whether multi-factor authentication (MFA) has been enabled for an IAM user.
E. The number of incorrect login attempts by each IAM user in the previous 30 days.
Correct Answer: AD
password_last_used
The date and time when the AWS account root user or IAM user's password was last used to sign in to an AWS website, in ISO 8601
date-time format. A value of no_information means the user's password has never been used.
mfa_active
When a multi-factor authentication (MFA) device has been enabled for the user, this value is TRUE. Otherwise it is FALSE.
upvoted 4 times
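A sketch of reading the two fields discussed above from a credential report, which is delivered as a CSV (the sample rows are invented; only three of the report's real columns are shown):

```python
import csv
import io

# Invented sample using real credential-report column names
sample_report = """user,password_last_used,mfa_active
alice,2023-05-01T12:00:00+00:00,true
bob,no_information,false
"""

def users_without_mfa(report_csv):
    """List IAM users whose mfa_active flag is not 'true'."""
    rows = csv.DictReader(io.StringIO(report_csv))
    return [row["user"] for row in rows if row["mfa_active"] != "true"]

print(users_without_mfa(sample_report))  # ['bob']
```

In practice the report is fetched with the IAM GetCredentialReport API (it is returned base64-encoded), then parsed exactly like this.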
What is the LEAST expensive AWS Support plan that contains a full set of AWS Trusted Advisor best practice checks?
Correct Answer: B
https://aws.amazon.com/premiumsupport/plans/
upvoted 1 times
Which AWS service provides domain registration, DNS routing, and service health checks?
B. Amazon Route 53
C. Amazon CloudFront
Correct Answer: B
A bank needs to store recordings of calls made to its contact center for 6 years. The recordings must be accessible within 48 hours from the time
they are requested.
Which AWS service will provide a secure and cost-effective solution for retaining these files?
A. Amazon DynamoDB
B. Amazon S3 Glacier
C. Amazon Connect
D. Amazon ElastiCache
Correct Answer: B
S3 Glacier Deep Archive would have met the requirement better than S3 Glacier for cost savings.
To save even more on long-lived archive storage such as compliance archives and digital media preservation, choose S3 Glacier Deep
Archive, the lowest-cost storage in the cloud, with data retrieval from 12 to 48 hours.
upvoted 2 times
Which AWS service should be used to migrate a company's on-premises MySQL database to Amazon RDS?
Correct Answer: C
Which benefits does a company gain when the company moves from on-premises IT architecture to the AWS Cloud? (Choose two.)
A. Reduced or eliminated tasks for hardware troubleshooting, capacity planning, and procurement
C. Automatic security configuration of all applications that are migrated to the cloud
Correct Answer: AE
E. Faster deployment of new features and applications: AWS provides a wide range of services and tools that enable companies to quickly
deploy and scale applications. The cloud infrastructure allows for agility and faster time to market, as companies can easily provision
resources and take advantage of managed services for various functionalities.
upvoted 1 times
A. Reduced latency
C. Decreased costs
Correct Answer: B
Decoupling an AWS Cloud architecture involves breaking up a monolithic system into smaller, independent components that can be
upgraded or replaced without affecting the entire system. This results in greater flexibility, scalability, and agility, as well as the ability to
use different technologies or services for different components. Decoupling does not necessarily result in reduced latency, decreased
costs, or fewer components to manage.
upvoted 2 times
Which task is the responsibility of the customer according to the AWS shared responsibility model?
A. Maintain the security of the hardware that runs Amazon EC2 instances.
Correct Answer: B
Customers that deploy an Amazon EC2 instance are responsible for management of the guest operating system (including updates and
security patches), any application software or utilities installed by the customer on the instances, and the configuration of the AWS-
provided firewall (called a security group) on each instance.
upvoted 1 times
Which AWS Organizations feature can be used to track charges across multiple accounts and report the combined cost?
B. Cost Explorer
C. Consolidated billing
Correct Answer: C
https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/consolidated-billing.html
upvoted 1 times
Which of the following is a cloud benefit that AWS offers to its users?
Correct Answer: C
Go global in minutes
https://docs.aws.amazon.com/whitepapers/latest/aws-overview/six-advantages-of-cloud-computing.html
upvoted 5 times
An ecommerce company has migrated its IT infrastructure from an on-premises data center to the AWS Cloud.
Which cost is the company's direct responsibility?
Correct Answer: A
IMHO, this should be part of "Shared Responsibility Model" discussion, from above link
" The customer assumes responsibility and management of the guest operating system (including updates and security patches), other
associated application software as well as the configuration of the AWS provided security group firewall".
upvoted 1 times
Selected Answer: A
A is the answer
https://docs.aws.amazon.com/license-manager/latest/userguide/license-manager.html
As you build out your cloud infrastructure on AWS, you can save costs by using Bring Your Own License model (BYOL) opportunities. That
is, you can re-purpose your existing license inventory for use with your cloud resources.
upvoted 4 times
Correct Answer: D
Reliability - Focuses on the ability to recover from failures and meet business continuity objectives by designing systems that can
automatically recover from infrastructure or service disruptions.
Performance efficiency - Focuses on using resources efficiently to meet system requirements, including selecting the right resource types
and sizes and optimizing performance as demands change.
Security - Focuses on protecting information, systems, and assets while delivering business value through risk assessments, data
protection mechanisms, and implementing various security controls.
Cost optimization - Focuses on avoiding unnecessary costs by optimizing resource usage, selecting the right pricing models, and analyzing
spending patterns to ensure cost-effectiveness.
upvoted 1 times
A company accepts enrollment applications on handwritten paper forms. The company uses a manual process to enter the form data into its
backend systems.
The company wants to automate the process by scanning the forms and capturing the enrollment data from scanned PDF files.
Which AWS service should the company use to build this process?
A. Amazon Rekognition
B. Amazon Textract
C. Amazon Transcribe
D. Amazon Comprehend
Correct Answer: B
B. Amazon Textract: Amazon Textract is a machine learning service that automatically extracts text and data from scanned documents,
including handwritten text. The service can extract tables, forms, and text in a variety of formats, including PDF and image files. With
Amazon Textract, the company can automate the process of extracting the enrollment data from scanned PDF files and integrate the data
with its backend systems.
upvoted 1 times
Amazon Textract is a machine learning (ML) service that automatically extracts text, handwriting, and data from scanned documents.
upvoted 2 times
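As an illustration of how the extraction is requested, here is a sketch of the request shape for Textract's `AnalyzeDocument` API (bucket and object names are hypothetical; with boto3 the dict would be passed as `textract.analyze_document(**request)`):

```python
# Sketch of an Amazon Textract AnalyzeDocument request that extracts
# form key-value pairs (and tables) from a scanned enrollment form
# stored in S3. Bucket/key names are placeholders.

def analyze_request(bucket, key):
    return {
        # Point Textract at the scanned document in S3.
        "Document": {"S3Object": {"Bucket": bucket, "Name": key}},
        # FORMS returns key-value pairs; TABLES returns tabular data.
        "FeatureTypes": ["FORMS", "TABLES"],
    }

request = analyze_request("enrollment-scans", "forms/app-001.pdf")
```

The FORMS feature type is what maps handwritten field labels to their values, which is the step the company currently performs manually.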
Question #211 Topic 1
Which AWS service should a company use to organize, characterize, and search large numbers of images?
A. Amazon Transcribe
B. Amazon Rekognition
C. Amazon Aurora
D. Amazon QuickSight
Correct Answer: B
"Searchable image and video libraries – Amazon Rekognition makes images and stored videos searchable so you can discover objects and
scenes that appear within them".
upvoted 1 times
Amazon Rekognition is a fully managed AI service that makes it easy to add image and video analysis to applications. It provides
capabilities such as object and scene detection, facial analysis, celebrity recognition, text detection, and image moderation. With Amazon
Rekognition, a company can easily organize and search large numbers of images based on their characteristics, such as visual content,
keywords, and tags.
upvoted 2 times
An ecommerce company wants to use Amazon EC2 Auto Scaling to add and remove EC2 instances based on CPU utilization.
Which AWS service or feature can initiate an Amazon EC2 Auto Scaling action to achieve this goal?
Correct Answer: B
With Amazon EC2 Auto Scaling, you can configure scaling policies based on CloudWatch alarms. In this scenario, you can set a CloudWatch
alarm to monitor CPU utilization of the EC2 instances. When the CPU utilization exceeds a certain threshold, the alarm triggers an auto
scaling action to add more instances to handle the increased load. Similarly, when the CPU utilization decreases, the auto scaling action
can remove instances to optimize costs.
upvoted 2 times
"Amazon EC2 Auto Scaling supports the following types of dynamic scaling policies:
Target tracking scaling—Increase and decrease the current capacity of the group based on an Amazon CloudWatch metric and a target
value. It works similar to the way that your thermostat maintains the temperature of your home—you select a temperature and the
thermostat does the rest".
upvoted 2 times
Amazon CloudWatch is a monitoring service that provides metrics and logs on AWS resources and applications. It can be used to monitor
the CPU utilization of an Amazon EC2 instance and trigger an Amazon EC2 Auto Scaling action based on predefined thresholds. To do this,
a CloudWatch alarm must be set up with the CPU utilization metric, and then configured to trigger the appropriate Amazon EC2 Auto
Scaling action when the threshold is met.
upvoted 2 times
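The alarm described above can be sketched as follows. This is a minimal illustration, not a full setup: the group name, alarm name, and policy ARN are placeholders, and with boto3 the dict would be passed to `cloudwatch.put_metric_alarm(**alarm)`:

```python
# Sketch of a CloudWatch alarm that watches average CPU across an
# Auto Scaling group and invokes a scale-out policy when CPU stays
# above 70% for two consecutive 5-minute periods.
# Names and the policy ARN are placeholders.

def cpu_high_alarm(asg_name, policy_arn, threshold=70.0):
    return {
        "AlarmName": asg_name + "-cpu-high",
        "Namespace": "AWS/EC2",
        "MetricName": "CPUUtilization",
        "Dimensions": [{"Name": "AutoScalingGroupName", "Value": asg_name}],
        "Statistic": "Average",
        "Period": 300,                 # evaluate in 5-minute windows
        "EvaluationPeriods": 2,        # must breach twice in a row
        "Threshold": threshold,
        "ComparisonOperator": "GreaterThanThreshold",
        "AlarmActions": [policy_arn],  # the scaling policy to trigger
    }

alarm = cpu_high_alarm("web-asg", "arn:aws:autoscaling:...:policy/scale-out")
```

A mirror-image alarm with `LessThanThreshold` and a scale-in policy handles the "remove instances" half of the scenario.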
You can use SNS to trigger an EC2 Auto Scaling action by creating an SNS topic that is subscribed to by the Auto Scaling group. When a
message is sent to the SNS topic, it triggers the Auto Scaling group to add or remove EC2 instances based on the configured scaling rules.
However, in the case of triggering EC2 Auto Scaling based on CPU utilization, Amazon CloudWatch is the more suitable service to use.
CloudWatch is specifically designed for monitoring and alerting on cloud resources, including EC2 instances, and provides more fine-
grained control over scaling rules and alarm thresholds. SNS, on the other hand, is a more general-purpose messaging service that can be
used for a wide range of use cases, including triggering EC2 Auto Scaling but may not provide the same level of control and granularity as
CloudWatch for this specific use case.
upvoted 3 times
A company wants to host a private version control system for its application code in the AWS Cloud.
Which AWS service should the company use to meet this requirement?
A. AWS CodePipeline
B. AWS CodeStar
C. AWS CodeCommit
D. AWS CodeDeploy
Correct Answer: C
AWS CodeCommit is a managed source control system that hosts Git repositories and works with all Git-based tools.
AWS CodeCommit will store code, binaries, and metadata in a redundant fashion with high availability. You will be able to collaborate with
local and remote teams to edit, compare, sync, and revise your code.
Because AWS CodeCommit runs in the AWS Cloud, you no longer need to worry about hosting, scaling, or maintaining your own source
code control infrastructure. CodeCommit automatically encrypts your files and is integrated with AWS Identity and Access Management
(IAM), allowing you to assign user-specific permissions to your repositories. This ensures that your code remains secure and you can
collaborate on projects across your team in a secure manner.
upvoted 11 times
AWS CodeCommit is a secure, highly scalable, managed source control service that hosts private Git repositories. It makes it easy for
teams to securely collaborate on code with contributions encrypted in transit and at rest.
upvoted 5 times
Using CodeCommit, the company can create private repositories to securely store and manage its application code. It offers features such
as branch management, pull requests, and code reviews to facilitate efficient development workflows. The code repositories in
CodeCommit are accessible only to authorized users within the company's AWS account, ensuring the privacy and security of the
codebase.
upvoted 1 times
Which AWS service or tool can a company set up to send notifications that a custom spending threshold has been reached or exceeded?
A. AWS Budgets
C. AWS CloudTrail
D. AWS Support
Correct Answer: A
With AWS Budgets, set custom budgets to track your costs and usage, and respond quickly to alerts received from email or SNS
notifications if you exceed your threshold.
upvoted 1 times
A. Amazon S3
C. AWS CloudFormation
Correct Answer: A
"You can use Amazon S3 to host a static website. On a static website, individual webpages include static content. They might also contain
client-side scripts. By contrast, a dynamic website relies on server-side processing, including server-side scripts, such as PHP, JSP, or
ASP.NET. Amazon S3 does not support server-side scripting, but AWS has other resources for hosting dynamic websites".
upvoted 1 times
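Enabling static hosting on a bucket comes down to one small website configuration. A sketch (the bucket name is a placeholder; with boto3 the dict is passed as `s3.put_bucket_website(Bucket="my-site", WebsiteConfiguration=website)`):

```python
# Sketch of the S3 static-website configuration: which object to serve
# for directory-style requests and which to serve on errors.
# Object keys are placeholders.

website = {
    "IndexDocument": {"Suffix": "index.html"},  # served for "/" requests
    "ErrorDocument": {"Key": "error.html"},     # served on 4xx errors
}
```

Note that the objects must also be readable by visitors (for example via a bucket policy), since the website endpoint serves them over plain HTTP GET.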
Question #216 Topic 1
Which AWS service contains built-in engines to protect web applications that run in the cloud from SQL injection attacks and cross-site scripting?
A. AWS WAF
C. Amazon GuardDuty
D. Amazon Detective
Correct Answer: A
B. AWS Shield Advanced: Provides DDoS protection to applications that are running on AWS.
C. Amazon GuardDuty: Continuously monitors for malicious activity and unauthorized behavior in AWS accounts and workloads.
D. Amazon Detective: Helps to analyze, investigate, and identify the root cause of potential security issues or suspicious activities within
AWS resources.
upvoted 8 times
AWS WAF helps protect your website from common attack techniques like SQL injection and Cross-Site Scripting (XSS). In addition, you
can create rules that can block or rate-limit traffic from specific user-agents, from specific IP addresses, or that contain particular request
headers.
upvoted 1 times
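The "built-in engines" the question refers to are available as AWS managed rule groups in WAF. A sketch of a WAFv2 web ACL rule that attaches the managed SQL-injection rule set (the rule name and priority are placeholders; with boto3 this dict would appear in the `Rules` list of `wafv2.create_web_acl`):

```python
# Sketch of a WAFv2 rule using AWS's managed SQL-injection rule group.
# Managed rule groups don't define their own Action; they use
# OverrideAction instead. Name/priority are placeholders.

sqli_rule = {
    "Name": "sqli-protection",
    "Priority": 1,
    "OverrideAction": {"None": {}},  # keep the group's own block/allow actions
    "Statement": {
        "ManagedRuleGroupStatement": {
            "VendorName": "AWS",
            "Name": "AWSManagedRulesSQLiRuleSet",
        }
    },
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "sqli-protection",
    },
}
```

A second rule referencing a managed rule set that covers cross-site scripting would be added the same way, each with its own priority.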
A. Reserved Instances
B. Dedicated Hosts
C. Spot Instances
D. Dedicated Instances
Correct Answer: B
Amazon EC2 Dedicated Hosts allow you to use your existing per-socket, per-core, or per-VM software licenses that are bound to VMs,
sockets, or physical cores, subject to your license terms.
upvoted 5 times
https://aws.amazon.com/ec2/dedicated-hosts/
Dedicated Hosts allow you to use your existing per-socket, per-core, or per-VM software licenses, including Windows Server, SQL Server,
SUSE Linux Enterprise Server, Red Hat Enterprise Linux, or other software licenses that are bound to VMs, sockets, or physical cores,
subject to your license terms.
upvoted 2 times
Dedicated Hosts provide physical servers dedicated to a customer's use. This allows the customer to use their existing server-bound
software licenses without any additional licensing costs, as long as the licenses are compliant with the rules of the specific software
vendor.
upvoted 2 times
C - no. Spot Instances are good for workloads that are flexible in terms of timing and availability.
Selected Answer: B
Keyword: per-core.
upvoted 2 times
A company needs to set up user authentication for a new application. Users must be able to sign in directly with a user name and password, or
through a third- party provider.
Which AWS service should the company use to meet these requirements?
B. AWS Signer
C. Amazon Cognito
Correct Answer: C
"Amazon Cognito is an identity platform for web and mobile apps. It’s a user directory, an authentication server, and an authorization
service for OAuth 2.0 access tokens and AWS credentials. With Amazon Cognito, you can authenticate and authorize users from the built-
in user directory, from your enterprise directory, and from consumer identity providers like Google and Facebook."
upvoted 1 times
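For the "sign in directly with a user name and password" half of the requirement, a sketch of the Cognito request is below (the client ID and credentials are placeholders; with boto3 the dict would be passed as `cognito.initiate_auth(**request)` against a user pool app client that has the password flow enabled):

```python
# Sketch of a direct username/password sign-in against a Cognito user
# pool using the USER_PASSWORD_AUTH flow. All values are placeholders.

def password_auth(client_id, username, password):
    return {
        "ClientId": client_id,
        "AuthFlow": "USER_PASSWORD_AUTH",  # direct username/password flow
        "AuthParameters": {"USERNAME": username, "PASSWORD": password},
    }

request = password_auth("example-client-id", "alice", "example-password")
```

Third-party sign-in (Google, Facebook, SAML, OIDC) is configured separately as identity providers on the same user pool, so both paths issue the same tokens to the application.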
A company's IT team is managing MySQL database server clusters. The IT team has to patch the database and take backup snapshots of the data
in the clusters.
The company wants to move this workload to AWS so that these tasks will be completed automatically.
What should the company do to meet these requirements?
C. Use an AWS CloudFormation template to deploy MySQL database servers on Amazon EC2 instances.
Correct Answer: B
With Amazon RDS, AWS takes care of tasks such as database software patching, performing backups, and managing high availability and
automatic failover. The company can focus on using the database and its data without the need to worry about the operational aspects of
managing the underlying infrastructure.
upvoted 1 times
"Amazon RDS for MySQL frees you up to focus on application development by managing time-consuming database administration tasks,
including backups, upgrades, software patching, performance improvements, monitoring, scaling, and replication."
upvoted 1 times
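The patching and backup tasks the IT team does by hand map to a few settings on the RDS instance itself. A sketch (all identifiers and sizes are placeholders; with boto3 the dict would be passed as `rds.create_db_instance(**instance)`):

```python
# Sketch of RDS for MySQL settings that replace the team's manual work:
# automated daily backups, automatic minor-version patching, and
# Multi-AZ failover. All values are illustrative placeholders.

instance = {
    "DBInstanceIdentifier": "app-mysql",
    "Engine": "mysql",
    "DBInstanceClass": "db.m5.large",
    "AllocatedStorage": 100,
    "MasterUsername": "admin",
    "MasterUserPassword": "REPLACE_ME",  # placeholder, never hardcode
    "MultiAZ": True,                     # standby replica + auto failover
    "BackupRetentionPeriod": 7,          # automated snapshots kept 7 days
    "AutoMinorVersionUpgrade": True,     # patches applied automatically
    "PreferredMaintenanceWindow": "sun:03:00-sun:04:00",
}
```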
Correct Answer: C
GuardDuty uses machine learning algorithms and anomaly detection techniques to identify patterns and behaviors that indicate
unauthorized access attempts, compromised instances, data exfiltration, and other malicious activities. It provides detailed findings and
alerts to help you quickly respond to and mitigate security incidents.
By leveraging GuardDuty, customers can enhance the security of their AWS workloads by proactively identifying and addressing potential
security threats and vulnerabilities. It helps to improve the overall security posture of your AWS environment by automating the detection
and response to security events.
upvoted 1 times
"Continuously monitor your AWS accounts, instances, serverless and container workloads, users, databases, and storage for potential
threats."
upvoted 1 times
Amazon GuardDuty is a threat detection service that continuously monitors for malicious activity and unauthorized behavior in AWS
accounts and workloads. It uses machine learning, anomaly detection, and integrated threat intelligence to identify potential security
threats, such as compromised instances, unauthorized access to AWS resources, and attempts to exploit known vulnerabilities. GuardDuty
alerts users to potential security findings and provides actionable insights to help remediate security issues in real-time.
Amazon GuardDuty is not designed for preventing DDoS attacks or protecting against SQL injection attacks, nor is it for automatic
provisioning of AWS resources.
upvoted 1 times
Which statements explain the business value of migration to the AWS Cloud? (Choose two.)
A. The migration of enterprise applications to the AWS Cloud makes these applications automatically available on mobile devices.
B. AWS availability and security provide the ability to improve service level agreements (SLAs) while reducing risk and unplanned downtime.
C. Companies that migrate to the AWS Cloud eliminate the need to plan for high availability and disaster recovery.
D. Companies that migrate to the AWS Cloud reduce IT costs related to infrastructure, freeing budget for reinvestment in other areas.
E. Applications are modernized because migration to the AWS Cloud requires companies to rearchitect and rewrite all enterprise applications.
Correct Answer: BD
D. Companies that migrate to the AWS Cloud reduce IT costs related to infrastructure, freeing budget for reinvestment in other areas.
AWS offers a pay-as-you-go pricing model, allowing businesses to scale resources up or down based on demand, and only pay for what
they use. This eliminates the need for upfront capital investments in hardware and reduces ongoing maintenance and operational costs.
The cost savings can be redirected to other strategic initiatives, such as innovation, product development, or expanding business
operations.
upvoted 1 times
B. Migrating to the AWS Cloud provides access to the AWS global infrastructure, which offers high availability and security to businesses.
AWS provides SLA's that guarantee service availability and reduce unplanned downtime. This enhances the customer experience and
reduces business risk.
D. Migrating to AWS provides businesses with an opportunity to reduce IT costs related to infrastructure. With AWS, there are no upfront
costs or long-term commitments, and businesses only pay for the resources they use. AWS provides an elastic infrastructure, which allows
businesses to scale up or down based on demand, which helps save costs.
upvoted 1 times
B - yes
D - yes
A company needs to identify personally identifiable information (PII), such as credit card numbers, from data that is stored in Amazon S3.
Which AWS service should the company use to meet this requirement?
A. Amazon Inspector
B. AWS Shield
C. Amazon GuardDuty
D. Amazon Macie
Correct Answer: D
Reference:
https://aws.amazon.com/macie/
By using Amazon Macie, the company can gain visibility into their data, identify potential risks or compliance violations, and take
appropriate actions to protect sensitive information and maintain data privacy.
upvoted 1 times
"Amazon Macie discovers sensitive data using machine learning and pattern matching, provides visibility into data security risks, and
enables automated protection against those risks."
upvoted 1 times
D - yes
upvoted 1 times
Which AWS services or tools are designed to protect a workload from SQL injections, cross-site scripting, and DDoS attacks? (Choose two.)
A. VPC endpoint
D. AWS Config
E. AWS WAF
Correct Answer: CE
Reference:
https://aws.amazon.com/waf/
https://aws.amazon.com/shield/
E. AWS WAF (Web Application Firewall): AWS WAF is a web application firewall that helps protect web applications from common web
exploits, including SQL injections and cross-site scripting (XSS) attacks. It allows you to define customizable security rules to block or allow
traffic based on various conditions and patterns.
upvoted 1 times
AWS Shield Standard is a managed DDoS protection service that safeguards web applications running on AWS by automatically detecting
and mitigating DDoS attacks.
AWS WAF (Web Application Firewall) is a managed service that provides protection against common web exploits and attacks, such as SQL
injections and cross-site scripting, by allowing customers to create custom security rules.
upvoted 1 times
The AWS services or tools designed to protect a workload from SQL injections, cross-site scripting, and DDoS attacks are AWS Shield and AWS WAF. Note: this question asks you to choose two answers, not one.
upvoted 2 times
AWS Shield Standard is a free service that provides always-on detection and automatic inline mitigation to minimize the effects of DDoS
attacks on AWS resources. It helps to protect against network and transport layer attacks such as SYN/UDP floods and reflection attacks.
AWS WAF (Web Application Firewall) is a web application firewall service that provides centralized control over the security of web
applications. It helps to protect web applications from common web exploits such as SQL injections and cross-site scripting attacks. AWS
WAF allows you to create custom rules to block, allow, or monitor web requests based on specific conditions.
upvoted 1 times
A company wants to forecast future costs and usage of AWS resources based on past consumption.
Which AWS service or tool will provide this forecast?
B. Amazon Forecast
D. Cost Explorer
Correct Answer: D
Reference:
https://docs.aws.amazon.com/cost-management/latest/userguide/ce-forecast.html
After you enable Cost Explorer, AWS prepares the data about your costs for the current month and the last 12 months, and then calculates
the forecast for the next 12 months.
https://aws.amazon.com/aws-cost-management/aws-cost-explorer/
upvoted 1 times
By using Cost Explorer, you can gain insights into your AWS resource usage and costs, analyze trends, and make informed decisions about
optimizing your infrastructure and budget.
upvoted 2 times
"A forecast is a prediction of how much you will use AWS services over the forecast time period that you selected. This forecast is based on
your past usage. You can use a forecast to estimate your AWS bill and set alarms and budgets based on predictions. Because forecasts
are predictions, the forecasted billing amounts are estimated and might differ from your actual charges for each statement period."
upvoted 2 times
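The forecast described in the quote is exposed programmatically as well. A sketch of the request (dates are examples; with boto3, `ce = boto3.client("ce")` and the dict would be passed as `ce.get_cost_forecast(**request)`):

```python
# Sketch of a Cost Explorer forecast request covering three future
# months, one predicted value per month. Dates are example values.
from datetime import date

def forecast_request(start: date, end: date):
    return {
        "TimePeriod": {"Start": start.isoformat(), "End": end.isoformat()},
        "Metric": "UNBLENDED_COST",  # forecast the billed cost
        "Granularity": "MONTHLY",    # one predicted value per month
    }

request = forecast_request(date(2024, 1, 1), date(2024, 4, 1))
```

As the quote notes, the result is an estimate derived from past usage, so the forecasted amounts can differ from the actual charges on the statement.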
Selected Answer: D
D is correct answer in 2022
upvoted 3 times
https://docs.aws.amazon.com/cost-management/latest/userguide/ce-forecast.html
A forecast is a prediction of how much you will use AWS services over the forecast time period that you selected. This forecast is based on
your past usage.
upvoted 2 times
https://docs.aws.amazon.com/cost-management/latest/userguide/ce-forecast.html
upvoted 4 times
Which AWS services use cloud-native storage that provides replication across multiple Availability Zones by default? (Choose two.)
A. Amazon ElastiCache
C. Amazon Neptune
E. Amazon Redshift
Correct Answer: CD
Reference:
https://docs.aws.amazon.com/documentdb/latest/developerguide/replication.html
https://docs.aws.amazon.com/neptune/latest/userguide/feature-overview-storage.html
C. Amazon Neptune
E. Amazon Redshift
Amazon Neptune is a fully managed graph database service that provides high availability and durability through automatic replication of
data across multiple Availability Zones.
Amazon Redshift is a fully managed data warehouse service that uses cloud-native storage with automatic replication across multiple
Availability Zones.
Amazon ElastiCache, Amazon RDS for Oracle, and Amazon DocumentDB (with MongoDB compatibility) also provide replication across
multiple Availability Zones for high availability, but they do not use cloud-native storage that provides replication by default. Instead,
customers need to configure replication when setting up these services.
upvoted 9 times
RDS and Redshift support multi az - but not by default. You have to configure it if you want multiAZ.
https://aws.amazon.com/blogs/big-data/enable-multi-az-deployments-for-your-amazon-redshift-data-warehouse/
upvoted 5 times
D. Amazon DocumentDB (with MongoDB compatibility): Amazon DocumentDB is a fully managed MongoDB-compatible document
database service. It uses cloud-native storage that replicates data across multiple Availability Zones by default. This ensures high
availability and durability of the data, with automatic failover in case of a failure.
upvoted 2 times
So my choice is C,D
upvoted 2 times
Selected Answer: CD
Amazon Redshift does not use cloud-native storage. Instead, it uses a columnar storage technology specifically designed for data
warehousing workloads. The data in Amazon Redshift is stored on local disks attached to the Redshift cluster nodes. Redshift
automatically manages the distribution and replication of data across multiple nodes for performance and fault tolerance. This
architecture allows for efficient query processing and high throughput for analytic workloads.
upvoted 3 times
These services automatically replicate data across multiple Availability Zones for high availability and durability.
upvoted 2 times
https://aws.amazon.com/rds/features/multi-az/
upvoted 1 times
C. Amazon Neptune
E. Amazon Redshift
Option A, Amazon ElastiCache, is a fully managed in-memory data store and cache service. While it can be configured to use replication for
high availability, it does not use cloud-native storage that provides replication across multiple Availability Zones by default.
Option B, Amazon RDS for Oracle, is a managed relational database service that provides replication for high availability. However, it does
not use cloud-native storage that provides replication across multiple Availability Zones by default.
Option D, Amazon DocumentDB (with MongoDB compatibility), is a fully managed document database service that is designed to be
compatible with the MongoDB API. While it provides replication for high availability, it does not use cloud-native storage that provides
replication across multiple Availability Zones by default.
upvoted 2 times
Selected Answer: CE
C. Amazon Neptune: Amazon Neptune is a fully-managed graph database service that is optimized for storing and querying highly
connected data. It automatically replicates data across multiple Availability Zones to provide high availability and data durability.
E. Amazon Redshift: Amazon Redshift is a fully-managed data warehousing service that makes it simple and cost-effective to analyze all
your data using standard SQL and existing Business Intelligence (BI) tools. It uses a highly available, distributed architecture that
automatically replicates data within and across multiple Availability Zones.
upvoted 2 times
C. Amazon Neptune
E. Amazon Redshift
Amazon Neptune is a fast, reliable, and fully-managed graph database service that uses a highly available, durable, and fault-tolerant
storage layer that automatically replicates data across multiple Availability Zones.
Amazon Redshift is a fully-managed, petabyte-scale data warehouse service that uses a highly available, durable, and fault-tolerant
storage layer that automatically replicates data across multiple Availability Zones.
upvoted 1 times
D. Amazon DocumentDB (with MongoDB compatibility) is a fully managed document database service that is designed to be compatible
with MongoDB workloads. DocumentDB uses a distributed, fault-tolerant architecture that is designed to provide high availability and
durability. Data is automatically replicated across multiple Availability Zones, which helps ensure that the database remains available even
in the event of a failure.
upvoted 2 times
A. AWS Fargate
C. Amazon EMR
D. Amazon S3
E. Amazon EC2
Correct Answer: AD
Reference:
https://aws.amazon.com/serverless/?nc2=h_ql_prod_serv_s
D. Amazon S3: Amazon S3 (Simple Storage Service) is a highly scalable and durable object storage service. It is considered serverless
because you can simply store and retrieve objects without the need to provision or manage any servers. You pay for the storage and data
transfer you use, without worrying about server maintenance or scaling.
upvoted 2 times
Amazon EMR and Amazon EC2 are not serverless services, as they require the provisioning and management of servers or
infrastructure. AWS Fargate, by contrast, abstracts the underlying infrastructure so that you run containers without provisioning or
managing servers, which is why it is considered serverless alongside Amazon S3.
upvoted 1 times
D. Amazon S3: Amazon S3 (Simple Storage Service) is a serverless object storage service that allows customers to store and retrieve any
amount of data from anywhere on the web. S3 automatically scales to accommodate growing amounts of data, and customers are only
charged for the storage they use.
upvoted 2 times
https://aws.amazon.com/emr/
upvoted 2 times
Which task is the responsibility of AWS, according to the AWS shared responsibility model?
Correct Answer: C
Reference:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.html
A company needs to deploy a PostgreSQL database into Amazon RDS. The database must be highly available and fault tolerant.
Which AWS solution should the company use to meet these requirements?
Correct Answer: C
Reference:
https://aws.amazon.com/rds/features/multi-az/
A company wants to add facial identification to its user verification process on an application.
Which AWS service should the company use to meet this requirement?
A. Amazon Polly
B. Amazon Transcribe
C. Amazon Lex
D. Amazon Rekognition
Correct Answer: D
By integrating Amazon Rekognition into the application, the company can leverage its advanced capabilities to perform facial
identification and verification. This can enhance the security and user experience of the application by allowing users to verify their
identity through facial recognition.
upvoted 1 times
"When you provide an image that contains a face, Amazon Rekognition detects the face in the image, analyzes the facial attributes of the
face, and then returns a percent confidence score for the face and the facial attributes that are detected in the image."
upvoted 1 times
A company wants the ability to quickly upload its applications to the AWS Cloud without needing to provision underlying resources.
Which AWS service will meet these requirements?
A. AWS CloudFormation
C. AWS CodeDeploy
D. AWS CodeCommit
Correct Answer: B
Reference:
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/Welcome.html
This allows you to focus on developing your application and not worry about the underlying infrastructure setup and management. Elastic
Beanstalk provides an easy and quick way to deploy applications without needing to provision or manage the underlying resources
manually.
upvoted 1 times
Explanation:
AWS Elastic Beanstalk is an easy-to-use service for deploying and scaling web applications and services developed with Java, .NET, PHP,
Node.js, Python, Ruby, Go, and Docker on familiar servers such as Apache, Nginx, Passenger, and IIS. With Elastic Beanstalk, you can
simply upload your code and Elastic Beanstalk will automatically handle the deployment details of capacity provisioning, load balancing,
and automatic scaling of your application. You can easily monitor the performance of your application and access your application logs in
the Elastic Beanstalk console, or integrate Elastic Beanstalk with other AWS services to add features such as messaging and data storage.
upvoted 2 times
A. AWS CloudTrail
B. Amazon Inspector
C. AWS Config
D. Amazon CloudWatch
Correct Answer: D
Reference:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-cloudwatch.html
A company needs to label its AWS resources so that the company can categorize and track costs.
What should the company do to meet this requirement?
Correct Answer: A
Reference:
https://docs.aws.amazon.com/general/latest/gr/aws_tagging.html
By assigning cost allocation tags to AWS resources, the company can gain better insights into its AWS spending and allocate costs based
on specific tags. This allows for more accurate cost reporting and analysis, enabling better cost management and optimization.
upvoted 1 times
"You can use tags to organize your resources, and cost allocation tags to track your AWS costs on a detailed level. After you activate cost
allocation tags, AWS uses the cost allocation tags to organize your resource costs on your cost allocation report, to make it easier for you
to categorize and track your AWS costs."
upvoted 1 times
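The mechanics described in that quote can be sketched in plain Python with hypothetical billing records (the tag key, resource IDs, and dollar amounts below are made up purely for illustration; real cost allocation reports come from AWS Billing):

```python
from collections import defaultdict

# Hypothetical billing records: (resource_id, cost_usd, tags)
records = [
    ("i-0abc", 120.0, {"CostCenter": "marketing"}),
    ("i-0def", 80.0,  {"CostCenter": "engineering"}),
    ("vol-01", 15.0,  {"CostCenter": "marketing"}),
    ("db-01",  200.0, {}),  # untagged resources end up unallocated
]

def costs_by_tag(records, tag_key):
    """Group resource costs by the value of one cost allocation tag."""
    totals = defaultdict(float)
    for _, cost, tags in records:
        totals[tags.get(tag_key, "(untagged)")] += cost
    return dict(totals)

print(costs_by_tag(records, "CostCenter"))
# {'marketing': 135.0, 'engineering': 80.0, '(untagged)': 200.0}
```

This is why activating cost allocation tags matters: anything left untagged can only be reported as a single unallocated bucket.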
A company wants its employees to have access to virtual desktop infrastructure to securely access company-provided desktops through the
employees' personal devices.
Which AWS service should the company use to meet these requirements?
B. AWS AppSync
D. Amazon WorkSpaces
Correct Answer: D
Reference:
https://aws.amazon.com/workspaces/
The bottom line is that if you’re looking to move your existing legacy applications to AWS, you’ll want to look at Amazon AppStream 2.0 in
more detail, and if you’re just looking for a quick and easy way to deploy Windows virtual desktops for your users, Amazon WorkSpaces is
most likely an ideal solution.
upvoted 6 times
Amazon WorkSpaces provides a persistent, cloud-based desktop experience for end users, and it allows administrators to manage users,
applications, and policies centrally. It offers the flexibility to scale up or down the number of desktops as needed, making it a suitable
solution for remote workforces or companies looking for a cost-effective, secure, and scalable virtual desktop infrastructure.
upvoted 2 times
"Amazon WorkSpaces enables you to provision virtual, cloud-based Microsoft Windows, Amazon Linux, or Ubuntu Linux desktops for your
users, known as WorkSpaces. WorkSpaces eliminates the need to procure and deploy hardware or install complex software. You can
quickly add or remove users as your needs change. Users can access their virtual desktops from multiple devices or web browsers."
upvoted 2 times
Correct Answer: D
Reference:
https://aws.amazon.com/organizations/
https://docs.aws.amazon.com/ram/latest/userguide/shareable.html
With AWS Organizations, you can set up a payment method in the payer account, and all the linked member accounts' charges will be
consolidated into a single bill. This enables the sharing of benefits, such as pre-purchased Reserved Instances and Savings Plans, across
all accounts in the organization. This is beneficial for cost optimization and resource management, as the payer account can optimize and
purchase reserved capacity that is automatically shared with all linked accounts.
upvoted 2 times
https://docs.aws.amazon.com/ram/latest/userguide/getting-started-sharing.html#getting-started-sharing-orgs
upvoted 2 times
A user has been granted permission to change their own IAM user password.
Which AWS services can the user use to change the password? (Choose two.)
Correct Answer: AC
Reference:
https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_passwords_user-change-own.html
C. AWS Management Console: The AWS Management Console is a web-based interface that allows users to interact with and manage their
AWS resources. The user can navigate to the IAM service in the AWS Management Console and change their own IAM user password
through the user interface.
upvoted 1 times
A company needs to run an application on Amazon EC2 instances. The instances cannot be interrupted at any time. The company needs an
instance purchasing option that requires no long-term commitment or upfront payment.
Which instance purchasing option will meet these requirements MOST cost-effectively?
A. On-Demand Instances
B. Spot Instances
C. Dedicated Hosts
D. Reserved Instances
Correct Answer: A
Reference:
https://aws.amazon.com/ec2/pricing/
"We recommend that you use On-Demand Instances for applications with short-term, irregular workloads that cannot be interrupted."
upvoted 2 times
A company uses Amazon EC2 instances to run its web application. The company uses On-Demand Instances and Spot Instances. The company
needs to visualize its monthly spending on both types of instances.
Which AWS service or feature will meet this requirement?
B. AWS Budgets
C. Amazon CloudWatch
Correct Answer: A
Reference:
https://aws.amazon.com/aws-cost-management/aws-cost-explorer/
"AWS Cost Explorer has an easy-to-use interface that lets you visualize, understand, and manage your AWS costs and usage over time."
upvoted 1 times
Which task can a user complete by using AWS Identity and Access Management (IAM)?
Correct Answer: D
Reference:
https://aws.amazon.com/iam/#:~:text=With%20AWS%20Identity%20and%20Access,to%20refine%20permissions%20across%20AWS
Explanation:
AWS Identity and Access Management (IAM) is a web service that enables Amazon Web Services (AWS) customers to manage access to
AWS services and resources securely. IAM enables you to create and manage AWS users and groups, and to grant permissions to access
AWS resources. With IAM, you can control who can access your AWS resources, and what actions they can perform on those resources.
IAM also enables you to set up access policies that grant permissions to AWS resources based on attributes such as user name, group
membership, or tags. By using IAM, you can grant permissions to applications that run on Amazon EC2 instances, and control which AWS
resources those applications can access.
upvoted 2 times
You can grant different permissions to different people for different resources. For example, you might allow some users complete access
to Amazon Elastic Compute Cloud (Amazon EC2), Amazon Simple Storage Service (Amazon S3), Amazon DynamoDB, Amazon Redshift, and
other AWS services. For other users, you can allow read-only access to just some S3 buckets, or permission to administer just some EC2
instances, or to access your billing information but nothing else.
https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html
upvoted 2 times
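The read-only pattern mentioned above can be made concrete with an identity-based policy document. The Version/Statement/Effect/Action/Resource grammar below is standard IAM; the bucket name is hypothetical. The sketch builds the document in Python so its shape can be checked:

```python
import json

# Hypothetical read-only policy for a single S3 bucket ("example-reports").
read_only_s3_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-reports",    # the bucket itself (ListBucket)
                "arn:aws:s3:::example-reports/*",  # objects in it (GetObject)
            ],
        }
    ],
}

print(json.dumps(read_only_s3_policy, indent=2))
```

Attaching a policy like this to a user or group grants read access to that one bucket and nothing else, which is the "read-only access to just some S3 buckets" scenario from the IAM documentation quote.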
Question #239 Topic 1
A company needs to generate reports for business intelligence and operational analytics on petabytes of semistructured and structured data.
These reports are produced from standard SQL queries on data that is in an Amazon S3 data lake.
Which AWS service provides the ability to analyze this data?
A. Amazon RDS
B. Amazon Neptune
C. Amazon DynamoDB
D. Amazon Redshift
Correct Answer: D
Reference:
https://aws.amazon.com/data-warehouse/
https://aws.amazon.com/redshift/
upvoted 2 times
Question #240 Topic 1
A system automatically recovers from failure when a company launches its workload on the AWS Cloud services platform.
Which pillar of the AWS Well-Architected Framework does this situation demonstrate?
A. Cost optimization
B. Operational excellence
C. Performance efficiency
D. Reliability
Correct Answer: D
Reference:
https://wa.aws.amazon.com/wellarchitected/2020-07-02T19-33-23/wat.pillar.reliability.en.html
Correct Answer: C
Reference:
https://aws.amazon.com/about-aws/global-infrastructure/localzones/faqs/
"AWS Local Zones allow you to use select AWS services, like compute and storage services, closer to more end-users, providing them very
low latency access to the applications running locally. AWS Local Zones are also connected to the parent region via Amazon’s redundant
and very high bandwidth private network, giving applications running in AWS Local Zones fast, secure, and seamless access to the rest of
AWS services."
https://aws.amazon.com/about-aws/global-infrastructure/localzones/faqs/
upvoted 1 times
Each Local Zone is connected to its parent AWS Region through a low-latency network link, allowing resources in the Local Zone to interact
with other AWS services in the parent Region seamlessly. This architecture enables customers to deploy low-latency applications in
specific geographic areas without sacrificing the ability to leverage the full set of AWS services available in the parent Region.
upvoted 1 times
https://aws.amazon.com/about-aws/global-infrastructure/localzones/faqs/
AWS Local Zones allow you to use select AWS services, like compute and storage services, closer to more end-users, providing them very
low latency access to the applications running locally. AWS Local Zones are also connected to the parent region via Amazon’s redundant
and very high bandwidth private network, giving applications running in AWS Local Zones fast, secure, and seamless access to the rest of
AWS services.
upvoted 2 times
A retail company is migrating its IT infrastructure applications from on premises to the AWS Cloud.
Which costs will the company eliminate with this migration? (Choose two.)
Correct Answer: AD
Reference:
https://docs.aws.amazon.com/whitepapers/latest/aws-overview/six-advantages-of-cloud-computing.html
D. Cost of physical server hardware: On-premises IT infrastructure requires purchasing and maintaining physical server hardware. With
AWS Cloud, the company can use virtualized instances provided by AWS, and the underlying hardware is managed by AWS. This reduces
the need for upfront hardware purchases and maintenance costs.
upvoted 2 times
A. Cost of data center operations: The company will no longer need to manage and maintain its own data centers, which will eliminate the
costs associated with power, cooling, physical security, and other data center operations.
D. Cost of physical server hardware: The company will no longer need to purchase and maintain its own physical server hardware.
Instead, it can use the compute resources provided by AWS, such as Amazon EC2 instances, which can be scaled up or down as needed.
E. Cost of network management: While migrating to the AWS Cloud may reduce the cost of network management, it is not a guaranteed
elimination of this cost. Network management costs can vary depending on the complexity of the network architecture, security
requirements, and other factors.
upvoted 2 times
Selected Answer: AD
Using AWS eliminates the need for data centers and physical servers.
upvoted 3 times
What is a benefit of moving to the AWS Cloud in terms of improving time to market?
Correct Answer: C
Reference:
https://docs.aws.amazon.com/whitepapers/latest/aws-overview/six-advantages-of-cloud-computing.html
Which of the following are characteristics of a serverless application that runs in the AWS Cloud? (Choose two.)
Correct Answer: CE
Reference:
https://aws.amazon.com/serverless/#:~:text=Serverless%20on%20AWS&text=AWS%20offers%20technologies%20for%20running,increase%20agility%20and%20optimize%20costs
E. The application can scale based on demand: Serverless applications can automatically scale based on the incoming workload. Services
like AWS Lambda and Amazon API Gateway automatically handle scaling to accommodate varying traffic patterns without any manual
intervention.
upvoted 1 times
A company has existing software licenses that it wants to bring to AWS, but the licensing model requires licensing physical cores.
How can the company meet this requirement in the AWS Cloud?
Correct Answer: B
Reference:
https://aws.amazon.com/ec2/dedicated-hosts/
"Amazon EC2 Dedicated Hosts allow you to use your eligible software licenses from vendors such as Microsoft and Oracle on Amazon EC2,
so that you get the flexibility and cost effectiveness of using your own licenses, but with the resiliency, simplicity and elasticity of AWS. An
Amazon EC2 Dedicated Host is a physical server fully dedicated for your use, so you can help address corporate compliance
requirements."
https://aws.amazon.com/ec2/dedicated-hosts/
upvoted 1 times
A company has a complex AWS architecture. The company needs assistance from a dedicated technical professional who can suggest strategies
regarding incidents, trade-offs, support, and risk management.
Which AWS Support plan will provide the required support?
Correct Answer: A
Reference:
https://aws.amazon.com/premiumsupport/plans/
AWS Enterprise Support provides a dedicated technical account manager (TAM) to help customers achieve their desired outcomes by
providing proactive and personalized guidance and support. The TAM is an experienced technical professional who works with customers
to understand their business and technical goals, and to provide guidance on AWS services and best practices. The TAM helps customers
to optimize their AWS infrastructure, troubleshoot issues, and manage incidents. The TAM also provides support for trade-offs, risk
management, and other business decisions related to the customer's use of AWS. With AWS Enterprise Support, customers have access to
24/7 technical support, as well as a range of AWS Trusted Advisor checks and guidance. Customers can also take advantage of AWS
Infrastructure Event Management (IEM) to receive proactive notifications about potential issues with their AWS resources.
upvoted 2 times
Selected Answer: B
TAM is the keyword.
upvoted 2 times
2. needs assistance from a dedicated technical professional who can suggest strategies regarding incidents, trade-offs, support, and risk
management.
https://aws.amazon.com/premiumsupport/plans/
upvoted 2 times
Which of the following is an advantage that the AWS Cloud provides to users?
Correct Answer: A
Reference:
https://docs.aws.amazon.com/whitepapers/latest/aws-overview/six-advantages-of-cloud-computing.html
"Stop guessing capacity – Eliminate guessing on your infrastructure capacity needs. When you make a capacity decision prior to deploying
an application, you often end up either sitting on expensive idle resources or dealing with limited capacity. With cloud computing, these
problems go away. You can access as much or as little capacity as you need, and scale up and down as required with only a few minutes’
notice."
https://docs.aws.amazon.com/whitepapers/latest/aws-overview/six-advantages-of-cloud-computing.html
upvoted 1 times
Which AWS services can use AWS WAF to protect against common web exploitations? (Choose two.)
A. Amazon Route 53
B. Amazon CloudFront
Correct Answer: BE
Reference:
https://aws.amazon.com/waf/faqs/#:~:text=AWS%20WAF%20can%20be%20deployed,content%20at%20the%20Edge%20locations
AWS Web Application Firewall (WAF) can be used to protect against common web exploitations such as SQL injection, cross-site scripting
(XSS), and other types of attacks. AWS WAF can be used in conjunction with Amazon CloudFront and Amazon API Gateway to protect your
web applications and APIs. AWS WAF allows you to create custom rules to block common attack patterns, and to monitor and
troubleshoot web requests by logging web requests that match or don't match the rules you've defined.
upvoted 6 times
https://aws.amazon.com/waf/faqs/#:~:text=What%20services%20does%20AWS%20WAF,content%20at%20the%20Edge%20locations.
AWS WAF can be deployed on Amazon CloudFront, the Application Load Balancer (ALB), Amazon API Gateway, and AWS AppSync. As part
of Amazon CloudFront it can be part of your Content Distribution Network (CDN) protecting your resources and content at the Edge
locations. As part of the Application Load Balancer it can protect your origin web servers running behind the ALBs. As part of Amazon API
Gateway, it can help secure and protect your REST APIs. As part of AWS AppSync, it can help secure and protect your GraphQL APIs.
upvoted 5 times
E. Amazon API Gateway: AWS WAF can also be used to protect web APIs exposed through Amazon API Gateway.
upvoted 1 times
AWS Web Application Firewall (WAF) is a service that you can use to protect your web applications from common web exploitations. You
can use AWS WAF to create rules that block, allow, or monitor (count) web requests based on conditions that you specify.
upvoted 1 times
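The block/allow/count behavior described above can be modeled in a few lines of Python. This is a simplified illustration of rule evaluation, not the actual AWS WAF API; the rate limit, paths, and injection pattern below are made up for the sketch:

```python
# Simplified model of WAF-style rule actions: BLOCK, ALLOW, or COUNT.
RATE_LIMIT = 100  # hypothetical request ceiling per source IP

def evaluate(request, request_counts):
    """Return the action for one request under three illustrative rules."""
    # Rule 1: block an obvious SQL injection pattern in the query string.
    if "' OR '1'='1" in request.get("query", ""):
        return "BLOCK"
    # Rule 2: count (monitor) requests to an admin path without blocking.
    if request.get("path", "").startswith("/admin"):
        return "COUNT"
    # Rule 3: rate-based blocking per source IP.
    ip = request.get("ip", "")
    request_counts[ip] = request_counts.get(ip, 0) + 1
    if request_counts[ip] > RATE_LIMIT:
        return "BLOCK"
    return "ALLOW"

counts = {}
print(evaluate({"path": "/index.html", "ip": "10.0.0.1"}, counts))       # ALLOW
print(evaluate({"query": "id=1' OR '1'='1", "ip": "10.0.0.2"}, counts))  # BLOCK
```

In real AWS WAF, rules like these are attached as a web ACL to CloudFront distributions, Application Load Balancers, API Gateway APIs, or AppSync APIs, rather than coded by hand.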
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Managing.Backups.html
upvoted 3 times
Which controls are shared under the AWS shared responsibility model? (Choose two.)
C. Configuration management
Correct Answer: AC
Reference:
https://aws.amazon.com/compliance/shared-responsibility-model/
Patch Management – AWS is responsible for patching and fixing flaws within the infrastructure, but customers are responsible for
patching their guest OS and applications.
Configuration Management – AWS maintains the configuration of its infrastructure devices, but a customer is responsible for configuring
their own guest operating systems, databases, and applications.
Awareness & Training - AWS trains AWS employees, but a customer must train their own employees.
upvoted 19 times
A. Awareness and training: AWS provides guidance and resources to customers for establishing a strong security posture, but it is the
customer's responsibility to ensure that their employees and contractors are trained and aware of security policies, procedures, and best
practices.
E. Service and communications protection or security: AWS is responsible for the security of the underlying infrastructure that supports its
cloud services, while the customer is responsible for securing the applications and data that are deployed on top of that infrastructure.
The customer must also protect the communication channels used to access and manage their AWS resources.
A. Amazon CloudFront
Correct Answer: B
Reference:
https://aws.amazon.com/global-accelerator/faqs/#:~:text=A%3A%20AWS%20Global%20Accelerator%20provides,AWS%20Regions%2C%20to%20improve%20redundancy
AWS Global Accelerator is a service that improves the availability and performance of global applications by routing traffic to the optimal
AWS Region for a given user. It uses anycast technology, which directs traffic to the nearest healthy endpoint, and it allows you to provision
a set of static IP addresses that act as a fixed entry point to applications hosted in one or more AWS Regions. This service is useful for a
company that manages global applications requiring static IP addresses and wants to improve availability and performance, as it provides
users with low-latency access to the applications even when the underlying infrastructure is distributed across multiple Regions.
upvoted 6 times
Selected Answer: B
Keyword: static IP addresses.
upvoted 3 times
A: AWS Global Accelerator provides you with a set of static IP addresses that can map to multiple application endpoints across AWS
Regions, to improve redundancy. If your application experiences failure in a specific AWS Region, AWS Global Accelerator automatically
detects the unhealthy endpoints and redirects traffic to the next optimal AWS Region, ensuring high availability and disaster recovery.
upvoted 3 times
A. Amazon Lightsail
C. AWS CloudFormation
D. AWS Batch
E. Amazon Inspector
Correct Answer: AD
Reference:
https://aws.amazon.com/products/compute/
Amazon Lightsail and AWS Batch are both compute services provided by AWS.
Amazon Lightsail is a simplified compute service that makes it easy for developers, small businesses, and startups to launch and manage
web applications, databases, and more. Lightsail includes everything needed to launch and run web applications, including a virtual
private server (VPS), storage, data transfer, DNS, and a static IP address.
AWS Batch is a service that makes it easy to run batch computing workloads on the AWS Cloud. It enables developers, scientists, and
engineers to easily and efficiently run hundreds of thousands of batch computing jobs on a managed cluster of Amazon EC2 instances.
AWS Systems Manager, AWS CloudFormation, and Amazon Inspector are not compute services, but rather management, provisioning,
and security assessment services respectively.
upvoted 11 times
https://docs.aws.amazon.com/whitepapers/latest/aws-overview/compute-services.html
upvoted 11 times
D. AWS Batch is a fully-managed service that enables developers, scientists, and engineers to easily and efficiently run batch computing
workloads of any scale on AWS. It dynamically provisions the optimal quantity and type of compute resources (e.g., CPU or memory
optimized instances) based on the volume and specific resource requirements of the batch jobs submitted.
upvoted 1 times
A company needs to report on events that involve the specific AWS services that the company uses.
Which AWS service or resource can the company use with Amazon CloudWatch to meet this requirement?
A. Amazon Inspector
Correct Answer: D
Reference:
https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-log-file-examples.html
AWS CloudTrail logs provide a record of all the AWS Management Console sign-in events and API calls made in the AWS account.
CloudTrail logs can be used to report on events that involve specific AWS services that a company uses. The company can use CloudTrail
logs with Amazon CloudWatch to monitor and alert on specific activities that occur within their AWS environment.
CloudWatch is a monitoring service for AWS resources and the applications you run on AWS. You can use CloudWatch to collect and track
metrics, collect and monitor log files, and set alarms.
Amazon Inspector, AWS Personal Health Dashboard, and AWS Trusted Advisor are not the right services to use for this specific use case.
Inspector is an automated security assessment service, Personal Health Dashboard is a service that provides a personalized view into the
performance and availability of the AWS resources, and Trusted Advisor is a service that provides real-time guidance to help optimize
performance, security, and cost.
upvoted 6 times
By using AWS CloudTrail logs with Amazon CloudWatch, the company can report on events that involve specific AWS services, track user
activities, and monitor changes to resources. This combination allows for comprehensive monitoring and reporting on various aspects of
the AWS environment.
upvoted 2 times
AWS CloudTrail serves to log user activities (such as stopping an instance or changing its configuration).
https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-log-file-examples.html
https://aws.amazon.com/premiumsupport/technology/aws-health-dashboard/#:~:text=The%20AWS%20Health%20Dashboard%20is,particular%20AWS%20account%20or%20organization.
upvoted 3 times
The question states: "the specific AWS services that the company uses."
"When you sign in to the AWS Health Dashboard, you have a personalized view of the AWS service status that powers your application.
Use the AWS Health Dashboard to learn about specific operational issues that affect your account. For example, if you receive an event for
a lost Amazon Elastic Block Store (EBS) volume associated with one of your Amazon EC2 instances, you can quickly view how your
resources are impacted, helping you to troubleshoot and remediate."
upvoted 1 times
You can also use CloudWatch Events with services that do not emit events and are not listed on this page. AWS CloudTrail is a service
that automatically records events such as AWS API calls. You can create CloudWatch Events rules that trigger on the information
captured by CloudTrail. For more information about CloudTrail, see What is AWS CloudTrail?
https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/EventTypes.html#events-for-services-not-listed
upvoted 6 times
Question #253 Topic 1
A company with AWS Enterprise Support needs help understanding its monthly AWS bill and wants to implement billing best practices.
Which AWS tool or resource is available to accomplish these goals?
A. Resource tagging
D. AWS Support
Correct Answer: B
Reference:
https://aws.amazon.com/premiumsupport/plans/enterprise/
AWS Enterprise Support customers have access to a Concierge Support team, a dedicated team of billing and account experts who provide
personalized guidance and best practices to help customers optimize their AWS usage and costs. This team can help customers understand
their monthly AWS bills, identify and resolve issues that contribute to unexpected costs, and implement billing best practices.
AWS Resource tagging is a way to organize AWS resources by attaching metadata, called tags, to the resources. This can help to categorize
and manage resources, but it's not the best option for understanding the monthly bill and implementing billing best practices.
AWS Support and the AWS Abuse team are different teams: the Support team handles technical issues, and the Abuse team handles abuse
and security issues.
upvoted 2 times
Which of the following is an AWS key-value database offering consistent single-digit millisecond performance at any scale?
A. Amazon RDS
B. Amazon Aurora
C. Amazon DynamoDB
D. Amazon Redshift
Correct Answer: C
Reference:
https://aws.amazon.com/dynamodb/
Amazon DynamoDB is a fully managed, highly scalable key-value database service that delivers consistent single-digit millisecond
performance at any scale. It allows you to store and retrieve any amount of data, and serves any level of request traffic. DynamoDB
automatically spreads the data and traffic for a table over a sufficient number of servers to handle the request capacity specified by the
customer and the amount of data stored, while maintaining consistent and fast performance.
Amazon RDS, Amazon Aurora, and Amazon Redshift are also AWS services, but they are not key-value databases; they are, respectively, a
managed relational database service, a MySQL- and PostgreSQL-compatible relational database, and a data warehousing service.
upvoted 7 times
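The key-value access pattern behind DynamoDB's consistent performance can be illustrated with a plain Python dict standing in for a table. The table shape and attribute names are hypothetical, and real access would go through an AWS SDK; the point is that reads and writes address one item directly by its key rather than scanning rows:

```python
# A dict keyed on the partition key models key-value lookups: each read or
# write addresses exactly one item by its key, in O(1) time, instead of
# scanning or joining tables as a relational query would.
table = {}  # partition key "user_id" -> item

def put_item(table, item):
    """Store an item under its partition key."""
    table[item["user_id"]] = item

def get_item(table, user_id):
    """Fetch one item by key, or None if absent."""
    return table.get(user_id)

put_item(table, {"user_id": "u123", "name": "Ana", "plays": 42})
print(get_item(table, "u123"))
# {'user_id': 'u123', 'name': 'Ana', 'plays': 42}
```

DynamoDB applies the same idea at scale: the partition key is hashed to pick a storage partition, so lookup cost stays flat as the table grows.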
A company is developing a new Node.js application. The application must have a scalable NoSQL database to meet increasing demand as the
popularity of the application grows.
Which AWS service will meet the requirements for the database?
B. Amazon ElastiCache
C. Amazon DynamoDB
D. Amazon Redshift
Correct Answer: C
Reference:
https://aws.amazon.com/dynamodb/
Amazon DynamoDB is a fully managed, highly scalable NoSQL database service that is well suited for building highly available, scalable,
and performant applications. It is a key-value and document database that supports both document and key-value data models. It allows
you to store and retrieve any amount of data, and serves any level of request traffic. DynamoDB automatically spreads the data and traffic
for a table over a sufficient number of servers to handle the request capacity specified by the customer and the amount of data stored,
while maintaining consistent and fast performance.
Amazon Aurora Serverless, Amazon ElastiCache, and Amazon Redshift do not meet the requirement; they are, respectively, an on-demand
relational database, a managed in-memory caching service, and a data warehousing service.
upvoted 4 times
A company wants to set up an entire development and continuous delivery toolchain for coding, building, testing, and deploying code.
Which AWS service will meet these requirements?
A. Amazon CodeGuru
B. AWS CodeStar
C. AWS CodeCommit
D. AWS CodeDeploy
Correct Answer: B
B. AWS CodeStar: a fully managed service that enables development teams to quickly create, build, test, and deploy applications on AWS.
It provides preconfigured project templates for different programming languages and application frameworks.
C. AWS CodeCommit: a fully managed version control service that lets developers store, manage, and collaborate on application source
code securely and scalably in the cloud.
D. AWS CodeDeploy: an automated deployment service that helps developers deploy applications quickly and reliably to various AWS
compute services, such as Amazon EC2, AWS Fargate, AWS Lambda, and others.
upvoted 1 times
Which service enables customers to audit API calls in their AWS accounts?
A. AWS CloudTrail
C. Amazon Inspector
D. AWS X-Ray
Correct Answer: A
Reference:
https://docs.aws.amazon.com/audit-manager/latest/userguide/logging-using-cloudtrail.html
AWS CloudTrail is a service that enables customers to audit API calls in their AWS accounts. It captures API calls made to AWS services,
including the identity of the API caller, the time of the API call, the source IP address of the API caller, the request parameters, and the
response elements returned by the service. CloudTrail logs are stored in an S3 bucket and can be analyzed using tools such as Amazon
Athena, Amazon EMR, and AWS Glue, to identify trends and troubleshoot issues.
AWS Trusted Advisor, Amazon Inspector, and AWS X-Ray are not services that enable customers to audit API calls. Trusted Advisor is a
service that provides real-time guidance to help optimize performance, security, and cost, Inspector is an automated security assessment
service and X-Ray is a service for analyzing and debugging distributed applications.
upvoted 4 times
Selected Answer: A
https://docs.aws.amazon.com/audit-manager/latest/userguide/logging-using-cloudtrail.html
upvoted 2 times
Selected Answer: A
With AWS CloudTrail, you can monitor your AWS deployments in the cloud by getting a history of AWS API calls for your account, including
API calls made by using the AWS Management Console, the AWS SDKs, the command line tools, and higher-level AWS services.
upvoted 2 times
A company is moving its office and must establish an encrypted connection to AWS.
Which AWS service will help meet this requirement?
A. AWS VPN
B. Amazon Route 53
D. Amazon Connect
Correct Answer: A
Reference:
https://aws.amazon.com/vpn/
AWS Virtual Private Network (VPN) is a service that enables customers to establish an encrypted connection to the AWS infrastructure. A
VPN connection can be used to connect the company's office network to the AWS cloud, allowing the company to access their data and
applications securely and with minimal latency. AWS VPN uses Internet Protocol Security (IPsec) for Site-to-Site VPN connections and TLS
for Client VPN connections, and can be used together with VPCs and AWS Direct Connect.
Amazon Route 53, Amazon API Gateway, and Amazon Connect are not the right services to use for this specific use case. Route 53 is a
highly available and scalable cloud Domain Name System (DNS) web service, API Gateway is a fully managed service that makes it easy for
developers to create, publish, maintain, monitor, and secure APIs at any scale, and Connect is a cloud-based contact center service.
upvoted 6 times
Use advanced managed security services such as Amazon Macie, which assists in discovering and securing personal data that is stored in
Amazon S3.
upvoted 1 times
Selected Answer: A
A simple VPN connection can fulfill this requirement.
upvoted 3 times
A company needs steady and predictable performance from its Amazon EC2 instances at the lowest possible cost. The company also needs the
ability to scale resources to ensure that it has the right resources available at the right time.
Which AWS service or resource will meet these requirements?
A. Amazon CloudWatch
C. AWS Batch
Correct Answer: D
Reference:
https://aws.amazon.com/autoscaling/
Amazon EC2 Auto Scaling is a service that automatically adjusts the number of Amazon Elastic Compute Cloud (EC2) instances in response
to changes in demand for the application. It enables the company to scale resources to ensure that it has the right resources available at
the right time, while keeping costs low. It also allows customers to maintain steady and predictable performance by automatically
increasing or decreasing the number of EC2 instances based on predefined policies and schedules.
Amazon CloudWatch is a monitoring service for AWS resources and the applications you run on AWS; it doesn't perform scaling itself.
Application Load Balancer is a service that automatically distributes incoming application traffic across multiple targets; it can help with
load distribution, but it's not the best option for this use case. AWS Batch makes it easy to run batch computing workloads on the AWS
Cloud; it is aimed at batch jobs rather than steady, predictable application workloads.
upvoted 3 times
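The scale-out/scale-in behavior described above can be sketched as a simple target-tracking rule. The target CPU figure and capacity bounds below are hypothetical, and real EC2 Auto Scaling policies are configured in the service rather than hand-coded; the sketch only shows the arithmetic behind keeping a metric near a target:

```python
import math

def desired_capacity(current, avg_cpu, target_cpu=50.0, min_cap=1, max_cap=10):
    """Target tracking: resize the fleet so average CPU lands near target_cpu."""
    if avg_cpu <= 0:
        return min_cap
    # Capacity needed for the same total load to average out to target_cpu,
    # clamped to the configured minimum and maximum fleet sizes.
    needed = math.ceil(current * avg_cpu / target_cpu)
    return max(min_cap, min(max_cap, needed))

print(desired_capacity(current=4, avg_cpu=80.0))  # high load -> scale out to 7
print(desired_capacity(current=4, avg_cpu=20.0))  # low load  -> scale in to 2
```

This is the core of why Auto Scaling keeps both performance steady (enough instances under load) and cost low (fewer instances when idle).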
Which action will provide documentation to help a company evaluate whether its use of the AWS Cloud is compliant with local regulatory
standards?
Correct Answer: B
Reference:
https://aws.amazon.com/artifact/
"AWS Artifact is your go-to, central resource for compliance-related information that matters to you. It provides on-demand access to
security and compliance reports from AWS and ISVs who sell their products on AWS Marketplace."
https://aws.amazon.com/artifact/
upvoted 1 times
A company wants a cost-effective option when running its applications in an Amazon EC2 instance for short time periods. The applications can be
interrupted.
Which EC2 instance type will meet these requirements?
A. Spot Instances
B. On-Demand Instances
C. Reserved Instances
D. Dedicated Instances
Correct Answer: A
Reference:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-purchasing-options.html
Spot Instances are a cost-effective option for running applications on Amazon EC2 for short periods of time. They are available
at a significant discount compared to On-Demand Instances, with hourly rates that can be up to 90% lower. Spot Instances can be
interrupted by Amazon EC2 with a two-minute notification when EC2 needs the capacity back, which makes them suitable for
applications that tolerate interruption.
On-Demand Instances can be launched at any time and are billed for what you use; they are the most flexible option but do not
offer the Spot discount.
Reserved Instances are cost-effective for customers who can commit to a specific instance type for a period of time; they are
not as cost-effective as Spot Instances for short-term runs.
Dedicated Instances run on dedicated, single-tenant hardware. They are more expensive than the other options and are not suited
to short-term, interruptible workloads.
upvoted 5 times
"Spot Instances are a cost-effective choice if you can be flexible about when your applications run and if your applications can be
interrupted. For example, Spot Instances are well-suited for data analysis, batch jobs, background processing, and optional tasks."
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-spot-instances.html
upvoted 1 times
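The cost argument above can be made concrete with a small comparison. The hourly prices below are made up for illustration; real Spot prices fluctuate by instance type, Region, and Availability Zone:

```python
def spot_savings(on_demand_hourly: float, spot_hourly: float, hours: float) -> dict:
    """Compare the cost of a short, interruptible job on On-Demand vs. Spot."""
    od = on_demand_hourly * hours
    sp = spot_hourly * hours
    return {"on_demand_cost": round(od, 2),
            "spot_cost": round(sp, 2),
            "savings_pct": round(100 * (od - sp) / od, 1)}

# Hypothetical prices for an 8-hour batch job.
result = spot_savings(on_demand_hourly=0.10, spot_hourly=0.03, hours=8)
```

At these placeholder prices the job costs $0.80 On-Demand versus $0.24 on Spot, a 70% saving, which is why interruptible short runs are the canonical Spot use case.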
A retail company is building a new mobile app. The company is evaluating whether to build the app at an on-premises data center or in the AWS
Cloud.
Which of the following are benefits of building this app in the AWS Cloud? (Choose two.)
E. Ability to pick the specific data centers that will host the application servers
Correct Answer: BD
Reference:
https://docs.aws.amazon.com/whitepapers/latest/aws-overview/six-advantages-of-cloud-computing.html
D. Flexibility to scale up in minutes as the application becomes popular: AWS offers the ability to scale up or down resources based on
demand, allowing you to easily handle increases in user traffic or demand for your application without the need to worry about hardware
limitations or capacity planning. This scalability helps ensure a smooth experience for users even during peak times.
upvoted 2 times
B - yes
D - yes
Option C is incorrect because AWS follows a shared responsibility model where the physical security of the infrastructure is the
responsibility of AWS, while the customer is responsible for securing the applications and data. Therefore, the customer does not have
complete control over the physical security of the infrastructure.
Option E is incorrect because while AWS allows customers to choose the region in which their application is hosted, they cannot pick
specific data centers.
upvoted 2 times
linux_admin 4 months, 3 weeks ago
Selected Answer: BD
E. Ability to pick the specific data centers that will host the application servers - This statement is not entirely accurate. The AWS Cloud
provides customers with multiple availability zones in different geographic regions, but customers do not have the ability to pick the
specific data centers that will host their application servers.
upvoted 2 times
Building the mobile app in the AWS Cloud provides the retail company with flexibility to scale up in minutes as the application
becomes popular: with AWS, the company can easily add more resources (such as compute and storage) to the app as needed, without
a large upfront capital expense.
The ability to pick the specific data centers that will host the application servers is not a cloud benefit: AWS lets customers
choose Regions and Availability Zones, but not individual data centers. Likewise, a large upfront capital expense and complete
control over the physical security of the infrastructure are characteristics of on-premises data centers, not benefits of
building the app in the cloud.
A developer is working on enhancing applications at AWS. The developer needs a service that can securely host Git-based code, repositories,
and version controls.
Which AWS service should the developer use?
A. AWS CodeStar
B. Amazon CodeGuru
C. AWS CodeCommit
D. AWS CodePipeline
Correct Answer: C
Reference:
https://docs.aws.amazon.com/codecommit/latest/userguide/welcome.html
"AWS CodeCommit is a secure, highly scalable, fully managed source control service that hosts private Git repositories."
https://aws.amazon.com/codecommit/
upvoted 1 times
AWS CodeCommit is a fully managed source control service that makes it easy for the developer to host Git-based code, repositories, and
version control securely. Because the service is fully managed, it automatically scales to meet the needs of the developer's
repositories and integrates with AWS services such as IAM for access control and CloudTrail for auditing. The developer can use
CodeCommit to store, manage, and track code changes for software development projects; it is especially useful for developers who
want a Git-based version control system without managing their own infrastructure.
upvoted 4 times
A. A broad set of global, cloud-based products that include compute, storage, and databases
B. A physical location around the world where data centers are clustered
C. One or more discrete data centers with redundant power, networking, and connectivity
D. A service that developers use to build applications that deliver latencies of single-digit milliseconds to users
Correct Answer: B
Reference:
https://aws.amazon.com/about-aws/global-infrastructure/regions_az/
An AWS (Amazon Web Services) Region is a physical location around the world where data centers are clustered. AWS operates in multiple
geographic regions around the world, each of which comprises multiple Availability Zones. Each region is completely independent and
designed to be isolated from the others. This means that each region is entirely self-contained, with its own infrastructure, security, and
availability zones. By using multiple regions, customers can choose the location that best meets their needs in terms of latency,
compliance, and other factors.
upvoted 1 times
An AWS Region is a physical location around the world where AWS clusters data centers. Each Region is designed to be isolated from
the other Regions. This allows customers to store data and run applications closer to their customers and end users, providing
better performance, lower latency, and compliance with data sovereignty and other regulatory requirements. AWS operates dozens of
Regions across the world, and each Region contains multiple Availability Zones, which are one or more discrete data centers with
redundant power, networking, and connectivity.
upvoted 4 times
Redes 8 months, 2 weeks ago
Selected Answer: B
https://aws.amazon.com/about-aws/global-infrastructure/regions_az/
AWS has the concept of a Region, which is a physical location around the world where we cluster data centers.
upvoted 1 times
https://docs.aws.amazon.com/whitepapers/latest/aws-overview/global-infrastructure.html
upvoted 1 times
Question #265 Topic 1
Which AWS benefit enables users to deploy cloud infrastructure that consists of multiple geographic regions connected by a network with low
latency, high throughput, and redundancy?
A. Economies of scale
B. Security
C. Elasticity
D. Global reach
Correct Answer: D
Reference:
https://docs.aws.amazon.com/whitepapers/latest/aws-overview/global-infrastructure.html
Global reach is the AWS benefit that enables users to deploy cloud infrastructure consisting of multiple geographic Regions
connected by a network with low latency, high throughput, and redundancy. It allows users to store data and run applications
closer to their customers and end users, providing better performance, lower latency, and compliance with data sovereignty and
other regulatory requirements. Customers can select the Region where their data will be stored and can also distribute their
applications across multiple Regions for high availability and disaster recovery.
Economies of scale, security, and elasticity are other benefits of using AWS, but they do not describe this capability.
Economies of scale allow customers to take advantage of the massive scale of AWS to reduce costs. Security provides customers
with a secure environment in which to run their workloads and protect their data. Elasticity allows customers to scale their
resources up and down to match the demand of their applications.
upvoted 2 times
JA2018 8 months, 2 weeks ago
Selected Answer: D
made sense
upvoted 1 times
Question #266 Topic 1
A company is considering a migration from on premises to the AWS Cloud. The company's IT team needs to offload support of the workload.
What should the IT team do to accomplish this goal?
A. Use AWS Managed Services to provision, run, and support the company infrastructure.
C. Use Amazon Elastic Container Service (Amazon ECS) on Amazon EC2 instances.
D. Overprovision compute capacity for seasonal events and traffic spikes to prevent downtime.
Correct Answer: A
Reference:
https://docs.aws.amazon.com/managedservices/latest/userguide/what-is-ams.html
The IT team should use AWS Managed Services (AMS) to provision, run, and support the company infrastructure. AMS operates AWS
infrastructure on a customer's behalf, providing a secure and compliant environment and offloading the operational work of
provisioning, running, and scaling that infrastructure. This allows the IT team to focus on higher-value tasks such as application
development while AWS handles the underlying operations. (This is distinct from individual managed services such as Amazon RDS or
Amazon DynamoDB, which offload management of a single component rather than of the workload as a whole.)
upvoted 5 times
Correct Answer: D
Reference:
https://aws.amazon.com/serverless/
One of the primary benefits of using AWS serverless computing is that the management of infrastructure is offloaded to AWS. This means
that the customer does not need to worry about provisioning or managing servers or other infrastructure resources. Instead, the focus
can be on building and deploying applications. AWS manages the underlying infrastructure, including servers, storage, and networking,
and provides automatic scaling and availability.
upvoted 2 times
A company plans to launch an application that will run in multiple locations within the United States. The company needs to identify the two AWS
Regions where the application can operate at the lowest price.
Which AWS service or feature should the company use to determine the Regions that offer the lowest price?
A. Cost Explorer
B. AWS Budgets
Correct Answer: D
The correct choice is AWS Pricing Calculator, which estimates costs before deployment; the other options help manage and monitor costs after workloads are already running in the AWS Cloud.
Reference:
https://calculator.aws/#/
https://calculator.aws/#/addService/ec2-enhancement
upvoted 1 times
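Once per-Region estimates have been gathered from AWS Pricing Calculator, picking the two lowest-price Regions is a simple comparison. The prices below are placeholders, not real quotes:

```python
def two_cheapest_regions(hourly_prices: dict) -> list:
    """Return the two Regions with the lowest estimated hourly price,
    as one would read the numbers off AWS Pricing Calculator."""
    return sorted(hourly_prices, key=hourly_prices.get)[:2]

# Placeholder per-Region prices; use AWS Pricing Calculator for real estimates.
prices = {"us-east-1": 0.096, "us-west-1": 0.112,
          "us-west-2": 0.096, "us-east-2": 0.092}
```

The calculator itself only produces the estimates; the comparison across Regions is the company's own step.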
Correct Answer: D
Reference:
https://aws.amazon.com/kms/features/
Encrypting data by using AWS Key Management Service (AWS KMS) is an approach that will enhance a user's security on AWS. AWS KMS is
a service that allows users to create and manage encryption keys used to encrypt and decrypt data. By using KMS to encrypt data, users
can secure data at rest and in transit, meet compliance requirements, and maintain control over their encryption keys.
Using Multi-AZ deployments with Amazon RDS, creating a hybrid architecture by using AWS Direct Connect, and monitoring application-
specific information with AWS X-Ray are all useful practices for availability, connectivity, and observability, but they do not
directly enhance the security of data the way encryption does.
upvoted 3 times
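The envelope-encryption pattern that AWS KMS uses (a fresh data key encrypts the payload, and a master key wraps the data key) can be illustrated with a deliberately toy cipher. The XOR "encryption" below is NOT real cryptography and is only there to keep the example self-contained; the key-wrapping structure is the point:

```python
import secrets

def xor_bytes(key: bytes, data: bytes) -> bytes:
    """XOR data with a repeating key. A toy cipher for illustration only;
    real KMS uses strong authenticated encryption, never this."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def envelope_encrypt(master_key: bytes, plaintext: bytes):
    """Envelope pattern: a fresh data key encrypts the payload, and the
    master key (held inside KMS in the real service) wraps the data key.
    Only the wrapped key and the ciphertext are stored."""
    data_key = secrets.token_bytes(16)
    ciphertext = xor_bytes(data_key, plaintext)
    wrapped_key = xor_bytes(master_key, data_key)
    return wrapped_key, ciphertext

def envelope_decrypt(master_key: bytes, wrapped_key: bytes, ciphertext: bytes) -> bytes:
    """Unwrap the data key with the master key, then decrypt the payload."""
    data_key = xor_bytes(master_key, wrapped_key)
    return xor_bytes(data_key, ciphertext)
```

The design point is that the master key never leaves KMS: applications store only the wrapped data key next to the ciphertext and call KMS to unwrap it when needed.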
Which AWS service or tool is associated with an Amazon EC2 instance and acts as a virtual firewall to control inbound and outbound traffic?
A. AWS WAF
B. AWS Shield
D. Security group
Correct Answer: D
Reference:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-security-groups.html
A security group is an AWS service or tool that is associated with an Amazon EC2 instance and acts as a virtual firewall to control inbound
and outbound traffic. Security groups enable you to specify which traffic is allowed to reach your instances, based on a set of rules that
you define. Security groups act at the instance level, not the subnet level, so each instance in a subnet in your VPC must be associated with
a security group. You can add or remove rules from a security group at any time, and you can associate multiple security groups with a
single instance.
AWS WAF is a web application firewall that helps protect web applications from common web exploits, and AWS Shield is a service
that provides DDoS protection for applications hosted on AWS. A network access control list (network ACL) also controls traffic
in and out of a VPC, but it acts at the subnet level, not at the instance level like security groups.
upvoted 4 times
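A security group's allow-rules-only, default-deny evaluation can be sketched as follows. The rule set is hypothetical, and statefulness (automatic allowance of return traffic for permitted connections) is deliberately not modeled:

```python
import ipaddress

def allows(rules, protocol: str, port: int, source_ip: str) -> bool:
    """Simplified inbound security-group check: allow rules only, default
    deny. Real security groups are also stateful, so return traffic for
    an allowed connection is permitted automatically (not modeled here)."""
    ip = ipaddress.ip_address(source_ip)
    for proto, from_port, to_port, cidr in rules:
        if (proto == protocol and from_port <= port <= to_port
                and ip in ipaddress.ip_network(cidr)):
            return True
    return False

# Hypothetical rules: HTTPS from anywhere, SSH only from one office range.
rules = [("tcp", 443, 443, "0.0.0.0/0"),
         ("tcp", 22, 22, "203.0.113.0/24")]
```

A network ACL differs in exactly the ways this sketch omits: it is evaluated at the subnet boundary, supports explicit deny rules, and is stateless, so return traffic must be allowed separately.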
Question #271 Topic 1
A company wants to migrate its on-premises Microsoft SQL Server database server to the AWS Cloud. The company has decided to use Amazon
EC2 instances to run this database.
Which of the following is the company responsible for managing, according to the AWS shared responsibility model?
A. EC2 hypervisor
Correct Answer: B
Reference:
https://aws.amazon.com/compliance/shared-responsibility-model/
According to the AWS shared responsibility model, when a company uses Amazon EC2 instances to run a Microsoft SQL Server database,
the company is responsible for managing the security patching of the guest operating system. This includes ensuring that the operating
system is updated with the latest security patches, and that the database software is configured securely.
AWS is responsible for the security of the EC2 hypervisor, network connectivity of the host server, and uptime service level agreement
(SLA) for the EC2 instances.
upvoted 2 times
A developer wants to deploy an application on a container-based service. The service must automatically provision and manage the backend
instances. The service must provision only the necessary resources.
Which AWS service will meet these requirements?
A. Amazon EC2
B. Amazon Lightsail
D. AWS Fargate
Correct Answer: D
Reference:
https://aws.amazon.com/fargate/
Fargate is a serverless compute engine for containers that allows you to run containers without having to manage the underlying
EC2 instances; the service automatically provisions and manages the resources required to run the containers. Fargate provisions
only the necessary resources, such as CPU and memory, for each container, so the developer does not have to over-allocate
resources and pay for capacity that goes unused.
Amazon EC2, Amazon Lightsail, and Amazon Elastic Kubernetes Service (Amazon EKS) do not by themselves provide this automatic
provisioning and management of backend instances. Amazon EC2 is a web service that provides resizable compute capacity in the
cloud. Amazon Lightsail is a fully managed service that makes it easy to launch and manage web applications and blogs. Amazon
EKS is a managed service that makes it easy to deploy, scale, and operate containerized applications using Kubernetes, but it
still requires node management unless it is itself paired with Fargate.
upvoted 15 times
Which tasks require use of the AWS account root user? (Choose two.)
Correct Answer: AE
Reference:
https://docs.aws.amazon.com/general/latest/gr/root-vs-iam.html
https://docs.aws.amazon.com/accounts/latest/reference/root-user-tasks.html
upvoted 1 times
E. Closing an AWS account: only the AWS account root user has the permissions to close an AWS account. Closure is effectively
permanent; once the post-closure period ends, the account and its resources, data, and configurations can no longer be
recovered. Before closing the account, the root user should ensure that all resources are backed up and that any necessary
permissions or access keys have been revoked. This can be done from the account settings in the AWS Management Console or
through AWS Support.
upvoted 3 times
B. AWS Outposts
C. Amazon S3
Correct Answer: A
Reference:
https://aws.amazon.com/sqs/
"Amazon SQS provides a simple and reliable way for customers to decouple and connect components (microservices) together using
queues."
upvoted 1 times
Which of the following describes some of the core functionality of Amazon S3?
A. Amazon S3 is a high-performance block storage service that is designed for use with Amazon EC2.
B. Amazon S3 is an object storage service that provides high-level performance, security, scalability, and data availability.
C. Amazon S3 is a fully managed, highly reliable, and scalable file storage system that is accessible over the industry-standard SMB protocol.
D. Amazon S3 is a scalable, fully managed elastic NFS for use with AWS Cloud services and on-premises resources.
Correct Answer: B
"Amazon Simple Storage Service (Amazon S3) is an object storage service offering industry-leading scalability, data availability, security,
and performance. Customers of all sizes and industries can store and protect any amount of data for virtually any use case, such as data
lakes, cloud-native applications, and mobile apps. With cost-effective storage classes and easy-to-use management features, you can
optimize costs, organize data, and configure fine-tuned access controls to meet specific business, organizational, and compliance
requirements."
https://aws.amazon.com/s3/
upvoted 1 times
Amazon Simple Storage Service (Amazon S3) is an object storage service offering industry-leading scalability, data availability, security,
and performance.
upvoted 1 times
How does consolidated billing help reduce costs for a company that has multiple AWS accounts?
A. It aggregates usage across accounts so that the company can reach volume discount thresholds sooner.
C. It provides a simplified billing invoice that the company can process more quickly than a standard invoice.
D. It gives AWS resellers the ability to bill their customers for usage.
Correct Answer: A
Reference:
https://aws.amazon.com/about-aws/whats-new/2010/02/09/announcing-consolidated-billing-for-aws-accounts/
upvoted 1 times
B - no
D - no
upvoted 1 times
Answer is A.
upvoted 3 times
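The volume-discount effect behind answer A can be shown with a toy tiered price schedule (the tier boundaries and prices below are made up): two accounts using 40 GB each pay more when billed separately than the same 80 GB billed once under consolidated billing.

```python
def tiered_cost(usage_gb: float, tiers) -> float:
    """Price usage against volume tiers given as (upper_bound_gb, price_per_gb)
    pairs in ascending order; the final tier's bound is float('inf')."""
    cost, prev = 0.0, 0.0
    for upper, price in tiers:
        band = min(usage_gb, upper) - prev
        if band <= 0:
            break
        cost += band * price
        prev = upper
    return round(cost, 2)

# Hypothetical tiers: first 50 GB at $0.09/GB, everything above at $0.05/GB.
TIERS = [(50, 0.09), (float("inf"), 0.05)]

# Two accounts at 40 GB each, billed separately vs. aggregated.
separate = tiered_cost(40, TIERS) + tiered_cost(40, TIERS)  # never reaches the cheap tier
combined = tiered_cost(80, TIERS)                           # 30 GB billed at the cheap rate
```

Separately, the accounts pay 2 × $3.60 = $7.20; aggregated, the same 80 GB costs $4.50 + $1.50 = $6.00. Reaching the discount threshold sooner is exactly what consolidated billing provides.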
A company wants to secure its consumer web application by using SSL/TLS to encrypt traffic.
Which AWS service can the company use to meet this goal?
A. AWS WAF
B. AWS Shield
C. Amazon VPC
Correct Answer: D
Reference:
https://aws.amazon.com/certificate-manager/
Use AWS Certificate Manager (ACM) to provision, manage, and deploy public and private SSL/TLS certificates for use with AWS services
and your internal connected resources. ACM removes the time-consuming manual process of purchasing, uploading, and renewing
SSL/TLS certificates.
To use SSL/TLS encryption, you first need to provision and assign the SSL/TLS certificates.
upvoted 11 times
"Use AWS Certificate Manager (ACM) to provision, manage, and deploy public and private SSL/TLS certificates for use with AWS services
and your internal connected resources. ACM removes the time-consuming manual process of purchasing, uploading, and renewing
SSL/TLS certificates."
https://aws.amazon.com/certificate-manager/
upvoted 1 times
D - yes
upvoted 1 times
Which of the following are advantages of moving to the AWS Cloud? (Choose two.)
B. AWS assumes all responsibility for the security of infrastructure and applications.
E. Users can move hardware from their data center to the AWS Cloud.
Correct Answer: CD
Reference:
https://docs.aws.amazon.com/whitepapers/latest/aws-overview/six-advantages-of-cloud-computing.html
Moving to the AWS Cloud provides increased speed and agility as users can provision and scale resources quickly, enabling faster
application deployment and reducing time-to-market.
AWS operates at a large scale, which allows them to provide cost-effective solutions. This enables users to benefit from significant cost
savings compared to running infrastructure on-premises.
upvoted 2 times
A company stores configuration files in an Amazon S3 bucket. These configuration files must be accessed by applications that are running on
Amazon EC2 instances.
According to AWS security best practices, how should the company grant permissions to allow the applications to access the S3 bucket?
B. Use the AWS access key ID and the EC2 secret access key.
Correct Answer: C
Reference:
https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2.html
For applications on Amazon EC2 or other AWS services to access Amazon S3 resources, they must include valid AWS credentials in their
AWS API requests. You should not store AWS credentials directly in the application or Amazon EC2 instance. These are long-term
credentials that are not automatically rotated and could have a significant business impact if they are compromised.
Instead, you should use an IAM role to manage temporary credentials for applications or services that need to access Amazon S3. When
you use a role, you don't have to distribute long-term credentials (such as a user name and password or access keys) to an Amazon EC2
instance or AWS service such as AWS Lambda. The role supplies temporary permissions that applications can use when they make calls to
other AWS resources.
upvoted 2 times
Nguyen25183 8 months, 3 weeks ago
Selected Answer: C
"Use IAM roles for applications and AWS services that require Amazon S3 access"
https://docs.aws.amazon.com/AmazonS3/latest/userguide/security-best-practices.html
upvoted 1 times
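The difference between long-term keys and role-supplied credentials can be sketched as a toy credential chain. The lookup order and field names here are illustrative, not the exact chain the AWS SDKs implement:

```python
def resolve_credentials(env: dict, has_instance_role: bool) -> dict:
    """Toy credential chain: static keys embedded in the environment are
    found first (discouraged), otherwise the EC2 instance role supplies
    temporary, automatically rotated credentials."""
    if "AWS_ACCESS_KEY_ID" in env and "AWS_SECRET_ACCESS_KEY" in env:
        return {"source": "static-keys", "rotated_automatically": False}
    if has_instance_role:
        return {"source": "instance-role", "rotated_automatically": True}
    raise RuntimeError("no credentials found")
```

An instance launched with a role and no hardcoded keys ends up using rotating temporary credentials, which is the best practice the correct answer describes: nothing long-lived exists on the instance to leak.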
Question #280 Topic 1
A company needs an AWS service that will continuously monitor the company's AWS account for suspicious activity. The service must have the
ability to initiate automated actions against threats that are identified in the security findings.
Which service will meet these requirements?
B. Amazon Detective
C. Amazon Inspector
D. Amazon GuardDuty
Correct Answer: D
Reference:
https://aws.amazon.com/guardduty/faqs/
"Amazon GuardDuty is a threat detection service that continuously monitors your AWS accounts and workloads for malicious activity and
delivers detailed security findings for visibility and remediation."
https://aws.amazon.com/guardduty/
upvoted 1 times
With GuardDuty, CloudWatch Events, and AWS Lambda, you have the flexibility to set up automated remediation actions based on a
security finding. For example, you can create a Lambda function to modify your AWS security group rules based on security findings. If
you receive a GuardDuty finding indicating one of your EC2 instances is being probed by a known malicious IP, you can address it through
a CloudWatch Events rule, initiating a Lambda function to automatically modify your security group rules and restrict access on that port.
upvoted 4 times
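The automated-remediation flow described above (GuardDuty finding, event rule, Lambda) hinges on a small decision step that can be sketched in isolation. The event shape and severity threshold below are simplified stand-ins, not the full GuardDuty/EventBridge schema:

```python
def plan_remediation(finding):
    """Choose an automated response to a GuardDuty-style finding.
    Only high-severity findings trigger automatic action; a port-scan
    (Recon) finding leads to tightening the instance's security group."""
    if finding.get("severity", 0) < 7:  # threshold is illustrative
        return None
    if finding.get("type", "").startswith("Recon:EC2"):
        return {"action": "restrict_security_group",
                "instance_id": finding["instance_id"],
                "deny_ip": finding["remote_ip"]}
    return {"action": "notify_security_team"}
```

In a real deployment this logic would sit inside the Lambda handler and be followed by EC2 API calls to modify the security group; keeping the decision pure, as here, makes it easy to test.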
Question #281 Topic 1
A company wants to analyze streaming user data and respond to customer queries in real time.
Which AWS service can meet these requirements?
A. Amazon QuickSight
B. Amazon Redshift
Correct Answer: C
Reference:
https://aws.amazon.com/kinesis/data-analytics/
https://aws.amazon.com/kinesis/data-streams/faqs/
upvoted 1 times
Amazon QuickSight is a very fast, easy-to-use, cloud-powered business analytics service that makes it easy for all employees within an
organization to build visualizations, perform ad-hoc analysis, and quickly get business insights from their data, anytime, on any device.
upvoted 1 times
Who can create and manage access keys for an AWS account root user?
Correct Answer: A
Reference:
https://docs.aws.amazon.com/IAM/latest/UserGuide/id_root-user.html
The AWS account root user has full access to all resources and services in the AWS account. Only the AWS account
owner can create and manage access keys for the root user, as the root user has the highest level of permissions within the account. It is not
recommended to use the root user for regular activities. Instead, create an IAM user with necessary permissions and use that user for
day-to-day activities.
upvoted 4 times
Which AWS service can help a company detect an outage of its website servers and redirect users to alternate servers?
A. Amazon CloudFront
B. Amazon GuardDuty
C. Amazon Route 53
Correct Answer: C
Reference:
https://aws.amazon.com/about-aws/whats-new/2013/02/11/announcing-dns-failover-for-route-53/
Amazon Route 53 is a highly available and scalable Domain Name System (DNS) web service. It can be used to detect an outage of a
company's website servers and redirect users to alternate servers through the use of health checks and failover routing policies. It can
also be used to route users to the optimal location for faster content delivery using geographic routing.
upvoted 1 times
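DNS failover in miniature: a health check that trips after a run of consecutive failures, and a routing decision that serves the secondary record while the primary is unhealthy. The threshold and hostnames are illustrative, not Route 53 defaults:

```python
def is_healthy(recent_checks: list, failure_threshold: int = 3) -> bool:
    """Mark an endpoint unhealthy after `failure_threshold` consecutive
    failed checks (the threshold here is illustrative)."""
    tail = recent_checks[-failure_threshold:]
    return not (len(tail) == failure_threshold and not any(tail))

def route(recent_checks: list, primary: str, secondary: str) -> str:
    """Failover routing in miniature: answer DNS queries with the primary
    endpoint while its health check passes, otherwise with the secondary."""
    return primary if is_healthy(recent_checks) else secondary
```

Requiring several consecutive failures before failing over, as sketched here, is what keeps a single transient timeout from flapping traffic between endpoints.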
A web application is hosted on AWS using an Elastic Load Balancer, multiple Amazon EC2 instances, and Amazon RDS.
Which security measures fall under the responsibility of AWS? (Choose two.)
D. Encrypting communication between the EC2 instances and the Elastic Load Balancer
E. Configuring a security group and a network access control list (NACL) for EC2 instances
Correct Answer: BC
Reference:
https://docs.aws.amazon.com/acm/latest/userguide/data-protection.html https://aws.amazon.com/compliance/shared-responsibility-model/
Amazon RDS is a managed database service; AWS takes care of its security patches.
By elimination I can determine that protection against IP spoofing and packet sniffing is their responsibility.
All of this agrees with the general AWS shared responsibility model. AWS has responsibility for managed services and network
infrastructure in this case.
upvoted 18 times
"Periodically, Amazon RDS performs maintenance on Amazon RDS resources. Maintenance most often involves updates to the DB
instance's underlying hardware, underlying operating system (OS), or database engine version. Updates to the operating system most
often occur for security issues and should be done as soon as possible.
Some maintenance items require that Amazon RDS take your DB instance offline for a short time. Maintenance items that require a
resource to be offline include required operating system or database patching. Required patching is automatically scheduled only for
patches that are related to security and instance reliability. Such patching occurs infrequently (typically once every few months) and
seldom requires more than a fraction of your maintenance window."
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_UpgradeDBInstance.Maintenance.html
upvoted 1 times
C is not correct:
RDS is a fully managed service, although customers are responsible for patch-management settings. Security of their own
application and database is the responsibility of customers.
upvoted 1 times
N9 8 months ago
Too heavy for a beginner
upvoted 1 times
E: Configuring a security group and a network access control list (NACL) for EC2 instances: this is clearly a customer responsibility.
So in summary: B and C
upvoted 1 times
awsawsmaster 1 year, 9 months ago
B, D !
upvoted 1 times
Which of the following is an AWS Well-Architected Framework design principle for operational excellence in the AWS Cloud?
A. Go global in minutes.
Correct Answer: B
Reference:
https://aws.amazon.com/architecture/well-architected/
"Make frequent, small, reversible changes: Design workloads to allow components to be updated regularly. Make changes in small
increments that can be reversed if they fail (without affecting customers when possible)."
https://wa.aws.amazon.com/wellarchitected/2020-07-02T19-33-23/wat.pillar.operationalExcellence.en.html
upvoted 1 times
Which AWS service provides intelligent recommendations to improve code quality and identify an application's most expensive lines of code?
A. Amazon CodeGuru
B. AWS CodeStar
C. AWS CodeCommit
D. AWS CodeDeploy
Correct Answer: A
Reference:
https://aws.amazon.com/codeguru/
"CodeGuru has two components: Amazon CodeGuru Security and Amazon CodeGuru Profiler. CodeGuru Security is a machine learning
(ML) and program analysis-based tool that finds security vulnerabilities in your Java, Python, and JavaScript code. CodeGuru Security also
scans for hardcoded credentials. CodeGuru Profiler optimizes performance for applications running in production and identifies the most
expensive lines of code, reducing operational costs significantly."
https://aws.amazon.com/codeguru/faqs/
upvoted 1 times
Amazon CodeGuru is a developer tool that provides intelligent recommendations to improve code quality and identify an application’s
most expensive lines of code. - https://aws.amazon.com/codeguru/
upvoted 4 times
Question #287 Topic 1
A company wants to expand from one AWS Region into a second AWS Region.
What does the company need to do to expand into the second Region?
Correct Answer: C
Reference:
https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-plan-region.html
Expanding into a second AWS Region requires creating new resources in the target Region. This involves deploying the application and its
related resources, such as EC2 instances, databases, load balancers, and other services in the new Region. It is important to ensure that
the necessary security and compliance measures are in place before deploying the application in the new Region. The company can use
tools like AWS CloudFormation or AWS Elastic Beanstalk to automate the process of deploying resources to the new Region.
upvoted 1 times
Expanding into a second AWS Region involves deploying resources in that Region, such as Amazon Elastic Compute Cloud (EC2) instances,
Amazon Simple Storage Service (S3) buckets, and Amazon Relational Database Service (RDS) instances. This can be done using the AWS
Management Console, the AWS Command Line Interface (CLI), or programmatically using the AWS SDKs or APIs. The company may also
want to consider setting up a Virtual Private Cloud (VPC) in the second Region and configuring VPC peering or VPN connections to connect
the two VPCs. There is no need to contact an AWS account manager to sign a new contract or move an availability zone, and there is no
need to download the AWS Management Console for the second Region.
upvoted 3 times
Which AWS service provides storage that can be mounted across multiple Amazon EC2 instances?
A. Amazon WorkSpaces
Correct Answer: B
Reference:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Storage.html
Amazon Elastic File System (EFS) is a fully managed, elastic, NFS file system that can be mounted across multiple Amazon EC2 instances. It
allows you to create and configure file systems that are accessible to multiple instances, making it easy to share data across instances and
applications. With EFS, you can scale your storage up or down as needed, and pay only for the storage you use. EFS also provides a high-
availability storage service that automatically replicates your data across multiple availability zones, ensuring that your data is highly
available and durable.
upvoted 4 times
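To make the "mounted across multiple instances" point concrete, here is a minimal sketch of how an EFS mount command is typically assembled. The file system ID, Region, and mount point below are hypothetical; the mount target DNS name follows the documented `<fs-id>.efs.<region>.amazonaws.com` pattern.

```python
# Sketch: the same EFS file system can be NFS-mounted on many EC2 instances
# at once. fs-12345678 and /mnt/efs are hypothetical examples.
def efs_mount_command(fs_id: str, region: str, mount_point: str = "/mnt/efs") -> str:
    """Build the NFSv4.1 mount command for an EFS file system."""
    dns = f"{fs_id}.efs.{region}.amazonaws.com"
    return f"sudo mount -t nfs4 -o nfsvers=4.1 {dns}:/ {mount_point}"

print(efs_mount_command("fs-12345678", "us-east-1"))
```

Running the same command on each instance in the file system's VPC gives them all a shared view of the same files.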
A company needs to deploy applications in the AWS Cloud as quickly as possible. The company also needs to minimize the complexity that is
related to the management of AWS resources.
Which AWS service should the company use to meet these requirements?
A. AWS Config
B. AWS Elastic Beanstalk
C. Amazon EC2
D. Amazon Personalize
Correct Answer: B
Reference:
https://docs.aws.amazon.com/elastic-beanstalk/index.html
A company has a set of databases that are stored on premises. The company wants to bring its existing Microsoft SQL Server licenses when the
company moves the databases to run on Amazon EC2 instances.
Which EC2 instance purchasing option should the company use to meet these requirements?
A. Dedicated Instances
B. Reserved Instances
C. Dedicated Hosts
D. Spot Instances
Correct Answer: C
Reference:
https://aws.amazon.com/windows/resources/licensing/
While Dedicated Instances are extremely valuable from a compliance perspective, Dedicated Hosts also give you the visibility into the
physical host that is required for a Bring Your Own License (BYOL) model — i.e., if you want to use your own Windows Server, SQL Server,
SUSE, or RHEL licenses that are provided on a CPU core basis.
upvoted 12 times
A dedicated host is a physical EC2 server that is fully dedicated to your use, and on which you can run one or more Amazon EC2 instances.
With dedicated hosts, you can use your existing Microsoft SQL Server licenses on Amazon EC2 instances. This allows you to bring your
existing licenses and use them on EC2 instances rather than purchasing new ones. In addition, Dedicated Hosts offer added security and help
meet compliance and regulatory requirements.
It is worth noting that Reserved Instances and Dedicated Instances can also look viable in this scenario. However, a Dedicated Host provides
more visibility into, and control over, the underlying physical server, which is what a BYOL model for per-socket or per-core licenses
requires.
upvoted 6 times
By using Dedicated Hosts, you can leverage your existing Microsoft SQL Server licenses and maintain compliance with licensing
requirements while running your databases on EC2 instances in the AWS Cloud.
upvoted 1 times
https://aws.amazon.com/windows/resources/licensing/
Your existing licenses may be used on AWS with Amazon EC2 Dedicated Hosts, Amazon EC2 Dedicated Instances, or EC2 instances with
default tenancy using Microsoft License Mobility through Software Assurance.
upvoted 1 times
Microsoft SQL Server with License Mobility through Software Assurance, and Windows Virtual Desktop Access (VDA) licenses, can be
used with Dedicated Instances.
upvoted 1 times
Which of the following is a way to use Amazon EC2 Auto Scaling groups to scale capacity in the AWS Cloud?
Correct Answer: A
Reference:
https://aws.amazon.com/ec2/autoscaling/faqs/
Amazon EC2 Auto Scaling groups allow you to automatically scale the number of Amazon Elastic Compute Cloud (EC2) instances in your
application in or out in response to changes in demand. This helps ensure that you have the correct number of instances running to handle
your application's traffic. Auto Scaling can use Amazon CloudWatch alarms to determine when to change the number of instances, and it
can also scale on a schedule. You do this by creating scaling policies that increase or decrease the number of instances based on
CloudWatch metrics (including custom metrics) or a schedule.
Note that Auto Scaling groups scale capacity horizontally by changing the number of instances; they do not resize individual instances.
upvoted 4 times
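The policy-based scaling described above can be sketched as a target tracking configuration. The structure below mirrors the shape of the parameters the `put-scaling-policy` API accepts; the policy name and target value are hypothetical.

```python
import json

# Sketch: a target tracking scaling policy that keeps the group's average
# CPU utilization near a target. Values are illustrative, not prescriptive.
policy = {
    "PolicyName": "cpu50-target-tracking",  # hypothetical name
    "PolicyType": "TargetTrackingScaling",
    "TargetTrackingConfiguration": {
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,  # add instances above ~50% CPU, remove below
    },
}
print(json.dumps(policy, indent=2))
```

With a policy like this attached, the Auto Scaling group launches or terminates instances on its own to hold the metric near the target.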
A company discovered unauthorized access to resources in its on-premises data center. Upon investigation, the company found that the requests
originated from a resource hosted on AWS.
Which AWS team should the company contact to report this issue?
Correct Answer: C
Reference:
https://aws.amazon.com/premiumsupport/knowledge-center/report-aws-abuse/
Which of the following are aspects of the AWS shared responsibility model? (Choose two.)
B. For Amazon S3, AWS operates the infrastructure layer, the operating systems, and the platforms.
D. AWS is responsible for training the customer's employees on AWS products and services.
E. For Amazon EC2, AWS is responsible for maintaining the guest operating system.
Correct Answer: BC
Reference:
https://aws.amazon.com/compliance/shared-responsibility-model/
A company needs real-time guidance to follow AWS best practices to save money, improve system performance, and close security gaps.
A. Amazon GuardDuty
B. AWS Trusted Advisor
Correct Answer: B
https://aws.amazon.com/premiumsupport/technology/trusted-advisor/
upvoted 3 times
Question #295 Topic 1
A company wants to organize its users so that the company can grant permissions to the users as a group.
Which AWS service or tool can the company use to meet this requirement?
A. Security groups
B. AWS Identity and Access Management (IAM)
C. Resource groups
Correct Answer: B
AWS Identity and Access Management (IAM) is a service that enables the company to control and manage access to AWS resources. With
IAM, the company can create users, organize them into groups, and attach permissions policies to those groups. Every user in a group
inherits the group's permissions, so the company can grant access to a set of users at once instead of managing permissions user by user,
which simplifies access management.
upvoted 2 times
"An account administrator can control access to AWS resources by attaching permissions policies to IAM identities (users, groups, and
roles)."
upvoted 3 times
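Attaching a permissions policy to a group, as the quote describes, looks roughly like the sketch below. The group's policy is a JSON document; the bucket name and actions here are hypothetical examples.

```python
import json

# Sketch: an identity-based policy that could be attached to an IAM group.
# Every user in the group would receive these permissions.
# "example-bucket" is a hypothetical resource.
group_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-bucket",
                "arn:aws:s3:::example-bucket/*",
            ],
        }
    ],
}
print(json.dumps(group_policy))
```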
A company runs applications that process credit card information. Auditors have asked if the AWS environment has changed since the previous
audit. If the AWS environment has changed, the auditors want to know how it has changed.
A. AWS Artifact
C. AWS Config
D. AWS CloudTrail
Correct Answer: CD
AWS Config is a fully managed service that provides you with an AWS resource inventory, configuration history, and configuration change
notifications to enable security and governance.
upvoted 7 times
AWS CloudTrail: AWS CloudTrail is a service that records API calls and events for your AWS account. It provides a history of AWS API calls
made by or on behalf of your account and delivers log files to an Amazon S3 bucket. CloudTrail allows you to track who made the API call,
when it was made, which resources were accessed, and the source IP address of the requester.
Both AWS Config and AWS CloudTrail can be useful for providing information about changes to your AWS environment, helping you
understand how your resources have changed since the previous audit.
upvoted 1 times
AWS CloudTrail is a service that records all API calls made in an AWS account and stores the information in an Amazon S3 bucket. This
information can be used for security analysis, resource change tracking, and compliance auditing.
upvoted 1 times
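The "who, when, and what" that CloudTrail records for auditors can be seen in the shape of an event record. The record below is a trimmed, hypothetical example with the fields named in the comments above.

```python
import json

# Sketch: the audit-relevant fields of a CloudTrail event record
# (trimmed, hypothetical example).
record = json.loads("""{
  "eventTime": "2023-01-15T12:00:00Z",
  "eventSource": "ec2.amazonaws.com",
  "eventName": "RunInstances",
  "sourceIPAddress": "203.0.113.10",
  "userIdentity": {"type": "IAMUser", "userName": "alice"}
}""")

# Who made the call, when it was made, and what was done:
summary = (record["userIdentity"]["userName"], record["eventTime"], record["eventName"])
print(summary)
```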
A company wants to use a template to reliably provision, manage, and update its infrastructure in the AWS Cloud.
A. AWS Lambda
B. AWS CloudFormation
C. AWS Fargate
D. AWS CodeDeploy
Correct Answer: B
Using AWS CloudFormation, you can launch complex infrastructures with a single click, update existing stacks, and rollback to previous
versions if needed. It helps you maintain consistency, reduce human errors, and automate the process of managing AWS resources,
making it a reliable way to provision, manage, and update your infrastructure in the AWS Cloud.
upvoted 1 times
https://aws.amazon.com/cloudformation/resources/templates/
upvoted 3 times
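The template-driven provisioning described above can be sketched with a minimal example. The template below declares a single S3 bucket; the logical resource name is hypothetical, but the overall shape follows the CloudFormation template format.

```python
import json

# Sketch: a minimal CloudFormation template describing one S3 bucket.
# Deploying the same template as different stacks reproduces the same
# infrastructure reliably. "AppBucket" is a hypothetical logical name.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "AppBucket": {
            "Type": "AWS::S3::Bucket",
            "Properties": {"VersioningConfiguration": {"Status": "Enabled"}},
        }
    },
}
print(json.dumps(template, indent=2))
```

Because the template, not a sequence of manual steps, is the source of truth, updates and rollbacks become predictable stack operations.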
Question #298 Topic 1
A company is reviewing the current costs of running its own infrastructure on premises. The company wants to compare these on-premises costs
to the costs of running infrastructure in the AWS Cloud.
Correct Answer: D
The tool provides a detailed cost comparison, taking into account various factors such as hardware, software, data center costs, and AWS
service costs. It helps the company understand the Total Cost of Ownership (TCO) of their current infrastructure and how it compares to
the TCO of running the same workload on AWS. This allows the company to evaluate the financial benefits of migration and make an
informed decision about moving its infrastructure to the AWS Cloud.
upvoted 1 times
A company needs a low-code, visual workflow service that developers can use to build distributed applications.
A. AWS Step Functions
B. AWS Config
C. AWS Lambda
D. Amazon CloudWatch
Correct Answer: A
Step Functions supports a wide range of AWS services, and it can be used to create complex workflows that involve multiple AWS
resources and microservices. It simplifies the process of creating distributed applications and makes it easier to manage the flow of tasks
and actions.
upvoted 1 times
https://aws.amazon.com/step-functions/
upvoted 2 times
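A Step Functions workflow is defined declaratively in Amazon States Language. The sketch below shows a tiny two-state definition; the state names and Lambda ARNs are hypothetical.

```python
import json

# Sketch: a minimal Amazon States Language definition with two sequential
# Task states. State names and function ARNs are hypothetical.
definition = {
    "StartAt": "ValidateOrder",
    "States": {
        "ValidateOrder": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:validate",
            "Next": "ShipOrder",
        },
        "ShipOrder": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:ship",
            "End": True,
        },
    },
}
print(json.dumps(definition))
```

The visual workflow editor renders exactly this JSON as a flowchart, which is what makes the service "low-code".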
Question #300 Topic 1
A company wants to accelerate migration from its data center to the AWS Cloud.
Which combination of AWS services should the company use to meet this requirement? (Choose two.)
A. Amazon Connect
B. AWS Direct Connect
C. AWS Server Migration Service (AWS SMS)
D. Amazon Route 53
E. AWS Organizations
Correct Answer: BC
C. AWS Server Migration Service (AWS SMS): AWS SMS is a service that simplifies and automates the process of migrating on-premises
virtual machines (VMs) to Amazon EC2 instances in the AWS Cloud. It allows the company to quickly and efficiently migrate their existing
virtualized workloads to AWS without the need for manual intervention.
upvoted 1 times
AWS Direct Connect is a service that enables the company to establish a dedicated network connection from its data center to AWS. This
connection can be used to transfer data between the company's data center and the AWS Cloud, providing a more reliable, lower-latency
way to move large amounts of data.
AWS Server Migration Service (SMS) is a service that automates the process of migrating on-premises servers to the AWS Cloud. It can
help the company quickly and efficiently migrate large numbers of servers with minimal downtime. SMS automates the replication of
entire on-premises server volumes to the cloud and also lets you schedule migrations and monitor their progress.
By using these two services together, the company can accelerate its migration to the AWS Cloud, by establishing a dedicated, high-speed
connection between its data center and the AWS Cloud and by automating the process of migrating servers to the AWS Cloud.
upvoted 4 times
Selected Answer: BC
BC Agreed
upvoted 2 times