Shubham Aws
SUBMITTED BY: Shubham Sharma (EN21CS303053)
SUBMITTED TO: Prof. Ranjit Osari
Aug-Dec 2024
Report Approval
Declaration
Shubham Sharma
15/11/2024
Certificates
Acknowledgements
It is thanks to their help and support that we were able to complete this design and
technical report. Without their support, this report would not have been possible.
Shubham Sharma
EN21CS303053
B.Tech. IV Year
Department of Computer Science & Engineering
Faculty of Engineering
Medi-Caps University, Indore
Table of Contents
1.2 Abbreviations 10
List of Figures
Abbreviations
VM Virtual Machine
Chapter-1
Cloud computing is the on-demand delivery of compute power, database, storage, applications,
and other IT resources via the internet with pay-as-you-go pricing. These resources run on server
computers that are located in large data centres in different locations around the world. When
you use a cloud service provider like AWS, that service provider owns the computers that you
are using. These resources can be used together like building blocks to build solutions that help
meet business goals and satisfy technology requirements.
Rather than maintaining hardware and software on-site, users can access these resources remotely
through cloud providers like Amazon Web Services, Microsoft Azure, and Google Cloud. This
approach enables businesses and individuals to harness powerful computing capabilities on an
as-needed basis, without investing in extensive infrastructure. Cloud computing has become
popular due to its flexibility and cost-effectiveness, allowing users to focus on their core activities
rather than on managing IT resources.
One of the defining features of cloud computing is its on-demand self-service, where users can
access computing power, storage, and other resources whenever they need them. This is
combined with broad network access, meaning resources can be accessed from any internet-
enabled device, such as laptops, tablets, or smartphones, providing a highly accessible, versatile
solution. Additionally, cloud providers use resource pooling, which allows them to allocate
resources efficiently across multiple users, adapting dynamically to demand changes. This
flexibility extends to scalability and elasticity, so resources can be scaled up or down instantly,
making it ideal for businesses with variable workloads.
There are three main cloud service models. Each model represents a different part of the cloud
computing stack and gives you a different level of control over your IT resources:
Infrastructure as a service (IaaS): Services in this category are the basic building blocks for
cloud IT and typically provide you with access to networking features, computers (virtual or
on dedicated hardware), and data storage space. IaaS provides you with the highest level of
flexibility and management control over your IT resources. It is the most similar to existing
IT resources that many IT departments and developers are familiar with today.
Platform as a service (PaaS): PaaS removes the need for you to manage underlying
infrastructure (usually hardware and operating systems), and allows you to focus on the
deployment and management of your applications. This helps you be more efficient as you do
not need to worry about resource procurement, capacity planning, software maintenance,
patching, or any of the other undifferentiated heavy lifting involved in running your
application.
Software as a service (SaaS): SaaS provides you with a complete product that is run and
managed by the service provider. In most cases, people referring to SaaS are referring to end-
user applications (such as web-based email). With a SaaS offering, you do not have to think
about how the service is maintained or how the underlying infrastructure is managed. You
only need to think about how you will use that particular software.
Not all clouds are the same and no single type of cloud computing is right for everyone. Several
different models, types, and services have evolved to help offer the right solution for your needs.
First, you need to determine the type of cloud deployment, or cloud computing architecture, that
your cloud services will be implemented on. There are three different ways to deploy cloud
services:
Public Cloud:
In a public cloud model, computing resources are owned and managed by a third-party cloud
service provider and shared among multiple users over the internet. Organizations use the
public cloud to access scalable resources without having to manage physical hardware, paying
only for what they use.
Example: Amazon Web Services (AWS) offers a range of public cloud services, from virtual
servers (EC2) to storage (S3). Companies like Netflix use AWS to deliver streaming content
globally, scaling resources up or down as demand fluctuates.
Private Cloud:
A private cloud is a dedicated cloud infrastructure used exclusively by one organization. It
can be hosted on-premises or managed by an external provider, offering greater control,
security, and customization to meet specific business needs. Private clouds are often used by
organizations with strict regulatory or data privacy requirements.
Example: VMware Cloud on AWS, IBM Cloud Private, etc. Banks and healthcare
organizations, such as Kaiser Permanente, often use private clouds to ensure
sensitive patient data remains secure. Kaiser Permanente's private cloud infrastructure enables
it to meet strict health regulations and maintain tight control over data access.
Hybrid Cloud:
A hybrid cloud combines both public and private clouds, allowing data and applications to
move between them. This model provides greater flexibility by leveraging the scalability of a
public cloud for non-sensitive workloads while keeping critical workloads on a private cloud
for security.
Example: Azure Stack, Google Anthos, AWS Outposts, etc. Many retail companies, like
Target, use hybrid clouds to manage their online and in-store data. For instance, they may store
customer shopping data on a private cloud to maintain privacy but use a public cloud for
seasonal sales, which requires scalable resources to handle higher traffic.
Community Cloud:
A community cloud is shared by multiple organizations with similar needs or regulatory
requirements, such as government agencies or healthcare institutions. The infrastructure is
jointly used and managed by the participating organizations.
Example: Government or university community clouds, some healthcare-specific clouds.
Compute:
These services provide scalable computing capacity, enabling users to run applications,
manage virtual servers, and scale resources as needed. Some of the compute services are:
Amazon EC2 (Elastic Compute Cloud): Virtual servers to run applications.
AWS Lambda: Serverless computing to run code in response to events without
provisioning servers.
Elastic Beanstalk: A platform to deploy and scale web applications and services.
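To illustrate the serverless model behind AWS Lambda, the sketch below shows a minimal Lambda-style handler in Python. The event payload shape (a "name" key) is a hypothetical example; real event formats depend on the configured trigger (API Gateway, S3, etc.).

```python
import json

def lambda_handler(event, context):
    """Minimal Lambda-style handler: returns a greeting for the given name.

    The 'name' key is an illustrative payload; real events depend on the
    trigger that invokes the function.
    """
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Local invocation (no AWS account needed); context is unused here.
result = lambda_handler({"name": "AWS"}, None)
print(result["body"])  # {"message": "Hello, AWS!"}
```

The same function, deployed to Lambda, would be invoked automatically by its configured trigger, with AWS provisioning and scaling the underlying compute.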
Storage:
AWS offers various storage solutions that range from object storage to file and archival
storage. Some of the services are:
Amazon S3 (Simple Storage Service): Object storage for storing and retrieving large
amounts of data.
Amazon EBS (Elastic Block Store): Block storage designed for use with EC2.
Amazon Glacier: Low-cost archival storage for long-term backup.
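As a concrete sketch of how these storage tiers fit together, the dictionary below follows the shape of an S3 lifecycle configuration that moves data from S3 to Glacier-class storage. The rule ID, prefix, and day counts are illustrative values, not recommendations.

```python
# Illustrative S3 lifecycle configuration: transition objects under the
# "backups/" prefix to the GLACIER storage class after 90 days, and
# delete them after 365 days. All names and day counts are example values.
lifecycle_configuration = {
    "Rules": [
        {
            "ID": "archive-old-backups",
            "Filter": {"Prefix": "backups/"},
            "Status": "Enabled",
            "Transitions": [
                {"Days": 90, "StorageClass": "GLACIER"}
            ],
            "Expiration": {"Days": 365},
        }
    ]
}

# With boto3, a dict like this would be passed to
# s3.put_bucket_lifecycle_configuration(Bucket=..., LifecycleConfiguration=...).
```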
Database:
AWS provides managed databases, offering options for SQL, NoSQL, in-memory, and
data warehousing needs.
Amazon RDS (Relational Database Service): Managed relational database services
for MySQL, PostgreSQL, Oracle, and more.
Amazon DynamoDB: A fully managed NoSQL database service.
Amazon Redshift: A managed data warehouse for large-scale data analytics.
Machine Learning:
Amazon Rekognition: Image and video analysis service.
Amazon Comprehend: Natural language processing for sentiment analysis and entity
recognition.
Analytics:
Analytics services help users process and analyze data to gain insights and make data-
driven decisions.
Amazon EMR (Elastic MapReduce): Managed Hadoop framework for big data
processing.
Amazon Kinesis: Real-time data streaming and analytics.
AWS Glue: Managed ETL (Extract, Transform, Load) service for data preparation
and transformation.
Developer Tools:
Developer tools enable rapid and collaborative application development and deployment
on AWS.
AWS CodeCommit: A source control service for Git repositories.
AWS CodeBuild: Continuous integration service for building and testing code.
AWS CodePipeline: Automates the CI/CD pipeline to streamline application updates.
Internet of Things (IoT):
AWS IoT Analytics: Analyzes data from IoT devices.
1.5.2 Billing Structure:
AWS calculates charges based on specific metrics for each service, such as compute
time, data storage, and data transfer:
Compute: Charges are based on instance type, operating system, and running time.
For example, AWS EC2 instances are billed per second or hour, depending on the
instance family.
Storage: Data storage costs are calculated per gigabyte (GB) per month. For
example, Amazon S3 charges for storage capacity, data transfer out, and the number
of requests made.
Data Transfer: AWS charges for data transfers outside of AWS, but data transfer
within the same region (between certain services) is free.
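To make the per-unit billing model concrete, the sketch below estimates a monthly bill from assumed example rates. Actual AWS prices vary by region, instance type, and over time, so the rates and the resulting total are placeholders only.

```python
# Hypothetical example rates -- real AWS prices vary by region and change
# over time; these values exist only to illustrate the billing arithmetic.
EC2_RATE_PER_HOUR = 0.10      # USD per instance-hour (assumed)
S3_RATE_PER_GB_MONTH = 0.023  # USD per GB-month stored (assumed)
TRANSFER_RATE_PER_GB = 0.09   # USD per GB transferred out (assumed)

def estimate_monthly_cost(instance_hours, storage_gb, transfer_out_gb):
    """Estimate a monthly bill from usage figures, using the rates above."""
    compute = instance_hours * EC2_RATE_PER_HOUR
    storage = storage_gb * S3_RATE_PER_GB_MONTH
    transfer = transfer_out_gb * TRANSFER_RATE_PER_GB
    return round(compute + storage + transfer, 2)

# One instance running all month (~730 h), 500 GB stored, 100 GB out:
print(estimate_monthly_cost(730, 500, 100))  # 93.5
```

The pay-as-you-go model means each of these three terms goes to zero when the corresponding resource is not used.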
1.5.4 Factors Affecting AWS Costs:
Several factors impact AWS pricing, such as:
Service Region: Prices vary by region due to operational costs.
Instance Type and Size: Larger or more specialized instances (like GPU-based
instances) are more expensive.
Usage Duration: Resources billed per second or hour can be more cost-effective if
managed carefully.
Data Transfer Volume: High-volume data transfers out of AWS can increase costs
significantly.
Regions:
AWS operates in multiple geographic locations worldwide, known as Regions. A Region
is a physical location that consists of multiple Availability Zones (AZs). Each AWS Region
is isolated from others to reduce the risk of failure in a specific region affecting other
regions. Customers can choose the Region that best meets their needs based on factors like
proximity, regulatory requirements, and performance.
Availability Zones (AZs):
Within each Region, AWS operates multiple Availability Zones. An AZ is a data center or
a group of data centers within a Region, designed to be independent from failures in other
AZs. This ensures that applications can be architected for fault tolerance and high
availability by distributing resources across multiple AZs within a Region.
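The idea of spreading resources across AZs for fault tolerance can be sketched as simple round-robin placement. The zone names below are illustrative, and real placement decisions involve many more factors (capacity, latency, instance availability).

```python
# Simplified sketch of spreading instances across Availability Zones for
# fault tolerance: round-robin assignment. Zone names are example values.
def distribute_instances(num_instances, zones):
    """Assign each instance index to a zone in round-robin order."""
    return {i: zones[i % len(zones)] for i in range(num_instances)}

zones = ["us-east-1a", "us-east-1b", "us-east-1c"]
placement = distribute_instances(5, zones)
print(placement)
# {0: 'us-east-1a', 1: 'us-east-1b', 2: 'us-east-1c', 3: 'us-east-1a', 4: 'us-east-1b'}
```

With this spread, the loss of any single AZ takes down at most two of the five instances, which is the property multi-AZ architectures rely on.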
Edge Locations:
AWS also has a network of Edge Locations used primarily for content delivery through
Amazon CloudFront, AWS’s content delivery network (CDN). These locations provide
lower-latency access to end users by caching copies of content closer to where users are
geographically located. Edge locations are typically positioned in major cities worldwide
to enhance user experience for applications, websites, and streaming services.
Chapter-2
IAM
Work Done: Set up and configured role-based access control with IAM, including user
and role creation and applying policies to secure resources.
Observations: Gained a strong understanding of IAM's critical role in security,
particularly how policies and multi-factor authentication (MFA) enhance account and
resource protection.
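As a sketch of the kind of policy applied in this exercise, the dictionary below follows the standard IAM policy document structure, granting read-only access to a single S3 bucket. The bucket name is a placeholder, and the exact actions granted would depend on the use case.

```python
import json

# Example IAM policy document: read-only access to one S3 bucket.
# "example-bucket" is a placeholder, not a real resource.
read_only_s3_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-bucket",
                "arn:aws:s3:::example-bucket/*",
            ],
        }
    ],
}

# Policies are attached as JSON strings, e.g. via
# iam.put_user_policy(..., PolicyDocument=json.dumps(read_only_s3_policy)).
```

Because IAM denies everything not explicitly allowed, a least-privilege policy like this limits the damage a compromised credential can do.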
EC2
Work Done: Launched, configured, and managed EC2 instances, exploring different
instance types, storage options, and security group configurations.
Observations: Learned how EC2's scaling capabilities and cost optimization strategies
can be tailored to various workloads, allowing flexibility and efficiency.
S3
Work Done: Set up and managed S3 buckets, configured access policies, and explored
lifecycle management for transitioning data to different storage tiers.
Observations: Observed the importance of lifecycle policies for cost-effective data
storage, as well as how S3’s configuration options help with data security and
accessibility.
Lambda
Work Done: Created and deployed AWS Lambda functions to support event-driven
processing and explored function triggers and configurations.
Observations: Noted how serverless functions provide low-latency response to events,
ideal for applications requiring fast, scalable processing without server management.
RDS
Work Done: Set up and configured an Amazon RDS instance, focusing on backup
automation, replication, and database management.
Observations: Observed how managed database services like RDS reduce
administrative tasks, allowing a focus on data management while ensuring data
reliability and security.
CloudWatch
Work Done: Configured monitoring for AWS resources, set up alerts, and explored the
CloudWatch dashboard for tracking system health and performance.
Observations: Understood how CloudWatch’s real-time monitoring and alert system
enhances operational monitoring, helping detect issues early and optimize resource
usage.
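As an example of the alerting configured here, the dictionary below sketches the parameters of a CPU-utilization alarm in the style accepted by CloudWatch's put_metric_alarm API. The alarm name and threshold are illustrative values.

```python
# Illustrative CloudWatch alarm definition: alert when average EC2 CPU
# utilization stays above 80% for two consecutive 5-minute periods.
# The alarm name and threshold are example values.
cpu_alarm = {
    "AlarmName": "high-cpu-example",
    "Namespace": "AWS/EC2",
    "MetricName": "CPUUtilization",
    "Statistic": "Average",
    "Period": 300,               # seconds per evaluation window
    "EvaluationPeriods": 2,      # consecutive breaching periods required
    "Threshold": 80.0,           # percent CPU utilization
    "ComparisonOperator": "GreaterThanThreshold",
}

# With boto3, these keyword arguments would be passed to
# cloudwatch.put_metric_alarm(**cpu_alarm).
```

Requiring two consecutive breaching periods (10 minutes total here) helps avoid alerting on brief, harmless CPU spikes.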
Figure 2.1 AWS Management Console Home Page
The AWS Management Console provides a central interface to access and manage various
AWS services. The "Recently visited" section shows frequently accessed services, while
"Applications" and "Cost and usage" sections provide insights into application setup and
billing. This interface helps users navigate AWS tools like EC2, S3, CloudWatch, and more
with ease.
Amazon VPC (Virtual Private Cloud): Used to design secure network architectures
and isolated networks within AWS.
Elastic Load Balancing (ELB): Learned about distributing traffic across instances for
high availability.
Auto Scaling: Used for scaling compute resources based on demand.
AWS CloudFormation: Deployed complex infrastructure using code templates, gaining hands-
on experience with infrastructure as code.
AWS Trusted Advisor: Utilized for best practice recommendations on security, cost
optimization, and performance.
Amazon Route 53: Configured DNS routing for web applications.
AWS Well-Architected Tool: Reviewed best practices for architecture, particularly
focusing on the security and cost pillars.
AWS Systems Manager: Used for centralized management and operational insights.
2.2.2 Work Done / Observations:
Each section took approximately three weeks to complete. Below are the tasks
completed and the observations from each service:
VPC
Work Done: Designed a secure VPC architecture, focused on network isolation, and
configured firewall rules for fine-grained access control.
Observations: I observed the importance of network isolation for security, learning how
to design custom networks and control traffic flow effectively within AWS
environments.
ELB and Auto Scaling
Work Done: Set up Elastic Load Balancing (ELB) and Auto Scaling policies,
ensuring that applications remain available during peak demand.
CloudFormation
Route 53
Work Done: Configured Route 53 to set up routing policies and custom domains,
implementing DNS routing for scalable cloud architectures.
Observations: I learned how Route 53 enables highly available DNS services, ensuring
reliable traffic routing and enhancing the resilience of cloud-based applications.
Trusted Advisor and Well-Architected Tool
Work Done: Utilized AWS Trusted Advisor and the Well-Architected Tool to review
security and cost recommendations, optimizing AWS infrastructure.
Observations: I gained valuable insights into improving security, performance, and
cost-efficiency in AWS environments by adhering to AWS’s best practices for
infrastructure management.
Systems Manager
Chapter-3
3.1 Introduction:
Learning continues even after completing AWS training courses, and it is crucial to leverage the
resources and knowledge gained during training in real-world scenarios. AWS training provides
a foundation, but true mastery comes through hands-on practice, staying engaged with the AWS
community, and consistently updating one's knowledge on the latest AWS advancements. By
applying concepts to actual projects, learners can build practical experience that enhances their
ability to use core AWS services such as Amazon EC2, S3, and IAM, as well as more advanced
tools like AWS Lambda, Elastic Load Balancing (ELB), and CloudFormation for automating
infrastructure.
Additionally, continued AWS learning and certification can lead to career growth and new
opportunities, as many employers place a high value on hands-on AWS expertise. Since AWS
integrates with a wide ecosystem of third-party tools and solutions, such skills can open doors in
various industries, from finance to healthcare.
After formal training, continuous learning is essential for remaining competitive and proficient
in cloud technologies. Some key ways to deepen AWS knowledge are:
Engage with the AWS Community:
AWS has a vibrant, global community that includes online forums, meetups, user groups, and
events like AWS re:Invent. By participating in these communities, professionals can gain new
insights, discuss challenges with peers, and exchange best practices. Engaging with other
professionals often highlights innovative ways to solve complex issues and helps bridge
knowledge gaps that formal training may not address, especially in specialized fields or for
new AWS offerings.
Certification Tracks:
AWS offers certification paths that help professionals expand their skills in targeted areas.
Options include Cloud Practitioner, Solutions Architect, Developer, SysOps Administrator,
and DevOps Engineer certifications, each tailored to specific AWS roles. These certifications
build upon core skills, and each level provides a structured path to deeper knowledge across
areas such as cloud architecture, serverless computing, and DevOps.
Cost management tools such as AWS Cost Explorer and Trusted Advisor help manage expenses, and hybrid cloud solutions such as
AWS Outposts support integration with on-premises environments, meeting needs for low-
latency processing and secure local data handling.
E-commerce Solutions
AWS offers scalable infrastructure for e-commerce companies, with services like Amazon
RDS for databases, CloudFront for content delivery, and DynamoDB for flexible database
operations. Auto Scaling and ELB ensure applications can handle high traffic, while security
tools like AWS Shield and WAF protect against cyber threats. Amazon Personalize adds
value by delivering tailored recommendations, enhancing customer experience and boosting
conversion rates.
Chapter-4
Discussion
The discussion around AWS courses, particularly in the areas of Cloud Fundamentals and
Architecting, highlights the essential role these training programs play in equipping professionals
with the knowledge and skills necessary to navigate the complexities of cloud computing. As
organizations increasingly migrate their workloads to the cloud, understanding the foundational
principles and architectural best practices becomes crucial for effective cloud utilization.
AWS Fundamentals courses serve as an introduction to cloud computing concepts and the AWS
ecosystem. They are designed for individuals with varying levels of experience, including those
who may be new to cloud technologies. The curriculum typically covers key topics such as the
basics of cloud computing, the benefits of using AWS, and an overview of the core services
offered, including computing (EC2), storage (S3), databases (RDS), and networking (VPC).
One significant advantage of the AWS Fundamentals training is its emphasis on the shared
responsibility model, which clarifies the security and compliance responsibilities of both AWS
and its customers. This understanding is vital for professionals, as it shapes how they approach
security in cloud environments. Additionally, the training includes hands-on labs and exercises
that enable participants to apply what they have learned in practical scenarios, reinforcing their
understanding of service configurations and best practices.
Another critical aspect of the Fundamentals courses is the focus on cost management and billing.
As organizations strive to optimize their cloud spending, understanding AWS pricing models,
including On-Demand Instances, Reserved Instances, and Savings Plans, equips learners with
the knowledge needed to make informed decisions about resource allocation. This financial
literacy is increasingly important for cloud professionals, as it directly impacts an organization’s
bottom line.
4.2 AWS Architecture Training:
Building on the foundational knowledge, AWS Architecting courses delve deeper into the
principles and practices of designing scalable, resilient, and secure applications on the AWS
platform. These courses target individuals who aspire to work as solutions architects or those
involved in designing and implementing cloud solutions within their organizations.
A key focus of the Architecting training is understanding how to leverage AWS services to meet
specific business requirements. This includes designing for high availability, fault tolerance, and
scalability using various AWS services like Elastic Load Balancing, Auto Scaling, and AWS
Lambda. Participants learn to architect solutions that can handle varying workloads, ensuring
that applications remain performant even during peak demand.
Moreover, the Architecting courses emphasize the importance of security and compliance in
cloud design. Learners are taught best practices for implementing IAM policies, encryption, and
network security, which are crucial for protecting sensitive data and maintaining compliance with
industry regulations. The use of AWS tools like AWS Trusted Advisor and AWS Well-
Architected Tool also helps participants evaluate their architectures against best practices and
optimize for cost, performance, and security.
Hands-on labs and case studies are integral to the Architecting courses, allowing participants to
engage in real-world scenarios that simulate the challenges faced by cloud architects. This
experiential learning not only builds technical skills but also enhances critical thinking and
problem-solving abilities, preparing individuals to make strategic decisions in their
organizations.
Chapter-5
Conclusion
In conclusion, AWS courses offer an invaluable opportunity for professionals to deepen their
understanding of cloud computing and acquire the skills needed to thrive in today’s technology-
driven environment. These courses cover a broad spectrum of topics, including cloud
architecture, application development, data analytics, and machine learning, catering to a diverse
audience with varying levels of expertise. By engaging with these subjects, learners can harness
the full potential of AWS services, transforming theoretical knowledge into practical application.
The emphasis on hands-on learning is a standout feature of AWS training. Participants engage in
real-world projects that mimic actual scenarios they might encounter in their careers, fostering
critical thinking and problem-solving abilities. This practical experience not only reinforces
learning but also enhances participants' resumes, making them more attractive candidates in a
competitive job market. Additionally, many courses prepare individuals for AWS certifications,
which are widely recognized and valued in the industry, further validating their skills and
knowledge.
Moreover, AWS training emphasizes the importance of continuous learning and adaptation in
the rapidly changing field of cloud computing. As new services and technologies emerge, the
courses encourage a mindset of lifelong learning, equipping professionals with the tools to stay
current and relevant. This proactive approach is essential for navigating the complexities of cloud
architecture, security, and cost management, allowing organizations to leverage AWS solutions
effectively.
Networking and collaboration are also crucial outcomes of AWS training. Participants often have
the chance to connect with peers, instructors, and industry professionals, fostering a community
of support and knowledge sharing. This network can be a valuable resource for mentorship,
collaboration on projects, and sharing insights about the latest developments in cloud technology.
Engaging with this community can inspire innovative ideas and approaches to tackling cloud-
related challenges. In a world increasingly reliant on cloud solutions, the skills and knowledge
acquired through AWS courses empower individuals and organizations to drive digital
transformation. Whether it is optimizing existing infrastructures, migrating applications to the
cloud, or developing new, cloud-native applications, AWS-trained professionals are well-
prepared to lead their organizations into the future. The impact of these courses extends beyond
technical expertise; they shape strategic thinkers capable of leveraging cloud technology to
enhance business outcomes, improve operational efficiency, and foster innovation.
Ultimately, AWS courses lay the groundwork for a successful career in cloud computing,
positioning individuals to contribute meaningfully to their organizations and the broader
technology landscape. By mastering AWS services and best practices, participants emerge as
confident cloud professionals ready to tackle the challenges of today’s digital age, ensuring their
relevance and effectiveness in a fast-evolving industry.
Appendix
Appendix I
Glossary of Terms
Cloud Computing: A model for enabling on-demand access to shared computing resources
(e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned
and released with minimal management effort.
Virtual Machine (VM): A virtualized environment that simulates a physical computer,
allowing multiple OS environments to run concurrently on a single hardware infrastructure.
Elastic Compute Cloud (EC2): An AWS service providing resizable compute capacity in
the cloud, enabling users to quickly scale resources up or down as needed.
Simple Storage Service (S3): A scalable object storage service from AWS used for storing
and retrieving any amount of data.
Identity and Access Management (IAM): A service for managing secure access to AWS
services and resources, allowing users to create and manage permissions.
Appendix II
AWS Community Resources
Learning about AWS does not end with formal training. AWS’s global community offers
resources to stay updated and gain insights:
AWS re:Invent: An annual conference with sessions, workshops, and networking opportunities.
AWS Forums and Meetups: Online communities and local events where AWS professionals share
knowledge and best practices.
AWS Documentation and Blogs: Official resources for staying updated on new releases, best
practices, and case studies.
Appendix III
Netflix
Services Used: Amazon EC2, S3, Auto Scaling, and CloudFront.
Overview: Netflix uses AWS to deliver streaming services to millions of users globally,
leveraging auto-scaling for high availability and content delivery for minimal latency.
Airbnb
Services Used: RDS, Lambda, DynamoDB.
Overview: Airbnb relies on AWS’s flexible database and serverless architecture to handle
booking management, reducing costs, and supporting global user traffic.
General Electric (GE)
Services Used: AWS IoT Core, SageMaker.
Overview: GE uses AWS IoT and machine learning services to monitor industrial equipment,
predict maintenance needs, and optimize performance.