Evolution of Cloud Computing


Cloud Computing

Dr. Anchal Thakur


Cloud Computing
Cloud computing allows users to access a wide range of services stored in the cloud or on the
Internet. Cloud computing services include computer resources, data storage, apps, servers,
development tools, and networking protocols. It is most commonly used by IT companies
and for business purposes. Cloud computing is a general term for the delivery of hosted computing
services and IT resources over the internet with pay-as-you-go pricing. Users can obtain technology
services such as processing power, storage and databases from a cloud provider, eliminating the need for
purchasing, operating and maintaining on-premises physical data centers and servers.
A cloud can be private, public or a hybrid. A public cloud sells services to anyone on the internet. A
private cloud is a proprietary network or a data center that supplies hosted services to a limited number
of people, with certain access and permissions settings. A hybrid cloud offers a mixed computing
environment where data and resources can be shared between both public and private clouds.
Regardless of the type, the goal of cloud computing is to provide easy, scalable access to computing
resources and IT services.
Cloud infrastructure involves the hardware and software components required for the proper
deployment of a cloud computing model. Cloud computing can also be thought of as utility
computing or on-demand computing. The name cloud computing was inspired by the cloud symbol
that's often used to represent the internet in flowcharts and diagrams.
How does it work?
Cloud computing lets client devices access rented computing resources,
such as data, analytics and cloud applications over the internet. It relies
on a network of remote data centers, servers and storage systems that
are owned and operated by cloud service providers. The providers are
responsible for ensuring the storage capacity, security and computing
power needed to maintain the data users send to the cloud.
The following steps are typically involved in cloud computing:

1. An internet connection links the front end -- the accessing client device, browser, network and cloud software applications -- with the back end, which consists of databases, servers, operating systems and computers (see the sketch after this list).

2. The back end functions as a repository, storing the data accessed by the front end.

3. A central server manages communications between the front and back ends. It relies on protocols to facilitate the exchange of data and uses both software and middleware to manage connectivity between different client devices and cloud servers.

4. Typically, there's a dedicated server for each application or workload.
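As a rough illustration of this front-end/back-end exchange, the short Python sketch below sends an HTTPS request from a client to a cloud-hosted API and prints the response metadata. The URL is a placeholder, not a real service, and the requests package is assumed to be installed.

# Hypothetical front-end request to a cloud back end over HTTPS.
# The endpoint URL is a placeholder used only for illustration.
import requests  # third-party HTTP client, assumed installed

response = requests.get(
    "https://api.example-cloud.com/v1/files/report.pdf",  # placeholder back-end endpoint
    timeout=10,
)

# The back end (repository) returns the stored object plus metadata over the same protocol.
print(response.status_code, response.headers.get("Content-Type"), len(response.content))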


Cloud computing relies heavily on virtualization and automation technologies. Virtualization lets IT
organizations create virtual instances of servers, storage and other resources that let multiple VMs or
cloud environments run on a single physical server using software known as a hypervisor. This simplifies
the abstraction and provisioning of cloud resources into logical entities, letting users easily request and
use these resources. Automation and accompanying orchestration capabilities provide users with a high
degree of self-service to provision resources, connect services and deploy workloads without direct
intervention from the cloud provider's IT staff.
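As a concrete (and hedged) illustration of this self-service model, the sketch below requests a virtual machine through a provider API using boto3, the AWS SDK for Python. The AMI ID is a placeholder and the call assumes AWS credentials are already configured; other providers expose equivalent APIs.

import boto3  # AWS SDK for Python, assumed installed and configured with credentials

ec2 = boto3.client("ec2")

# Self-service provisioning: the user asks the provider for a VM; the hypervisor and
# physical placement are handled entirely by the provider's automation.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder image ID
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)
print("Provisioned instance:", response["Instances"][0]["InstanceId"])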
Evolution of Cloud Computing
The idea behind cloud computing dates back to the
1950s, when computing resources were first offered
as shared, centrally hosted services. It evolved from
distributed computing into the modern technology
known as cloud computing.
Cloud services include those provided by Amazon,
Google, and Microsoft. Cloud computing allows
users to access a wide range of services stored in
the cloud or on the Internet. Cloud computing
services include computer resources, data storage,
apps, servers, development tools, and networking
protocols.
A distributed system is a composition of multiple independent systems that are presented
to users as a single entity. The purpose of distributed systems is to share resources and use
them effectively and efficiently. Distributed systems possess characteristics such as scalability,
concurrency, continuous availability, heterogeneity, and independence of failures. The main
problem with such systems was that all of the machines had to be present at the same
geographical location. To solve this problem, distributed computing gave rise to three further
models: mainframe computing, cluster computing, and grid computing.

Mainframe Computing
Mainframes, which first came into existence in 1951, are highly powerful and reliable
computing machines. They are responsible for handling large volumes of data and massive
input-output operations. Even today they are used for bulk processing tasks such as online
transaction processing, and they have almost no downtime and high fault tolerance. After
distributed computing, mainframes increased the processing capability of systems, but they
were very expensive. To reduce this cost, cluster computing emerged as an alternative to
mainframe technology.
Cluster Computing
In the 1980s, cluster computing emerged as an alternative to mainframe computing. Each
machine in the cluster was connected to the others by a high-bandwidth network. Clusters were
far cheaper than mainframe systems yet capable of comparably high computation, and new
nodes could easily be added when required. Thus the problem of cost was solved to some
extent, but the problem of geographical restriction persisted. To solve this, the concept of grid
computing was introduced.
Grid Computing
In the 1990s, the concept of grid computing was introduced: systems placed at entirely
different geographical locations were connected via the internet. These systems belonged to
different organizations, so the grid consisted of heterogeneous nodes. Although this solved
some problems, new ones emerged as the distance between nodes increased, chiefly the low
availability of high-bandwidth connectivity and the other network issues that come with it.
Thus, cloud computing is often referred to as the “successor of grid computing”.
Virtualization
Virtualization was introduced nearly 40 years ago. It refers to creating a virtual layer over the
hardware that allows users to run multiple instances simultaneously on the same hardware. It
is a key technology in cloud computing and the base on which major cloud services such as
Amazon EC2 and VMware vCloud are built. Hardware virtualization is still one of the most
common types of virtualization.
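To make hardware virtualization a little more tangible, the hedged sketch below asks a local hypervisor (QEMU/KVM, via the libvirt Python bindings) which virtual machines it is running. It assumes the libvirt-python package and a libvirt daemon are available on the host; it is illustrative, not part of any cloud provider's API.

import libvirt  # libvirt Python bindings, assumed installed

# Open a read-only connection to the local QEMU/KVM hypervisor.
conn = libvirt.openReadOnly("qemu:///system")

# Each "domain" is a virtual machine instance running on top of the shared hardware.
for domain in conn.listAllDomains():
    state = "running" if domain.isActive() else "stopped"
    print(domain.name(), state)

conn.close()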
Web 2.0
Web 2.0 is the interface through which cloud computing services interact with clients. It is
because of Web 2.0 that we have interactive and dynamic web pages, and it increases flexibility
among web pages. Popular examples of Web 2.0 include Google Maps, Facebook, and Twitter.
Needless to say, social media is possible only because of this technology. It gained major
popularity in 2004.
Service Orientation
Service orientation acts as a reference model for cloud computing. It supports low-cost,
flexible, and evolvable applications. Two important concepts were introduced in this
computing model: Quality of Service (QoS), which also covers the Service Level Agreement
(SLA), and Software as a Service (SaaS).
Utility Computing
Utility computing is a computing model that defines service provisioning techniques for
compute, storage, infrastructure and other major services, all provisioned on a
pay-per-use basis.
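A tiny worked example of pay-per-use billing is sketched below. The hourly and per-GB rates are invented for illustration only; real provider pricing varies by service and region.

# Illustrative pay-per-use billing; the rates below are hypothetical.
COMPUTE_RATE_PER_HOUR = 0.0116      # price of one small VM per hour (example value)
STORAGE_RATE_PER_GB_MONTH = 0.023   # price of one GB of object storage per month (example value)

def monthly_bill(vm_hours: float, storage_gb: float) -> float:
    """Utility computing: you pay only for the hours and gigabytes you actually use."""
    return vm_hours * COMPUTE_RATE_PER_HOUR + storage_gb * STORAGE_RATE_PER_GB_MONTH

# One VM running all month (about 720 hours) plus 50 GB of storage:
print(f"Estimated monthly bill: ${monthly_bill(720, 50):.2f}")  # about $9.50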
Cloud Computing
Cloud Computing means storing and accessing the data and programs on remote servers that
are hosted on the internet instead of the computer’s hard drive or local server. Cloud
computing is also referred to as Internet-based computing: it is a technology where resources
are provided as a service through the Internet to the user. The stored data can be files, images,
documents, or any other storable data.
Essential components of Cloud Computing
Cloud architecture requires several components to work. This section will explore the resources needed to create IT
environments that virtualise, pool, and share scalable resources online.
Front end
The front-end infrastructure is what the user sees when working in the cloud. This includes:
•User interface: The things you use to make requests of cloud computing (e.g., Gmail or
Outlook)
•Software: Encompasses your applications and web browsers (e.g., Chrome, Firefox, Safari)
•Client devices: Such as an on-premises PC or remote desktop, or your laptop, tablet, or mobile
phone
Back end
The back-end infrastructure is the behind-the-scenes technology running the cloud. This includes several components:
•Hardware: Even though you are in the cloud, there are still actual servers, storage, routers, and
switches that the cloud service provider manages in real life. This hardware is where the actual
workloads run.
•Virtualisation layer: Virtualisation creates many virtual machines that can run simultaneously.
Abstracting the physical resources lets many users efficiently access networks, servers, or storage
in the cloud.
•Middleware: The software that enables networked computers, applications, and services to
communicate with one another and allocate resources for tasks.
Characteristics of Cloud Computing
There are many characteristics of cloud computing; here are a few of them:
1. On-demand self-service: Cloud computing services do not require human administrators; users
themselves can provision, monitor and manage computing resources as needed.

2. Broad network access: Computing services are generally provided over standard networks and are
accessible from heterogeneous client devices.

3. Rapid elasticity: Computing services should have IT resources that can scale out and back in quickly,
on an as-needed basis. Resources are provisioned whenever the user requires them and released as soon
as the requirement ends (see the auto-scaling sketch after this list).

4. Resource pooling: IT resources (e.g., networks, servers, storage, applications, and services) are pooled
and shared across multiple applications and tenants. Multiple clients are served from the same physical
resources.

5. Measured service: Resource utilization is tracked for each application and tenant, giving both the user
and the provider an account of what has been used. This supports billing, monitoring and effective use of
resources.

6. Multi-tenancy: Cloud computing providers can support multiple tenants (users or organizations) on a
single set of shared resources.

7. Virtualization: Cloud computing providers use virtualization technology to abstract underlying hardware
resources and present them as logical resources to users.

8. Resilient computing: Cloud computing services are typically designed with redundancy and fault
tolerance in mind, which ensures high availability and reliability.

9. Flexible pricing models: Cloud providers offer a variety of pricing models, including pay-per-use,
subscription-based, and spot pricing, allowing users to choose the option that best suits their needs.

10. Security: Cloud providers invest heavily in security measures to protect their users’ data and ensure
the privacy of sensitive information.

11. Automation: Cloud computing services are often highly automated, allowing users to deploy and
manage resources with minimal manual intervention.

12. Sustainability: Cloud providers are increasingly focused on sustainable practices, such as
energy-efficient data centers and the use of renewable energy sources, to reduce their environmental
impact.
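As a hedged sketch of rapid elasticity and automation in practice, the snippet below attaches a simple scale-out policy to an existing Auto Scaling group using boto3. The group and policy names are placeholders, and configured AWS credentials are assumed.

import boto3  # AWS SDK for Python, assumed installed and configured

autoscaling = boto3.client("autoscaling")

# Simple scaling policy: when triggered (e.g., by a CPU alarm, not shown), add two instances.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",    # placeholder Auto Scaling group name
    PolicyName="scale-out-on-demand",
    AdjustmentType="ChangeInCapacity",
    ScalingAdjustment=2,               # scale out by two instances
    Cooldown=300,                      # wait five minutes before scaling again
)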
Requirements of Cloud Computing
The emergence of cloud computing and SaaS architectures disrupted the overall IT industry
and has extended to networking. The convenience of being able to connect and access
resources from anywhere in the world has encouraged companies to embrace this
technological revolution.
As companies move to the cloud, they want to scale, attract customers and generate more
profits. But it's not always easy for network engineers to tackle cloud strategies or work with
cloud teams. Collaboration between networking and cloud teams helps companies meet their
cloud expectations. Together, those teams should consider the following networking
requirements for cloud computing:
• Bandwidth and latency optimization.
• Security.
• Network resilience and redundancy.
• Quality of service (QoS).
• Network automation and orchestration.
1. Bandwidth and latency optimization
Bandwidth and latency optimization play a crucial role in efficient cloud service delivery. Bandwidth refers to the
amount of data that can be transferred over a network in a given period of time. Latency is the time it takes for data
to travel from one point to another.
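A quick way to get a feel for latency is to time a round trip to a service, as in the hedged sketch below. The URL is a placeholder and the requests package is assumed to be installed; real measurements would average many samples.

import time
import requests  # third-party HTTP client, assumed installed

URL = "https://api.example-cloud.com/health"  # placeholder endpoint

start = time.perf_counter()
response = requests.get(URL, timeout=5)
elapsed_ms = (time.perf_counter() - start) * 1000

# Round-trip time approximates latency; bandwidth is how much data moves per unit of time.
print(f"status={response.status_code} round-trip={elapsed_ms:.1f} ms bytes={len(response.content)}")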
When teams optimize their bandwidth and latency requirements, cloud services can be delivered efficiently. That
optimization leads to the following benefits:
•Improved UX due to faster response time and fewer interruptions.

•Reduced costs.

•Increased reliability of cloud services.


Time-sensitive and data-intensive applications require the real-time processing of large amounts of data. This
requirement is common in industries such as finance, streaming and healthcare. But it can be challenging to
develop and deploy applications with optimized bandwidth and latency.
Below are some methods to optimize bandwidth and latency for applications and workloads in the cloud:
•Scalability. Teams can scale cloud-based platforms up or down to meet the needs of the application.

•Elasticity. A cloud-based application can allocate resources automatically to the application as required.

•Reliability. Cloud-based applications are typically highly reliable and offer high availability.
Another option is to use a content delivery network (CDN), which is a network of servers distributed around the
world. CDNs cache content in multiple locations, enabling servers to deliver requested content to end users
more quickly. CDNs improve the performance, availability and cost-effectiveness of websites and applications.
2. Security
While the ubiquity of PaaS, IaaS and SaaS architectures makes cloud computing a compelling choice for
enterprises, it's not enough simply to be in the cloud. Companies must also prioritize security to keep their data secure.
Once data is compromised, bad actors have the opportunity to exploit anything they want. They can lock data for
ransom, download sensitive files and prevent owners from accessing their network and its resources.
Networking and cloud teams should work with security teams to discuss security designs that protect the
company, its users and the overall cloud strategy.
End-to-end encryption
Encryption is paramount for protecting data at rest and in motion, providing continuous protection and
privacy against cyber attacks.
Enterprises can choose from different types of encryption and security protocols, including Advanced Encryption
Standard and Transport Layer Security. These methods encrypt data when it travels from client to server and
vice versa. Other encryption methods are available to secure data at rest. Network engineers and cloud teams
should choose encryption based on their business needs.
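As a minimal sketch of encrypting data at rest before it is uploaded to cloud storage, the snippet below uses the cryptography package's Fernet construction (AES-based symmetric encryption with integrity checking). Key management is deliberately out of scope; in practice the key would live in a key management service, not in the script.

from cryptography.fernet import Fernet  # 'cryptography' package, assumed installed

key = Fernet.generate_key()   # in production, fetch this from a key management service
cipher = Fernet(key)

plaintext = b"customer record: sensitive data"
ciphertext = cipher.encrypt(plaintext)          # this is what would be uploaded to cloud storage

assert cipher.decrypt(ciphertext) == plaintext  # only holders of the key can read it back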
Identity and access management (IAM) uses cloud-based services to verify who users are and what they
can do with cloud resources. IAM helps manage the cloud access rights of different users and groups,
such as employees, IT teams and customers. It also protects cloud resources from unauthorized access
and malicious actors by enforcing security policies and auditing user actions (a minimal policy sketch
follows the list below).
Below are some of the benefits of IAM services:
•It's possible to use a single identity provider, such as AWS, Azure and Salesforce, to authenticate users in a
multi-cloud environment and applications.
•Teams can use a single interface to manage the access rights of users and groups.

•IAM services can incorporate machine learning to detect and remove malicious access rights that might
pose a security risk.
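To ground this, the hedged sketch below creates a least-privilege policy with boto3: it grants read-only access to a single storage bucket and nothing else. The policy name and bucket ARN are placeholders, and credentials with IAM permissions are assumed.

import json
import boto3  # AWS SDK for Python, assumed installed and configured

# Least-privilege policy: read objects from one bucket only (placeholder ARN).
policy_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject"],
        "Resource": "arn:aws:s3:::example-bucket/*",
    }],
}

iam = boto3.client("iam")
iam.create_policy(
    PolicyName="ReadOnlyExampleBucket",          # placeholder policy name
    PolicyDocument=json.dumps(policy_document),
)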
Network segmentation
Network segmentation is a way of dividing a cloud network into smaller parts called subnets or segments.
Each segment has its own policies and controls. Segmentation helps improve security, cloud monitoring and
authorized access, and it can reduce the risk of data breaches (see the sketch after the list below).
Benefits of network segmentation include the following:
•Improves network performance. It reduces the number of users in specific zones.

•Protects the network from attacks. A segmented network helps limit the scope of potential attacks.

•Protects vulnerable devices. Segmentation can stop malicious traffic from reaching devices unable to protect
themselves from an attack.
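The hedged sketch below shows one way segmentation looks in code: a virtual network split into a public and a private subnet with boto3. The CIDR ranges and region are placeholders; security groups and network ACLs that enforce per-segment policies are omitted for brevity.

import boto3  # AWS SDK for Python, assumed installed and configured

ec2 = boto3.client("ec2", region_name="us-east-1")  # placeholder region

# One virtual network, divided into two segments with separate address ranges.
vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]
public_subnet = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24")
private_subnet = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.2.0/24")

# Policies and controls (security groups, network ACLs, route tables) would then be
# applied per subnet to limit the blast radius of any single compromise.
print(public_subnet["Subnet"]["SubnetId"], private_subnet["Subnet"]["SubnetId"])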
Industry regulations
Cloud computing is an evolving field, and each region in the world has its own regulations. These industry
regulations affect providers and users and influence how enterprises use services and share responsibilities.
Some of the most well-known compliance frameworks and regulations are the following:
•General Data Protection Regulation (GDPR). GDPR is a European framework aimed at harmonizing data
protection across EU member states. It applies to any organization that processes the personal data of
individuals in the EU, regardless of where the organization or data is located.
•Health Insurance Portability and Accountability Act (HIPAA). HIPAA is a U.S. law that regulates the privacy
and security of health information. HIPAA-compliant cloud services must ensure that personal health information
is encrypted and accessible only to the authorized party.
•International Organization for Standardization (ISO). ISO is the global organization that develops and
publishes industry standards, including standards for cloud computing.
Some relevant cloud standards to know are the following:
•ISO/IEC 27001 on information security management systems.

•ISO/IEC 27017 on security controls for cloud services.

•ISO/IEC 27018 on cloud privacy protection.


Quality of service

QoS plays a crucial role in network management because it prioritizes network traffic according to its
importance. It ensures that critical applications and traffic receive the bandwidth and resources they need
to perform well, even when the network is congested (a minimal traffic-marking sketch appears after the
lists below).
Some of the benefits of QoS are the following:
•Improved performance of critical applications and services.

•Improved UX.

•Reduced latency and jitter.


QoS is implemented at the network layer as part of network management. Network teams can use the following
strategies to improve UX and bolster QoS of their cloud-based applications and services:
•Use a cloud-based monitoring platform.

•Use a cloud-based CDN.

•Use a cloud provider with a strong track record of QoS.
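One low-level way applications participate in QoS is by marking their traffic so the network can prioritize it. The hedged sketch below sets the DSCP Expedited Forwarding code point on a socket; this works on Linux hosts that expose the IP_TOS option, and routers must be configured to honor the marking for it to have any effect.

import socket

DSCP_EF = 46  # Expedited Forwarding: typically used for voice and other real-time traffic

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# The DSCP value occupies the upper six bits of the former ToS byte, hence the shift.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_EF << 2)

# The socket can now be connected and used normally; marked packets are eligible for
# priority treatment wherever the network's QoS policy recognizes the EF code point.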


Network automation and orchestration
Network engineers and cloud teams can use network automation and orchestration to streamline network
management. These methods reduce the number of manual configurations and help teams work efficiently.
Network automation uses software to automate repetitive tasks, such as network device configuration, change
deployment and device provisioning. Orchestration uses software to manage multiple automation tasks as part
of a larger workflow. Basically, orchestration enables teams to run a larger workflow at once.
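As a hedged example of the cloud-native approach, the sketch below hands a small declarative template to AWS CloudFormation and lets the platform orchestrate the provisioning. The stack name is a placeholder, the template declares a single storage bucket, and configured AWS credentials are assumed; hybrid tools such as Ansible follow a similar declare-then-apply pattern.

import boto3  # AWS SDK for Python, assumed installed and configured

# Declarative template: describe the desired resources and let the orchestrator
# work out the provisioning steps, ordering and rollback.
TEMPLATE = """
Resources:
  DemoBucket:
    Type: AWS::S3::Bucket
"""

cloudformation = boto3.client("cloudformation")
cloudformation.create_stack(
    StackName="network-demo-stack",  # placeholder stack name
    TemplateBody=TEMPLATE,
)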
Benefits of network orchestration and automation include the following:
•Improved efficiency.

•Reduced errors.

•Increased agility.

•Reduced costs.
Network and cloud teams can use countless strategies to
implement network automation and orchestration in the cloud, including the following:
•Choose a cloud-native automation and orchestration platform. Options include
Azure Resource Manager, AWS CloudFormation or Google Cloud Deployment
Manager.
•Choose a hybrid automation and orchestration platform. Options include
Ansible, Chef and Puppet.
•Choose a combination of cloud-native and hybrid platforms. This option is
suitable for multi-cloud environments or for automation and orchestration in legacy
networks that aren't cloud-native.
•Start small. Automation and orchestration aren't easy tasks, so start by discovering
specific issues before automating the entire network.
•Test regularly. It's important to test any new network scripts in a staging environment
before deploying them in production.
•Monitor and maintain. After deploying scripts, it's paramount to ensure they work as
expected and to stay up to date with network changes.
