Evolution of cloud computing (1)
A central server manages communications between the front end and the back end. It relies on protocols to facilitate the exchange of data, and it uses both software and middleware to manage connectivity between different client devices and cloud servers.
Mainframe Computing
Mainframes, which first came into existence in 1951, are highly powerful and reliable computing machines built to handle large workloads such as massive input-output operations. Even today they are used for bulk-processing tasks such as online transactions. These systems have almost no downtime and high fault tolerance, and they greatly increased the processing capability available to organizations. However, they were very expensive. To reduce this cost, cluster computing emerged as an alternative to mainframe technology.
Cluster Computing
In the 1980s, cluster computing emerged as an alternative to mainframe computing. Each machine in a cluster was connected to the others by a high-bandwidth network. Clusters were far cheaper than mainframe systems yet capable of comparable computation, and new nodes could easily be added when required. This solved the cost problem to some extent, but the problem of geographical restriction remained. To address it, the concept of grid computing was introduced.
Grid Computing
In the 1990s, the concept of grid computing was introduced: systems placed at entirely different geographical locations, all connected via the Internet. Because these systems belonged to different organizations, the grid consisted of heterogeneous nodes. Grid computing solved some problems, but new ones emerged as the distance between nodes increased, chiefly the low availability of high-bandwidth connectivity and the network issues that come with it. Cloud computing is therefore often referred to as the “successor of grid computing”.
Virtualization
Virtualization was introduced nearly 40 years ago. It is the process of creating a virtual layer over the hardware that allows a user to run multiple instances simultaneously on the same hardware. It is a key technology in cloud computing and the base on which major cloud services such as Amazon EC2 and VMware vCloud run. Hardware virtualization is still one of the most common types of virtualization.
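The core idea above can be sketched in a few lines: a hypervisor carves slices of one physical machine's resources into isolated virtual machines. This is a minimal illustrative sketch, not a real hypervisor API; the class and method names are assumptions for the example.

```python
# Toy sketch of hardware virtualization: a "hypervisor" allocates slices of
# one physical host's CPU and RAM to isolated virtual machines.
# All names here are illustrative, not a real hypervisor's API.

class Hypervisor:
    def __init__(self, total_cpus, total_ram_gb):
        self.free_cpus = total_cpus
        self.free_ram_gb = total_ram_gb
        self.vms = {}

    def create_vm(self, name, cpus, ram_gb):
        # Refuse the request if the physical host lacks capacity.
        if cpus > self.free_cpus or ram_gb > self.free_ram_gb:
            raise RuntimeError(f"insufficient capacity for {name}")
        self.free_cpus -= cpus
        self.free_ram_gb -= ram_gb
        self.vms[name] = {"cpus": cpus, "ram_gb": ram_gb}
        return self.vms[name]

host = Hypervisor(total_cpus=16, total_ram_gb=64)
host.create_vm("web-1", cpus=4, ram_gb=8)
host.create_vm("db-1", cpus=8, ram_gb=32)
print(host.free_cpus, host.free_ram_gb)  # 4 24
```

The point of the sketch is the abstraction: each VM sees only its own slice, while the hypervisor tracks the shared physical pool underneath.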
Web 2.0
Web 2.0 is the interface through which cloud computing services interact with clients. It is because of Web 2.0 that we have interactive and dynamic web pages, and it also increases flexibility among web pages. Popular examples of Web 2.0 include Google Maps, Facebook, and Twitter. Needless to say, social media is possible only because of this technology, which gained major popularity around 2004.
Service Orientation
Service orientation acts as a reference model for cloud computing. It supports low-cost, flexible, and evolvable applications. Two important concepts were introduced in this computing model: Quality of Service (QoS), which also includes the Service Level Agreement (SLA), and Software as a Service (SaaS).
Utility Computing
Utility computing is a computing model that defines service-provisioning techniques for services such as compute, storage, and infrastructure, all of which are provisioned on a pay-per-use basis.
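The pay-per-use idea reduces to metering usage and multiplying by unit rates. A minimal sketch, assuming hypothetical resource names and prices (real providers meter far more dimensions):

```python
# Pay-per-use billing sketch. Rates and resource names are hypothetical.
RATES = {
    "compute_hours":    0.05,  # $ per instance-hour
    "storage_gb_month": 0.02,  # $ per GB-month
    "egress_gb":        0.09,  # $ per GB transferred out
}

def monthly_bill(usage):
    """Sum metered usage times the unit rate for each service."""
    return round(sum(RATES[svc] * qty for svc, qty in usage.items()), 2)

bill = monthly_bill({"compute_hours": 720, "storage_gb_month": 100, "egress_gb": 50})
print(bill)  # 42.5
```

The user pays only for what was metered; an idle month with zero usage costs zero, which is precisely the contrast with buying and owning the hardware.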
Cloud Computing
Cloud computing means storing and accessing data and programs on remote servers hosted on the Internet instead of on the computer's hard drive or a local server. It is also referred to as Internet-based computing: a technology in which resources are provided as a service to the user over the Internet. The stored data can be files, images, documents, or any other storable content.
Essential components of Cloud Computing
Cloud architecture requires several components to work. This section will explore the resources needed to create IT
environments that virtualise, pool, and share scalable resources online.
Front end
The front-end infrastructure is what the user sees when working in the cloud. This includes:
•User interface: The things you use to make requests of cloud computing (e.g., Gmail or
Outlook)
•Software: Encompasses your applications and web browsers (e.g., Chrome, Firefox, Safari)
•Client devices: Such as an on-premises PC or remote desktop, or your laptop, tablet, or mobile
phone
Back end
The back-end infrastructure is the behind-the-scenes technology running the cloud. This includes several components:
•Hardware: Even though you are in the cloud, there are still actual servers, storage, routers, and
switches that the cloud service provider manages in real life. This hardware is where the actual
workloads run.
•Virtualisation layer: Virtualisation creates many virtual machines that can run simultaneously.
Abstracting the physical resources lets many users efficiently access networks, servers, or storage
in the cloud.
•Middleware: This is the applications and software that enable the networked computers,
applications, and software to communicate and allocate resources for tasks.
Characteristics of Cloud Computing
There are many characteristics of cloud computing; here are a few of them:
1.On-demand self-service: Cloud computing services do not require any human administrators; users themselves are able to provision, monitor, and manage computing resources as needed.
2.Broad network access: Computing services are generally provided over standard networks and accessible from heterogeneous devices.
3.Rapid elasticity: IT resources should be able to scale out and in quickly, on an as-needed basis. Whenever a user requires a service it is provided, and capacity is scaled back in as soon as the requirement ends.
4.Resource pooling: IT resources (e.g., networks, servers, storage, applications, and services) are shared across multiple applications and tenants in an uncommitted manner. Multiple clients are served from the same physical resource.
5.Measured service: Resource utilization is tracked for each application and tenant, providing both the user and the resource provider with an account of what has been used. This is done for purposes such as monitoring, billing, and effective use of resources.
6.Multi-tenancy: Cloud computing providers can support multiple tenants (users or organizations) on a single set of
shared resources.
7.Virtualization: Cloud computing providers use virtualization
technology to abstract underlying hardware resources and present
them as logical resources to users.
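Rapid elasticity (characteristic 3 above) boils down to a scaling decision loop. A minimal sketch of that decision, assuming a made-up CPU-utilization metric and thresholds; real autoscalers use richer policies:

```python
# Sketch of the scale-out / scale-in decision behind rapid elasticity.
# The metric (fleet CPU utilization) and thresholds are illustrative.

def desired_instances(current, cpu_utilization, min_n=1, max_n=10):
    """Scale out when the fleet runs hot, scale in when it idles."""
    if cpu_utilization > 0.80:           # overloaded: add capacity
        return min(current + 1, max_n)
    if cpu_utilization < 0.20:           # idle: release capacity
        return max(current - 1, min_n)
    return current                       # steady state: no change

print(desired_instances(3, 0.95))  # 4 - scale out under load
print(desired_instances(3, 0.10))  # 2 - scale in when idle
print(desired_instances(1, 0.10))  # 1 - never below the floor
```

Because released capacity returns to the shared pool, elasticity and resource pooling (characteristic 4) work together: one tenant's scale-in frees resources for another's scale-out.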
Some benefits of cloud-based applications include the following:
•Reduced costs.
•Elasticity. A cloud-based application can allocate resources automatically to the application as required.
•Reliability. Cloud-based applications are typically highly reliable and offer high availability.
Another option is to use a content delivery network (CDN), which is a network of servers distributed around the
world. CDNs cache content in multiple locations, enabling servers to deliver requested content to end users
more quickly. CDNs improve the performance, availability and cost-effectiveness of websites and applications.
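The core routing idea of a CDN can be sketched simply: direct each request to the edge location with the lowest latency to the user. The edge names and latency figures below are made-up illustrations:

```python
# Sketch of CDN edge selection: route the user to the nearest edge server,
# here approximated as the one with the lowest measured round-trip time.
# Locations and latencies are illustrative numbers, not real measurements.

EDGE_LATENCY_MS = {"frankfurt": 12, "virginia": 95, "singapore": 180}

def nearest_edge(latencies):
    # Pick the edge server with the smallest latency to this user.
    return min(latencies, key=latencies.get)

print(nearest_edge(EDGE_LATENCY_MS))  # frankfurt
```

Serving a cached copy from the 12 ms edge instead of a distant origin is what produces the performance and availability gains described above.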
2. Security
While the ubiquity of PaaS, IaaS and SaaS architectures makes cloud computing a compelling choice for enterprises, it's not enough simply to be in the cloud. Companies must also prioritize security to keep their data secure.
Once data is compromised, bad actors have the opportunity to exploit anything they want. They can lock data for
ransom, download sensitive files and prevent owners from accessing their network and its resources.
Networking and cloud teams should work with security teams to discuss security designs that protect the
company, its users and the overall cloud strategy.
End-to-end encryption
Encryption is paramount to protect at-rest and in-motion data for continuous protection and privacy against cyber
attacks.
Enterprises can choose from different types of encryption and security protocols, including Advanced Encryption
Standard and Transport Layer Security. These methods encrypt data when it travels from client to server and
vice versa. Other encryption methods are available to secure data at rest. Network engineers and cloud teams
should choose encryption based on their business needs.
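For in-transit protection, TLS is usually applied at the connection layer rather than hand-rolled. As one concrete illustration, Python's standard library creates a client-side TLS context with safe defaults, i.e. certificate verification and hostname checking enabled:

```python
# In-transit encryption in practice: a TLS client context with safe
# defaults from Python's standard library (no third-party packages).
import ssl

ctx = ssl.create_default_context()
print(ctx.check_hostname)                     # True - hostname is verified
print(ctx.verify_mode == ssl.CERT_REQUIRED)   # True - server cert required
```

Wrapping a socket with this context encrypts everything sent between client and server; at-rest encryption of stored data is a separate choice, as noted above.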
Identity and access management
Identity and access management (IAM) can use cloud-based services to verify who users are and what they
can do with the resources. IAM helps manage the cloud access rights of different users and groups, such as
employees, IT teams and customers. It also protects the cloud resources from unauthorized access or
malicious actors by enforcing security policies and auditing user actions.
Below are some of the benefits of IAM services:
•It's possible to use a single identity provider, such as AWS, Azure and Salesforce, to authenticate users in a
multi-cloud environment and applications.
•Teams can use a single interface to manage the access rights of users and groups.
•IAM services can incorporate machine learning to detect and remove malicious access rights that might pose a security risk.
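At its core, managing "who can do what" is a policy lookup with deny-by-default semantics. A minimal sketch of an IAM-style check; the roles and actions are hypothetical, not any provider's actual policy language:

```python
# Minimal IAM-style authorization check: roles map to allowed actions,
# and access is denied unless some role explicitly grants the action.
# Role and action names are hypothetical examples.

ROLE_PERMISSIONS = {
    "admin":  {"read", "write", "delete"},
    "editor": {"read", "write"},
    "viewer": {"read"},
}

def is_allowed(user_roles, action):
    # Deny by default; allow only if at least one role grants the action.
    return any(action in ROLE_PERMISSIONS.get(r, set()) for r in user_roles)

print(is_allowed(["viewer"], "write"))            # False
print(is_allowed(["viewer", "editor"], "write"))  # True
```

Real IAM services layer conditions, resource scoping, and auditing on top of this, but the deny-by-default lookup is the common foundation.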
Network segmentation
Network segmentation is a way of dividing a cloud network into smaller parts called subnets or segments. Each
segment has its own policies and controls. Segmentation helps improve security, cloud monitoring and authorized
access, and it can result in less risk of data breaches.
Benefits of network segmentation include the following:
•Improves network performance. It reduces the number of users in specific zones.
•Protects the network from attacks. A segmented network helps limit the scope of potential attacks.
•Protects vulnerable devices. Segmentation can stop malicious traffic from reaching devices unable to protect
themselves from an attack.
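Carving a network block into segments is straightforward to demonstrate with Python's standard ipaddress module; the address range and the tier assignments in the comments are illustrative assumptions:

```python
# Dividing a cloud network into segments: split a VPC-sized /16 block
# into equal /24 subnets, one per security zone or application tier.
import ipaddress

vpc = ipaddress.ip_network("10.0.0.0/16")
subnets = list(vpc.subnets(new_prefix=24))  # 256 /24 segments

print(subnets[0])    # 10.0.0.0/24  - e.g. public web tier
print(subnets[1])    # 10.0.1.0/24  - e.g. application tier
print(len(subnets))  # 256
```

Each resulting subnet can then carry its own policies and controls, which is what limits the blast radius of an attack to a single segment.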
Industry regulations
Cloud computing is an evolving field, and each region in the world has its own regulations. These industry
regulations affect providers and users and influence how enterprises use services and share responsibilities.
Some of the most well-known compliance frameworks and regulations are the following:
•General Data Protection Regulation (GDPR). GDPR is a European framework aimed at harmonizing data protection across members of the EU. It applies to any organization that processes the personal data of individuals in the EU, regardless of where the organization or data is located.
•Health Insurance Portability and Accountability Act (HIPAA). HIPAA is a U.S. law that regulates the privacy
and security of health information. HIPAA-compliant cloud services must ensure that personal health information
is encrypted and accessible only to the authorized party.
•International Organization for Standardization (ISO). ISO is the global organization that develops and
publishes industry standards, including cloud computing.
Some relevant cloud standards to know are the following:
•ISO/IEC 27001 on information security management systems.
QoS plays a crucial role in network management because it can prioritize network traffic according to its importance. It ensures that critical applications and traffic receive the bandwidth and resources they need to perform well, even when the network is congested.
Some of the benefits of QoS are the following:
•Improved performance of critical applications and services.
•Improved UX.
•Reduced errors.
•Increased agility.
•Reduced costs.
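The prioritization described above can be modeled as a priority queue: high-priority traffic is dequeued first regardless of arrival order. A small sketch using Python's heapq, with made-up traffic classes:

```python
# Sketch of QoS traffic prioritization: a priority queue serves critical
# traffic (e.g. voice) before bulk traffic, regardless of arrival order.
# Priority numbers and traffic classes are illustrative assumptions.
import heapq

queue = []
# Entries are (priority, arrival_order, packet); lower number = higher priority.
heapq.heappush(queue, (2, 0, "bulk-backup"))
heapq.heappush(queue, (0, 1, "voice-call"))
heapq.heappush(queue, (1, 2, "web-request"))

order = [heapq.heappop(queue)[2] for _ in range(len(queue))]
print(order)  # ['voice-call', 'web-request', 'bulk-backup']
```

Even though the bulk backup arrived first, the voice call is served first, which is exactly the behavior that keeps latency-sensitive applications working on a congested link.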
Network and cloud teams can use countless strategies to
implement network automation and orchestration in the cloud, including the following:
•Choose a cloud-native automation and orchestration platform. Options include
Azure Resource Manager, AWS CloudFormation or Google Cloud Deployment
Manager.
•Choose a hybrid automation and orchestration platform. Options include
Ansible, Chef and Puppet.
•Choose a combination of cloud-native and hybrid platforms. This option is
suitable for multi-cloud environments or for automation and orchestration in legacy
networks that aren't cloud-native.
•Start small. Automation and orchestration aren't easy tasks, so start by discovering
specific issues before automating the entire network.
•Test regularly. It's important to test any new network scripts in a staging environment
before deploying them in production.
•Monitor and maintain. After deploying scripts, it's paramount to ensure they work as
expected and to stay up to date with network changes.
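Tools such as Ansible, CloudFormation and the cloud-native platforms above share one underlying pattern: declare a desired state, compare it with the actual state, and apply only the difference. A minimal reconciliation sketch, with hypothetical resource names:

```python
# Sketch of the reconcile loop behind declarative automation and
# orchestration: diff desired state against actual state and emit only
# the actions needed to converge. Resource names are illustrative.

def reconcile(desired, actual):
    """Return the create/delete actions that bring actual to desired."""
    actions = []
    for name in sorted(desired - actual):   # present in desired, missing in actual
        actions.append(("create", name))
    for name in sorted(actual - desired):   # present in actual, not desired
        actions.append(("delete", name))
    return actions

desired = {"web-1", "web-2", "db-1"}
actual = {"web-1", "db-2"}
print(reconcile(desired, actual))
# [('create', 'db-1'), ('create', 'web-2'), ('delete', 'db-2')]
```

Because reconciliation is idempotent (running it again after convergence yields no actions), it is safe to test in staging and re-run on a schedule, which fits the "test regularly" and "monitor and maintain" advice above.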