Assignment - Cloud Computing


1. "Cloud computing is a family of distributed computing system", explain with suitable example.

Distributed computing is a field of computer science that studies distributed systems. A distributed system is a system whose components are located on different networked computers, which communicate and coordinate their actions by passing messages to one another. The components interact with one another to achieve a common goal.

Cloud computing is the on-demand availability of computer system resources, especially data
storage (cloud storage) and computing power, without direct active management by the user.
Cloud computing is a model for enabling convenient, on-demand network access to a shared
pool of configurable computing resources (e.g., networks, servers, storage, applications, and
services) that can be rapidly provisioned and released with minimal management effort or
service provider interaction.

Cloud computing delivers services or applications in an on-demand environment, with the targeted goals of achieving increased scalability, transparency, security, monitoring and management. Distributed computing provides collaborative resource sharing by connecting users and resources. So, we can say cloud computing is a family of distributed computing systems.

Most organizations today use cloud computing services either directly or indirectly. For instance, when we use the services of Amazon or Google, we are directly storing data in the cloud. Using Twitter is an example of indirectly using cloud computing services, as Twitter stores all our tweets in the cloud. Distributed and cloud computing have emerged as novel computing technologies because there was a need for better networking of computers to process data faster. Another example is Facebook. It has a large number of daily active users, and distributed computing systems alone cannot provide such high availability, resistance to failure and scalability. Thus, cloud distributed computing is needed to meet these computing challenges.

2. Explain the deployment models of cloud computing. What are the various challenges for cloud computing?
Cloud computing is about outsourcing IT services and infrastructure to make them accessible remotely via the Internet. Utilizing cloud computing models not only boosts productivity but also provides a competitive edge to organizations. The growing popularity of cloud computing has given rise to different types of cloud service deployment models and strategies.

A cloud deployment model represents the category of cloud environment based on proprietorship, size, and access, and describes the nature and purpose of the cloud. Most organizations implement cloud infrastructure to minimize capital expenditure and regulate operating costs.

There are 4 deployment models of cloud computing. They are:

1. Public cloud
This type of cloud deployment model supports all users who want to make use of a
computing resource, such as hardware (OS, CPU, memory, storage) or software (application
server, database) on a subscription basis. Most common uses of public clouds are for
application development and testing, non-mission-critical tasks such as file-sharing, and e-
mail service.
The public cloud deployment model is the first choice for businesses that operate within industries with low privacy concerns. When it comes to popular public cloud deployment models, examples include Amazon Elastic Compute Cloud (Amazon EC2) from the top service provider, Microsoft Azure, Google App Engine, IBM Cloud, Salesforce Heroku and others.

Its advantages are:


a. Flexible
b. Reliable
c. Highly scalable
d. Low cost

Its disadvantages are:

a. Less secure
b. Limited customizability

2. Private cloud
A private cloud is typically infrastructure used by a single organization. Such infrastructure
may be managed by the organization itself to support various user groups, or it could be
managed by a service provider that takes care of it either on-site or off-site. Private clouds
are more expensive than public clouds due to the capital expenditure involved in acquiring
and maintaining them. However, private clouds are better able to address the security and
privacy concerns of organizations today. Private clouds permit only authorized users,
providing the organizations greater control over data and its security. Business organizations that have dynamic, critical, security-sensitive, demand-based requirements should adopt a private cloud.
Its advantages are:
a. Highly private and secured
b. Control oriented (Private clouds provide more control over its resources than public
cloud as it can be accessed within the organization’s boundary.)

Its disadvantages are:

a. Poor scalability
b. Costly pricing
c. Restriction (it can be accessed locally within an organization and is difficult to expose globally)

3. Community cloud
This deployment model supports multiple organizations sharing computing resources that
are part of a community; examples include universities cooperating in certain areas of
research, or police departments within a county or state sharing computing resources.
Access to a community cloud environment is typically restricted to the members of the
community. For joint business organizations, ventures, research organizations and tenders, the community cloud is the appropriate solution.

Its advantages are:


a. Cost reduction
b. Improved security, privacy, and reliability
c. Ease of data sharing and collaboration

Its disadvantages are:

a. Sharing of fixed storage and bandwidth capacity
b. Not widespread so far

4. Hybrid cloud
A hybrid cloud is an integrated cloud computing type: a combination of two or more clouds (private, public or community) composed into one architecture, with each remaining an individual entity. In a hybrid cloud, an organization makes
use of interconnected private and public cloud infrastructure. Many organizations make use
of this model when they need to scale up their IT infrastructure rapidly, such as when
leveraging public clouds to supplement the capacity available within a private cloud. For
example, if an online retailer needs more computing resources to run its Web applications during the holiday season, it may obtain those resources via public clouds.

Its advantages are:


a. Flexible
b. Secure
c. Cost effective
d. Highly scalable

Its disadvantages are:

a. Complex networking
b. Organizational security and compliance concerns

Cloud computing is used for enabling global access to mutual pools of resources such as
services, apps, data, servers, and computer networks. Because cloud technology depends on
the allocation of resources to attain consistency and economy of scale, like a utility, it is also
cost-effective, making it the choice for many small businesses and firms.

But there are also several challenges involved in cloud computing, which are mentioned as
follows:
1. Cost
2. Service Provider Reliability
3. Downtime
4. Password Security
5. Data privacy
6. Vendor lock-in

3. Software defined data center gives better flexibility for cloud computing. Explain with
suitable example.

Data center virtualization technology allows facilities to recreate the storage and computing
capabilities of IT hardware in software form. Rather than offering dedicated physical servers to
customers, facilities can use server virtualization to harness the power of that hardware and
effectively multiply the number of services they offer. In a software-defined data center (SDDC), all elements of the infrastructure (networking, storage, CPU and security) are virtualized and delivered as a service. An SDDC offers its storage and computing services primarily through software-driven tools rather than traditional hardware.

In the past, a client might purchase a certain number of distinct servers that would form the
basis of their service needs. If they wanted to increase their performance, power, or storage,
they had to buy additional server space. An SDDC service model, however, operates more like a
cloud computing model. The facility has virtualized its resources using high-density deployments
and customers can simply purchase what they need when they need it without having to worry
about how many servers will be required to power their business.

By implementing an SDDC on cloud-based infrastructure, we can reduce the time and risks
involved with re-architecting the existing infrastructure. A cloud based SDDC also helps control
costs by avoiding the purchase of physical infrastructure. For example, we can establish an
entirely new environment in the cloud and eliminate all capital expenditures. Or we can
leverage the existing infrastructure in a hybrid environment, using the cloud to expand the
resources without having to buy and deploy more physical systems. By tapping into cloud-based
infrastructure, we can scale up capacity and access the latest technologies without having to
constantly upgrade the physical systems. Deploying an SDDC in the cloud helps us deliver
exceptional application performance and sufficient capacity to support data growth while
avoiding large capital expenditures.
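To make the contrast with buying physical servers concrete, the following is a minimal sketch of software-driven, on-demand provisioning, assuming the AWS EC2 API via the boto3 Python SDK; the AMI ID, instance type and region are hypothetical placeholders used only for illustration:

import boto3

# Connect to the EC2 service in an assumed region.
ec2 = boto3.client("ec2", region_name="us-east-1")

# Request one small virtual server on demand instead of purchasing hardware.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # hypothetical machine image
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)
instance_id = response["Instances"][0]["InstanceId"]
print("Provisioned instance:", instance_id)

# When the capacity is no longer needed, release it just as easily.
ec2.terminate_instances(InstanceIds=[instance_id])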

4. Why cloud interoperability is required? List out various standards for cloud interoperability.

Interoperability means enabling the cloud ecosystem so that multiple cloud platforms can
exchange information. Cloud interoperability is the ability of a customer’s system to interact
with a cloud service or the ability for one cloud service to interact with other cloud services by
exchanging information according to a prescribed method to obtain predictable results.
Interoperability is required to increase customer choice, competition, and innovation. Interoperability reduces technical complexity by eliminating custom gateways, converters, and interfaces. Widely cited cloud interoperability standards include the Open Virtualization Format (OVF) and the Cloud Infrastructure Management Interface (CIMI) from DMTF, the Open Cloud Computing Interface (OCCI) from the Open Grid Forum, and the Cloud Data Management Interface (CDMI) from SNIA.
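As an illustration of what interoperability buys the customer, here is a minimal sketch, assuming the Apache Libcloud library, of driving two different providers through one common compute API; the credentials are placeholders, and real deployments pass provider-specific arguments:

from libcloud.compute.types import Provider
from libcloud.compute.providers import get_driver

def list_servers(provider, *credentials, **extra):
    # The same call sequence works for any supported provider.
    driver_cls = get_driver(provider)
    driver = driver_cls(*credentials, **extra)
    return [(node.name, node.state) for node in driver.list_nodes()]

# Hypothetical credentials; a real OpenStack deployment would also supply its
# own authentication URL and version through the ex_force_* keyword arguments.
print(list_servers(Provider.EC2, "access-key-id", "secret-key"))
print(list_servers(Provider.OPENSTACK, "user", "password",
                   ex_force_auth_url="https://keystone.example.com:5000"))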

In examining the issue of standardization through the provider lens, let’s look at the three main
service models:

 Infrastructure-as-a-service (IaaS). IaaS stands to benefit the most from standardization because the main building blocks are workloads that are represented as VM images and storage units, whether typed data or raw data. This finding also ties back to the first two use cases identified earlier, which were workload migration and data migration.
 Platform-as-a-service (PaaS). Organizations that buy into PaaS do so for the perceived
advantages of the development platform. The platform provides many capabilities out
of the box such as managed application environments, user authentication, data
storage, reliable messaging, and other functionality in the form of libraries that can be
integrated into applications. Organizations that adopt PaaS are not thinking only of
extending their IT resources but are seeking value-added features (such as libraries and
platforms) that can help them develop and deploy applications more quickly.
 Software-as-a-service (SaaS). SaaS stands to benefit the least from standardization. SaaS
is different from IaaS and PaaS in that it represents a licensing agreement to third-party
software instead of a different deployment model for existing resources that range from
data storage to applications. Organizations that adopt SaaS are acquiring complete
software solutions or services that can be integrated into applications.

Organizations select PaaS and SaaS specifically for these value-added features, and end up in a commitment similar to what one experiences when purchasing software. Expecting PaaS and SaaS
providers to standardize these features would be equivalent to asking an enterprise resource-
planning software vendor to standardize all its features; it's not going to happen because it's not
in their best interests.

5. What is Cloud in a Box? Describe the different layers of a DCN.


A cloud in a box is a datacenter in a box that lets users or customers interact with cloud
computing resources. It is also widely known as cloud in a can. Cloud in a box allows the IT
department to deploy cloud services within the company relatively quickly. Instead of building
the infrastructure from scratch, the IT department is free to concentrate on integrating the new
hardware and software with legacy systems. Most cloud-in-a-box products offer pre-tested
processing hardware, software, and storage with connections already in place.

Some of the examples of Cloud-in-a-box are as follows:

1. CloudStart - HP offers a mix of hardware, software, and services that the company says can
get a business up and running as a cloud provider in less than a month.
2. BizCloud - Computer Sciences Corp offers hardware and software which can be deployed on
a customer's site within ten weeks.
3. Azure Appliance - Microsoft now allows customers to buy its Azure offering as an appliance
and run it in their own datacenters.
4. Exalogic Elastic Cloud - Oracle's offering is perhaps the first “cloud in a box” solution that is actually delivered to the customer in a box.

The layers of the data center design are the core, aggregation, and access layers.

1. Core layer: Provides the high-speed packet switching backplane for all flows going in and out
of the data center. The core layer provides connectivity to multiple aggregation modules and
provides a resilient Layer 3 routed fabric with no single point of failure. The core layer runs an
interior routing protocol, such as OSPF or EIGRP, and load balances traffic between the campus
core and aggregation layers using Cisco Express Forwarding-based hashing algorithms.

2. Aggregation layer: Provides important functions, such as service module integration, Layer 2 domain definitions, spanning tree processing, and default gateway redundancy. Server-to-server multi-tier traffic flows through the aggregation layer and can use services, such as firewall and server load balancing, to optimize and secure applications. Integrated service modules in the aggregation layer switches provide services such as content switching, firewall, SSL offload, intrusion detection, network analysis, and more.

3. Access layer: Where the servers physically attach to the network. The server components
consist of 1RU servers, blade servers with integral switches, blade servers with pass-through
cabling, clustered servers, and mainframes with OSA adapters. The access layer network
infrastructure consists of modular switches, fixed configuration 1 or 2RU switches, and integral
blade server switches. Switches provide both Layer 2 and Layer 3 topologies, fulfilling the
various server broadcast domain or administrative requirements.

6. Why virtualization? How does it support the cloud business? Explain its recent advantages.
Virtualization is the creation of virtual servers, infrastructures, devices, and computing resources. Virtualization changes the hardware-software relationship and is one of the foundational elements of cloud computing technology that helps utilize the capabilities of cloud computing to the full.

Virtualization in cloud computing means creating a virtual platform of the server operating system and storage devices. This helps the user by providing multiple machines at the same time; it also allows sharing a single physical instance of a resource or an application among multiple users. Cloud virtualization also manages the workload by transforming traditional computing to make it more scalable, economical, and efficient. One of the important features of virtualization is that it allows sharing of applications among multiple customers and companies.

Cloud computing can also be seen as services and applications delivered through a virtualized environment. With the help of virtualization, the customer can maximize resource use and reduce the number of physical systems needed. Virtualization in cloud computing has numerous benefits, which are explained below:

1. Security: Security is one of the important concerns. Security can be provided with the help of firewalls, which help prevent unauthorized access and keep the data confidential. Encryption with secure protocols also protects the data from other threats. So, the customer can virtualize all the data stores and create a backup on a server where the data can be stored.
2. Flexible operations: With the help of a virtual network, the work of IT professionals is becoming more efficient and agile. The network switches deployed today are very easy to use, flexible and time-saving. With the help of virtualization in cloud computing, technical problems in physical systems can be solved. It eliminates the problem of recovering data from crashed or corrupted devices and hence saves time.
3. Economical: Virtualization in cloud computing saves the cost of physical systems such as hardware and servers. It stores all the data in virtual servers, which are quite economical. It reduces wastage and decreases electricity bills along with maintenance costs. Because of this, a business can run multiple operating systems and applications on a single server.
4. Flexible transfer of data: Data can be transferred to the virtual server and retrieved at any time. With the help of virtualization, it is very easy to locate the required data and transfer it to the allotted authorities. This transfer of data has no limit and can span long distances at the minimum charge possible. Additional storage can also be provided, and the cost will be as low as possible.
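For a concrete view of how several virtual machines share one physical server, the following is a minimal read-only sketch, assuming the libvirt Python bindings and a local QEMU/KVM hypervisor are available; it simply lists the virtual machines defined on the host:

import libvirt

# Connect to the local hypervisor (read-only is enough for monitoring).
conn = libvirt.openReadOnly("qemu:///system")

# Each domain is one virtual machine sharing the same physical server.
for dom in conn.listAllDomains():
    state, _reason = dom.state()
    status = "running" if state == libvirt.VIR_DOMAIN_RUNNING else f"state {state}"
    print(dom.name(), status)

conn.close()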

VMware is a well-known name in the server virtualization market. More recently, Microsoft came up with its own hypervisor, Hyper-V, to compete in a tight server virtualization market that VMware currently dominates. Not easily outdone in the data center space, Microsoft offers attractive licensing for its Hyper-V product and the operating systems that live on it.

7. Explain autonomic cloud computing. What is the role of autonomic computing in cloud computing?
Autonomic computing is the ability of a distributed system to manage its resources with little or no human intervention. It involves intelligently adapting to the environment and to user requests in such a way that the user does not even notice. It was started in 2001 by IBM to help reduce the complexity of managing large distributed systems. Autonomic cloud computing helps address challenges related to Quality of Service (QoS) by ensuring Service-Level Agreements (SLAs) are met. In addition, autonomic cloud computing helps reduce the carbon footprint of data centers and cloud consumers by automatically scaling energy usage up or down based on cloud activity.

Autonomic computing has been widely adopted by academia, the information technology community and the business world. This is due to the increasing complexity of computing systems. Complexity grows as more functionalities and capabilities are demanded by users and businesses, which creates the need to set up or quickly integrate new solutions into existing systems. With new devices and increased mobility, interoperability is also a major concern. Cloud computing has provided a solution that allows its users to have a perception of limitless computing power and functionality by subscribing to the services they need on the go. This capacity is distributed and made available according to terms agreed in SLAs, and SLAs ensure QoS is maintained. This has pushed the industry toward solutions with self-* properties: self-optimization, self-healing, self-protection, and self-configuration. Hence, autonomic cloud computing. Autonomic cloud computing solutions have largely been deployed to solve issues that arise from managing existing cloud services. Autonomic solutions have so far been based on frameworks that support dynamic behavior but have failed to achieve full autonomic capabilities.
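The core of an autonomic solution is a monitor-analyze-plan-execute loop. The following is a minimal sketch of such a loop for autoscaling, where the metric source and the scaling action are hypothetical stand-ins for a real provider's monitoring and scaling APIs:

import random
import time

TARGET_CPU = 0.60          # desired average utilization
instances = 2              # current capacity

def monitor():
    # Stand-in for a real metrics API (e.g. a monitoring service).
    return random.uniform(0.2, 0.95)

def plan(cpu, capacity):
    if cpu > TARGET_CPU * 1.2 and capacity < 10:
        return capacity + 1      # scale out
    if cpu < TARGET_CPU * 0.5 and capacity > 1:
        return capacity - 1      # scale in
    return capacity

for _ in range(5):               # a short demonstration loop
    cpu = monitor()
    new_capacity = plan(cpu, instances)
    if new_capacity != instances:
        print(f"cpu={cpu:.2f}: scaling from {instances} to {new_capacity} instances")
        instances = new_capacity  # execute: in practice, call the provider's scaling API
    else:
        print(f"cpu={cpu:.2f}: capacity unchanged at {instances}")
    time.sleep(0.1)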

8. Describe the emergence of Cloud Computing in Nepal along with its motivational factors.

Cloud computing is becoming the latest disruptive force that everyone should be aware of, regardless of whether you are a supplier or a consumer of technology services. It is going to
change the landscape of IT and how services are provided, and it is in your best interest to
understand the impact. In Nepal, cloud computing is emerging, and the following factors have
played the biggest role in the emergence of cloud computing:

Cost, Scalability, Globalization, Productivity, Data protection

The evolution of cloud computing is ultimately taking us towards providing IT as a Service (ITaaS). This is about transforming IT to a more business-centric approach so that IT can focus on
areas such as operational efficiency, competitiveness, and faster response. This means a shift
from producing IT services to optimizing production and the way services are consumed in ways
consistent with business need. Ultimately, the role of IT changes from that of a cost center to a
center of strategic value.

Cloud computing has various motivational factors because of its future scope in Nepal. Cloud computing services have a huge opportunity in the Nepali market due to the large number of small and medium-sized businesses. Most of the leading companies are already on the cloud. E-governance can be implemented in rural areas using the cloud. Distance learning can be extended to remote areas and schools, which is part of its future scope. Finally, there is an increasing trend of outsourcing jobs in Nepal. Cloud computing is real and warrants scrutiny as a new set of platforms for business applications.

9. Explain the deployment models of cloud computing with suitable examples. What are the essentials of cloud computing?

There are 4 deployment models of cloud computing which are explained below:

1. Public cloud
This type of cloud deployment model supports all users who want to make use of a computing
resource, such as hardware (OS, CPU, memory, storage) or software (application server,
database) on a subscription basis. Most common uses of public clouds are for application
development and testing, non-mission-critical tasks such as file-sharing, and e-mail service.
The public cloud deployment model is the first choice for businesses that operate within industries with low privacy concerns. When it comes to popular public cloud deployment models, examples include Amazon Elastic Compute Cloud (Amazon EC2) from the top service provider, Microsoft Azure, Google App Engine, IBM Cloud, Salesforce Heroku and others.

2. Private cloud
A private cloud is typically infrastructure used by a single organization. Such infrastructure may
be managed by the organization itself to support various user groups, or it could be managed by
a service provider that takes care of it either on-site or off-site. Private clouds are more
expensive than public clouds due to the capital expenditure involved in acquiring and
maintaining them. However, private clouds are better able to address the security and privacy
concerns of organizations today. Private clouds permit only authorized users, providing the
organizations greater control over data and its security. Business organizations that have dynamic, critical, security-sensitive, demand-based requirements should adopt a private cloud.

3. Community cloud
This deployment model supports multiple organizations sharing computing resources that are
part of a community; examples include universities cooperating in certain areas of research, or
police departments within a county or state sharing computing resources. Access to a
community cloud environment is typically restricted to the members of the community. For
joint business organizations, ventures, research organizations and tenders, the community cloud is the appropriate solution.

4. Hybrid cloud
A hybrid cloud is an integrated cloud computing type: a combination of two or more clouds (private, public or community) composed into one architecture, with each remaining an individual entity. In a hybrid cloud, an organization makes use of interconnected
private and public cloud infrastructure. Many organizations make use of this model when they
need to scale up their IT infrastructure rapidly, such as when leveraging public clouds to
supplement the capacity available within a private cloud. For example, if an online retailer needs
more computing resources to run its Web applications during the holiday season, it may obtain those resources via public clouds.
Essential characteristics of cloud computing are as follows:
1. On-demand self-service: A consumer can unilaterally provision computing capabilities, such
as server time and network storage, as needed automatically without requiring human
interaction with each service provider.
2. Broad network access: Capabilities are available over the network and accessed through
standard mechanisms that promote use by heterogeneous thin or thick client platforms
(e.g., mobile phones, tablets, laptops and workstations).
3. Resource pooling: The provider's computing resources are pooled to serve multiple
consumers using a multi-tenant model, with different physical and virtual resources
dynamically assigned and reassigned according to consumer demand. There is a sense of
location independence in that the customer generally has no control or knowledge over the
exact location of the provided resources but may be able to specify location at a higher level
of abstraction (e.g., country, state or datacenter). Examples of resources include storage,
processing, memory and network bandwidth.
4. Rapid elasticity: Capabilities can be elastically provisioned and released, in some cases
automatically, to scale rapidly outward and inward commensurate with demand. To the
consumer, the capabilities available for provisioning often appear to be unlimited and can
be appropriated in any quantity at any time.
5. Measured service: Cloud systems automatically control and optimize resource use by
leveraging a metering capability at some level of abstraction appropriate to the type of
service (e.g., storage, processing, bandwidth and active user accounts). Resource usage can
be monitored, controlled and reported, providing transparency for the provider and
consumer (a worked pay-as-you-go example follows this list).
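To make the measured-service characteristic concrete, here is a worked pay-as-you-go example in Python; the usage figures and unit rates are assumed for illustration and are not any provider's actual prices:

# Metered usage for one month (assumed figures).
usage = {
    "compute_hours": 730,      # one VM running for a month
    "storage_gb_month": 200,
    "egress_gb": 50,
}
# Hypothetical unit rates in currency units.
rates = {
    "compute_hours": 0.05,
    "storage_gb_month": 0.02,
    "egress_gb": 0.09,
}

# The bill is simply the sum of usage multiplied by the per-unit rate.
for item in usage:
    print(f"{item}: {usage[item]} x {rates[item]} = {usage[item] * rates[item]:.2f}")
total = sum(usage[item] * rates[item] for item in usage)
print(f"Total (pay-as-you-go): {total:.2f}")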

10. What is deperimeterization? Describe the role of the internet network in a data center interconnection network (DCIN).

In network security, deperimeterization is a strategy for protecting a company's data on multiple levels by using encryption and dynamic data-level authentication. Network administrators commonly use a castle analogy to explain the traditional security strategy: network devices are placed behind a firewall and security efforts are focused on keeping intruders out, so company data is protected at the perimeter. Deperimeterization, by contrast, assumes the perimeter cannot be fully trusted and protects the data itself wherever it travels.

Data Center Interconnect (DCI) technology connects two or more data centers together over
short, medium, or long distances using high-speed packet-optical connectivity. DCI technology
enables the smooth transit of critical assets over short, medium, and long distances between
data centers. The most effective transport for DCI is high-speed packet-optical connectivity built
on technological innovations such as coherent optics. With a speedy, reliable connection in
place, physically separate data centers can more easily share resources and balance workloads.
Some large operations use DCI to connect their own data centers within their extended
enterprise infrastructures, while others connect to partners, cloud providers or data center
operators to enable simpler data and resource sharing or handle disaster recovery needs.
DCI plays a critical role in meeting the exploding needs for data and insatiable demand for cloud-
based services and content. As a result, DCI solutions are evolving to meet new requirements for
ultra-high capacity, massive scalability, power efficiency, and management simplicity to make
interconnecting data centers faster, easier, and more cost-effective.
DCI technologies move content to, from, and between data centers. The technology is at work in
many industries around the world. It can help hospitals meet rigorous business and clinical
needs and prepare for growth. DCI is also a necessary ingredient for sharing data with third-
party providers and financial exchanges that are part of a bank’s digital services ecosystem.
Enterprises are just starting to move their IT resources to both multi-tenant and public clouds.
As this trend accelerates, DCI connectivity from enterprise data centers and between cloud data
centers will grow in lockstep.

11. Describe the data center network and explain its various layers.

A data center is a physical facility that organizations use to house their critical applications and
data. A data center's design is based on a network of computing and storage resources that
enable the delivery of shared applications and data. The key components of a data center design
include routers, switches, firewalls, storage systems, servers, and application-delivery
controllers.

In the world of enterprise IT, data centers are designed to support business applications and activities that include: email and file sharing, productivity applications, customer relationship management, ERP and databases, big data, AI and machine learning, virtual desktops, communications, and collaboration services.

In a data center network, the following are some of the important areas that need to be monitored on a regular basis (a minimal reachability-check sketch follows this list):

1. Real-time availability

2. Bandwidth monitoring

3. Network configuration management
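As a minimal illustration of real-time availability monitoring, the sketch below checks whether a few critical hosts accept TCP connections; the host names and ports are hypothetical:

import socket

# Hypothetical critical services to watch.
TARGETS = [("app01.example.local", 443), ("db01.example.local", 5432)]

def is_reachable(host, port, timeout=2.0):
    # A successful TCP connection is treated as "up".
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host, port in TARGETS:
    status = "UP" if is_reachable(host, port) else "DOWN"
    print(f"{host}:{port} {status}")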

An important aspect of the data center design is flexibility in quickly deploying and supporting new
services. Designing a flexible architecture that has the ability to support new applications in a short
time frame can result in a significant competitive advantage. Such a design requires solid initial
planning and thoughtful consideration in the areas of port density, access layer uplink bandwidth,
true server capacity, and oversubscription, to name just a few.

The data center network design is based on a proven layered approach, which has been tested and
improved over the past several years in some of the largest data center implementations in the
world. The layered approach is the basic foundation of the data center design that seeks to improve
scalability, performance, flexibility, resiliency, and maintenance.

The following figure explains the architecture:

Fig: DCN Network (Img src: cisco.com)

The layers of the data center design are the core, aggregation, and access layers.

1. Core layer: Provides the high-speed packet switching backplane for all flows going in and out of
the data center. The core layer provides connectivity to multiple aggregation modules and provides
a resilient Layer 3 routed fabric with no single point of failure. The core layer runs an interior routing
protocol, such as OSPF or EIGRP, and load balances traffic between the campus core and
aggregation layers using Cisco Express Forwarding-based hashing algorithms.

2. Aggregation layer: Provides important functions, such as service module integration, Layer 2 domain definitions, spanning tree processing, and default gateway redundancy. Server-to-server multi-tier traffic flows through the aggregation layer and can use services, such as firewall and server load balancing, to optimize and secure applications. Integrated service modules in the aggregation layer switches provide services such as content switching, firewall, SSL offload, intrusion detection, network analysis, and more.
3. Access layer: Where the servers physically attach to the network. The server components consist
of 1RU servers, blade servers with integral switches, blade servers with pass-through cabling,
clustered servers, and mainframes with OSA adapters. The access layer network infrastructure
consists of modular switches, fixed configuration 1 or 2RU switches, and integral blade server
switches. Switches provide both Layer 2 and Layer 3 topologies, fulfilling the various server
broadcast domain or administrative requirements.

12. Explain the role of autonomic computing in the cloud. Describe its benefits in promoting cloud services.

Autonomic computing has been widely adopted by academia, the information technology community and the business world. This is due to the increasing complexity of computing systems. Complexity grows as more functionalities and capabilities are demanded by users and businesses, which creates the need to set up or quickly integrate new solutions into existing systems. With new devices and increased mobility, interoperability is also a major concern. Cloud computing has provided a solution that allows its users to have a perception of limitless computing power and functionality by subscribing to the services they need on the go. This capacity is distributed and made available according to terms agreed in SLAs, and SLAs ensure QoS is maintained. This has pushed the industry toward solutions with self-* properties: self-optimization, self-healing, self-protection, and self-configuration. Hence, autonomic cloud computing. Autonomic cloud computing solutions have largely been deployed to solve issues that arise from managing existing cloud services. Autonomic solutions have so far been based on frameworks that support dynamic behavior but have failed to achieve full autonomic capabilities.

Autonomic computing aims to provide a zero-maintenance-cost and highly reliable system to the end user. Self-management provides the monitoring, diagnosis and repair capabilities needed to maintain the system's behavior and guarantee the expected service. It can be a very cost-effective and efficient approach for cloud computing.

The benefits to promote cloud services are:

1. Security: Many organizations have security concerns when it comes to adopting a cloud-computing solution. For one thing, a cloud host's full-time job is to carefully monitor security, which is significantly more efficient than a conventional in-house system, where an organization must divide its efforts between a myriad of IT concerns, with security being only one of them. And while most businesses don't like to openly consider the possibility of internal data theft, the truth is that a staggeringly high percentage of data thefts occur internally and are perpetrated by employees. When this is the case, it can actually be much safer to keep sensitive information offsite.
2. Cost Savings: The pay-as-you-go system applies to the data storage space needed to service
the stakeholders and clients, which means that we'll get exactly as much space as we need,
and not be charged for any space that we don't.
3. Flexibility: The cloud offers businesses more flexibility overall versus hosting on a local
server. And, if we need extra bandwidth, a cloud-based service can meet that demand
instantly, rather than undergoing a complex (and expensive) update to the IT infrastructure.
This improved freedom and flexibility can make a significant difference to the overall
efficiency of the organization.
4. Mobility: Cloud computing allows mobile access to corporate data via smartphones and devices, which, considering over 2.6 billion smartphones are in use globally today, is a great way to ensure that no one is ever left out of the loop. Staff with busy schedules, or who live a long way from the corporate office, can use this feature to stay instantly up to date with clients and co-workers.
5. Increased Collaboration: Cloud computing makes collaboration a simple process. Team
members can view and share information easily and securely across a cloud-based platform.
Some cloud-based services even provide collaborative social spaces to connect employees
across the organization, therefore increasing interest and engagement. Collaboration may
be possible without a cloud-computing solution, but it will never be as easy, nor as effective.
6. Quality Control: There are few things as detrimental to the success of a business as poor
quality and inconsistent reporting. In a cloud-based system, all documents are stored in one
place and in a single format. With everyone accessing the same information, we can
maintain consistency in data, avoid human error, and have a clear record of any revisions or
updates. Conversely, managing information in silos can lead to employees accidentally
saving different versions of documents, which leads to confusion and diluted data.
7. Loss Prevention: If an organization is not on the cloud, it is at risk of losing all the information it has saved locally. With a cloud-based server, however, all the information uploaded to the cloud remains safe and easily accessible from any computer with an internet connection, even if the computer in regular use isn't working.
8. Automatic Software Updates: Cloud-based applications automatically refresh and update themselves, instead of forcing an IT department to perform a manual organization-wide update. This saves valuable IT staff time and money spent on outside IT consultation.
9. Sustainability: Given the current state of the environment, it's no longer enough for
organizations to place a recycling bin in the breakroom and claim that they're doing their
part to help the planet. Real sustainability requires solutions that address wastefulness at
every level of a business. Hosting on the cloud is more environmentally friendly and results
in less of a carbon footprint.

13. Explain the popularity of hybrid cloud. Design the network architecture for a hybrid cloud deployment.

Hybrid cloud refers to a mixed computing, storage, and services environment made up of on-
premises infrastructure, private cloud services, and a public cloud—such as Amazon Web
Services (AWS) or Microsoft Azure—with orchestration among the various platforms.

These cloud models are being adopted by numerous organizations looking to leverage the direct
benefits of both a private and public cloud environment. In a hybrid cloud, companies can still
leverage third party cloud providers in either a full or partial manner. This increases the
flexibility of computing. The hybrid cloud environment is also capable of providing on-demand,
externally-provisioned scalability.

The following three reasons explain the popularity of hybrid cloud:

1. Software-defined technologies: A big reason for the hybrid cloud evolution has been the software-defined layer. Software-defined networking (SDN) has helped bridge much of the cloud computing communication that has to happen at such a large scale. By better integrating complex routing and switching methodologies at the logical layer, software-defined networking allows administrators to create vast hybrid cloud networks capable of advanced inter-connectivity.
2. Greater amount of resources: Cloud computing is where it is today mainly because of the resources that support it. We have more bandwidth, new ways to connect, and greater infrastructure convergence. As the integration of storage, networking and computing capabilities has increased, so have the delivery methods of cloud computing. Hybrid cloud models are now able to cross-connect with distributed data centers and utilize vast amounts of bandwidth, which has been optimized by virtual WAN optimization (WANOP) appliances. All of this creates the platform and bridge for the hybrid cloud model to scale and grow.
3. Logical and physical integration: There has been a revolution at both the logical and physical levels. Converged platforms eliminate the storage server tier of traditional data center architectures by enabling applications to speak directly to the storage device, thereby reducing expenses associated with the acquisition, deployment, and support of hyperscale storage and cloud infrastructures.

Network architecture for hybrid cloud deployment

Fig: Hybrid cloud network architecture (Img src: acronis.com)

Each of the environments that make up the hybrid cloud architecture has its own benefits and
uses. By combining them all into a single hybrid cloud – or a multi-cloud environment, if we’re
dealing with particularly large arrays of data – the organization gains greater control over data
safety, accessibility, privacy, authenticity, and security both for the IT infrastructure and for the
customers’ data, applications, and systems.
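The cloud-bursting behavior behind this popularity can be illustrated with a minimal placement sketch: workloads stay in the private cloud until its capacity is exhausted, then overflow to the public cloud. The capacity figures and endpoint names are assumed for illustration:

# Assumed on-premises capacity, in abstract workload units.
PRIVATE_CAPACITY = 100
private_load = 0

# Route each incoming workload to the private cloud while capacity remains,
# and burst the overflow to a (hypothetical) public cloud endpoint.
for demand in [40, 30, 50, 20]:          # e.g. holiday-season traffic spikes
    if private_load + demand <= PRIVATE_CAPACITY:
        private_load += demand
        target = "private-cloud.internal"
    else:
        target = "public-cloud.example.com"
    print(f"{demand} units -> {target} (private load now {private_load})")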

14. Define virtualization with its benefits in cloud computing. Justify that virtualization security is essential for the cloud.

Virtualization in cloud computing is a technology that enables sharing the physical instance of a single server or resource among multiple users or organizations. In other words, it basically creates a virtual platform of the server OS (operating system), storage devices, a desktop or network resources. When we talk about virtualization in the cloud, it occurs with the help of resources available in the cloud, which are then shared across users to make cloud virtualization possible.
Fig: Virtualization in cloud computing
Img src: educba.com

Virtualization plays a very important role in cloud computing technology. Normally, in cloud computing, users share the data present in the cloud, such as applications, but with the help of virtualization users also share the infrastructure.

The following are the advantages of virtualization in cloud computing:


1. Security: Security in virtualization is provided with the help of firewalls and encryption. This
ensures that all that lies inside the virtualization cloud is kept protected and any
unauthorized access can be prevented. The data can also be protected from cyber-attacks
and threats related to files such as malware, worms, and viruses.
2. More Economical: As we have seen, virtualization saves the cost of physical machines, examples of which are servers and hardware. It is also environmentally friendly, because when the number of servers in use is reduced, we save electricity. It also allows an organization to run multiple operating systems on one machine.
3. Enabling Agility: Cloud virtualization enables far more flexible, efficient and agile operations. Scientific or more complex technical problems can be solved with the grid computing approach, which is achieved via cloud virtualization. It also reduces the need to recover data from corrupted devices.
4. Promotes high availability and disaster recovery.
5. Efficient and flexible data transfer: In Cloud virtualization, the users are not required to find
the hard drives or storages for the purpose of data transfer or retrieval. It can be done
almost at any time using cloud virtualization. It becomes very easy to locate the data and to
transfer or retrieve them.
6. Reduced risk of system failure: In cloud virtualization, the risk of system failure is greatly reduced, as the data stored in the cloud can be retrieved or transferred at any time from any device. In a traditional scenario, there is a chance that the server might crash while an operation is being performed, which may damage the organization's operational tasks. Clustering is also always enabled in cloud virtualization, so that even if one server crashes, another is always ready to take up the job.

15. Elaborate the Jericho Cloud Cube Model with its various dimension.

The Cloud Cube Model was designed by the Jericho Forum to help select cloud formations for secure collaboration. This cloud model helps IT managers and business leaders assess the benefits of cloud computing. The Cloud Cube Model looks at several different "cloud formations".

The Jericho Cloud Cube Model helps to categorize the cloud network based on four dimensions: Internal/External, Proprietary/Open, De-Perimeterized/Perimeterized, and Insourced/Outsourced.

Fig: Cloud Cube Model (Img src: https://data-flair.training)

The main focus is to protect and secure the cloud network. The Cloud Cube Model helps to select a cloud formation for secure collaboration. Security is an important concern for cloud customers, and most cloud providers understand it. The customer should also make sure that the selected cloud formation meets the regulatory and location requirements, and should consider where else they can move if a cloud provider stops providing the services. There are three service models: SaaS, PaaS and IaaS. Additionally, there are four deployment models: public cloud, private cloud, community cloud and hybrid cloud. These models are very flexible, agile and responsive. They are user-friendly and provide many benefits to the customers.

Dimensions of Cloud Cube Model

The four dimensions are explained below:


1. Internal/External
The most basic cloud forms are the external and internal cloud forms. The internal/external dimension defines the physical location of the data. It tells us whether the data exists inside or outside your organization's boundary. Here, data stored in a private cloud deployment will be considered internal, and data stored outside the organization's boundary will be considered external.

2. Proprietary/Open
The second cloud formation dimension is proprietary/open. The proprietary or open dimension describes the ownership of the cloud technology and interfaces. It also indicates the degree of interoperability while enabling data transportability between the system and other cloud forms.
The proprietary dimension means that the organization providing the service is securing and protecting the data under its own ownership.
The open dimension means using a technology for which there are multiple suppliers. Moreover, the user is not constrained in being able to share the data and collaborate with selected partners using the open technology.

3. De-Perimeterized/Perimeterized
The third cloud formation dimension is de-perimeterized/perimeterized. To reach the de-perimeterized form, the user needs a collaboration-oriented architecture and the Jericho Forum commandments. The perimeterized and de-perimeterized dimension tells us whether you are operating inside your traditional IT perimeter or outside it.

4. Insourced/Outsourced
The insourced and outsourced dimensions have two states in each of the eight cloud forms. In the outsourced dimension, the services are provided by a third party, whereas in the insourced dimension, the services are provided by the organization's own staff under its own control.

16. What are the involvements of social computing in the cloud? Explain the top ten obstacles to cloud computing.

Social computing is an area of computer science that is concerned with the intersection of social
behavior and computational systems. It is based on creating or recreating social conventions and
social contexts through the use of software and technology. Social cloud computing expands
cloud computing past the confines of formal commercial data centers operated by cloud
providers to include anyone interested in participating within the cloud services sharing
economy. Social cloud computing has been highlighted as a potential benefit to large-scale
computing, video gaming, and media streaming. One service that uses social cloud computing is
Subutai. Subutai allows peer-to-peer sharing of computing resources globally or within a select
permissioned network. Other examples are Facebook, Twitter, etc.

Top ten obstacles to the cloud computing are explained below:

1. Security of data: In terms of the security concerns of cloud technology, we do not find answers to some questions. Threats like website hacking and virus attacks are among the biggest data-security problems in cloud computing. Before utilizing cloud computing technology for a business, entrepreneurs should think about these things. Once we transfer important data of the organization to a third party, we should make sure to have a cloud security and management system.
2. Insufficiency of Resources and Expertise: The inadequacy of resources and expertise is one of the major cloud migration challenges. Although many IT employees are taking initiatives to improve their cloud computing expertise, employers still find it challenging to find employees with the expertise they require. Some organizations are also hoping to overcome the challenges of shifting to cloud computing by employing more workers with certifications or skills in cloud computing. Industry professionals also suggest training present employees to make them more productive and faster with the latest technology.
3. Complete Governance over IT Services: IT does not always have full control over provisioning, infrastructure delivery, and operation in this cloud-based world. This has increased the complexity for IT in providing compliance, governance, data quality, and risk management. To remove the uncertainties and difficulties of shifting to the cloud, IT should adapt its conventional control and management procedures to incorporate the cloud. As a result, the role of core IT teams in the cloud has evolved over the last few years.
4. Cloud Cost Management: Companies make several mistakes that can increase their expenses. Sometimes, IT professionals such as developers turn on a cloud instance intended to be used for a limited time and forget to turn it off again. And some companies find themselves hindered by hidden costs in cloud pricing packages that offer numerous discounts they might not be using.
5. Dealing with Multi-Cloud Environments: These days, most companies are not working on a single cloud. Long-term predictions about cloud computing point to even more difficulty for IT infrastructure teams. To overcome this challenge, professionals suggest best practices such as rethinking procedures, training staff, tooling, active vendor relationship management, and doing research.
6. Compliance: Compliance is also one of the challenges faced by cloud computing. For everyone using cloud storage or backup services, this is a problem. Whenever an organization transfers data from its internal storage to the cloud, it must comply with the laws and regulations of its industry.
7. Cloud Migration: Although releasing a new app in the cloud is a very simple procedure, transferring an existing application to a cloud computing environment is tougher. Organizations migrating their apps to the cloud have reported downtime during migration, issues syncing data before cutover, problems getting migration tools to work well, slow data migration, security configuration issues, and time-consuming troubleshooting.
8. Data lock-in: Data lock-in refers to the tight dependency of an organization's business on the software or hardware infrastructure of a cloud provider. Even though software stacks have improved interoperability among platforms, the storage APIs are still essentially proprietary, or at least have not been the subject of active standardization. This leads to customers not being able to extract their data and programs from one site to run on another, as in hybrid cloud computing or surge computing.
9. Immature Technology: Several cloud computing services are at the leading edge of technologies such as advanced big data analytics, virtual reality, augmented reality, machine learning, and artificial intelligence. The possible drawback of adopting this interesting new technology is that services sometimes fail to fulfill organizational expectations in terms of dependability, usability, and functionality. The only possible fixes for this issue are changing expectations, waiting for providers to improve their services, or trying to create your own solution.
10. Cloud Integration: Several companies, especially those with hybrid cloud environments, report issues getting their on-premises apps and tools and the public cloud to work together.

17. Discuss vulnerability assessment in cloud computing. Design a quick incident response plan for a cloud service.

A vulnerability assessment is a systematic review of security weaknesses in an information system. It evaluates whether the system is susceptible to any known vulnerabilities, assigns severity levels to those vulnerabilities, and recommends remediation or mitigation, if and whenever needed.
A vulnerability assessment is the process of defining, identifying, classifying, and prioritizing
vulnerabilities in computer systems, applications, and network infrastructures. Vulnerability
assessments also provide the organization doing the assessment with the necessary knowledge,
awareness, and risk backgrounds to understand and react to the threats to its environment.
Vulnerability assessments depend on discovering different types of system or network
vulnerabilities. This means the assessment process includes using a variety of tools, scanners,
and methodologies to identify vulnerabilities, threats, and risks.
Some of the different types of vulnerability assessment scans include the following (a minimal TCP port-scan sketch follows this list):

1. Network-based scans are used to identify possible network security attacks. This type of
scan can also detect vulnerable systems on wired or wireless networks.
2. Host-based scans are used to locate and identify vulnerabilities in servers, workstations or
other network hosts. This type of scan usually examines ports and services that may also be
visible to network-based scans. However, it offers greater visibility into the configuration
settings and patch history of scanned systems.
3. Wireless network scans of an organization's Wi-Fi networks usually focus on points of attack
in the wireless network infrastructure. In addition to identifying rogue access points, a
wireless network scan can also validate that a company's network is securely configured.
4. Application scans can be used to test websites to detect known software vulnerabilities and
incorrect configurations in network or web applications.
5. Database scans can be used to identify the weak points in a database so as to prevent
malicious attacks, such as SQL injection attacks.
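As a minimal illustration of a network-based scan, the sketch below performs a simple TCP connect scan of a few common ports; the target name is hypothetical, and such scans should only be run against systems you are authorized to assess:

import socket

TARGET = "scan-target.example.local"   # hypothetical host under assessment
COMMON_PORTS = [21, 22, 25, 80, 443, 3306, 3389]

open_ports = []
for port in COMMON_PORTS:
    try:
        # A successful TCP connection means the port is open.
        with socket.create_connection((TARGET, port), timeout=1.0):
            open_ports.append(port)
    except OSError:
        pass                            # closed, filtered, or host unreachable

print("Open ports:", open_ports or "none found")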

An incident response plan is a set of instructions to help IT staff detect, respond to, and recover
from network security incidents. These types of plans address issues like cybercrime, data loss,
and service outages that threaten daily work.
Below are the five tips to build a Cloud Incident Response Plan:

1. Understand the differences between your cloud and traditional environments
Implementing security measures to protect the cloud environment and its sensitive data will only get us so far. We should remember that what we monitor in a cloud environment is
different from traditional, on-premise environments. In the cloud, we’ll need to focus more
on applications, application programming interfaces, and user roles. Furthermore, consider
all the actions that incident responders need to take to successfully do their job within a
cloud environment. We’ll need to ensure they have visibility and proper access, or they’ll be
unable to find, fix, and ultimately eradicate infections.
2. Make cloud an integral part of your incident response
Threats to the cloud will persist, and incident responders will need to evolve to keep pace
with the rapidly evolving landscape. Keep incident response in mind when building cloud
environments, remembering that reactive incident response doesn’t work in the cloud. The
DevOps and cloud architecture teams should consider incident response requirements as
they set up cloud environments so that response is automated and coordinated.
3. Do not underestimate the pre-work
Cloud moves at warp speed, with everything happening much too fast for reactive incident
response to start when an alert comes in. Thinking about how we should approach incident
response in the cloud before an event happens will drastically close the gap in response
time, potentially going from days to seconds. The optimal infrastructure and tools need to
be there first, with the ability to see into environments. We suggest periodically doing
configuration checks and routine compromise assessments as good cloud security hygiene
practices.
4. Coordinate with other enterprise teams
Look at gaps in responsibilities or even geographies, identifying potential hurdles to achieve
a more coordinated response effort. Take the cloud architecture team, for example. Incident
response may not be a priority for them, yet they may have certain controls the incident
responders need to access or understand better. Breaking down traditional team silos and
establishing collaborative relationships between traditionally disparate groups will improve
the cloud security posture. If we know who to call and how to work together, all key players
can act faster and more effectively.
5. Get to know the service providers
Cloud service providers typically have incident response teams. We should carefully read the
service agreement and know which team (ours or the provider’s) handles each aspect of a
response. It’s important to find out exactly what they’ll alert on and how they’ll support our
team. By building a relationship with these critical points of contact, we can save valuable
time during an event.
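
As an illustration of the pre-work in tip 3, below is a hedged sketch of one automated
configuration check: flagging Amazon S3 buckets whose access control list grants read access to
everyone. It assumes boto3 is installed and AWS credentials are configured; a real compromise
assessment would cover many more services and settings.

import boto3

ALL_USERS_URI = "http://acs.amazonaws.com/groups/global/AllUsers"

def publicly_readable_buckets():
    """Return the names of S3 buckets whose ACL grants access to AllUsers."""
    s3 = boto3.client("s3")
    flagged = []
    for bucket in s3.list_buckets()["Buckets"]:
        acl = s3.get_bucket_acl(Bucket=bucket["Name"])
        for grant in acl["Grants"]:
            # The AllUsers group URI means "anyone on the internet".
            if grant.get("Grantee", {}).get("URI") == ALL_USERS_URI:
                flagged.append(bucket["Name"])
                break
    return flagged

if __name__ == "__main__":
    print("Publicly readable buckets:", publicly_readable_buckets())

Run periodically, for example from a scheduled job, a check like this turns part of the pre-work
into an automated, repeatable control rather than a manual review.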

18. Describe with example

V) Autonomic computing

Autonomic computing is the ability of a distributed system to manage its resources with little or
no human intervention. It involves intelligently adapting to the environment and to user requests
in such a way that the user does not even notice. The initiative was started in 2001 by IBM to help
reduce the complexity of managing large distributed systems. Autonomic cloud computing helps address
challenges related to Quality of Service (QoS) by ensuring Service-Level Agreements (SLAs) are met.
In addition, autonomic cloud computing helps reduce the carbon footprint of data centers and
cloud consumers by automatically scaling energy usage up or down based on cloud activity.

Autonomic computing has been widely adopted by academia, the information technology community, and
the business world. This is due to the increasing complexity of computing systems: users and
businesses demand more functionality and capability, which creates the need to quickly set up or
integrate new solutions into existing systems. With new devices and increased mobility,
interoperability is also a major concern. Cloud computing has provided a solution that gives users
a perception of limitless computing power and functionality by letting them subscribe to the
services they need on the go. These services are distributed and made available according to terms
agreed in SLAs, which ensure that QoS is maintained. This has pushed the industry toward solutions
with self-* properties: self-configuring, self-optimizing, self-healing, and self-protecting. Hence,
autonomic cloud computing. Autonomic cloud computing solutions have largely been deployed to solve
issues that arise from managing existing cloud services. Autonomic solutions have so far been based
on frameworks that support this dynamic nature but have not yet achieved full autonomic capabilities.

Autonomic computing may use artificial intelligence approaches for modeling and planning. For
example, machine learning techniques are often used to identify workload patterns; once a pattern
is identified, the right configuration of resources can be chosen for its execution.
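
As a concrete illustration, the following minimal sketch shows an autonomic, MAPE-style control loop
for a single service: Monitor a metric, Analyze it against an assumed SLA threshold, Plan a new
replica count, and Execute the change. The metric source and the scaling call are simulated
placeholders rather than a real cloud API.

import random
import time

SLA_MAX_LATENCY_MS = 200           # assumed SLA target for average latency
MIN_REPLICAS, MAX_REPLICAS = 1, 10

def monitor():
    # Placeholder: in practice this would query a monitoring system.
    return random.uniform(50, 400)

def execute(replicas):
    # Placeholder: in practice this would call the provider's scaling API.
    print(f"scaling service to {replicas} replica(s)")

def autonomic_loop(replicas=2, cycles=5, interval=1.0):
    for _ in range(cycles):
        latency = monitor()                             # Monitor
        if latency > SLA_MAX_LATENCY_MS:                # Analyze...
            replicas = min(replicas + 1, MAX_REPLICAS)  # ...and Plan: scale out
        elif latency < SLA_MAX_LATENCY_MS / 2:
            replicas = max(replicas - 1, MIN_REPLICAS)  # Plan: scale in
        execute(replicas)                               # Execute
        time.sleep(interval)

if __name__ == "__main__":
    autonomic_loop()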

W) Hyperconverged infrastructure

Hyperconverged infrastructure (HCI) is a software-defined, unified system that combines all the
elements of a traditional data center: storage, compute, networking, and management. This integrated
solution uses software and x86 servers to replace expensive, purpose-built hardware. With
hyperconverged infrastructure, we’ll decrease data center complexity and increase scalability.

Traditional three-tier architecture is expensive to build, complex to operate and difficult to scale. HCI
can help without losing control, increasing costs or compromising security.
Four tightly integrated software components make up a hyperconverged platform:

 Storage virtualization
 Compute virtualization
 Networking virtualization
 Advanced management capabilities including automation

The virtualization software abstracts and pools underlying resources, then dynamically allocates them to
applications running in VMs or containers. Configuration is based on policies aligned with the
applications, eliminating the need for complicated constructs like LUNs and volumes.
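
The snippet below is an illustrative sketch (not any vendor's API) of that idea: each application is
assigned a named storage policy, and the HCI software translates a provisioning request into replica
count, performance, and encryption settings served from the shared pool, rather than mapping the VM
to a hand-carved LUN or volume.

from dataclasses import dataclass

@dataclass
class StoragePolicy:
    replicas: int      # copies kept across nodes for protection
    iops_limit: int    # performance cap for the workload
    encryption: bool

# Hypothetical policies aligned with application tiers.
POLICIES = {
    "gold":   StoragePolicy(replicas=3, iops_limit=20000, encryption=True),
    "silver": StoragePolicy(replicas=2, iops_limit=5000, encryption=True),
    "bronze": StoragePolicy(replicas=1, iops_limit=1000, encryption=False),
}

def provision_vm_disk(vm_name, size_gb, policy_name):
    """Return the placement request the HCI layer would satisfy from the pool."""
    policy = POLICIES[policy_name]
    return {
        "vm": vm_name,
        "size_gb": size_gb,
        "replicas": policy.replicas,
        "iops_limit": policy.iops_limit,
        "encryption": policy.encryption,
    }

if __name__ == "__main__":
    print(provision_vm_disk("erp-db-01", 500, "gold"))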

The following can be done with HCI:

 Build a private cloud
 Extend to public cloud
 Achieve true hybrid cloud

HCI transforms the traditional IT operational model with simple, unified management of resources. This
results in:

 Increased IT Efficiency
Eliminate manual processes and the need for siloed operational expertise on the team. Now, a
single, converged IT team can monitor and manage resources and improve storage capabilities.
Plus, with HCI, IT resources are presented as pools of storage that can be dynamically allocated
to deliver the right amount of capacity, performance, and protection.
 Better Storage, Lower Cost
Reduce CAPEX by using a scale-up/scale-out architecture that requires only industry-
standard x86 servers, not expensive, purpose-built networking. Then simply add capacity as
needed with no disruptions. With HCI, we avoid vendor lock-in and eliminate overprovisioning,
meaning greatly reduced infrastructure spending across the data center.
 Greater Ability to Scale
Be more responsive to rapidly changing business needs. Set up hardware in a few
hours; spin up workloads in minutes. Accelerate the performance of business-critical
applications like relational databases. HCI scales better than traditional infrastructure. It enables
a future-proof IT environment that allows you to scale up and scale out to easily meet specific
application needs.

x) Cloud Services

Cloud services are services available via a remote cloud computing server rather than an on-site server.
These scalable solutions are managed by a third party and provide users with access to computing
services such as analytics or networking via the internet. Cloud services offer powerful benefits for the
enterprise, from greater productivity and enhanced efficiency to significant cost reductions and
simplified IT management. Enterprise cloud computing can also enable the mobile services that
employees increasingly use when accessing corporate data and applications.
All infrastructure, platforms, software, or technologies that users access through the internet without
requiring additional software downloads can be considered cloud computing services—including the
following as-a-Service solutions.

 Infrastructure-as-a-Service (IaaS) provides users with compute, networking, and storage resources.
 Platform-as-a-Service (PaaS) provides users with a platform on which applications can run, as
well as all the IT infrastructure required for it to run.
 Software-as-a-Service (SaaS) provides users with—essentially—a cloud application, the platform
on which it runs, and the platform’s underlying infrastructure.
 Function-as-a-Service (FaaS), an event-driven execution model, lets developers build, run, and
manage app packages as functions without maintaining the infrastructure (a minimal sketch follows
this list).
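
To make the FaaS model concrete, here is a minimal function sketch following the AWS Lambda handler
convention; the function name and the event shape are assumptions, and other FaaS platforms use a
similar "function plus triggering event" pattern. The platform, not the developer, provisions and
scales the servers that run it.

import json

def handler(event, context):
    # The platform invokes this once per event, e.g. an HTTP request or an
    # object landing in a storage bucket; no server management is required.
    name = (event or {}).get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }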

The benefits of cloud services depend on the industry. Many enterprises use the cloud for
applications such as managing demand spikes with scalability, customer relationship management,
backup and recovery, and big data analytics.

Enterprises can reap the benefits of better efficiency with cloud services, which allow for higher
speeds, better security, flexible scaling, and more.

y) Zookeeper

ZooKeeper is an open-source Apache project that provides a centralized service for providing
configuration information, naming, synchronization, and group services over large clusters in distributed
systems. The goal is to make these systems easier to manage with improved, more reliable propagation
of changes.

ZooKeeper is a centralized service for maintaining configuration information, naming, providing
distributed synchronization, and providing group services. All these kinds of services are used in
some form or another by distributed applications. Each time they are implemented, a lot of work
goes into fixing the bugs and race conditions that are inevitable. Because of the difficulty of
implementing these kinds of services, applications initially tend to skimp on them, which makes
them brittle in the presence of change and difficult to manage. Even when done correctly, different
implementations of these services lead to management complexity when the applications are deployed.

ZooKeeper is used by companies including Yelp, Rackspace, Yahoo!, Odnoklassniki, Reddit, NetApp
SolidFire, Facebook, Twitter and eBay as well as open-source enterprise search systems like Solr.

Some of the prime features of Apache ZooKeeper are:

 Reliable System: This system is very reliable as it keeps working even if a node fails.
 Simple Architecture: The architecture of ZooKeeper is quite simple as there is a shared
hierarchical namespace which helps coordinating the processes.
 Fast Processing: ZooKeeper is especially fast in "read-dominant" workloads (i.e. workloads in
which reads are much more common than writes).
 Scalable: The performance of ZooKeeper can be improved by adding nodes.
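
Below is a small usage sketch of ZooKeeper as a centralized configuration and naming service, using
the kazoo Python client. It assumes a ZooKeeper server is reachable at 127.0.0.1:2181; the znode
path and value are illustrative.

from kazoo.client import KazooClient

zk = KazooClient(hosts="127.0.0.1:2181")
zk.start()

# Store a configuration value under a hierarchical path (a "znode").
zk.ensure_path("/myapp/config")
if not zk.exists("/myapp/config/db_url"):
    zk.create("/myapp/config/db_url", b"postgres://db.internal:5432/app")

# Any process in the cluster can read the value back (or watch it for changes).
value, stat = zk.get("/myapp/config/db_url")
print("db_url =", value.decode(), "(version", stat.version, ")")

zk.stop()
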
z) SaaS security

Software as a service (SaaS) Security refers to securing user privacy and corporate data in subscription-
based cloud applications. SaaS applications carry a large amount of sensitive data and can be accessed
from almost any device by a mass of users, thus posing a risk to privacy and sensitive information. SaaS
is one of several categories of cloud subscription services, including platform-as-a-service and
infrastructure-as-a-service. SaaS has become increasingly popular because it saves organizations from
needing to purchase servers and other infrastructure or maintain in-house support staff. Instead,
the SaaS provider hosts the software and handles its security and maintenance. Some well-known
SaaS applications include Microsoft Office 365, Salesforce.com, Cisco Webex, Box, and Adobe Creative
Cloud. Most enterprise software vendors also offer cloud versions of their applications, such as Oracle
Financials Cloud.

Benefits of software-as-a-service:

 On-demand and scalable resources: Organizations can purchase additional storage, end-user
licenses, and features for their applications on an as-needed basis.
 Fast implementation: Organizations can subscribe almost instantly to a SaaS application and
provision employees, unlike on-premises applications that require more time.
 Easy upgrades and maintenance: The SaaS provider handles patches and updates, often without
the customer being aware of it.
 No infrastructure or staff costs: Organizations avoid paying for in-house hardware and software
licenses with perpetual ownership. They also do not need on-site IT staff to maintain and
support the application. This enables even small organizations to use enterprise-level
applications that would be costly for them to implement.
