
Cloud Computing

Cloud computing is defined as the storing and accessing of data and computing services over the
internet; no data is stored on your personal computer. It is the on-demand availability of
computer services such as servers, data storage, networking, and databases. The main purpose
of cloud computing is to give many users access to data centers; users can also access data
from a remote server. Cloud computing is the delivery of different services through the Internet.
These resources include tools and applications such as data storage, servers, databases,
networking, and software.
Whenever you travel by bus or train, you buy a ticket for your destination and stay in your seat
until you arrive. Other passengers also buy tickets and travel in the same bus with you, and it
hardly matters to you where they are going. When your stop comes, you get off the bus, thanking
the driver. Cloud computing is just like that bus: it carries data and information for many
different users and allows each of them to use its service at minimal cost.
Rather than keeping files on a proprietary hard drive or local storage device, cloud-based
storage makes it possible to save them to a remote database. As long as an electronic device
has access to the web, it has access to the data and the software programs to run it. Cloud
computing is a popular option for people and businesses for a number of reasons including cost
savings, increased productivity, speed and efficiency, performance, and security.
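As a rough, hedged illustration of this idea, the sketch below saves a file to a remote object
store instead of a local drive and then retrieves it from another device. It assumes the AWS SDK
for Python (boto3) and a placeholder bucket name; neither is prescribed by this text.

import boto3

# Sketch only: the bucket name "my-bucket" and file names are placeholders,
# and cloud credentials are assumed to be configured on the machine.
s3 = boto3.client("s3")
s3.upload_file("report.pdf", "my-bucket", "backups/report.pdf")   # store remotely

# Any internet-connected device with access rights can now fetch the same object.
s3.download_file("my-bucket", "backups/report.pdf", "report_copy.pdf")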
Cloud computing is a virtualization-based technology that allows us to create, configure, and
customize applications via an internet connection. Cloud technology includes a development
platform, hard disk, software applications, and databases.
The term cloud refers to a network or the Internet. We can say that the cloud is something
present at a remote location. It is a technology that uses remote servers on the internet to store,
manage, and access data online rather than on local drives. The data can be anything, such as
files, images, documents, audio, video, and more. The cloud can provide services over public
and private networks, i.e., WAN, LAN or VPN (Virtual Private Network).

Examples of Cloud Computing Services


Google Cloud, Amazon Web Services (AWS), IBM Cloud

Why the Name Cloud?


Why is cloud computing represented by a cloud symbol?
To find the answer we have to think back to the early days of network design. The role of the
network engineer was to design a network that would function properly. Time was dedicated to
understanding what devices were on the network and how they were connected, managed,
controlled, and so on.
Some networks connected to other networks or to the internet. To illustrate this connection as
part of the design, engineers needed a way to indicate that there was a network while also
making clear that they were not trying to describe it, because it was beyond what they knew.
They settled on the cloud symbol as a metaphor for the internet, based on a cloud drawing used
in the past to represent the telephone network.
The term "cloud" came from a network design used by network engineers to represent the
location of various network devices and their interconnections; the shape of this design
resembled a cloud.
The name cloud computing was inspired by the cloud symbol that is often used to represent the
Internet in flow charts and diagrams. Cloud computing is a general term for anything that
involves delivering hosted services over the Internet. The word cloud is used as a metaphor for
the Internet, based on the standardized use of a cloud-like shape to denote a network on
telephony schematics and later to depict the Internet in computer network diagrams as an
abstraction of the underlying infrastructure it represents.
One relevant quote "Cloud comes from the early days of Internet where we drew the network
as a cloud... we didn't care where the message went - the cloud hid it from us", Kevin Marks,
Google.
Cloud computing is named as such because the information being accessed is found remotely
in the cloud or a virtual space. Companies that provide cloud services enable users to store files
and applications on remote servers and then access all the data via the Internet. This means the
user is not required to be in a specific place to gain access to it, allowing the user to work
remotely.

The following operations can be performed using cloud computing:
• Developing new applications and services
• Storage, backup, and recovery of data
• Hosting blogs and websites
• Delivery of software on demand
• Analysis of data
• Streaming video and audio
Applications such as e-mail, web conferencing, and customer relationship management (CRM)
run in the cloud.
Cloud computing refers to manipulating, configuring, and accessing hardware and software
resources remotely. It offers online data storage, infrastructure, and applications.
Cloud computing offers platform independence, as the software does not need to be installed
locally on the PC. Hence, cloud computing makes our business applications mobile and
collaborative.

Why Cloud Computing?


Small as well as large IT companies traditionally provide their own IT infrastructure. That
means every IT company needs a server room, which is a basic requirement. In that server room
there must be a database server, a mail server, networking, firewalls, routers, modems, switches,
high-speed internet, and maintenance engineers. Establishing such IT infrastructure costs a lot
of money. To overcome all these problems and reduce IT infrastructure cost, cloud computing
came into existence.
With the increase in computer and mobile users, data storage has become a priority in all fields.
Large and small-scale businesses today focus on their data and spend a huge amount of money
to maintain it. This requires strong IT support and a storage hub. Not all businesses can afford
the high cost of in-house IT infrastructure and backup support services; for them, cloud
computing is a cheaper solution. Its efficiency in storing data, its computational capabilities,
and its lower maintenance cost have attracted even bigger businesses as well.
Cloud computing decreases the hardware and software demands on the user's side. The only
thing the user must be able to run is the cloud computing system's interface software, which
can be as simple as a web browser; the cloud network takes care of the rest. We have all
experienced cloud computing at some point: some of the popular cloud services we have used,
or are still using, are mail services such as Gmail, Hotmail or Yahoo. While accessing an e-mail
service, our data is stored on a cloud server and not on our computer. The technology and
infrastructure behind the cloud are invisible. It matters little whether cloud services are based
on HTTP, XML, PHP or other specific technologies, as long as they are user-friendly and
functional. An individual user can connect to a cloud system from his/her own devices, such as
a desktop, laptop or mobile.
Cloud computing helps small businesses convert their maintenance cost into profit. With an
in-house IT server, you have to pay a lot of attention and ensure that there are no flaws in the
system so that it runs smoothly, and in case of any technical fault you are completely
responsible; repairs demand a lot of attention, time and money. In cloud computing, by contrast,
the service provider takes complete responsibility for complications and technical faults.
Cloud computing provides the means by which we can access applications as utilities over the
internet. It allows us to create, configure, and customize business applications online.

History of Cloud Computing


Before cloud computing emerged, there was client/server computing, which is basically
centralized storage in which all the software applications, all the data and all the controls reside
on the server side. If a single user wants to access specific data or run a program, he/she needs
to connect to the server, gain appropriate access, and only then do his/her work. Later,
distributed computing came into the picture, where all the computers are networked together
and share their resources when needed. On the basis of these forms of computing, the concept
of cloud computing emerged and was implemented later.
1960s - Mainframe computers were huge and could occupy an entire room. Due to the cost of
buying and maintaining mainframes, organizations couldn't afford to purchase one for each
user. The solution to that problem was "time sharing", in which multiple users shared access to
data and CPU time.
1990s - Telecommunication companies started offering virtual private network connections,
which made it possible to serve more users through shared access to the same physical
infrastructure. This change enabled traffic to be shifted to allow for better network balance and
more control over bandwidth usage. Virtualization for PC-based systems also began.
1997 - The term "cloud computing" was coined by the University of Texas Professor Ramnath
Chellappa in a talk on a "new computing paradigm."
2006 - Amazon created Amazon Web Services (AWS) and introduced its Elastic Compute
Cloud (EC2).
2009 - Google and Microsoft entered the playing field. Google App Engine brought low-cost
computing and storage services, and Microsoft followed suit with Windows Azure.

Characteristics of Cloud Computing


The characteristics of cloud computing are given below:
Agility
The cloud works in a distributed computing environment. It shares resources among users and
works very fast.
High Availability and Reliability
Server availability is high and more reliable because the chances of infrastructure failure are
minimal.
High Scalability
Cloud offers "on-demand" provisioning of resources on a large scale, without engineers having
to provision for peak loads.
Multi-Sharing
With the help of cloud computing, multiple users and applications can work more efficiently,
with cost reductions, by sharing common infrastructure. Cloud computing allows multiple
tenants to share a pool of resources; a single physical instance of hardware, database and basic
infrastructure can be shared.
Device and Location Independence
Since cloud computing is completely web based, it can be accessed from anywhere and at any
time. Cloud computing enables users to access systems using a web browser regardless of their
location or the device they use, e.g. a PC or mobile phone. As the infrastructure is off-site
(typically provided by a third party) and accessed via the Internet, users can connect from
anywhere.
Maintenance
Maintenance of cloud computing applications is easier, since they do not need to be installed
on each user's computer and can be accessed from different places. This also reduces cost.
Low Cost
By using cloud computing, costs are reduced because an IT company need not set up its own
infrastructure and pays only for the resources it uses.
Services in pay-per-use mode
Application Programming Interfaces (APIs) are provided to users so that they can access
services on the cloud through these APIs and pay charges according to their usage of the
services.
On Demand Self Service
Cloud computing allows users to use web services and resources on demand. One can log in
to a website at any time and use them.
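As an illustrative sketch of on-demand self-service through an API, the snippet below provisions
a single virtual server without human interaction with the provider. It assumes the boto3 library
and a placeholder machine image ID; real values depend on the provider and account.

import boto3

ec2 = boto3.client("ec2")
# Sketch only: "ami-12345678" is a placeholder image ID, not a real one.
response = ec2.run_instances(
    ImageId="ami-12345678",
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])   # identifier of the newly provisioned server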

Advantages of Cloud Computing

Back-up and restore data


Once data is stored in the cloud, it is easier to back up and restore that data using the cloud.
Improved collaboration
Cloud applications improve collaboration by allowing groups of people to quickly and easily
share information in the cloud via shared storage.
Excellent accessibility
The cloud allows us to quickly and easily access stored information anywhere in the world, at
any time, using an internet connection. An internet cloud infrastructure increases organizational
productivity and efficiency by ensuring that our data is always accessible.
Low cost
There is no requirement for high-powered computers and technology because the applications
run in the cloud, not on the user's PC. The cloud reduces software costs because there is no
need to purchase software for every computer in an organization. Cloud computing reduces
both hardware and software maintenance costs for organizations.
Mobility
Cloud computing allows us to easily access all cloud data via mobile.
Pay-Per-Use model
Cloud computing offers Application Programming Interfaces (APIs) through which users
access services on the cloud and pay charges according to their usage of the service.
Unlimited storage capacity
The cloud offers us a huge amount of storage capacity for storing our important data such as
documents, images, audio, video, etc. in one place.
Increased computing power
Cloud servers have very high capacity for running tasks and processing applications.
Updating
Instant software updates are possible, and users do not face the dilemma of choosing between
obsolete software and expensive upgrades.

Disadvantages of Cloud Computing


Internet speed - Cloud technology requires a high-speed internet connection, as web-based
applications often require a large amount of bandwidth.
Constant Internet Connection - It is impossible to use cloud infrastructure without the
Internet. To access any application or cloud storage, a constant internet connection is required.
Security - With cloud computing, all the data is stored in the cloud, and security is its most
significant disadvantage. Data on the cloud is not fully secure: there is a chance of data loss
because an unauthorized user may access the user's data, and while sending data to the cloud
there is a chance that your organization's information is hacked.
Cloud computing is the on-demand availability of computer system resources, especially data
storage (cloud storage) and computing power, without direct active management by the user.
The term is generally used to describe data centers available to many users over the Internet.

Cloud computing metaphor: the group of networked elements providing services need not be
individually addressed or managed by users; instead, the entire provider-managed suite of
hardware and software can be thought of as an amorphous cloud.

Computing
The ACM (Association for Computing Machinery) Computing Curricula 2005 and 2020
defined "computing" as follows:
"In a general way, we can define computing to mean any goal-oriented activity requiring,
benefiting from, or creating computers. Thus, computing includes designing and building
hardware and software systems for a wide range of purposes; processing, structuring, and
managing various kinds of information; doing scientific studies using computers; making
computer systems behave intelligently; creating and using communications and entertainment
media; finding and gathering information relevant to any particular purpose, and so on. The list
is virtually endless, and the possibilities are vast."
NIST (National Institute of Standards and Technology) Definition of Cloud Computing
Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access
to a shared pool of configurable computing resources (e.g., networks, servers, storage,
applications, and services) that can be rapidly provisioned and released with minimal
management effort or service provider interaction. This cloud model is composed of five
essential characteristics, three service models, and four deployment models.

Essential Characteristics:
On-demand self-service. A consumer can unilaterally provision computing capabilities, such
as server time and network storage, as needed automatically without requiring human
interaction with each service provider.
Broad network access. Capabilities are available over the network and accessed through
standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g.,
mobile phones, tablets, laptops, and workstations).
Resource pooling. The provider’s computing resources are pooled to serve multiple consumers
using a multi-tenant model, with different physical and virtual resources dynamically assigned
and reassigned according to consumer demand. There is a sense of location independence in
that the customer generally has no control or knowledge over the exact location of the provided
resources but may be able to specify location at a higher level of abstraction (e.g., country,
state, or datacenter). Examples of resources include storage, processing, memory, and network
bandwidth.
Rapid elasticity. Capabilities can be elastically provisioned and released, in some cases
automatically, to scale rapidly outward and inward commensurate with demand. To the
consumer, the capabilities available for provisioning often appear to be unlimited and can be
appropriated in any quantity at any time.
Measured service. Cloud systems automatically control and optimize resource use by
leveraging a metering capability at some level of abstraction appropriate to the type of service
(e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be
monitored, controlled, and reported, providing transparency for both the provider and
consumer of the utilized service.
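A minimal sketch of how a measured, pay-per-use service might translate metered usage into a
charge is shown below. The rates and usage figures are invented purely for illustration; they are
not any provider's pricing.

# Sketch only: rates are made-up illustrative numbers.
def monthly_bill(cpu_hours, storage_gb, gb_transferred,
                 cpu_rate=0.05, storage_rate=0.02, transfer_rate=0.09):
    # Pay-per-use: the charge is proportional to the metered resource usage.
    return (cpu_hours * cpu_rate
            + storage_gb * storage_rate
            + gb_transferred * transfer_rate)

print(monthly_bill(cpu_hours=720, storage_gb=100, gb_transferred=50))   # 42.5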

Trends in Computing
• Distributed Computing
• Grid Computing
• Cluster Computing
• Utility Computing
• Cloud Computing
Centralized System
Centralized systems are systems that use a client/server architecture where one or more client
nodes are directly connected to a central server. This is the most commonly used type of system
in many organisations, where a client sends a request to a company server and receives a
response.

Characteristics of Centralized System


• Presence of a global clock: As the entire system consists of a central node (a server/master)
and many client nodes (computers/slaves), all client nodes synchronize with the global clock
(the clock of the central node).
• One single central unit: A single central unit serves/coordinates all the other nodes in the
system.
• Dependent failure of components: Central node failure causes the entire system to fail. This
makes sense because when the server is down, no other entity is there to send or receive
requests and responses.
Architecture of Centralized System - Client-Server architecture. The central node that serves
the other nodes in the system is the server node and all the other nodes are the client nodes.
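A minimal sketch of this client-server architecture, using plain Python sockets, is given below.
The port number and messages are arbitrary; the point is that every client talks to one central
server node.

# --- server.py (the single central node) ---
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("localhost", 9000))          # arbitrary port, for illustration only
server.listen()
while True:
    conn, addr = server.accept()          # wait for a client request
    request = conn.recv(1024).decode()
    conn.sendall(("reply to: " + request).encode())
    conn.close()

# --- client.py (one of many client nodes) ---
import socket

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("localhost", 9000))       # every client depends on this one server
client.sendall(b"GET /data")
print(client.recv(1024).decode())
client.close()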
Disadvantages of Centralized System
• Highly dependent on network connectivity – the system can fail if the nodes lose
connectivity, as there is only one central node.
• No graceful degradation of the system – abrupt failure of the entire system.
• Less possibility of data backup – if the server node fails and there is no backup, you lose the
data straight away.
• Difficult server maintenance – there is only one server node, and for availability reasons it is
inefficient and unprofessional to take the server down for maintenance.
Decentralized System
In decentralized systems, every node makes its own decision. The final behavior of the system
is the aggregate of the decisions of the individual nodes. There is no single entity that receives
and responds to the request.

Example – Bitcoin. Let's take Bitcoin as an example because it is the most popular use case of
decentralized systems. No single entity/organisation owns the Bitcoin network; the network is
the sum of all the nodes that talk to each other to maintain the amount of bitcoin every account
holder has.
Characteristics of Decentralized System
• Lack of a global clock: Every node is independent of the others and hence runs and follows
its own clock.
• Multiple central units (computers/nodes/servers): More than one central unit can listen for
connections from other nodes.
• Dependent failure of components: the failure of one central node causes only a part of the
system to fail, not the whole system.
Architecture of Decentralized System
Peer-to-peer architecture – all nodes are peers of each other; no one node has supremacy over
the others.
Master-slave architecture – one node can become a master by voting and help coordinate a part
of the system, but this does not mean that node has supremacy over the nodes it coordinates.
Applications of Decentralized System
• Private networks – peer nodes joined with each other to form a private network.
• Cryptocurrency – nodes joined to become part of a system in which digital currency is
exchanged without any central trace of who sent what to whom.
Use Cases
• Blockchain
• Decentralized databases – the entire database is split into parts and distributed to different
nodes for storage and use. For example, records with names starting from 'A' to 'K' are kept
on one node, 'L' to 'N' on a second node and 'O' to 'Z' on a third node (a minimal partitioning
sketch follows this list).
• Cryptocurrency
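The partitioning idea in the decentralized-database example above can be sketched as a small
routing function; the node names are hypothetical.

# Sketch only: routes a record to a node by the first letter of its name,
# mirroring the A-K / L-N / O-Z split described above. Node names are placeholders.
def node_for(record_name):
    first = record_name[0].upper()
    if "A" <= first <= "K":
        return "node-1"
    if "L" <= first <= "N":
        return "node-2"
    return "node-3"

print(node_for("Alice"))   # node-1
print(node_for("Maria"))   # node-2
print(node_for("Zhang"))   # node-3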

Distributed Systems
A distributed system is a collection of independent computers that appears to its users as a
single coherent system.

Distributed computing is a field of computer science that studies distributed systems. A
distributed system consists of multiple autonomous computers that communicate through a
computer network. The computers interact with each other in order to achieve a common goal.
Distributed computing also refers to the use of distributed systems to solve computational
problems. In distributed computing, a problem is divided into many tasks, each of which is
solved by one or more computers. A distributed system is a collection of independent
computers, interconnected via a network, capable of collaborating on a task. Distributed
computing has become increasingly popular due to advancements that have made both
machines and networks cheaper and faster.
In a distributed system, there are several autonomous computational entities, each of which has
its own local memory. The entities communicate with each other by message passing. The
processors communicate with one another through various communication lines, such as high-
speed buses or telephone lines.
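A minimal sketch of message passing between autonomous entities, each with its own local
memory, can be shown with two operating-system processes and queues (Python standard
library only); the task names are illustrative.

from multiprocessing import Process, Queue

def worker(inbox, outbox):
    # This entity has its own local memory; it only learns about work via messages.
    task = inbox.get()
    outbox.put("processed " + task)

if __name__ == "__main__":
    inbox, outbox = Queue(), Queue()
    p = Process(target=worker, args=(inbox, outbox))
    p.start()
    inbox.put("task-42")       # send a message to the other entity
    print(outbox.get())        # receive its reply: "processed task-42"
    p.join()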
Example
• Intranets / Workgroups
• Automatic Teller Machine (bank) Network
• Internet/World-Wide Web

Computers in a Distributed System


• Workstations: Computers used by end-users to perform computing
• Server Systems: Computers which provide resources and services
• Personal Assistance Devices: Handheld computers connected to the system via a wireless
communication link.

Characteristics of Distributed System


Fault tolerance
• When one or more nodes fail, the whole system can still work, apart from some loss of
performance.
• The status of each node needs to be checked.
Each node plays a partial role
• Each computer has only a limited, incomplete view of the system.
• Each computer may know only one part of the input.
Resource sharing
• Each user can share the computing power and storage resources in the system with other
users.
Load sharing
• Dispatching tasks to different nodes helps share the load across the whole system.
Easy to expand
• Adding nodes should take little time, ideally none.
Centralized vs. Distributed Computing

Early computing was performed on a single processor. Uni-processor computing can be called
centralized computing.

Centralized data networks are those that maintain all the data in a single computer and location;
to access the information, you must access the main computer of the system, known as the
"server".
On the other hand, a distributed data network works as a single logical data network, installed
on a series of computers (nodes) in different geographic locations that are not connected to a
single processing unit but are fully interconnected to provide integrity of, and accessibility to,
the information from any point. In this system all the nodes contain information and all the
clients of the system are on an equal footing. In this way, distributed data networks can perform
autonomous processing.
Maintenance - Centralized networks are the easiest to maintain, since they have only one point
of failure; this is not the case for distributed ones, which are more difficult to maintain.
Stability - Centralized networks are very unstable, since any problem that affects the central
server can generate chaos throughout the system. Distributed networks are more stable, because
the whole of the system's information is stored across a large number of nodes that maintain
equal status with each other.
Security - Distributed networks have a higher level of security, since a malicious attack would
have to target a large number of nodes at the same time. Because the information is distributed
among the nodes of the network, a legitimate change is reflected in the rest of the nodes of the
system, which accept and verify the new information; if an illegitimate change is made, the rest
of the nodes can detect it and will not validate the information. This consensus between nodes
protects the network from deliberate attacks and accidental changes to information.
Speed - Distributed systems have an advantage over centralized systems in terms of network
speed: since the information is not stored in a central location, a bottleneck is less likely,
whereas in a centralized system the number of people attempting to access a server can exceed
what it can support, causing waiting times and slowing the system down.
Scalability - Centralized systems tend to present scalability problems since the capacity of the
server is limited and cannot support infinite traffic. Distributed systems have greater scalability,
due to the large number of nodes that support the network.
Availability – In centralized systems, if there are many requests, the server can break down and
stop responding. Distributed systems, by contrast, can withstand significant pressure on the
network: all the nodes in the network hold the data, so requests are distributed among the nodes.
The pressure therefore does not fall on a single computer but on the entire network, and the
total availability of the network is much greater than in the centralized case.
Reliability – In a centralized system, server failure can cause failure of the entire system. In a
distributed system, if one machine crashes, the system as a whole can still survive. Higher
availability and improved reliability can thus be achieved in distributed systems.
Distributed Applications
Distributed applications consist of a set of processes that are distributed across a network of
machines and work together as an ensemble to solve a common problem. Several applications
coordinate among themselves to address a particular problem.
Both in the past and today, most applications are of the client-server type, with resource
management centralized at the server, and the aim is to make this more distributed. Peer-to-peer
computing represents a movement towards more truly distributed applications.
In the client-server model, different clients invoke a particular server:

Peer-to-Peer
Peer-to-peer (P2P) computing or networking is a distributed application architecture that
partitions tasks or workloads between peers. Peers are equally privileged, equipotent
participants in the application. They are said to form a peer-to-peer network of nodes.
Peers make a portion of their resources, such as processing power, disk storage or network
bandwidth, directly available to other network participants, without the need for central
coordination by servers or stable hosts. Peers are both suppliers and consumers of resources,
in contrast to the traditional client-server model in which the consumption and supply of
resources is divided.
A peer-to-peer (P2P) network in which interconnected nodes ("peers") share resources amongst
each other without the use of a centralized administrative system:
A network based on the client-server model, where individual clients request services and
resources from centralized servers:

A typical distributed application based on peer processes:

Different applications run on different peers, and these applications talk to each other to carry
out a particular job.
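A minimal, purely local sketch of the peer-to-peer idea is shown below: every node both offers
and requests resources, and there is no central server. The class and file names are invented for
illustration.

class Peer:
    def __init__(self, name):
        self.name = name
        self.files = {}          # resources this peer supplies to others
        self.neighbours = []     # peers it knows about (no central registry)

    def share(self, filename, data):
        self.files[filename] = data

    def request(self, filename):
        # Ask neighbouring peers directly, without any central coordination.
        for peer in self.neighbours:
            if filename in peer.files:
                return peer.files[filename]
        return None

a, b = Peer("A"), Peer("B")
a.neighbours.append(b)
b.neighbours.append(a)
b.share("song.mp3", b"...")
print(a.request("song.mp3") is not None)   # True: A obtained the file from peer B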
Grid Computing
Grid computing is a group of networked computers which work together as a virtual
supercomputer to perform large tasks, such as analysing huge sets of data or weather modeling.
The term grid computing originated in the early 1990s as a metaphor for making computer
power as easy to access as an electric power grid. An electrical grid is an interconnected
network for delivering electricity from producers to consumers. It consists of generating
stations, electrical substations, high voltage transmission lines, distribution lines that connect
individual customers.
Grid computing is the use of widely distributed computer resources to reach a common goal.
Grid computing is distinguished from conventional high-performance computing systems such
as cluster computing in that grid computers have each node set to perform a different
task/application. Grid computers also tend to be more heterogeneous and geographically
dispersed (thus not physically coupled) than cluster computers. Although a single grid can be
dedicated to a particular application, commonly a grid is used for a variety of purposes. Grids
are a form of distributed computing whereby a "super virtual computer" is composed of many
networked loosely coupled computers acting together to perform large tasks.
Grid computing can be defined as a network of computers working together to perform a task
that would be difficult for a single machine. All machines on that network work under the same
protocol to act as a virtual supercomputer. The tasks they work on may include analysing huge
datasets or simulating situations that require high computing power. Computers on the network
contribute resources like processing power and storage capacity to the network.
Grid computing is a subset of distributed computing, where a virtual supercomputer comprises
machines on a network connected by some bus, mostly Ethernet or sometimes the Internet. It
can also be seen as a form of parallel computing where, instead of many CPU cores on a single
machine, it uses multiple cores spread across various locations.
Grid computing is also a form of networking. Unlike conventional networks that focus on
communication among devices, grid computing harnesses the unused processing cycles of all
computers in a network to solve problems too intensive for any stand-alone machine.
Grid computing represents a distributed computing approach that attempts to achieve high
computational performance by a non-traditional means. Rather than achieving high
performance computational needs by having large clusters of similar computing resources or a
single high-performance system, such as a supercomputer, grid computing attempts to harness
the computational resources of a large number of dissimilar devices. Grid computing typically
leverages the spare CPU cycles of devices that are not currently needed for a system's own
purposes and focuses them on the particular goal of the grid. While the few spare cycles from
each individual computer might not mean much to the overall task, in aggregate the cycles are
significant.
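The core idea of splitting one large job into independent pieces and aggregating the partial
results can be sketched as below. A local process pool stands in for the grid nodes purely for
illustration; a real grid would dispatch the chunks to remote machines through grid middleware.

from concurrent.futures import ProcessPoolExecutor

def analyse(chunk):
    # Stand-in for a compute-heavy sub-task that one grid node would run.
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    chunks = [data[i::4] for i in range(4)]            # divide the problem into tasks
    with ProcessPoolExecutor(max_workers=4) as pool:   # stand-in for four grid nodes
        partials = list(pool.map(analyse, chunks))
    print(sum(partials))                               # aggregate the partial results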
Grid computing is a computing infrastructure that provides dependable, consistent, pervasive
and inexpensive access to computational capabilities.
Grid computing enables the virtualization of distributed computing and data resources such as
processing, network bandwidth and storage capacity to create a single system image, granting
users and applications seamless access to vast IT capabilities. Just as an Internet user views a
unified instance of content via the Web, a grid user essentially sees a single, large virtual
computer.

Electrical Power Grid Analogy


Electrical power grid – Users (or electrical appliances) get access to electricity through wall
sockets with no care or consideration for where or how the electricity is actually generated.
The power grid links together power plants of many different kinds.
Computational grid – Users (or client applications) gain access to computing resources
(processors, storage, data, applications and so on) as needed with little or no knowledge of
where those resources are located or what the underlying technologies, hardware, operating
system and so on are. The computational grid links together computing resources (PCs,
workstations, servers, storage elements) and provides the mechanism needed to access them.

Characteristics of Grid Computing


Large scale: A grid must be able to deal with a number of resources ranging from just a few
to millions.
Geographical distribution: Grid's resources may be located at distant places.
Heterogeneity: A grid hosts both software and hardware resources that can be varied ranging
from data, files, software components or programs to sensors, scientific instruments, display
devices, personal digital organizers, computers, super-computers and networks.
Resource sharing: Resources in a grid belong to many different organizations that allow other
organizations (i.e. users) to access them.
Multiple administrations: Each organization may establish different security and
administrative policies under which their owned resources can be accessed and used.
Resource coordination: Resources in a grid must be coordinated in order to provide
aggregated computing capabilities.
Transparent access: A grid should be seen as a single virtual computer.
Dependable access: A grid must assure the delivery of services under established Quality of
Service (QoS) requirements.
Consistent access: A grid must be built with standard services, protocols and interfaces thus
hiding the heterogeneity of the resources while allowing its scalability.
Pervasive access: The grid must grant access to available resources by adapting to a dynamic
environment in which resource failure is commonplace.
Need for Grid Computing
Utilising Underutilised Resources - In most organisations, many computing resources are
idle and underutilised most of the time. Since these idle times are wasted and bring no profit to
the organisation, grid computing provides a solution for exploiting underutilised resources. In
addition to processing resources, computing resources often also have a large amount of unused
storage capacity. Grid computing allows these unused capacities to be treated as a single virtual
storage medium, meeting the need for huge storage capacity within a particular application.
The performance of such an application is thus improved compared with running it on a single
computer.
Parallel CPU Capacity - The possibility of applying massive parallel CPU activity within an
application is one of the main exciting features of grid computing.
Resource Balancing – Grid computing groups multiple heterogeneous resources into a single
virtual resource. Furthermore, the grid also facilitates balancing these resources depending on
the requirements of the tasks. As a result, appropriate resources are selected based on the time
of execution and the priority of each task.
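A minimal sketch of this kind of resource balancing is shown below: tasks are taken in priority
order and each one goes to the currently least-loaded node. The node names, loads and priorities
are invented for illustration.

# Sketch only: loads, costs and priorities are illustrative numbers.
nodes = {"node-1": 0.2, "node-2": 0.3, "node-3": 0.4}   # current load of each node

def select_node(task_cost):
    best = min(nodes, key=nodes.get)    # choose the least-loaded node
    nodes[best] += task_cost            # account for the newly assigned task
    return best

tasks = [("render", 3), ("backup", 1), ("report", 2)]   # (name, priority)
for name, _ in sorted(tasks, key=lambda t: t[1], reverse=True):
    print(name, "->", select_node(0.2))                 # highest priority dispatched first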
The benefits of grid computing can be categorised into:
a) Business benefits
• Faster time to obtain results
• Increased productivity

b) Technology benefits
• Optimise existing infrastructure
• Increase access to data and collaboration

Who uses Grid Computing?


Type of Grids
Grids have been divided into a number of types on the basis of their use.
Computational Grid: These grids provide secure access to a huge pool of shared processing
power, suitable for high-throughput applications and computation-intensive computing.
Data Grid: Data grids provide an infrastructure to support data storage, data discovery, data
handling, data publication, and data manipulation of large volumes of data actually stored in
various heterogeneous databases and file systems.
Collaboration Grid: With the advent of the Internet, there has been an increased demand for better
collaboration. Such advanced collaboration is possible using the grid. For instance, persons
from different companies in a virtual enterprise can work on different components of a CAD
project without even disclosing their proprietary technologies.

Grid Components

A grid computing network mainly consists of these three types of machines:


Control Node: A computer, usually a server or a group of servers, which administers the whole
network and keeps account of the resources in the network pool.
Provider: The computer which contributes its resources to the network resource pool.
User: The computer that uses the resources on the network.
When a computer makes a request for resources to the control node, the control node gives the
user access to the resources available on the network. When a computer is not in use, it should
ideally contribute its resources to the network. Hence a normal computer on the network can
switch between being a user and a provider based on its needs.
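A minimal sketch of these three roles is given below: a control node keeps account of what each
provider contributes and grants a user's request from that pool. All names and numbers are
illustrative.

class ControlNode:
    def __init__(self):
        self.pool = {}                        # provider -> free CPU cores it contributes

    def register(self, provider, cores):
        self.pool[provider] = cores           # a provider joins the resource pool

    def request(self, cores_needed):
        # Grant the user access to the first provider with enough free capacity.
        for provider, free in self.pool.items():
            if free >= cores_needed:
                self.pool[provider] -= cores_needed
                return provider
        return None                           # nothing available right now

control = ControlNode()
control.register("provider-A", 8)
control.register("provider-B", 4)
print(control.request(6))                     # provider-A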
Grids are often constructed with general-purpose grid middleware software libraries.
Middleware is software that lies between an operating system and the applications running on
it; it provides common services and capabilities to applications beyond what the operating
system offers. Middleware enables communication and data management for distributed
applications.
