UNIT 1
DEPARTMENT OF CSE
INTRODUCTION
EVOLUTION OF DISTRIBUTED COMPUTING
Grids enable access to shared computing power and storage capacity from your desktop.
Clouds enable access to leased computing power and storage capacity from your desktop.
• Grids are an open-source technology. Resource users and providers alike can understand
and contribute to the management of their grid.
• Clouds are a proprietary technology. Only the resource provider knows exactly how
their cloud manages data, job queues, security requirements, and so on.
• The concept of grids was proposed in 1995. The Open Science Grid (OSG) started in 2005,
and the European Data Grid (EDG) project began in 2001.
• In the late 1990s, Oracle and EMC offered early private cloud solutions; however, the
term "cloud computing" didn't gain prominence until 2007.
SCALABLE COMPUTING OVER THE INTERNET
Instead of using a centralized computer to solve computational problems, a parallel and
distributed computing system uses multiple computers to solve large-scale problems over the
Internet. Thus, distributed computing becomes data-intensive and network-centric.
The Age of Internet Computing
o The Linpack benchmark for high-performance computing (HPC) applications is no longer
optimal for measuring system performance
o The emergence of computing clouds instead demands high-throughput computing (HTC)
systems built with parallel and distributed computing technologies
o We have to upgrade data centers using fast servers, storage systems, and high-bandwidth
networks.
The Platform Evolution
o From 1950 to 1970, a handful of mainframes, including the IBM 360 and CDC 6400
o From 1960 to 1980, lower-cost minicomputers such as the DEC PDP 11 and VAX
Series
o From 1970 to 1990, we saw widespread use of personal computers built with VLSI
microprocessors.
o From 1980 to 2000, massive numbers of portable computers and pervasive devices
appeared in both wired and wireless applications
o Since 1990, the use of both HPC and HTC systems hidden in clusters, grids, or
Internet clouds has proliferated
Reasons to adopt the cloud for upgraded Internet applications and web services:
1. Desired location in areas with protected space and higher energy efficiency
2. Sharing of peak-load capacity among a large pool of users, improving overall utilization
3. Separation of infrastructure maintenance duties from domain-specific application development
4. Significant reduction in cloud computing cost, compared with traditional computing
paradigms
5. Cloud computing programming and application development
6. Service and data discovery and content/service distribution
7. Privacy, security, copyright, and reliability issues
8. Service agreements, business models, and pricing policies
🞂 Cloud computing is using the internet to access someone else's software running on
someone else's hardware in someone else's data center.
🞂 The user sees only one resource (hardware, OS) but virtually uses multiple OS and
hardware resources.
🞂 Cloud architecture effectively uses virtualization
🞂 A model of computation and data storage based on “pay as you go” access to “unlimited”
remote data center capabilities
🞂 A cloud infrastructure provides a framework to manage scalable, reliable, on-demand
access to applications
🞂 Cloud services provide the “invisible” backend to many of our mobile applications
🞂 High level of elasticity in consumption
🞂 Historical roots in today’s Internet apps
🞂 Search, email, social networks, e-com sites
🞂 File storage (Live Mesh, Mobile Me)
Definition
Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access
to a shared pool of configurable computing resources (e.g., networks, servers, storage,
applications, and services) that can be rapidly provisioned and released with minimal
management effort or service provider interaction (NIST).
Essential Characteristics 1
🞂 On-demand self-service.
◦ A consumer can unilaterally provision computing capabilities, such as server time
and network storage, as needed, automatically, without requiring human interaction
with each service provider.
Essential Characteristics 2
🞂 Broad network access.
◦ Capabilities are available over the network and accessed through standard
mechanisms that promote use by heterogeneous thin or thick client platforms
(e.g., mobile phones, laptops, and workstations).
Essential Characteristics 3
🞂 Resource pooling.
◦ The provider’s computing resources are pooled to serve multiple consumers using
a multi-tenant model, with different physical and virtual resources dynamically
assigned and reassigned according to consumer demand.
Essential Characteristics 4
🞂 Rapid elasticity.
◦ Capabilities can be rapidly and elastically provisioned - in some cases
automatically - to quickly scale out; and rapidly released to quickly scale in.
◦ To the consumer, the capabilities available for provisioning often appear to be
unlimited and can be purchased in any quantity at any time.
Essential Characteristics 5
🞂 Measured service.
◦ Cloud systems automatically control and optimize resource usage by leveraging a
metering capability at some level of abstraction appropriate to the type of service.
◦ Resource usage can be monitored, controlled, and reported - providing
transparency for both the provider and consumer of the service.
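As a minimal sketch of the metering idea, here is a Python example; the Meter class, the
tenant names, and the instance-hour unit are illustrative assumptions, not any provider's
actual API:

    from collections import defaultdict

    class Meter:
        """Accumulates per-tenant resource usage so it can be reported and billed."""
        def __init__(self):
            self.usage = defaultdict(float)   # tenant -> instance-hours

        def record(self, tenant, instances, seconds):
            # e.g., 2 VMs running for 3600 s -> 2.0 instance-hours
            self.usage[tenant] += instances * seconds / 3600.0

        def report(self):
            # Transparency: both provider and consumer can inspect usage
            return dict(self.usage)

    meter = Meter()
    meter.record("tenant-a", instances=2, seconds=3600)
    meter.record("tenant-b", instances=1, seconds=1800)
    print(meter.report())   # {'tenant-a': 2.0, 'tenant-b': 0.5}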
PaaS providers
🞂 Google App Engine
◦ Python, Java, Eclipse
🞂 Microsoft Azure
◦ .Net, Visual Studio
🞂 Salesforce
◦ Apex, Web wizard
🞂 TIBCO,
🞂 VMware,
🞂 Zoho
II Hardware Evolution
• In 1930, binary arithmetic was developed, paving the way for electronic computer
processing technology, terminology, and programming languages.
• In 1939, the electronic computer was developed; computations were performed using
vacuum-tube technology.
• In 1941, Konrad Zuse's Z3 was developed; it supported both floating-point and binary
arithmetic.
There are four generations
First Generation Computers
Second Generation Computers
Third Generation Computers
Fourth Generation Computers
a.First Generation Computers
Time Period : 1942 to 1955
Technology : Vacuum Tubes
Size : Very Large System
Processing : Very Slow
Examples:
1.ENIAC (Electronic Numerical Integrator and Computer)
2.EDVAC(Electronic Discrete Variable Automatic Computer)
Advantages:
• It made use of vacuum tubes which was the advanced technology at that time
• Computations were performed in milliseconds.
Disadvantages:
• Very big in size; weight was about 30 tons.
• Very costly.
• Required high power consumption.
• A large amount of heat was generated.
d.Fourth Generation Computers
Advantages:
• Fastest in computation, and size is reduced compared with the previous generation of
computers. Heat generated is small.
• Less maintenance is required.
Disadvantages:
• The microprocessor design and fabrication are very complex.
• Air conditioning is required in many cases.
• The terms parallel computing and distributed computing are often used interchangeably,
even though they mean slightly different things.
• The term parallel implies a tightly coupled system, whereas distributed refers
to a wider class of systems, including those that are tightly coupled.
• More precisely, the term parallel computing refers to a model in which the
computation is divided among several processors sharing the same memory.
• The architecture of a parallel computing system is often characterized by the
homogeneity of components: each processor is of the same type and has the same
capability as the others.
• The shared memory has a single address space, which is accessible to all the processors.
• Parallel programs are then broken down into several units of execution that can be
allocated to different processors and can communicate with each other by means of
shared memory.
• Originally parallel systems are considered as those architectures that featured multiple
processors sharing the same physical memory and that were considered a single
computer.
– Over time, these restrictions have been relaxed, and parallel systems now include
all architectures that are based on the concept of shared memory, whether this is
physically present or created with the support of libraries, specific hardware, and
a highly efficient networking infrastructure.
– For example, a cluster whose nodes are connected through an InfiniBand
network and configured with a distributed shared memory system can be considered
a parallel system (see the sketch below).
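Below is a minimal Python sketch of this shared-memory model, assuming four illustrative
worker processes: the units of execution update a single counter in shared memory, with a
lock to coordinate access.

    from multiprocessing import Process, Value, Lock

    def worker(counter, lock, iterations):
        # Each unit of execution updates the same shared address space
        for _ in range(iterations):
            with lock:
                counter.value += 1

    if __name__ == "__main__":
        counter = Value("i", 0)    # shared integer, visible to all processes
        lock = Lock()              # serializes access to the shared memory
        procs = [Process(target=worker, args=(counter, lock, 1000)) for _ in range(4)]
        for p in procs:
            p.start()
        for p in procs:
            p.join()
        print(counter.value)       # 4000: every update is visible to all PEs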
• The term distributed computing encompasses any architecture or system that allows the
computation to be broken down into units and executed concurrently on different
computing elements, whether these are processors on different nodes, processors on the
same computer, or cores within the same processor.
• Distributed computing includes a wider range of systems and applications than parallel
computing and is often considered a more general term.
• Even though it is not a rule, the term distributed often implies that the locations of the
computing elements are not the same and such elements might be heterogeneous in terms
of hardware and software features.
• Classic examples of distributed computing systems are
– Computing Grids
– Internet Computing Systems
a.Parallel Processing
• Processing of multiple tasks simultaneously on multiple processors is called parallel
processing.
• The parallel program consists of multiple active processes ( tasks) simultaneously solving
a given problem.
• A given task is divided into multiple subtasks using a divide-and-conquer technique, and
each subtask is processed on a different central processing unit (CPU).
• Programming on a multiprocessor system using the divide-and-conquer technique is called
parallel programming (see the sketch at the end of this subsection).
• Many applications today require more computing power than a traditional sequential
computer can offer.
• Parallel processing provides a cost-effective solution to this problem by increasing the
number of CPUs in a computer and by adding an efficient communication system
between them.
• The workload can then be shared between different processors. This setup results in
higher computing power and performance than a single-processor system offers.
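A minimal divide-and-conquer sketch in Python; the summing task and the four-way split
are illustrative choices:

    from multiprocessing import Pool

    def subtask(chunk):
        # Each subtask is processed on a different CPU
        return sum(chunk)

    if __name__ == "__main__":
        data = list(range(1_000_000))
        chunks = [data[i::4] for i in range(4)]     # divide the task into subtasks
        with Pool(processes=4) as pool:
            partials = pool.map(subtask, chunks)    # conquer in parallel
        print(sum(partials))                        # combine the partial results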
(iii) Multiple Instruction, Single Data (MISD) systems
• An MISD computing system is a multiprocessor machine capable of executing different
instructions on different PEs, all of them operating on the same data set.
• Machines built using the MISD model are not useful in most applications.
• A few machines have been built, but none of them are available commercially.
• These systems are more of an intellectual exercise than a practical configuration.
(iv) Multiple Instruction, Multiple Data (MIMD) systems
• An MIMD computing system is a multiprocessor machine capable of executing multiple
instructions on multiple data sets.
• Each PE in the MIMD model has separate instruction and data streams; hence, machines
built using this model are well suited to any kind of application.
• Unlike SIMD and MISD machines, PEs in MIMD machines work asynchronously.
MIMD machines are broadly categorized into shared-memory MIMD and distributed-memory
MIMD, based on the way PEs are coupled to the main memory.
Shared Memory MIMD machines
• All the PEs are connected to a single global memory and they all have access to it.
• Systems based on this model are also called tightly coupled multiprocessor systems.
• The communication between PEs in this model takes place through the shared memory.
• Modification of the data stored in the global memory by one PE is visible to all other
PEs.
• Dominant representative shared-memory MIMD systems are Silicon Graphics machines
and Sun/IBM SMP (Symmetric Multi-Processing) systems.
Shared Vs Distributed MIMD model
• The shared memory MIMD architecture is easier to program but is less tolerant to failures
and harder to extend with respect to the distributed memory MIMD model.
• Failures in a shared-memory MIMD system affect the entire system, whereas this is not
the case in the distributed model, in which each of the PEs can be easily isolated.
• Moreover, shared memory MIMD architectures are less likely to scale because the
addition of more PEs leads to memory contention.
• This is a situation that does not happen in the case of distributed memory, in which each
PE has its own memory.
As a result, distributed-memory MIMD architectures are the most popular today.
d. Levels of Parallelism
• Levels of parallelism are decided based on the lumps of code (grain size) that can be a
potential candidate for parallelism.
• The table below shows the levels of parallelism:

Grain size   | Code item                     | Parallelized by
Large        | Separate, heavyweight process | Programmer
Medium       | Function or procedure         | Programmer
Fine         | Loop or instruction block     | Parallelizing compiler
Very fine    | Instruction                   | Processor
• All these approaches have a common goal
– To boost processor efficiency by hiding latency.
– To conceal latency, there must be another thread ready to run whenever a lengthy
operation occurs.
• The idea is to execute concurrently two or more single-threaded applications, such as
compiling, text formatting, database searching, and device simulation.
e. Laws of Caution
• Laws of caution study how much an application or a software system can gain from parallelism.
• In particular, we need to keep in mind that parallelism is used to perform multiple
activities together so that the system can increase its throughput or its speed.
• But the relations that control the increment of speed are not linear.
• For example, for a given n processors, the user expects speed to increase by n times.
This is an ideal situation, but it rarely happens because of communication overhead.
• Here are two important guidelines to take into account:
– Speed of computation is proportional to the square root of the system cost; it
never increases linearly. Therefore, the faster a system becomes, the more
expensive it is to increase its speed.
– Speed achieved by a parallel computer increases as the logarithm of the number of
processors (i.e., speed = k × log(n)).
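A small worked example of these guidelines in Python; the constant k and the processor
counts are illustrative:

    import math

    k = 1.0
    for n in (2, 16, 256):
        ideal = n                       # naive expectation: n processors -> n-fold speed
        realistic = k * math.log2(n)    # caution: speed grows with log(number of processors)
        cost_factor = ideal ** 2        # caution: speed ~ sqrt(cost) => cost grows with speed^2
        print(n, ideal, round(realistic, 1), cost_factor)
    # 256 processors: ideal 256x, cautious estimate ~8x, and a 256x speed
    # would cost roughly 65536x under the square-root rule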
- Symmetric architectures in which all the components, called peers, play the same role
and incorporate both client and server capabilities of the client/server model.
- More precisely, each peer acts as a server when it processes requests from other peers
and as a client when it issues requests to other peers.
Peer-to-Peer architectural Style
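A minimal Python sketch of this symmetry, in which each peer runs a server role and can
also act as a client; the loopback addresses, ports, and ping/pong payloads are
illustrative:

    import socket, threading, time

    def serve(port):
        # Server role: answer requests from other peers
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("127.0.0.1", port))
        srv.listen()
        conn, _ = srv.accept()
        conn.sendall(b"pong from peer %d" % port)
        conn.close()
        srv.close()

    def ask(port):
        # Client role: issue a request to another peer
        with socket.create_connection(("127.0.0.1", port)) as c:
            return c.recv(1024)

    # Two peers, each able to act as a server...
    threading.Thread(target=serve, args=(9001,), daemon=True).start()
    threading.Thread(target=serve, args=(9002,), daemon=True).start()
    time.sleep(0.2)                  # let both peers start listening
    print(ask(9002))                 # ...and each can also act as a client
    print(ask(9001))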
d.Models for Interprocess Communication
• Distributed systems are composed of a collection of concurrent processes interacting with
each other by means of a network connection.
• IPC is a fundamental aspect of distributed systems design and implementation.
• IPC is used to either exchange data and information or coordinate the activity of
processes.
• IPC is what ties together the different components of a distributed system, thus making
them act as a single system.
• There are several different models in which processes can interact with each other; these
map to different abstractions for IPC.
• Among the most relevant that we can mention are shared memory, remote procedure call
(RPC), and message passing.
• At a lower level, IPC is realized through the fundamental tools of network programming.
• Sockets are the most popular IPC primitive for implementing communication channels
between distributed processes.
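A minimal sketch of a socket-based IPC channel using Python's standard socketpair; the
echo-style exchange is illustrative, and real distributed processes would use network
sockets across machines:

    import socket, threading

    a, b = socket.socketpair()     # two connected IPC endpoints

    def component():
        # One component of the system: receive a message, send a reply
        request = b.recv(1024)
        b.sendall(request.upper())

    t = threading.Thread(target=component)
    t.start()
    a.sendall(b"hello ipc")        # the other component issues a request
    print(a.recv(1024))            # b'HELLO IPC'
    t.join()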
Message-based communication
• The abstraction of message has played an important role in the evolution of the model
and technologies enabling distributed computing.
• One definition of distributed computing is: a system in which components located at
networked computers communicate and coordinate their actions only by passing
messages. The term message, in this case, identifies any discrete amount of information
that is passed from one entity to another. It encompasses any form of data representation
that is limited in size and time, whether this is an invocation of a remote procedure, a
serialized object instance, or a generic message.
• The term message-based communication model can be used to refer to any model for
IPC.
• Several distributed programming paradigms eventually use message-based
communication despite the abstractions that are presented to developers for programming
the interactions of distributed components.
• Here are some of the most popular and important:
Message Passing: This paradigm introduces the concept of a message as the main
abstraction of the model. The entities exchanging information explicitly encode in the form
of a message the data to be exchanged. The structure and the content of a message vary
according to the model. Examples of this model are the Message Passing Interface (MPI)
and OpenMP.
• Remote Procedure Call (RPC): This paradigm extends the concept of procedure call
beyond the boundaries of a single process, thus triggering the execution of code in remote
processes.
• Distributed Objects: This is an implementation of the RPC model for the object-
oriented paradigm and contextualizes this feature for the remote invocation of methods
exposed by objects. Examples of distributed object infrastructures are Common Object
Request Broker Architecture (CORBA), Component Object Model (COM, DCOM, and
COM+), Java Remote Method Invocation (RMI), and .NET Remoting.
• Distributed agents and active objects: Programming paradigms based on agents and
active objects involve, by definition, the presence of instances, whether they are agents
or objects, despite the existence of requests.
• Web Service: An implementation of the RPC concept over HTTP, thus allowing the
interaction of components that are developed with different technologies. A Web service
is exposed as a remote object hosted on a Web server, and method invocations are
transformed into HTTP requests, using specific protocols such as the Simple Object Access
Protocol (SOAP) or Representational State Transfer (REST). (A minimal sketch follows
this list.)
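A minimal RPC-over-HTTP sketch using Python's standard xmlrpc module; the add function
and the port are illustrative, and the server runs in a thread only to keep the example
self-contained:

    import threading
    from xmlrpc.server import SimpleXMLRPCServer
    from xmlrpc.client import ServerProxy

    def add(x, y):
        return x + y                        # code executed in the remote process

    server = SimpleXMLRPCServer(("127.0.0.1", 8000), logRequests=False)
    server.register_function(add)           # expose the procedure to remote callers
    threading.Thread(target=server.handle_request, daemon=True).start()

    proxy = ServerProxy("http://127.0.0.1:8000")
    print(proxy.add(2, 3))                  # looks like a local call, travels over HTTP -> 5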
Classification
Elasticity solutions can be arranged in different classes based on
🞂 Scope
🞂 Policy
🞂 Purpose
🞂 Method
a.Scope
🞂 Elasticity can be implemented on any of the cloud layers.
🞂 Most commonly, elasticity is achieved on the IaaS level, where the resources to be
provisioned are virtual machine instances.
🞂 Other infrastructure services can also be scaled
🞂 On the PaaS level, elasticity consists in scaling containers or databases for instance.
🞂 Finally, both PaaS and IaaS elasticity can be used to implement elastic applications, be it
for private use or in order to be provided as a SaaS
🞂 The elasticity actions can be applied either at the infrastructure or application/platform
level.
🞂 The elasticity actions perform the decisions made by the elasticity strategy or
management system to scale the resources.
🞂 Google App Engine and Azure elastic pool are examples of elastic Platform as a Service
(PaaS).
🞂 Elasticity actions can be performed at the infrastructure level where the elasticity
controller monitors the system and takes decisions.
🞂 The cloud infrastructures are based on the virtualization technology, which can be VMs
or containers.
🞂 In the embedded elasticity, elastic applications are able to adjust their own resources
according to runtime requirements or due to changes in the execution flow.
🞂 Embedded elasticity requires knowledge of the source code of the applications.
🞂 Application Map: The elasticity controller must have a complete map of the application
components and instances.
🞂 Code embedded: The elasticity controller is embedded in the application source code.
🞂 The elasticity actions are performed by the application itself.
🞂 While moving the elasticity controller into the application source code eliminates the
need for external monitoring systems, it requires a specialized controller for each
application.
b.Policy
🞂 Elastic solutions can be either manual or automatic.
🞂 A manual elastic solution provides its users with tools to monitor their systems
and add or remove resources, but leaves the scaling decision to them.
Automatic mode: All the actions are done automatically, and this could be classified into
reactive and proactive modes.
Elastic solutions can be either reactive or predictive
Reactive mode: The elasticity actions are triggered based on certain thresholds or rules, the
system reacts to the load (workload or resource utilization) and triggers actions to adapt changes
accordingly.
🞂 An elastic solution is reactive when it scales a posteriori, based on a monitored change in
the system.
🞂 These are generally implemented by a set of Event-Condition-Action rules (see the
sketch at the end of this subsection).
Proactive mode: This approach implements forecasting techniques, anticipates the future
needs and triggers actions based on this anticipation.
🞂 A predictive or proactive elasticity solution uses its knowledge of either recent history or
load patterns inferred from longer periods of time in order to predict the upcoming load
of the system and scale according to it.
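A minimal sketch of such a reactive Event-Condition-Action rule in Python; the thresholds,
the load samples, and the one-instance scaling step are illustrative assumptions:

    def reactive_scale(cpu_load, instances, low=0.2, high=0.8):
        # Event: a monitoring sample arrives; Condition: threshold check; Action: scale
        if cpu_load > high:
            return instances + 1            # scale out
        if cpu_load < low and instances > 1:
            return instances - 1            # scale in
        return instances                    # load within bounds: no action

    instances = 2
    for sample in (0.45, 0.85, 0.90, 0.15):
        instances = reactive_scale(sample, instances)
        print("load", sample, "->", instances, "instance(s)")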
c.Purpose
🞂 An elastic solution can have many purposes.
🞂 The first one to come to mind is naturally performance, in which case the focus should be
put on speed.
🞂 Another purpose for elasticity can also be energy efficiency, where using the minimum
amount of resources is the dominating factor.
🞂 Other solutions intend to reduce the cost by multiplexing either resource providers or
elasticity methods
🞂 Elasticity has different purposes such as improving performance, increasing resource
capacity, saving energy, reducing cost and ensuring availability.
🞂 Looking at the elasticity objectives, there are different perspectives.
🞂 Cloud IaaS providers try to maximize their profit by minimizing the resources used while
offering a good Quality of Service (QoS).
🞂 PaaS providers seek to minimize the cost they pay to the cloud (IaaS) provider.
🞂 The customers (end users) seek to increase their Quality of Experience (QoE) and to
minimize their payments.
🞂 QoE is the degree of delight or annoyance of the user of an application or service
d.Method
🞂 Vertical elasticity changes the amount of resources linked to existing instances on the
fly.
🞂 This can be done in two manners.
🞂 The first method consists in explicitly redimensioning a virtual machine instance, i.e.,
changing the quota of physical resources allocated to it.
🞂 This is however poorly supported by common operating systems as they fail to take into
account changes in CPU or memory without rebooting, thus resulting in service
interruption.
🞂 The second vertical scaling method involves VM migration: moving a virtual machine
instance to another physical machine with a different overall load changes its available
resources
🞂 Horizontal scaling is the process of adding/removing instances, which may be located at
different locations.
🞂 Load balancers are used to distribute the load among the different instances.
🞂 Vertical scaling is the process of modifying resources (CPU, memory, storage or both)
size for an instance at run time.
🞂 It gives more flexibility for the cloud systems to cope with the varying workloads
Migration
🞂 Migration can also be considered a needed action to further allow vertical scaling
when there are not enough resources on the host machine.
🞂 It is also used for other purposes such as migrating a VM to a less loaded physical
machine just to guarantee its performance.
🞂 Several types of migration are deployed, such as live migration and non-live migration.
🞂 Live migration has two main approaches
🞂 post-copy
🞂 pre-copy
🞂 Post-copy migration suspends the migrating VM, copies minimal processor state to the
target host, resumes the VM and then begins fetching memory pages from the source.
🞂 In pre-copy approach, the memory pages are copied while the VM is running on the
source.
🞂 If some pages are changed during the memory copy process (called dirty pages), they will
be recopied in successive rounds until the number of remaining dirty pages becomes small
enough (or stops decreasing), at which point the source VM is stopped.
🞂 The remaining dirty pages are then copied to the destination VM, where the VM resumes.
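A toy model of the pre-copy loop in Python; the page counts, the one-quarter re-dirtying
rate, and the stop threshold are made-up numbers, and real hypervisors use measured dirty
rates and downtime targets:

    def precopy_migrate(total_pages):
        # Round 0: copy all pages while the VM keeps running on the source
        to_copy, round_no = total_pages, 0
        while to_copy > 64:                     # 64 = illustrative stop threshold
            print("round", round_no, ": copying", to_copy, "pages (VM still running)")
            to_copy //= 4                       # pages dirtied again during this round
            round_no += 1
        print("stopping source VM; final copy of", to_copy, "dirty pages")
        print("resuming VM on the destination host")

    precopy_migrate(4096)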
Architecture
🞂 The architecture of the elasticity management solutions can be either centralized or
decentralized.
🞂 Centralized architecture has only one elasticity controller, i.e., the auto scaling system
that provisions and deprovisions resources.
🞂 In decentralized solutions, the architecture is composed of many elasticity controllers or
application managers, which are responsible for provisioning resources for different
cloud-hosted platforms
Provider
🞂 Elastic solutions can be applied to a single or multiple cloud providers.
🞂 A single cloud provider can be either public or private with one or multiple regions or
datacenters.
🞂 Multiple clouds in this context means more than one cloud provider.
🞂 It includes hybrid clouds that can be private or public, in addition to the federated clouds
and cloud bursting.
🞂 Most of the elasticity solutions support only a single cloud provider
On-demand Provisioning.
🞂 Resource Provisioning means the selection, deployment, and run-time management of
software (e.g., database server management systems, load balancers) and hardware
resources (e.g., CPU, storage, and network) for ensuring guaranteed performance for
applications.
🞂 Resource Provisioning is an important and challenging problem in the large-scale
distributed systems such as Cloud computing environments.
🞂 There are many resource provisioning techniques, both static and dynamic, each one
having its own advantages and also some challenges.
🞂 The resource provisioning techniques used must meet Quality of Service (QoS)
parameters like availability, throughput, response time, security, and reliability,
thereby avoiding Service Level Agreement (SLA) violations.
🞂 Over provisioning and under provisioning of resources must be avoided.
🞂 Another important constraint is power consumption.
🞂 The ultimate goal of the cloud user is to minimize cost by renting the resources; from
the cloud service provider's perspective, the goal is to maximize profit by efficiently
allocating the resources.
🞂 In order to achieve this, the cloud user has to request the cloud service provider to
provision the resources either statically or dynamically, so that the cloud service
provider knows how many instances of which resources are required for a particular
application.
🞂 By provisioning the resources, the QoS parameters like availability, throughput,
security, response time, reliability, and performance must be achieved without violating
the SLA.
There are two types
Static Provisioning
Dynamic Provisioning
Static Provisioning
🞂 For applications that have predictable and generally unchanging demands/workloads, it is
possible to use “static provisioning" effectively.
🞂 With advance provisioning, the customer contracts with the provider for services.
🞂 The provider prepares the appropriate resources in advance of start of service.
🞂 The customer is charged a flat fee or is billed on a monthly basis.
Dynamic Provisioning
🞂 In cases where demand by applications may change or vary, “dynamic provisioning"
techniques have been suggested whereby VMs may be migrated on-the-fly to new
compute nodes within the cloud.
🞂 The provider allocates more resources as they are needed and removes them when they
are not.
🞂 The customer is billed on a pay-per-use basis.
🞂 When dynamic provisioning is used to create a hybrid cloud, it is sometimes referred to
as cloud bursting.
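A small arithmetic sketch in Python contrasting the two billing models; the hourly demand
profile and the $0.10 instance-hour price are made-up numbers:

    demand = [2, 2, 8, 3, 2, 2]        # instances needed in each hour (peak at hour 2)
    price = 0.10                       # $ per instance-hour

    # Static provisioning: capacity is fixed at the peak for the whole period
    static_cost = max(demand) * len(demand) * price

    # Dynamic provisioning: pay per use, only for what each hour actually needs
    dynamic_cost = sum(demand) * price

    print(static_cost, dynamic_cost)   # 4.8 vs 1.9 -> pay-per-use wins for bursty demand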
Parameters for Resource Provisioning
🞂 Response time
🞂 Minimize Cost
🞂 Revenue Maximization
🞂 Fault tolerant
🞂 Reduced SLA Violation
🞂 Reduced Power Consumption
Response time: The resource provisioning algorithm designed must take minimal time to
respond when executing the task.
Minimize Cost: From the Cloud user point of view cost should be minimized.
Revenue Maximization: This is to be achieved from the Cloud Service Provider’s view.
Fault tolerant: The algorithm should continue to provide service in spite of failure of nodes.
Reduced SLA Violation: The algorithm designed must be able to reduce SLA violation.
Reduced Power Consumption: VM placement & migration techniques must lower power
consumption
Dynamic Provisioning Types
1. Local On-demand Resource Provisioning
2. Remote On-demand Resource Provisioning
Local On-demand Resource Provisioning
1. The Engine for the Virtual Infrastructure
The OpenNebula Virtual Infrastructure Engine
• OpenNebula creates a distributed virtualization layer
• Extend the benefits of VM Monitors from one to multiple resources
• Decouple the VM (service) from the physical location
• Transform a distributed physical infrastructure into a flexible and elastic virtual
infrastructure, which adapts to the changing demands of the VM (service) workloads
Separation of Resource Provisioning from Job Management
• New virtualization layer between the service and the infrastructure layers
• Seamless integration with the existing middleware stacks.
• Completely transparent to the computing service and to end users
Cluster Partitioning
• Dynamic partition of the infrastructure
• Isolate workloads (several computing clusters)
• Dedicated HA partitions