UNIT 3
CLOUD COMPUTING
Principle of Parallel and Distributed Computing
• perform different operations on the same data set. Machines built using the MISD model are not useful in most applications;
• only a few such machines have been built, and none of them is available commercially. They have become more of an intellectual exercise than a practical configuration.
4. Multiple-instruction, multiple-data (MIMD) systems
• An MIMD computing system is a multiprocessor machine capable of
executing multiple instructions on multiple data sets.
• Each PE in the MIMD model has separate instruction and data
streams; hence machines built using this model are well suited to
any kind of application.
• Unlike SIMD and MISD machines, the PEs in MIMD machines work asynchronously. MIMD machines are broadly categorized into shared-memory MIMD and distributed-memory MIMD, based on the way the PEs are coupled to the main memory (a small thread-based sketch of the MIMD idea follows).
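A minimal sketch of the MIMD idea, assuming Python threads stand in for the PEs (the worker functions and data below are illustrative, not from the text): each thread runs its own instruction stream on its own data set, asynchronously.

import threading

def count_words(text):                        # PE 1: one instruction stream
    print("words:", len(text.split()))

def sum_numbers(numbers):                     # PE 2: a different instruction stream
    print("sum:", sum(numbers))

# Different instructions applied to different data sets, running in parallel.
pes = [
    threading.Thread(target=count_words, args=("the quick brown fox",)),
    threading.Thread(target=sum_numbers, args=([1, 2, 3, 4],)),
]
for t in pes:
    t.start()
for t in pes:
    t.join()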
Shared memory MIMD machines
• In the shared-memory MIMD model, all the PEs are connected to a single global memory and they all have access to it.
• Systems based on this model are also called tightly coupled multiprocessor
systems.
• The communication between PEs in this model takes place through the shared memory; modification of the data stored in the global memory by one PE is visible to all other PEs.
• Dominant representatives of shared-memory MIMD systems are Silicon Graphics machines and Sun/IBM’s SMP (Symmetric Multi-Processing) machines (see the shared-memory sketch below).
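A minimal sketch of the shared-memory idea, assuming Python threads as the PEs (illustrative names, not from the text): all threads see the same global list, so an update made by one is immediately visible to the others; the lock simply guards against concurrent updates.

import threading

shared_memory = []                  # single global memory visible to all PEs
lock = threading.Lock()

def pe(worker_id):
    with lock:
        shared_memory.append(worker_id)   # communicate by writing to shared memory

threads = [threading.Thread(target=pe, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(shared_memory)                # every PE's write is visible here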
Distributed memory MIMD machines
• In the distributed-memory MIMD model, all PEs have a local memory.
• Systems based on this model are also called loosely coupled multiprocessor systems.
• The communication between PEs in this model takes place through the interconnection network (the interprocess communication channel, or IPC).
• The network connecting the PEs can be configured as a tree, mesh, cube, and so on. Each PE operates asynchronously, and if communication or synchronization among tasks is necessary, they do so by exchanging messages over the network (a small message-passing sketch follows).
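A minimal sketch of the distributed-memory idea, assuming Python processes as the PEs and a multiprocessing Queue as the IPC channel (illustrative names, not from the text): each process has its own private memory, so cooperation happens only by exchanging messages.

from multiprocessing import Process, Queue

def pe(worker_id, channel):
    local_value = worker_id * 10            # lives only in this PE's local memory
    channel.put((worker_id, local_value))   # cooperate by sending a message

if __name__ == "__main__":
    channel = Queue()
    pes = [Process(target=pe, args=(i, channel)) for i in range(4)]
    for p in pes:
        p.start()
    results = [channel.get() for _ in pes]  # receive one message per PE
    for p in pes:
        p.join()
    print(results)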
• The shared-memory MIMD architecture is easier to program but is less tolerant of failures and harder to extend than the distributed-memory MIMD model.
• Failures in a shared-memory MIMD machine affect the entire system, whereas this is not the case for the distributed model, in which each of the PEs can be easily isolated. Moreover, shared-memory MIMD architectures are less likely to scale, because the addition of more PEs leads to memory contention.
• This situation does not arise with distributed memory, in which each PE has its own memory. As a result, distributed-memory MIMD architectures are the most popular today.
Approaches to parallel programming
• A sequential program is one that runs on a single processor and has a
single line of control.
• To make many processors collectively work on a single program, the
program must be divided into smaller independent chunks so that each
processor can work on separate chunks of the problem.
• The program decomposed in this way is a parallel program. A wide variety of parallel programming approaches are available; the most prominent among them are the following (a small decomposition sketch follows the list):
• Data parallelism
• Process parallelism
• Farmer-and-worker model
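A minimal sketch of decomposing a program into independent chunks, assuming a Python process pool (the chunking scheme and function names are illustrative, not from the text): the input data is split so that each worker processes a separate piece, and the partial results are then combined.

from multiprocessing import Pool

def process_chunk(chunk):
    return sum(x * x for x in chunk)          # work done on one independent chunk

if __name__ == "__main__":
    data = list(range(1000))
    chunks = [data[i::4] for i in range(4)]   # split the problem into four chunks
    with Pool(processes=4) as pool:
        partial = pool.map(process_chunk, chunks)   # each worker handles one chunk
    print(sum(partial))                       # combine the partial results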
Data Parallelism