
Cloud Computing

Lecture 03

Chapter 2 – Principles of Parallel and Distributed Computing

Eras of computing
• Two fundamental models of computing define two eras:
  1. Sequential
  2. Parallel
• Four key elements of computing developed during these eras:
  1. Architectures
  2. Compilers
  3. Applications
  4. Problem-solving environments

FIGURE 2.1 Eras of computing, 1940s-2030s.

Parallel vs. distributed computing
• The terms parallel and distributed are often used interchangeably, but they mean slightly different things.
• The term parallel implies a tightly coupled system, whereas distributed refers to a wider class of systems, including tightly coupled ones.
• Parallel computing
  • computation is divided among several processors
  • shared memory is accessible to all the processors
  • homogeneity of components
• Distributed computing
  • computation is divided among several different computing elements
  • computing elements might be heterogeneous (hardware and software)
  • computing elements need not be in the same location

What is parallel processing?
• Processing of multiple tasks simultaneously on multiple processors is called parallel processing.
  • Ever-increasing computational requirements.
  • Sequential architectures are reaching physical limitations.
    – The speed at which sequential CPUs can operate is reaching a saturation point (no more vertical growth).
  • A cost-effective solution to this problem is to increase the number of CPUs in a computer and to add an efficient communication system between them.
    – An alternative way to get high computational speed is to connect multiple CPUs (an opportunity for horizontal growth); a minimal sketch follows.
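As a rough illustration of horizontal growth, the sketch below spreads a CPU-bound job across several CPUs using Python's standard multiprocessing module; the work function and input sizes are made up for illustration, not taken from the lecture.

```python
# A minimal sketch of horizontal growth: the same CPU-bound work is run
# sequentially on one core, then spread over several CPUs with a process pool.
import multiprocessing as mp

def count_primes(limit):
    """Count primes below `limit` by trial division (deliberately CPU-bound)."""
    count = 0
    for n in range(2, limit):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    limits = [30_000, 30_000, 30_000, 30_000]  # illustrative workload
    # One CPU: the sequential ("vertical") baseline.
    sequential = [count_primes(x) for x in limits]
    # Several CPUs: the same tasks in parallel ("horizontal growth").
    with mp.Pool(processes=4) as pool:
        parallel = pool.map(count_primes, limits)
    assert sequential == parallel
    print(parallel)
```
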
Hardware architectures for parallel processing
• Four categories of computing systems, based on the number of instruction and data streams that can be processed simultaneously:
1. Single-instruction, single-data (SISD) systems
2. Single-instruction, multiple-data (SIMD) systems
3. Multiple-instruction, single-data (MISD) systems
4. Multiple-instruction, multiple-data (MIMD) systems

Single-instruction, single-data (SISD) systems
• A uniprocessor machine capable of executing a single instruction, which operates on a single data stream.
• Machine instructions are processed sequentially.
• All the instructions and data to be processed have to be stored in primary memory.
• The speed of the processing element in the SISD model is limited by the rate at which the computer can transfer information internally.
• Most conventional computers are built using the SISD model.

FIGURE 2.2 Single-instruction, single-data (SISD) architecture.

Single-instruction, multiple-data (SIMD) systems
• A multiprocessor machine capable of executing the same instruction on all its CPUs but operating on different data streams.
• Machines based on the SIMD model are well suited to scientific computing, since such workloads involve many vector and matrix operations; a small illustration follows.
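Here, NumPy serves as a convenient stand-in (an assumption, not from the lecture): one operation is dispatched across whole data streams at once, which is exactly the vector and matrix shape that SIMD machines target.

```python
# A small illustration of the SIMD idea: one instruction (elementwise add,
# multiply) applied uniformly across large data streams.
import numpy as np

a = np.arange(1_000_000, dtype=np.float64)
b = np.ones(1_000_000, dtype=np.float64)

c = a + b          # the same "add" is applied to every element pair
d = a * 2.0        # the same "multiply" is applied to every element
m = np.dot(a, b)   # vector/matrix kernels are the classic SIMD workload

print(c[:3], d[:3], m)
```
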
FIGURE 2.3 Single-instruction, multiple-data (SIMD) architecture.

Multiple-instruction, single-data (MISD) systems
• A multiprocessor machine capable of executing different instructions on different processing elements (PEs), with all of them operating on the same data set.
• For instance, a statement such as y = sin(x) + cos(x) + tan(x) performs different operations on the same input x; a sketch of this idea follows.
• Machines built using the MISD model are not useful for most applications.
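This thread-based emulation is only illustrative (real MISD hardware is rare): each worker executes a different instruction on the same datum x.

```python
# A sketch of the MISD idea behind y = sin(x) + cos(x) + tan(x):
# several workers apply *different* instructions to the *same* data item.
import math
from concurrent.futures import ThreadPoolExecutor

x = 0.5  # the single shared data item
operations = [math.sin, math.cos, math.tan]  # different instruction streams

with ThreadPoolExecutor(max_workers=3) as pool:
    partials = list(pool.map(lambda f: f(x), operations))

y = sum(partials)
print(y)  # same result as math.sin(x) + math.cos(x) + math.tan(x)
```
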
FIGURE 2.4 Multiple-instruction, single-data (MISD) architecture.

Multiple-instruction, multiple-data (MIMD) systems
• A multiprocessor machine capable of executing multiple instructions on multiple data sets.
  • Each PE has separate instruction and data streams.
  • PEs work asynchronously.
  • Well suited to any kind of application.
• Two categories:
  • Shared memory MIMD machines
  • Distributed memory MIMD machines

FIGURE 2.5 Multiple-instruction, multiple-data (MIMD) architecture.

Multiple-instruction, multiple-data (MIMD) systems
• Shared memory MIMD machines
  – all the PEs are connected to a single global memory, and all of them have access to it
  – communication between PEs takes place through the shared memory
  – a modification of the data stored in the global memory by one PE is visible to all other PEs
  – easier to program
  – failures in a shared-memory MIMD machine affect the entire system
  – harder to extend
• Distributed memory MIMD machines
  – all PEs have a local memory
  – communication between PEs takes place through a network interconnection
  – each PE operates asynchronously and has its own memory
  – the most popular architecture today; a sketch contrasting the two styles follows
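A minimal sketch using Python's multiprocessing, with a shared counter standing in for shared-memory communication and a message queue standing in for distributed-memory message passing; the names and job contents are made up.

```python
import multiprocessing as mp

def shared_worker(counter, lock):
    # Shared-memory style: PEs communicate through a single global memory.
    with lock:
        counter.value += 1  # the update is visible to every other PE

def distributed_worker(queue, value):
    # Distributed-memory style: PEs communicate by passing messages.
    queue.put(value * value)

if __name__ == "__main__":
    # Shared memory MIMD sketch: four PEs update one global counter.
    counter, lock = mp.Value("i", 0), mp.Lock()
    procs = [mp.Process(target=shared_worker, args=(counter, lock)) for _ in range(4)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    print("shared counter:", counter.value)  # -> 4

    # Distributed memory MIMD sketch: four PEs send results over a queue.
    queue = mp.Queue()
    procs = [mp.Process(target=distributed_worker, args=(queue, v)) for v in range(4)]
    for p in procs:
        p.start()
    results = sorted(queue.get() for _ in range(4))  # drain before joining
    for p in procs:
        p.join()
    print("messages:", results)  # -> [0, 1, 4, 9]
```
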
FIGURE 2.6 Shared (left) and distributed (right) memory MIMD architecture.

Approaches to parallel programming
• Data parallelism
  • A divide-and-conquer technique is used to split data into multiple sets, and each data set is processed on a different PE using the same instruction.
• Process parallelism
  • A given operation has multiple (but distinct) activities that can be processed on multiple processors.
• Farmer-and-worker model
  • One PE is configured as the master and the remaining PEs are designated as slaves; the master assigns jobs to the slave PEs and, on completion, they inform the master, which in turn collects the results. A sketch of this model follows.
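A minimal sketch of the farmer-and-worker model, assuming a toy squaring job purely for illustration: the master puts jobs on a queue, the workers process them and report back, and the master collects the results.

```python
import multiprocessing as mp

def worker(jobs, results):
    # Slave PE: take jobs from the master until the sentinel arrives.
    while True:
        job = jobs.get()
        if job is None:
            break
        results.put((job, job * job))  # inform the master on completion

if __name__ == "__main__":
    jobs, results = mp.Queue(), mp.Queue()
    workers = [mp.Process(target=worker, args=(jobs, results)) for _ in range(3)]
    for w in workers:
        w.start()

    for job in range(10):  # the master assigns jobs to the slave PEs
        jobs.put(job)
    for _ in workers:      # one sentinel per worker signals "no more jobs"
        jobs.put(None)

    collected = dict(results.get() for _ in range(10))  # master collects results
    for w in workers:
        w.join()
    print(collected)
```
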
Levels of parallelism
• Levels of parallelism are decided based on the size of the code fragments (grain size) that are potential candidates for parallelism.
• Common goal: to boost processor efficiency by hiding latency.
  • There must be another thread ready to run whenever a lengthy operation occurs; the sketch below shows this idea.
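A small thread-level sketch of latency hiding (the delays are invented): while one thread waits on a lengthy operation, another is ready to run, so the waits overlap instead of adding up.

```python
import threading
import time

def lengthy_operation(name, delay):
    time.sleep(delay)  # stand-in for a slow memory access or I/O operation
    print(f"{name} finished after {delay:.0f}s")

threads = [
    threading.Thread(target=lengthy_operation, args=("task-A", 1.0)),
    threading.Thread(target=lengthy_operation, args=("task-B", 1.0)),
]
start = time.time()
for t in threads:
    t.start()
for t in threads:
    t.join()
# The two waits overlap, so total elapsed time is about 1s rather than 2s:
print(f"elapsed: {time.time() - start:.1f}s")
```
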
FIGURE 2.7 Levels of parallelism in an application.

Laws of caution
• Parallelism means performing multiple activities together to increase the speed of a system.
• The relations that govern the resulting increase in speed are not linear.
  • Example: in an ideal situation, given n processors, the user expects the speed to increase n-fold. This rarely happens, because of communication overhead.
• Two important guidelines to take into account:
  • The speed of computation is proportional to the square root of system cost.
  • The speed of a parallel computer increases as the logarithm of the number of processors, i.e., y = k log(N); a worked illustration follows.
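A worked illustration of the second guideline; the constant k and the use of a base-2 logarithm here are assumptions made purely for illustration.

```python
import math

k = 1.0  # assumed proportionality constant (illustrative only)
for n in [2, 4, 16, 64, 256]:
    ideal = n                    # naive expectation: n-fold speedup
    cautious = k * math.log2(n)  # what the guideline y = k log(N) predicts
    print(f"N={n:4d}  ideal={ideal:4d}x  y=k*log(N)={cautious:5.1f}x")
```

The table this prints makes the caution concrete: the ideal speedup grows linearly with N, while the predicted speedup grows only logarithmically.
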
FIGURE 2.8 Cost versus speed

FIGURE 2.9 Number of processors versus speed.
