Lecture 2


Difference between Parallel Computing and Distributed Computing
Parallel Computing and Distributed Computing are two important models of computing that
play central roles in today's high-performance computing. Both are designed to handle a
large number of calculations by breaking the work into several tasks that run at the same
time; however, they differ in structure, function, and typical use. The following article
examines Parallel Computing and Distributed Computing, along with their advantages,
disadvantages, and applications.
What is Parallel Computing?
In parallel computing, multiple processors perform the multiple tasks assigned to them
simultaneously. Memory in parallel systems can be either shared or distributed. Parallel
computing provides concurrency and saves time and money.
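
As a concrete illustration, here is a minimal Python sketch of the idea: one large
computation is split into chunks that several processors work on simultaneously. The
chunk boundaries and the sum-of-squares workload are invented for the example, and
Python's standard multiprocessing module stands in for whatever parallel framework a
real system would use.

    # Minimal sketch of parallelism on one machine: four worker
    # processes each handle one chunk of a larger computation.
    from multiprocessing import Pool

    def sum_of_squares(bounds):
        lo, hi = bounds
        return sum(i * i for i in range(lo, hi))

    if __name__ == "__main__":
        # Split one large range into four chunks processed simultaneously.
        chunks = [(0, 250_000), (250_000, 500_000),
                  (500_000, 750_000), (750_000, 1_000_000)]
        with Pool(processes=4) as pool:
            partials = pool.map(sum_of_squares, chunks)
        print(sum(partials))  # same result as a sequential loop, sooner
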
Advantages of Parallel Computing
• Increased Speed: Several calculations are executed concurrently, reducing the
computation time required to complete large-scale problems (this is quantified in the
sketch after this list).
• Efficient Use of Resources: Takes full advantage of all the processing units available,
making the best use of the machine's computational power.
• Scalability: The more processors built into the system, the more complex the problems
that can be solved in a short time.
• Improved Performance for Complex Tasks: Best suited for tasks involving heavy
numerical calculation, such as numerical simulation, scientific analysis and modeling,
and data processing.
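
The speed and scalability points above are commonly quantified with Amdahl's law,
which bounds the achievable speedup when only a fraction of a program can run in
parallel. A small sketch (the 90% parallel fraction is an arbitrary example):

    # Amdahl's law: if a fraction p of a program can be parallelized,
    # the best possible speedup on n processors is 1 / ((1 - p) + p / n).
    def amdahl_speedup(p, n):
        return 1.0 / ((1.0 - p) + p / n)

    # With 90% of the work parallelizable, adding processors helps,
    # but the serial 10% caps the speedup below 10x no matter what.
    for n in (2, 4, 8, 64):
        print(n, "processors:", round(amdahl_speedup(0.9, n), 2), "x")
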
Disadvantages of Parallel Computing

• Complexity in Programming: Writing programs that organize tasks to run in parallel is
considerably more difficult than serial programming.
• Synchronization Issues: Processors operating concurrently must be synchronized, and
the required coordination can become a communication bottleneck.
• Hardware Costs: Implementing parallel computing may require components such as
multi-core processors, which can be more expensive than conventional systems.
What is Distributed Computing?
In distributed computing, we have multiple autonomous computers that appear to the user
as a single system. In distributed systems, there is no shared memory; the computers
communicate with each other through message passing. In distributed computing, a single
task is divided among different computers.
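
A minimal sketch of this message-passing style is shown below. Both "nodes" run on
localhost only so that the example is self-contained; the port number and the assigned
task are arbitrary choices, and a real deployment would place each node on a separate
machine.

    # Two autonomous processes with no shared memory cooperating by
    # exchanging messages over a socket. Port 50007 is arbitrary.
    import socket
    import time
    from multiprocessing import Process

    def worker_node():
        # The worker listens for a task message and replies with a result.
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
            srv.bind(("127.0.0.1", 50007))
            srv.listen(1)
            conn, _ = srv.accept()
            with conn:
                task = conn.recv(1024).decode()      # receive a message
                result = str(sum(range(int(task))))  # do the assigned work
                conn.sendall(result.encode())        # reply with a message

    if __name__ == "__main__":
        node = Process(target=worker_node)
        node.start()
        time.sleep(0.5)  # crude wait for the listener to come up
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
            cli.connect(("127.0.0.1", 50007))
            cli.sendall(b"1000000")         # hand part of the task to the node
            print(cli.recv(1024).decode())  # collect the node's result
        node.join()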

Advantages of Distributed Computing


• Fault Tolerance: If one node fails, it simply drops out of the computation; this is not
fatal for the overall computation, since the other computers keep participating, which
makes the system more reliable.
• Cost-Effective: Builds on existing hardware and can flexibly use commodity machines
instead of requiring expensive, specialized processors.
• Scalability: Distributed systems can scale horizontally by adding more machines to the
network, allowing them to take on greater workloads.
• Geographic Distribution: Distributed computing makes it possible to execute tasks at
different locations, close to where they are needed, reducing latency.
Disadvantages of Distributed Computing

• Complexity in Management: Managing a distributed system is harder because it
involves dealing with network latency and failures, as well as keeping the distributed
information synchronized.
• Communication Overhead: Inter-node communication can be slow between nodes that
are geographically distant, which can significantly reduce overall performance.
• Security Concerns: Distributed systems are generally less secure than centralized
systems because they depend heavily on a network.



Difference between Parallel Computing and Distributed Computing:

S.NO | Parallel Computing                                    | Distributed Computing
1.   | Many operations are performed simultaneously.         | System components are located at different locations.
2.   | A single computer is required.                        | Uses multiple computers.
3.   | Multiple processors perform multiple operations.      | Multiple computers perform multiple operations.
4.   | May have shared or distributed memory.                | Has only distributed memory.
5.   | Processors communicate with each other through a bus. | Computers communicate with each other through message passing.
6.   | Improves system performance.                          | Improves system scalability, fault tolerance, and resource-sharing capabilities.

Conclusion
Parallel Computing and Distributed Computing are effective computational models
developed to solve large computational problems. Parallel computing is suitable for
accelerating the computations of a single machine or a tightly coupled cluster, with
emphasis on processing speed. Distributed computing, on the other hand, uses many
separate, independent computers connected over a network, with a focus on scalability
and fault tolerance. Each model has its own strengths and weaknesses, so the choice
between them depends on the requirements of the particular application or system.



Parallel Operating System

What is a Parallel Operating System?

Parallel operating systems are designed to speed up the execution of programs by dividing
them into multiple segments. They manage multiple processors simultaneously, using
computing resources that may consist of a single computer with multiple processors,
several computers connected by a network to form a parallel-processing cluster, or a
combination of both.

How do Parallel Operating Systems Work?

Parallel processing is an evolution of serial processing in which a job is broken down into
manageable tasks that can be executed concurrently. Each task is further broken down
into a series of instructions that execute simultaneously on different CPUs. The parallel
operating system divides the work among the CPUs and their components so that each
part runs at full speed and power. In a conventional operating system, once an I/O device
is identified, the CPU transfers its data into memory before any operation, such as
processing or transmission, is performed on it. With a parallel operating system, more
data can be transferred and processed simultaneously, resulting in quicker data transfer.
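
The overlap described above can be imitated at the application level. In the following
sketch, several simulated device transfers (plain sleeps, an assumption made for the
demo) are in flight simultaneously, so four 0.2-second transfers finish in roughly 0.2
seconds rather than 0.8:

    # Sketch of overlapping "I/O transfers": the transfers proceed
    # simultaneously instead of one after another.
    import time
    from concurrent.futures import ThreadPoolExecutor

    def read_from_device(block_id):
        time.sleep(0.2)  # stand-in for one slow device transfer
        return f"block-{block_id}"

    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=4) as pool:
        blocks = list(pool.map(read_from_device, range(4)))
    print(blocks, round(time.perf_counter() - start, 2), "seconds")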

Types of Parallel Operating Systems

Parallel operating systems are further divided into two types: type-1 and type-2.

1. Type-1: acts as a native hypervisor and runs directly on bare metal, with guest
operating systems executing in virtual machines that share the physical hardware. This
architecture is known as native because the host does not provide any emulation of the
I/O system. For instance, VMware uses type-1 virtualization to execute an instance of
macOS.
2. Type-2: runs as a hosted hypervisor within a conventional operating system such as
Linux or Windows.

Application of Parallel Operating System

• Databases and data mining
• Advanced graphics
• Augmented reality
• Real-time simulation of systems
• Science and engineering



Examples of Parallel Operating Systems

A few examples of parallel operating systems are VMware, Microsoft Hyper-V, Red Hat
Enterprise, Oracle VM, KVM/QEMU, and Sun xVM Server.

The main reason for using a parallel operating system is to execute virtual machines that
serve different purposes. These machines act as dedicated servers that users can use for
running an application.

This is useful when multiple applications must be executed at once, sharing resources
without interfering with one another, while the system remains able to handle the load.

An example is an email server on which a web server and firewall rules can run at the
same time without any hindrance, sharing resources such as the CPU and RAM.

Functions of Parallel Operating System

The following are the important functions of a parallel operating system:

• Provides a multiprocessing environment
• Enforces security among processes
• Handles the load of tasks in the operating system
• Shares resources between processes
• Prevents processes or threads from interfering with one another (a lock-based sketch
follows this list)
• Uses all available resources efficiently
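
As referenced in the list, one conventional mechanism behind preventing interference
between processes is a synchronization primitive such as a lock. A minimal sketch (the
shared counter workload is invented for the example):

    # Four worker processes update one shared counter; the lock ensures
    # that their updates do not interfere with one another.
    from multiprocessing import Lock, Process, Value

    def deposit(counter, lock):
        for _ in range(10_000):
            with lock:  # only one process may update at a time
                counter.value += 1

    if __name__ == "__main__":
        lock = Lock()
        counter = Value("i", 0)  # an integer living in shared memory
        workers = [Process(target=deposit, args=(counter, lock))
                   for _ in range(4)]
        for w in workers:
            w.start()
        for w in workers:
            w.join()
        print(counter.value)  # reliably 40000 thanks to the lock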

Advantages of Parallel Operating System

• Saves time by allowing applications to execute simultaneously
• Solves large, complex problems
• Multiple resources can be used simultaneously
• Has larger memory available for allocating resources and tasks
• Faster than other operating systems for such workloads

Disadvantages of Parallel Operating System

• The architecture of a parallel operating system is complex
• High cost, since more resources are consumed by synchronization, data transfer,
threads, and communication; in the case of clusters, better cooling techniques are also
required
• High power consumption
• High maintenance



Models of Computation

Both sequential and parallel computers operate on a set (stream) of instructions called an
algorithm. This set of instructions tells the computer what it has to do at each step.

Depending on the instruction stream and data stream, computers can be classified into four
categories −

• Single Instruction stream, Single Data stream (SISD) computers
• Single Instruction stream, Multiple Data stream (SIMD) computers
• Multiple Instruction stream, Single Data stream (MISD) computers
• Multiple Instruction stream, Multiple Data stream (MIMD) computers

SISD Computers

SISD computers contain one control unit, one processing unit, and one memory unit.

In this type of computer, the processor receives a single stream of instructions from the
control unit and operates on a single stream of data from the memory unit. At each step of
the computation, the processor receives one instruction from the control unit and operates
on a single datum received from the memory unit.
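
In software terms, a SISD computation is simply an ordinary sequential program, with one
instruction applied to one datum per step:

    # SISD in miniature: a single instruction stream processes a
    # single data stream, one element per step.
    data = [3, 1, 4, 1, 5, 9]
    total = 0
    for x in data:  # one datum fetched and operated on at each step
        total += x
    print(total)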

SIMD Computers

SIMD computers contain one control unit, multiple processing units, and a shared memory
or interconnection network.



Here, a single control unit sends instructions to all processing units. At each step of the
computation, all the processors receive a single set of instructions from the control unit
and operate on different sets of data from the memory unit.

Each processing unit has its own local memory unit to store both data and instructions. In
SIMD computers, processors need to communicate among themselves, which they do
through the shared memory or the interconnection network.

While some of the processors execute a set of instructions, the remaining processors wait
for their next set of instructions. The instructions from the control unit decide which
processors will be active (executing instructions) and which will be inactive (waiting for
the next instruction).
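
Vectorized array operations are the everyday software face of this model: one operation is
applied across many data elements at once. A minimal NumPy sketch (the arrays are
invented, and NumPy is only an analogy for hardware SIMD):

    # SIMD in miniature: one operation is broadcast across many data
    # elements at once, instead of looping element by element.
    import numpy as np

    a = np.arange(8)    # data: [0, 1, ..., 7]
    b = np.full(8, 10)  # data: eight copies of 10
    print(a * b + 1)    # the same instruction applied to every element pair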

MISD Computers

As the name suggests, MISD computers contain multiple control units, multiple processing
units, and one common memory unit.



Here, each processor has its own control unit, and all of them share a common memory
unit. Each processor receives instructions individually from its own control unit, and they
all operate on a single stream of data as directed by the instructions received from their
respective control units. These processors operate simultaneously.
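
True MISD machines are rare in practice; the model is usually motivated by redundancy,
where several different instruction streams all consume the same single data stream. A
toy sketch of that idea (the data and the choice of functions are invented):

    # MISD in miniature: several distinct "instruction streams"
    # (functions) all consume the same single data stream, as in
    # redundant computation over one input.
    from concurrent.futures import ThreadPoolExecutor

    data = [4, 8, 15, 16, 23, 42]    # the one shared data stream
    programs = [sum, max, min, len]  # four distinct instruction streams

    with ThreadPoolExecutor(max_workers=4) as pool:
        futures = [pool.submit(p, data) for p in programs]
        for p, f in zip(programs, futures):
            print(p.__name__, "->", f.result())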

MIMD Computers

MIMD computers have multiple control units, multiple processing units, and a shared
memory or interconnection network.



Here, each processor has its own control unit, local memory unit, and arithmetic and logic unit.
They receive different sets of instructions from their respective control units and operate on
different sets of data.
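
In software terms this corresponds to ordinary multiprocessing, with each worker
executing its own program on its own data. A minimal sketch (both worker functions and
their inputs are invented):

    # MIMD in miniature: independent processes execute different
    # instruction streams on different data streams.
    from multiprocessing import Process

    def count_words(text):  # instruction stream 1, data stream 1
        print("words:", len(text.split()))

    def sum_numbers(numbers):  # instruction stream 2, data stream 2
        print("sum:", sum(numbers))

    if __name__ == "__main__":
        p1 = Process(target=count_words, args=("one two three",))
        p2 = Process(target=sum_numbers, args=([1, 2, 3, 4],))
        p1.start(); p2.start()
        p1.join(); p2.join()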

Note

• An MIMD computer that shares a common memory is known as a multiprocessor, while
one that uses an interconnection network is known as a multicomputer.
• Based on the physical distance between the processors, multicomputers are of two
types −
o Multicomputer − when all the processors are very close to one another (e.g., in
the same room).
o Distributed system − when all the processors are far away from one another (e.g.,
in different cities).

