Lecture 2
Distributed Computing
Parallel Computing and Distributed Computing are two important models that play central roles in today's high-performance computing. Both are designed to perform large numbers of calculations by breaking a problem down into several tasks that run in parallel; however, they differ in structure, function, and use. This lecture therefore examines Parallel Computing and Distributed Computing, together with their advantages, disadvantages, and applications.
What is Parallel Computing?
In parallel computing, multiple processors perform the multiple tasks assigned to them simultaneously. Memory in parallel systems can be either shared or distributed. Parallel computing provides concurrency and saves time and money.
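As a rough illustration (not part of the original lecture), the following Python sketch uses the standard multiprocessing module to hand independent tasks to several worker processes at once; the work() function and the task sizes are made-up placeholders.

from multiprocessing import Pool

def work(n):
    # Placeholder computation standing in for one independent task.
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    tasks = [100_000, 200_000, 300_000, 400_000]
    with Pool(processes=4) as pool:       # four worker processes run side by side
        results = pool.map(work, tasks)   # each task is handled by some worker
    print(results)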
Advantages of Parallel Computing
Increased Speed: Several calculations are executed concurrently, reducing the computation time required to complete large-scale problems (see the timing sketch after this list).
Efficient Use of Resources: Takes full advantage of all the processing units available, making the best use of the machine's computational power.
Scalability: The more processors built into the system, the more complex the problems that can be solved within a short time.
Improved Performance for Complex Tasks: Best suited for activities involving heavy numerical calculation, such as numerical simulation, scientific analysis and modeling, and data processing.
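To make the "increased speed" point concrete, here is a hedged timing sketch: the same made-up workload is run once serially and once through a process pool. The actual speedup depends on the number of cores and the task size, so the comparison is illustrative only.

import time
from multiprocessing import Pool

def work(n):
    # Placeholder CPU-bound task.
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    tasks = [2_000_000] * 8

    start = time.perf_counter()
    serial = [work(n) for n in tasks]          # one task after another
    t_serial = time.perf_counter() - start

    start = time.perf_counter()
    with Pool() as pool:                       # one worker per core by default
        parallel = pool.map(work, tasks)       # tasks run concurrently
    t_parallel = time.perf_counter() - start

    print(f"serial: {t_serial:.2f}s  parallel: {t_parallel:.2f}s")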
Disadvantages of Parallel Computing
Conclusion
Parallel Computing and Distributed Computing are effective computational models developed to solve large-scale problems. Parallel computing is suited to accelerating computation on a single machine or a cluster of machines, with the emphasis on processing speed. Distributed computing, on the other hand, uses many separate, independent computers connected over a network, with the focus on scalability and fault tolerance. Each model has its own strengths and weaknesses, so the choice between them depends on the requirements of the particular application or system.
Parallel operating systems are designed to speed up the execution of programs by dividing them into multiple segments. They manage multiple processors simultaneously, using computing resources that may be a single computer with multiple processors, several computers connected by a network to form a parallel-processing cluster, or a combination of both.
This is an evolution of serial processing in which a task is broken down into manageable parts that can be executed concurrently. Each part is further broken down into a series of instructions executed simultaneously on different CPUs. In a parallel operating system, the CPU and its components are divided into smaller units, each operating at full speed and power. In a conventional operating system, once an I/O device is identified, the CPU transfers the information into memory before any operations, such as processing or transmitting, are performed on it. With a parallel operating system, however, more data can be transferred and processed simultaneously, resulting in quicker data transfer.
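The overlap of data transfer and processing can be sketched in user code as well. The snippet below is only an analogy under assumed inputs: a small thread pool reads several hypothetical files while data that has already arrived is processed, instead of strictly alternating between transfer and processing.

from concurrent.futures import ThreadPoolExecutor

def read_file(path):
    # I/O step: transfer the file contents into memory.
    with open(path, "rb") as f:
        return f.read()

def process(data):
    # Stand-in for the processing/transmitting mentioned above.
    return len(data)

if __name__ == "__main__":
    paths = ["a.bin", "b.bin", "c.bin"]          # hypothetical input files
    with ThreadPoolExecutor(max_workers=3) as pool:
        # Remaining reads continue in the background while earlier results are processed.
        sizes = [process(data) for data in pool.map(read_file, paths)]
    print(sizes)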
The parallel operating system is further divided into two types: type-1 and type-2.
1. Type-1: acts as a native hypervisor and runs directly on bare metal; the operating systems or virtual machines it hosts share the physical hardware. This architecture is called native because there is no host OS underneath providing emulation of the I/O system. For instance, VMware uses type-1 virtualization to execute an instance of macOS.
2. Type-2: a hosted hypervisor that runs within a conventional operating system such as Linux or Windows.
A few examples of such systems are VMware, Microsoft Hyper-V, Red Hat Enterprise, Oracle VM, KVM/QEMU, and Sun xVM Server.
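As a small, hedged illustration of working with one of these hypervisors, the sketch below assumes the libvirt-python bindings are installed and a local KVM/QEMU hypervisor is running; it simply lists the virtual machines the hypervisor manages. The connection URI and the permissions required vary from system to system.

import libvirt

conn = libvirt.open("qemu:///system")            # connect to the local KVM/QEMU hypervisor
for dom in conn.listAllDomains():                # every virtual machine it knows about
    state, _reason = dom.state()
    print(dom.name(), "running" if state == libvirt.VIR_DOMAIN_RUNNING else "not running")
conn.close()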
The main reason for using a parallel operating system is to run virtual machines that serve different purposes. Each machine acts as a dedicated server that users can use for running an application.
This is useful when multiple applications must run at the same time: each gets its share of resources, avoids interfering with other processes, and the system can still handle the overall load.
An example is an email server on which a web server and firewall rules run at the same time without hindering one another; the resources shared are CPU, RAM, and so on.
Both sequential and parallel computers operate on a set (stream) of instructions called an algorithm. This set of instructions tells the computer what it has to do in each step.
Depending on the instruction stream and the data stream, computers can be classified into four categories:
SISD Computers
SISD computers contain one control unit, one processing unit, and one memory unit.
In this type of computer, the processor receives a single stream of instructions from the control unit and operates on a single stream of data from the memory unit. During computation, at each step, the processor receives one instruction from the control unit and operates on a single data item received from the memory unit.
SIMD Computers
SIMD computers contain one control unit, multiple processing units, and a shared memory or an interconnection network.
Each of the processing units has its own local memory unit to store both data and instructions.
In SIMD computers, processors need to communicate among themselves. This is done through shared memory or through the interconnection network.
While some of the processors execute a set of instructions, the remaining processors wait for their next set of instructions. Instructions from the control unit decide which processors will be active (execute instructions) and which will be inactive (wait for the next instruction).
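The contrast between the SISD and SIMD models can be mimicked in software. In the sketch below (an analogy, not real hardware SIMD), the plain loop applies one instruction to one data item per step, SISD-style, while the NumPy line issues the same "square" operation once over a whole array of data, which is the single-instruction, multiple-data idea.

import numpy as np

data = list(range(8))

# SISD-style: one instruction stream working on one data item at a time.
squared_loop = []
for x in data:
    squared_loop.append(x * x)

# SIMD-style: the same operation applied across many data elements at once.
squared_vec = np.array(data) ** 2

print(squared_loop, squared_vec.tolist())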
MISD Computers
As the name suggests, MISD computers contain multiple control units, multiple processing
units, and one common memory unit.
MIMD Computers
MIMD computers have multiple control units, multiple processing units, and a shared memory or an interconnection network.
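A minimal sketch of the MIMD idea, again as a software analogy rather than a hardware description: each process below has its own instruction stream (a different function) and its own data, and both execute at the same time. The function names and inputs are illustrative only.

from multiprocessing import Process

def sum_numbers(limit):
    print("sum_numbers:", sum(range(limit)))

def sum_squares(limit):
    print("sum_squares:", sum(i * i for i in range(limit)))

if __name__ == "__main__":
    procs = [Process(target=sum_numbers, args=(1_000_000,)),
             Process(target=sum_squares, args=(500_000,))]
    for p in procs:
        p.start()     # both instruction streams run concurrently
    for p in procs:
        p.join()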