Unit 3 in OS-1

Inter Process Communication


Inter Process Communication (IPC) is a mechanism usually provided by the operating system (OS). Its main goal is to provide communication between processes. In short, IPC allows one process to let another process know that some event has occurred, and to exchange data with it.

What is Inter Process Communication in OS?

Inter-process communication (IPC) serves as a means for transmitting data among multiple threads situated within one or more processes or
programs. These processes may be running on a single computer or distributed across a network of machines.

It is a set of programming interfaces that enable a programmer to coordinate actions across multiple processes that can run concurrently in an
operating system. This enables a given program to handle several user requests at the same time.

Because each user request may cause multiple processes to operate in the operating system, the processes may need to communicate with one
another. Because each IPC protocol technique has its own set of advantages and disadvantages, it is not uncommon for a single program to use
many protocols.

Why Inter Process Communication (IPC) needed?


Inter Process Communication in OS is needed because:

 Resource Sharing: IPC enables multiple processes to share resources, such as memory and file systems, allowing for better resource utilization
and increased system performance.
 Coordination and Synchronization: IPC provides a way for processes to coordinate their activities and synchronize access to shared resources,
ensuring that the system operates in a safe and controlled manner.
 Communication: IPC enables processes to communicate with each other, allowing for the exchange of data and information between processes.
 Modularity: IPC enables the development of modular software, where processes can be developed and executed independently, and then
combined to form a larger system.
 Flexibility: IPC allows processes to run on different hosts or nodes in a network, providing greater flexibility and scalability in large and
complex systems.

Overall, IPC is essential for building complex and scalable systems in operating systems, as it enables processes to coordinate their activities,
share resources, and communicate with each other in a safe and controlled manner.

Approaches for Inter Process Communication in OS

Following are some different approaches to inter process communication in OS, which are as follows:
Pipes

A pipe is a method of Inter Process Communication in OS. It allows processes to communicate with each other by reading from and writing to a
common channel, which acts as a buffer between the processes. Pipes can be either named or anonymous, depending on whether or not they have a
unique name.
Pipes are a simple and efficient method of communication: they provide a way for processes to exchange data without the
overhead of more complex IPC methods, such as sockets or message passing. However, pipes have limited capabilities compared to other IPC
methods: an anonymous pipe supports only one-way communication, and pipes have limited buffer sizes.
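As an illustration, the anonymous-pipe mechanism described above can be sketched in Python with `os.pipe()`. This is a POSIX-only sketch, since it uses `os.fork()`; the function name `pipe_roundtrip` is purely illustrative.

```python
import os

def pipe_roundtrip(message: bytes) -> bytes:
    """Send `message` from a child process to the parent over an anonymous pipe."""
    r, w = os.pipe()          # r: read end, w: write end of the buffer
    pid = os.fork()           # POSIX-only: duplicate the current process
    if pid == 0:              # child: the writer
        os.close(r)           # close the unused read end
        os.write(w, message)
        os.close(w)
        os._exit(0)
    else:                     # parent: the reader
        os.close(w)           # close the unused write end
        data = os.read(r, 1024)
        os.close(r)
        os.waitpid(pid, 0)    # reap the child
        return data

print(pipe_roundtrip(b"hello from the child"))
```

Each side closes the pipe end it does not use, which is what lets the reader observe end-of-file once the writer finishes; the one-way flow matches the anonymous pipe's unidirectional nature described above.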

Message Passing

Message passing is a method of Inter Process Communication in OS. It involves the exchange of messages between processes, where each
process sends and receives messages to coordinate its activities and exchange data with other processes.

In message passing, each process has a unique identifier, known as a process ID, and messages are sent from one process to another using this
identifier. When a process sends a message, it specifies the recipient process ID and the contents of the message, and the operating system is
responsible for delivering the message to the recipient process. The recipient process can then retrieve the contents of the message and respond,
if necessary.
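The send/receive pattern described above can be sketched with Python's `multiprocessing.Queue`, where the OS delivers messages between processes. The `worker` and `ping_pong` names are illustrative, and the two queues stand in for per-process mailboxes addressed by process ID.

```python
from multiprocessing import Process, Queue

def worker(inbox, outbox):
    """Receive one message, act on it, and send a reply."""
    n = inbox.get()        # blocks until a message arrives
    outbox.put(n * n)      # reply with the square

def ping_pong(n):
    """Send n to a worker process and wait for its reply."""
    inbox, outbox = Queue(), Queue()
    p = Process(target=worker, args=(inbox, outbox))
    p.start()
    inbox.put(n)           # the OS delivers the message to the worker
    result = outbox.get()  # wait for the worker's reply
    p.join()
    return result

if __name__ == "__main__":
    print(ping_pong(7))    # 49
```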

Shared Memory

Shared memory is a method of Inter Process Communication in OS. It involves the use of a shared memory region, where multiple processes can
access the same data in memory. Shared memory provides a way for processes to exchange data and coordinate their activities by accessing a
common area of memory.
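A minimal sketch of this idea uses Python's `multiprocessing.Value` to place one integer in a memory region visible to several processes. The names are illustrative; the lock attached to the shared value supplies the coordination mentioned above.

```python
from multiprocessing import Process, Value

def deposit(balance, amount):
    """Add `amount` to the shared balance."""
    with balance.get_lock():      # serialize access to the shared int
        balance.value += amount

def run_deposits():
    balance = Value("i", 0)       # one C int placed in shared memory
    workers = [Process(target=deposit, args=(balance, 10)) for _ in range(4)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    return balance.value          # every process wrote to the same memory

if __name__ == "__main__":
    print(run_deposits())         # 40
```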
Direct Communication

Direct communication is a method of Inter Process Communication in OS. It involves the direct exchange of data between processes, without the
use of intermediate communication mechanisms such as message passing, message queues, or shared memory.

In direct communication, processes communicate with each other by exchanging data directly, either by passing data as parameters to function
calls or by reading and writing to shared data structures in memory. Direct communication is typically used when processes need to exchange
small amounts of data, or when they need to coordinate their activities in a simple and straightforward manner.
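Within a single process, the same idea can be sketched with threads that communicate directly through a shared data structure; the `shared` dictionary and `add` function below are illustrative names, not part of any API.

```python
import threading

shared = {"total": 0}             # data structure read and written directly
lock = threading.Lock()

def add(amount):
    """Data is passed as a function parameter and merged into shared state."""
    with lock:                    # simple coordination for the shared write
        shared["total"] += amount

threads = [threading.Thread(target=add, args=(5,)) for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(shared["total"])            # 15
```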

Process Synchronization
Process Synchronization is the coordination of execution of multiple processes in a multi-process system to ensure that they access shared
resources in a controlled and predictable manner. It aims to resolve the problem of race conditions and other synchronization issues in a
concurrent system.

In a multi-process system, synchronization is necessary to ensure data consistency and integrity, and to avoid the risk of deadlocks and other
synchronization problems. Process synchronization is an important aspect of modern operating systems, and it plays a crucial role in ensuring
the correct and efficient functioning of multi-process systems.
On the basis of synchronization, processes are categorized as one of the following two types:
 Independent Process: The execution of one process does not affect the execution of other processes.

For example, if two people withdraw money at the same time from different banks, there is obviously no dependency between the transactions:
each can be performed independently.

 Cooperative Process: A process that can affect or be affected by other processes executing in the system. Such processes may share memory,
variables, resources, and code.

For example, musicians in a band must coordinate with one another; without that coordination the result is noise rather than music. Cooperative
processes likewise depend on each other.

Synchronization problems arise with cooperative processes because cooperative processes share resources.
Semaphores in Operating System
Semaphores are integer variables used for process synchronization: they solve the critical section problem by means of two atomic operations,
wait and signal.
The definitions of wait and signal are as follows −

 Wait
The wait operation decrements the value of its argument S if it is positive. While S is zero or negative, the process keeps waiting (in this definition, by busy-waiting) until S becomes positive.
wait(S)
{
    while (S <= 0)
        ;      /* busy-wait until S becomes positive */
    S--;
}

 Signal
The signal operation increments the value of its argument S.
signal(S)
{
    S++;
}

Types of Semaphores

There are two main types of semaphores i.e. counting semaphores and binary semaphores. Details about these are given as follows −

 Counting Semaphores
These are integer value semaphores and have an unrestricted value domain. These semaphores are used to coordinate the resource
access, where the semaphore count is the number of available resources. If the resources are added, semaphore count
automatically incremented and if the resources are removed, the count is decremented.
 Binary Semaphores
Binary semaphores are like counting semaphores, but their value is restricted to 0 and 1. The wait operation succeeds only when
the semaphore is 1, and the signal operation succeeds when the semaphore is 0. Binary semaphores are sometimes easier to implement
than counting semaphores.
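Both types above can be sketched with Python's `threading.Semaphore`. Here the count 2 models two identical resources; `active` and `peak` are illustrative bookkeeping names that record how many threads ever held a resource at once.

```python
import threading
import time

pool = threading.Semaphore(2)     # counting semaphore: 2 available resources
guard = threading.Lock()
active = 0
peak = 0

def use_resource():
    global active, peak
    pool.acquire()                # wait(): blocks while the count is 0
    with guard:
        active += 1
        peak = max(peak, active)  # record the high-water mark
    time.sleep(0.01)              # hold the resource briefly
    with guard:
        active -= 1
    pool.release()                # signal(): increments the count

threads = [threading.Thread(target=use_resource) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(peak)                       # never exceeds the semaphore's count of 2
```

A `threading.Semaphore(1)` behaves as the binary semaphore described above, since its count can only alternate between 0 and 1.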

Advantages of Semaphores

Some of the advantages of semaphores are as follows −

 Semaphores allow only one process into the critical section. They follow the mutual exclusion principle strictly and are much
more efficient than some other methods of synchronization.
 When semaphores are implemented with blocking queues rather than the busy-wait shown above, there is no resource wastage:
processor time is not spent repeatedly checking whether a process may enter the critical section.
 Semaphores are implemented in the machine-independent code of the microkernel, so they are machine independent.

Disadvantages of Semaphores

Some of the disadvantages of semaphores are as follows −

 Semaphores are complicated, and the wait and signal operations must be performed in the correct order to prevent deadlocks.
 Semaphores are impractical for large-scale use, as their use leads to loss of modularity. This happens because the wait and signal
operations prevent the creation of a structured layout for the system.
 Semaphores may lead to priority inversion, where low-priority processes access the critical section first and high-priority
processes access it later.
Race Condition

A race condition is a situation that may occur inside a critical section: the result of executing multiple threads in the critical
section differs according to the order in which the threads execute.
Race conditions in critical sections can be avoided if the critical section is treated as an atomic instruction. Proper thread synchronization
using locks or atomic variables can also prevent race conditions.
What is Race Condition in OS?

A race condition is a problem that occurs in an operating system (OS) where two or more processes or threads are executing concurrently. The
outcome of their execution depends on the order in which they are executed. In a race condition, the exact timing of events is unpredictable, and
the outcome of the execution may vary based on the timing. This can result in unexpected or incorrect behavior of the system.

For example:

If two threads are simultaneously accessing and changing the same shared resource, such as a variable or a file, the final state of that resource
depends on the order in which the threads execute. If the threads are not correctly synchronized, they can overwrite each other's changes, causing
incorrect results or even system crashes.
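The lost-update scenario just described can be reproduced deterministically by widening the read-modify-write window with a sleep (a sketch: the sleep stands in for unpredictable scheduling, and the function names are illustrative).

```python
import threading
import time

counter = 0
lock = threading.Lock()

def unsafe_increment():
    global counter
    tmp = counter          # read the shared value
    time.sleep(0.1)        # widen the race window: both threads read 0
    counter = tmp + 1      # write back a stale value

def safe_increment():
    global counter
    with lock:             # the lock makes read-modify-write atomic
        tmp = counter
        time.sleep(0.1)
        counter = tmp + 1

def run(target):
    global counter
    counter = 0
    threads = [threading.Thread(target=target) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter

print(run(unsafe_increment))   # 1: one of the two updates was lost
print(run(safe_increment))     # 2: synchronization prevents the race
```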

Deadlock in Operating System


A process in an operating system uses resources in the following way:
1) Requests the resource
2) Uses the resource
3) Releases the resource

What is Deadlock?
Deadlock is a situation that occurs in an OS when a process enters a waiting state because
another waiting process is holding the resource it demands. Deadlock is a common problem in
multiprocessing, where several processes share a mutually exclusive resource.
(Equivalently, deadlock is a situation where a set of processes are blocked because each process holds a
resource while waiting for a resource acquired by another process.)
For example, suppose process 1 holds resource 1 and needs to acquire resource 2, while process 2 holds resource 2
and needs to acquire resource 1. Process 1 and process 2 are in deadlock: each needs the other's resource to
complete its execution, and neither will relinquish the resource it holds.

Example of Deadlock
 A real-world example is traffic crossing a single-lane bridge, which can carry cars in only one direction at a time.
 Here, the bridge is the resource.
 When a deadlock happens, it can be resolved if one car backs up (preempt resources and roll back).
 Several cars may have to be backed up if a deadlock situation occurs.
 So starvation is possible.
Deadlock Conditions
Deadlock can arise only if the following four conditions hold simultaneously (the Necessary
Conditions); if any one of these conditions is prevented, deadlock can be avoided.

Mutual Exclusion: Two or more resources are non-shareable (only one process can use a resource at a
time).
>> There must be a resource that can be held by only one process at a time. For example, a single
instance of Resource 1 may be held by Process 1 only.

Hold and Wait: A process is holding at least one resource and waiting for additional resources.
A process can hold multiple resources and still request more resources held by other processes. For
example, process 2 may hold Resource 2 and Resource 3 while requesting Resource 1, which is held by
Process 1.
No Preemption: A resource cannot be taken from a process unless the process releases the resource.

>> A resource cannot be preempted from a process by force; a process can only release a resource
voluntarily. For example, Process 2 cannot preempt Resource 1 from Process 1; the resource is only
released when Process 1 relinquishes it voluntarily after its execution is complete.
Circular Wait: A set of processes wait for each other in circular form.

A process is waiting for a resource held by a second process, which is waiting for a
resource held by a third process, and so on, until the last process is waiting for a resource held by
the first process. This forms a circular chain. For example: Process 1 is allocated Resource 2 and
is requesting Resource 1, while Process 2 is allocated Resource 1 and is requesting
Resource 2. This forms a circular wait loop.

Deadlock Model

A deadlock occurs when a set of processes is stalled because each process holds a resource while waiting to acquire a
resource held by another process. For example, Process 1 holds Resource 1 and is waiting for Resource 2, while
Process 2 holds Resource 2 and is waiting for Resource 1.
System Model :
 For the purposes of deadlock discussion, a system can be modeled as a collection of limited resources that can
be divided into different categories and allocated to a variety of processes, each with different requirements.
 Memory, printers, CPUs, open files, tape drives, CD-ROMs, and other resources are examples of resource
categories.
 By definition, all resources within a category are equivalent, and any of the resources within that category can
equally satisfy a request from that category. If this is not the case (i.e. if there is some difference between the
resources within a category), then that category must be subdivided further. For example, the term “printers”
may need to be subdivided into “laser printers” and “color inkjet printers.”
 Some categories may only have one resource.
 For all kernel-managed resources, the kernel keeps track of which resources are free and which are allocated,
of the process each resource is allocated to, and of a queue of processes waiting for each resource to become
available. Application-managed resources can be controlled using mutexes or wait() and signal() calls
(i.e. binary or counting semaphores).
 When every process in a set is waiting for a resource that is currently assigned to another process in the set, the
set is said to be deadlocked.

Operations :
In normal operation, a process must request a resource before using it and release it when finished, as shown below.
1. Request –
If the request cannot be granted immediately, the process must wait until the required resource(s) become
available. The system, for example, uses the functions open(), malloc(), new(), and request().
2. Use –
The process makes use of the resource, such as printing to a printer or reading from a file.
3. Release –
The process relinquishes the resource, allowing it to be used by other processes.
Necessary Conditions :
There are four conditions that must be met in order to achieve deadlock as follows.
1. Mutual Exclusion –
At least one resource must be kept in a non-shareable state; if another process requests it, it must wait for it to be
released.

2. Hold and Wait –


A process must hold at least one resource while also waiting for at least one resource that another process is
currently holding.

3. No preemption –
Once a process holds a resource (i.e. after its request is granted), that resource cannot be taken away from that
process until the process voluntarily releases it.

4. Circular Wait –
There must be a set of processes P0, P1, P2, …, PN such that every P[i] is waiting for a resource held by
P[(i + 1) % (N + 1)]. (It is important to note that this condition implies the hold-and-wait
condition, but dealing with the four conditions is easier if they are considered separately.)

Methods for Handling Deadlocks :


In general, there are three approaches to dealing with deadlocks, as follows.
1. Deadlock prevention or avoidance: do not allow the system to enter a deadlocked state.
2. Deadlock detection and recovery: when a deadlock is detected, abort a process or preempt some
resources.
3. Ignore the problem entirely.
Some notes on these approaches:
 To avoid deadlocks, the system requires more information about all processes. In particular, the system must
know what resources a process will or may request in the future. (Depending on the algorithm, this can
range from a simple worst-case maximum to a complete resource request and release plan for each process.)
 Deadlock detection is relatively simple, but deadlock recovery necessitates either aborting processes or
preempting resources, neither of which is an appealing option.
 If deadlocks are neither avoided nor detected, the system will gradually slow down as more processes become stuck
waiting for resources that the deadlock has blocked and for other waiting processes. Unfortunately, when the
computing requirements of a real-time process are high, this slowdown can be confused with a general system
slowdown.
Deadlock Prevention :
Deadlocks can be prevented by ensuring that at least one of the four necessary conditions cannot hold, as follows.
Condition-1 :
Mutual Exclusion :
 Shareable resources do not require mutually exclusive access; read-only files, for example, do not cause deadlocks.
 Unfortunately, some resources, such as printers and tape drives, require exclusive access by a single
process.
Condition-2 :
Hold and Wait :
To avoid this condition, processes must be prevented from holding one or more resources while also waiting for one or
more others. There are a few possibilities here:
 Make it a requirement that all processes request all resources at the same time. This can be a waste of system
resources if a process requires one resource early in its execution but does not require another until much later.
 Processes that hold resources must release them prior to requesting new ones, and then re-acquire the released
resources alongside the new ones in a single new request. This can be a problem if a process uses a resource to
partially complete an operation and then fails to re-allocate it after it is released.
 If a process necessitates the use of one or more popular resources, either of the methods described above can
result in starvation.
Condition-3 :
No Preemption :
When possible, preemption of process resource allocations can help to avoid deadlocks.
 One approach is that if a process is forced to wait when requesting a new resource, all other resources previously
held by this process are implicitly released (preempted), forcing this process to re-acquire the old resources
alongside the new resources in a single request, as discussed previously.
 Another approach is that when a resource is requested, and it is not available, the system looks to see what other
processes are currently using those resources and are themselves blocked while waiting for another resource. If
such a process is discovered, some of their resources may be preempted and added to the list of resources that
the process is looking for.
 Either of these approaches may be appropriate for resources whose states can be easily saved and restored, such
as registers and memory, but they are generally inapplicable to other devices, such as printers and tape drives.
Condition-4 :
Circular Wait :
 To avoid circular waits, number all resources and insist that processes request resources in strictly increasing (or
decreasing) order.
 To put it another way, before requesting resource Rj, a process must first release all Ri such that i >= j.
 The relative ordering of the various resources is a significant challenge in this scheme.
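The resource-ordering rule can be sketched with two Python locks: both threads name the locks in different orders, but acquisition always follows the global numbering, so no circular wait can form (the `ORDER` table and function names are illustrative).

```python
import threading

lock_a, lock_b = threading.Lock(), threading.Lock()
ORDER = {id(lock_a): 1, id(lock_b): 2}   # global resource numbering

def acquire_in_order(*locks):
    """Always acquire locks in strictly increasing resource number."""
    for lk in sorted(locks, key=lambda l: ORDER[id(l)]):
        lk.acquire()

def release_all(*locks):
    for lk in locks:
        lk.release()

def task(name, first, second, log):
    acquire_in_order(first, second)      # both threads take lock_a, then lock_b
    log.append(name)
    release_all(first, second)

log = []
t1 = threading.Thread(target=task, args=("t1", lock_a, lock_b, log))
t2 = threading.Thread(target=task, args=("t2", lock_b, lock_a, log))  # reversed args
t1.start(); t2.start()
t1.join(); t2.join()
print(sorted(log))                       # ['t1', 't2'] -- both finish, no deadlock
```

Without the ordering, the reversed argument lists could let each thread grab one lock and wait forever for the other, which is exactly the circular wait being prevented.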
Deadlock Avoidance :
 The general idea behind deadlock avoidance is to avoid deadlocks by avoiding at least one of the
aforementioned conditions.
 This necessitates more information about each process AND results in low device utilization. (This is a
conservative approach.)
 The scheduler only needs to know the maximum number of each resource that a process could potentially use in
some algorithms. In more complex algorithms, the scheduler can also use the schedule to determine which
resources are required and in what order.
 When a scheduler determines that starting a process or granting resource requests will result in future deadlocks,
the process is simply not started or the request is denied.
 The number of available and allocated resources, as well as the maximum requirements of all processes in the
system, define a resource allocation state.
Deadlock Detection :
 If deadlocks cannot be avoided, another approach is to detect them and recover in some way.
 Aside from the performance hit of constantly checking for deadlocks, a policy/algorithm for recovering from
deadlocks must be in place, and when processes must be aborted or have their resources preempted, there is the
possibility of lost work.
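Detection can be sketched as cycle-finding in a wait-for graph. The sketch below assumes single-instance resources, so each process waits on at most one other process; `has_deadlock` is an illustrative name.

```python
def has_deadlock(wait_for):
    """wait_for maps each waiting process to the process it waits on.
    Return True if the graph contains a cycle (i.e. a deadlock)."""
    def cycle_from(start):
        seen = set()
        node = start
        while node in wait_for:      # follow the wait-for edges
            if node in seen:
                return True          # revisited a node: cycle found
            seen.add(node)
            node = wait_for[node]
        return False                 # chain ended at a process that can run

    return any(cycle_from(p) for p in wait_for)

print(has_deadlock({"P1": "P2", "P2": "P1"}))   # True: circular wait
print(has_deadlock({"P1": "P2", "P2": "P3"}))   # False: P3 can finish first
```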
Recovery From Deadlock :
There are three basic approaches to getting out of a deadlock:
1. Inform the system operator and allow him/her to intervene manually.
2. Abort one or more of the processes involved in the deadlock.
3. Preempt resources from one or more of the processes.
Approach of Recovery From Deadlock :
Here, we will discuss the approach of Recovery From Deadlock as follows.
Approach-1:
Process Termination :
There are two basic approaches to breaking a deadlock by terminating processes, as follows.
1. Abort all processes involved in the deadlock. This certainly breaks the deadlock, but at the expense of
terminating more processes than are strictly necessary.
2. Abort processes one at a time until the deadlock is broken. This method is more conservative, but
it requires running deadlock detection after each step.
In the latter case, many factors can influence which process is terminated next:
1. The priority of the process.
2. How long the process has been running and how close it is to completion.
3. How many and what kind of resources the process holds (are they simple to preempt and restore?).
4. How many more resources the process needs in order to complete.
5. How many processes will have to be killed.
6. Whether the process is batch or interactive.

Approach-2

Resource Preemption :
When allocating resources to break the deadlock, three critical issues must be addressed:
1. Selecting a victim –
Many of the decision criteria outlined above apply to determine which resources to preempt from which
processes.
2. Rollback –
A preempted process should ideally be rolled back to a safe state before the point at which that resource was
originally assigned to the process. Unfortunately, determining such a safe state can be difficult or impossible, so
the only safe rollback is to start from the beginning. (In other words, halt and restart the process.)
3. Starvation –
How do you ensure that a process does not starve because its resources are constantly being preempted? One
option is to use a priority system and raise the priority of a process whenever its resources are preempted. It
should eventually gain a high enough priority that it is no longer preempted.

Detection and Recovery: Another approach to dealing with deadlocks is to detect them when they occur and recover
from them. This can involve killing one or more of the processes involved in the deadlock or releasing some of the
resources they hold.
Deadlock Avoidance
Deadlock avoidance can be done with the Banker's Algorithm.
Banker's Algorithm
The Banker's Algorithm is a resource-allocation and deadlock-avoidance algorithm that tests every resource request made by a
process by checking for a safe state: if the system remains in a safe state after granting the request, the request is
allowed; if no safe state would result, the request is denied.
Inputs to Banker's Algorithm:

1. The maximum need of resources of each process.
2. The resources currently allocated to each process.
3. The maximum free resources available in the system.
A request is granted only under the following conditions:
1. The request made by the process is less than or equal to the maximum need of that process.
2. The request made by the process is less than or equal to the freely available resources in the system.
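A sketch of these two checks for a single resource type (real systems use a vector per resource type; `is_safe` and `request_granted` are illustrative names). `is_safe` searches for an order in which every process can obtain its remaining need and finish.

```python
def is_safe(available, max_need, allocated):
    """Return True if every process can finish in some order (a safe state)."""
    n = len(max_need)
    work = available
    need = [max_need[i] - allocated[i] for i in range(n)]
    finished = [False] * n
    while True:
        progressed = False
        for i in range(n):
            if not finished[i] and need[i] <= work:
                work += allocated[i]   # i finishes and releases its resources
                finished[i] = True
                progressed = True
        if not progressed:
            return all(finished)

def request_granted(request, i, available, max_need, allocated):
    """Grant process i's request only if the resulting state stays safe."""
    if request > max_need[i] - allocated[i] or request > available:
        return False
    new_alloc = allocated[:i] + [allocated[i] + request] + allocated[i + 1:]
    return is_safe(available - request, max_need, new_alloc)

# 12 total units: allocated = [5, 2, 2], so 3 are free.
print(is_safe(3, [10, 4, 9], [5, 2, 2]))                # True
print(request_granted(1, 0, 3, [10, 4, 9], [5, 2, 2]))  # True
print(request_granted(1, 2, 3, [10, 4, 9], [5, 2, 2]))  # False: unsafe
```

In the example, granting 1 unit to P0 keeps the state safe (P1, then P0, then P2 can finish), but granting 1 unit to P2 leaves no order in which P0 and P2 can both finish, so that request is denied.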
Timeouts: To avoid deadlocks caused by indefinite waiting, a timeout mechanism can be used to limit the amount of time
a process can wait for a resource. If the resource is not available within the timeout period, the process can be forced to
release its current resources and try again later.
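The timeout idea can be sketched with a Python lock as the contended resource (`try_with_timeout` is an illustrative name; a real system would release its other resources and retry after backing off).

```python
import threading

resource = threading.Lock()

def try_with_timeout(timeout=0.1):
    """Give up if the resource cannot be acquired within `timeout` seconds."""
    if resource.acquire(timeout=timeout):
        try:
            return "acquired"
        finally:
            resource.release()
    return "timed out, retry later"   # back off instead of waiting forever

print(try_with_timeout())             # acquired: the lock was free
resource.acquire()                    # simulate another process holding it
print(try_with_timeout(0.05))         # timed out, retry later
resource.release()
```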
