Unit 3 in OS-1
Inter-process communication (IPC) is a mechanism for exchanging data among multiple threads in one or more processes or
programs. These processes may run on a single computer or be distributed across a network of machines.
It is a set of programming interfaces that lets a programmer coordinate activities across multiple processes that can run concurrently in an
operating system. This enables a given program to handle several user requests at the same time.
Because each user request may cause multiple processes to run in the operating system, those processes may need to communicate with one
another. Each IPC technique has its own set of advantages and disadvantages, so it is not uncommon for a single program to use
several of them.
Resource Sharing: IPC enables multiple processes to share resources, such as memory and file systems, allowing for better resource utilization
and increased system performance.
Coordination and Synchronization: IPC provides a way for processes to coordinate their activities and synchronize access to shared resources,
ensuring that the system operates in a safe and controlled manner.
Communication: IPC enables processes to communicate with each other, allowing for the exchange of data and information between processes.
Modularity: IPC enables the development of modular software, where processes can be developed and executed independently, and then
combined to form a larger system.
Flexibility: IPC allows processes to run on different hosts or nodes in a network, providing greater flexibility and scalability in large and
complex systems.
Overall, IPC is essential for building complex and scalable systems in operating systems, as it enables processes to coordinate their activities,
share resources, and communicate with each other in a safe and controlled manner.
Some common approaches to inter-process communication in an OS are the following:
Pipes
A pipe is a method of Inter Process Communication in OS. It allows processes to communicate with each other by reading from and writing to a
common channel, which acts as a buffer between the processes. Pipes can be either named or anonymous, depending on whether they have a
unique name in the filesystem.
The use of pipes in IPC is a simple and efficient method of communication, as they provide a way for processes to exchange data without the
overhead of more complex IPC methods, such as sockets or message passing. However, pipes have limited capabilities compared to other IPC
methods, as they only support one-way communication and have limited buffer sizes.
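As a sketch (assuming a POSIX system, since os.fork and os.pipe are not available on Windows), an anonymous pipe can carry data from a child process to its parent:

```python
import os

def pipe_demo(message: bytes) -> bytes:
    r, w = os.pipe()          # r = read end, w = write end of the pipe
    pid = os.fork()           # create a child process (POSIX only)
    if pid == 0:
        os.close(r)           # child: writes only
        os.write(w, message)
        os.close(w)
        os._exit(0)
    os.close(w)               # parent: reads only
    data = os.read(r, 1024)   # blocks until the child writes
    os.close(r)
    os.waitpid(pid, 0)
    return data
```

Note that the data flows one way only; two-way traffic would require a second pipe, which matches the limitation noted above.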
Message Passing
Message passing is a method of Inter Process Communication in OS. It involves the exchange of messages between processes, where each
process sends and receives messages to coordinate its activities and exchange data with other processes.
In message passing, each process has a unique identifier, known as a process ID, and messages are sent from one process to another using this
identifier. When a process sends a message, it specifies the recipient process ID and the contents of the message, and the operating system is
responsible for delivering the message to the recipient process. The recipient process can then retrieve the contents of the message and respond,
if necessary.
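The send/receive pattern described above can be sketched with Python's multiprocessing.Queue; the sender tag and payload used here are illustrative, not part of any OS API:

```python
from multiprocessing import Process, Queue

def worker(q):
    # The "message" carries a sender tag and a payload.
    q.put(("worker", 42))

def message_demo():
    q = Queue()
    p = Process(target=worker, args=(q,))
    p.start()
    sender, value = q.get()   # blocks until a message is delivered
    p.join()
    return sender, value
```

Here the operating system (via the queue) handles delivery; the receiver simply blocks until a message arrives.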
Shared Memory
Shared memory is a method of Inter Process Communication in OS. It involves the use of a shared memory region, where multiple processes can
access the same data in memory. Shared memory provides a way for processes to exchange data and coordinate their activities by accessing a
common area of memory.
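A minimal sketch using multiprocessing.Value, which places an integer in a memory region shared by several processes (the increment of 100 per process is arbitrary):

```python
from multiprocessing import Process, Value

def add_hundred(counter):
    # get_lock() synchronizes access to the shared memory region.
    with counter.get_lock():
        counter.value += 100

def shared_memory_demo(n_procs: int = 4) -> int:
    counter = Value("i", 0)   # a C int living in shared memory
    procs = [Process(target=add_hundred, args=(counter,)) for _ in range(n_procs)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    return counter.value
```

Because all processes write to the same memory region, access must be synchronized, which previews the process-synchronization topic below.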
Direct Communication
Direct communication is a method of Inter Process Communication in OS. It involves the direct exchange of data between processes, without the
use of intermediate communication mechanisms such as message passing, message queues, or shared memory.
In direct communication, processes communicate with each other by exchanging data directly, either by passing data as parameters to function
calls or by reading and writing to shared data structures in memory. Direct communication is typically used when processes need to exchange
small amounts of data, or when they need to coordinate their activities in a simple and straightforward manner.
Process Synchronization
Process Synchronization is the coordination of execution of multiple processes in a multi-process system to ensure that they access shared
resources in a controlled and predictable manner. It aims to resolve the problem of race conditions and other synchronization issues in a
concurrent system.
In a multi-process system, synchronization is necessary to ensure data consistency and integrity, and to avoid the risk of deadlocks and other
synchronization problems. Process synchronization is an important aspect of modern operating systems, and it plays a crucial role in ensuring
the correct and efficient functioning of multi-process systems.
On the basis of synchronization, processes are categorized as one of the following two types:
Independent Process: The execution of one process does not affect the execution of other processes.
For example: if two people withdraw money at the same time from different banks, there is no dependency between the two
transactions; each can proceed independently.
Cooperative Process: A process that can affect or be affected by other processes executing in the system. Cooperative processes may
share memory, variables, resources, and code.
For example: musicians playing together must coordinate with one another; if each plays without regard to the others, the result is
noise rather than music. Cooperative processes likewise depend on each other and must coordinate their activities.
The process synchronization problem arises with cooperative processes because cooperative processes share resources.
Semaphores in Operating System
Semaphores are integer variables used to solve the critical section problem by means of two atomic operations, wait and signal, which are used
for process synchronization.
The definitions of wait and signal are as follows −
Wait
The wait operation decrements the value of its argument S if it is positive. If S is zero or negative, the process waits (busy-waits) until S becomes positive.
wait(S)
{
    while (S <= 0)
        ;   // busy wait until S becomes positive
    S--;
}
Signal
The signal operation increments the value of its argument S.
signal(S)
{
    S++;
}
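The wait/signal pair above corresponds to acquire/release on Python's threading.Semaphore. This sketch (the thread and permit counts are arbitrary) verifies that no more than `permits` threads are ever inside the guarded section at once:

```python
import threading
import time

def semaphore_demo(n_threads: int = 8, permits: int = 2) -> int:
    sem = threading.Semaphore(permits)   # counting semaphore, initial value = permits
    lock = threading.Lock()
    state = {"active": 0, "peak": 0}

    def use_resource():
        sem.acquire()                    # wait(S): blocks while the count is 0
        with lock:
            state["active"] += 1
            state["peak"] = max(state["peak"], state["active"])
        time.sleep(0.01)                 # simulate holding the resource
        with lock:
            state["active"] -= 1
        sem.release()                    # signal(S): increments the count

    threads = [threading.Thread(target=use_resource) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return state["peak"]                 # never exceeds `permits`
```

Unlike the busy-waiting pseudocode above, threading.Semaphore blocks the waiting thread, so no CPU time is wasted polling.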
Types of Semaphores
There are two main types of semaphores i.e. counting semaphores and binary semaphores. Details about these are given as follows −
Counting Semaphores
These are integer-valued semaphores with an unrestricted value domain. They are used to coordinate resource
access, where the semaphore count is the number of available resources. When resources are added the count
is incremented, and when resources are removed the count is decremented.
Binary Semaphores
Binary semaphores are like counting semaphores, but their value is restricted to 0 and 1. The wait operation succeeds only when
the semaphore is 1 (setting it to 0), and the signal operation sets it back to 1. Binary semaphores are sometimes easier to implement
than counting semaphores.
Advantages of Semaphores
Semaphores allow only one process at a time into the critical section. They strictly enforce the mutual exclusion principle and are more
efficient than many other synchronization methods.
With a blocking (non-busy-waiting) semaphore implementation, no processor time is wasted repeatedly checking whether a
process may enter the critical section.
Semaphores are implemented in the machine independent code of the microkernel. So they are machine independent.
Disadvantages of Semaphores
Semaphores are complicated, so the wait and signal operations must be performed in the correct order to prevent deadlocks.
Semaphores are impractical for large-scale use, as their use leads to a loss of modularity. This happens because wait and signal
operations scattered through the code prevent the creation of a structured layout for the system.
Semaphores may lead to a priority inversion where low priority processes may access the critical section first and high priority
processes later.
Race Condition
A race condition is a situation that may occur inside a critical section. It happens when the result of executing multiple threads in the critical
section depends on the order in which the threads execute.
Race conditions in critical sections can be avoided if the critical section is treated as an atomic instruction. Also, proper thread synchronization
using locks or atomic variables can prevent race conditions.
What is Race Condition in OS?
A race condition is a problem that occurs in an operating system (OS) where two or more processes or threads are executing concurrently. The
outcome of their execution depends on the order in which they are executed. In a race condition, the exact timing of events is unpredictable, and
the outcome of the execution may vary based on the timing. This can result in unexpected or incorrect behavior of the system.
For example:
If two threads are simultaneously accessing and changing the same shared resource, such as a variable or a file, the final state of that resource
depends on the order in which the threads execute. If the threads are not correctly synchronized, they can overwrite each other's changes, causing
incorrect results or even system crashes.
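The fix described above, synchronizing access with a lock, can be sketched as follows; with the lock held around each update, every increment is applied and the final count is exactly n_threads * iters:

```python
import threading

def locked_increment(n_threads: int = 4, iters: int = 25_000) -> int:
    counter = 0
    lock = threading.Lock()

    def work():
        nonlocal counter
        for _ in range(iters):
            with lock:        # critical section: one thread at a time
                counter += 1

    threads = [threading.Thread(target=work) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter
```

Without the lock, the read-modify-write of `counter += 1` can interleave between threads and updates can be lost, which is exactly the race condition described above.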
What is Deadlock?
Deadlock is a situation that occurs in an OS when a process enters a waiting state because a resource it has
requested is held by another waiting process. Deadlock is a common problem in multi-processing systems
where several processes share a mutually exclusive resource.
(In other words, deadlock is a situation where a set of processes are blocked because each process is holding a
resource and waiting for a resource acquired by another process.)
In the above diagram, the process 1 has resource 1 and needs to acquire resource 2. Similarly process 2 has resource 2
and needs to acquire resource 1. Process 1 and process 2 are in deadlock as each of them needs the other’s resource to
complete their execution but neither of them is willing to relinquish their resources.
Example of Deadlock
A real-world example is traffic on a narrow bridge that can carry cars in only one direction at a time.
Here, the bridge is considered a resource.
When a deadlock happens, it can be resolved if one car backs up (preempt resources and roll back).
Several cars may have to back up if a deadlock situation occurs.
So starvation is possible.
Deadlock Conditions
Deadlock can arise only if the following four conditions hold simultaneously (Necessary
Conditions); if any one of these conditions is prevented, deadlock can be avoided.
Mutual Exclusion: Two or more resources are non-shareable (Only one process can use at a
time)
>> There should be a resource that can only be held by one process at a time. In the diagram
below, there is a single instance of Resource 1 and it is held by Process 1 only.
Hold and Wait: A process is holding at least one resource and waiting for resources.
A process can hold multiple resources and still request more resources from other processes which are
holding them. In the diagram given below, process 2 holds Resource 2 and Resource 3 and is requesting the
Resource 1 which is held by Process 1.
No Preemption: A resource cannot be taken from a process unless the process releases the resource.
>> A resource cannot be preempted from a process by force. A process can only release a resource
voluntarily. In the diagram below, Process 2 cannot preempt Resource 1 from Process 1. It will only be
released when Process 1 relinquishes it voluntarily after its execution is complete.
Circular Wait: A set of processes are waiting for each other in circular form.
A process is waiting for the resource held by the second process, which is waiting for the
resource held by the third process and so on, till the last process is waiting for a resource held by
the first process. This forms a circular chain. For example: Process 1 is allocated Resource2 and
it is requesting Resource 1. Similarly, Process 2 is allocated Resource 1 and it is requesting
Resource 2. This forms a circular wait loop.
A deadlock occurs when a set of processes is stalled because each process holds a resource while waiting for a resource held by
another process in the set. In the diagram below, for example, Process 1 holds Resource 1 and is waiting for Resource 2, while
Process 2 holds Resource 2 and is waiting for Resource 1.
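One standard way to prevent the circular wait just described is to impose a single global ordering on locks, so every thread acquires them in the same order. A sketch (the lock names are illustrative):

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()

def acquire_in_order(first, second):
    # Always acquire locks in one fixed global order (here: by object id),
    # regardless of the order the caller asked for. This makes a circular
    # wait -- and hence deadlock -- impossible.
    ordered = sorted((first, second), key=id)
    for lk in ordered:
        lk.acquire()
    return ordered

def transfer_task(x, y, reps: int = 1000):
    for _ in range(reps):
        held = acquire_in_order(x, y)
        for lk in reversed(held):
            lk.release()

def deadlock_free_demo() -> bool:
    # The classic deadlock pattern: one thread wants (a, b), the other (b, a).
    t1 = threading.Thread(target=transfer_task, args=(lock_a, lock_b))
    t2 = threading.Thread(target=transfer_task, args=(lock_b, lock_a))
    t1.start(); t2.start()
    t1.join(); t2.join()
    return True   # both threads finished: no deadlock occurred
```

If each thread instead acquired the locks in the order it asked for, the two threads could each grab one lock and wait forever for the other.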
System Model :
For the purposes of deadlock discussion, a system can be modeled as a collection of limited resources that can
be divided into different categories and allocated to a variety of processes, each with different requirements.
Memory, printers, CPUs, open files, tape drives, CD-ROMs, and other resources are examples of resource
categories.
By definition, all resources within a category are equivalent, and any of the resources within that category can
equally satisfy a request from that category. If this is not the case (i.e. if there is some difference between the
resources within a category), then that category must be subdivided further. For example, the term “printers”
may need to be subdivided into “laser printers” and “color inkjet printers.”
Some categories may only have one resource.
The kernel keeps track of which resources are free and which are allocated, to which process they are allocated,
and a queue of processes waiting for this resource to become available for all kernel-managed resources.
Application-managed resources can be controlled with mutexes or with wait() and signal() calls (i.e. binary or
counting semaphores).
When every process in a set is waiting for a resource that is currently assigned to another process in the set, the
set is said to be deadlocked.
Operations :
In normal operation, a process must request a resource before using it and release it when finished, as shown below.
1. Request –
If the request cannot be granted immediately, the process must wait until the required resource(s) become
available. Examples of request calls include open(), malloc(), new(), and request().
2. Use –
The process makes use of the resource, such as printing to a printer or reading from a file.
3. Release –
The process relinquishes the resource, allowing it to be used by other processes.
Necessary Conditions :
There are four conditions that must be met in order to achieve deadlock as follows.
1. Mutual Exclusion –
At least one resource must be kept in a non-shareable state; if another process requests it, it must wait for it to be
released.
2. Hold and Wait –
A process must simultaneously hold at least one resource and be waiting to acquire additional resources that are
currently held by other processes.
3. No preemption –
Once a process holds a resource (i.e. after its request is granted), that resource cannot be taken away from that
process until the process voluntarily releases it.
4. Circular Wait –
There must be a set of processes P0, P1, P2, ..., PN such that every P[i] is waiting for P[(i + 1) % (N + 1)].
(It is important to note that this condition implies the hold-and-wait condition, but dealing with the four
conditions is easier if they are considered separately).
Approach-2
Resource Preemption :
When preempting resources to break a deadlock, three critical issues must be addressed:
1. Selecting a victim –
Many of the decision criteria outlined above apply to determine which resources to preempt from which
processes.
2. Rollback –
A preempted process should ideally be rolled back to a safe state before the point at which that resource was
originally assigned to the process. Unfortunately, determining such a safe state can be difficult or impossible, so
the only safe rollback is to start from the beginning. (In other words, halt and restart the process.)
3. Starvation –
How do you ensure that a process does not starve because its resources are constantly being preempted? One
option is to use a priority system and raise the priority of a process whenever its resources are preempted. It
should eventually gain a high enough priority that it will no longer be preempted.
Detection and Recovery: Another approach to dealing with deadlocks is to detect them when they occur and recover
from them. This can involve killing one or more of the processes involved in the deadlock or releasing some of the
resources they hold.
Deadlock Avoidance
Deadlock avoidance can be done with Banker’s Algorithm.
Banker’s Algorithm
The Banker's Algorithm is a resource-allocation and deadlock-avoidance algorithm that tests every resource request made by a
process. It checks for a safe state: if the system remains in a safe state after granting the request, the request is allowed;
if no safe state would result, the request is denied and the process must wait.
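The safety check at the heart of the Banker's Algorithm can be sketched as follows (rows are processes, columns are resource types; the matrices used in the usage check below follow the classic textbook example):

```python
def is_safe(available, allocation, need):
    """Return True if the system is in a safe state, i.e. there exists
    an order in which every process can obtain its remaining need,
    run to completion, and release the resources it holds."""
    work = list(available)               # resources currently free
    finished = [False] * len(allocation)
    progress = True
    while progress:
        progress = False
        for i in range(len(allocation)):
            if not finished[i] and all(n <= w for n, w in zip(need[i], work)):
                # Process i's remaining need fits in the free pool, so it
                # can finish and give back everything it was allocated.
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                progress = True
    return all(finished)
```

Granting a request is then simulated by tentatively updating available, allocation, and need, and calling is_safe on the result; the request is allowed only if the check passes.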
Inputs to Banker’s Algorithm: