Topic: Shared Memory, Message Passing, Pipes, and Sockets
Lesson Objectives
By the end of this lesson, learners should be able to:
1. Explain the concept of interprocess communication (IPC).
2. Describe and differentiate between shared memory and message passing.
3. Explain how pipes and sockets work in IPC.
4. Identify appropriate use cases for each IPC mechanism.
1. What is Interprocess Communication (IPC)?
Definition:
Interprocess Communication (IPC) refers to the mechanisms an operating system provides to
allow processes to communicate and coordinate with each other.
Why IPC is needed:
Data sharing
Synchronization
Event signaling
Resource sharing
2. Shared Memory
Definition:
Shared memory is an IPC technique where multiple processes access a common memory space
for communication.
How It Works:
The OS allocates a shared memory region.
Processes attach this region to their address space.
They can then read/write data directly.
Advantages:
Very fast (no kernel involvement during access)
Efficient for large data transfers
Disadvantages:
Requires explicit synchronization (e.g., semaphores, mutexes)
Risk of race conditions or data corruption
Example (Linux C API):
shmget(), shmat(), shmdt(), shmctl()
Use Cases:
Real-time systems
Games or simulation engines
Multimedia applications
3. Message Passing
Definition:
Message passing is a method where processes send and receive messages via the operating
system.
How It Works:
The OS manages message queues.
One process sends a message; another receives it.
Functions in Linux (System V):
msgget(), msgsnd(), msgrcv()
Advantages:
Simple and safe
No risk of data collision
Disadvantages:
Slower than shared memory (involves kernel overhead)
May require buffering and copying data
Use Cases:
Distributed systems
Inter-module communication
4. Pipes
Definition:
A pipe is a unidirectional communication channel between processes.
Types of Pipes:
1. Unnamed Pipes:
   - Temporary
   - Exist only between parent and child processes
2. Named Pipes (FIFOs):
   - Have a name in the file system
   - Can be used between unrelated processes
How Pipes Work:
One process writes to the pipe.
Another process reads from the pipe.
Data follows First-In-First-Out (FIFO) order.
Linux Example (Named Pipe):
mkfifo mypipe
Advantages:
Easy to use
Good for linear data flow
Disadvantages:
Unidirectional (unless two pipes are used)
Limited control and scalability
Use Cases:
Shell scripting
Inter-process communication in simple applications
5. Sockets
Definition:
Sockets are IPC mechanisms used for communication between processes over a network (or
locally).
Types of Sockets:
1. Stream Sockets (TCP): Reliable, connection-oriented
2. Datagram Sockets (UDP): Unreliable, faster, connectionless
How It Works:
A server process creates a socket and listens.
A client process connects and sends/receives data.
Socket System Calls (Linux):
socket(), bind(), listen(), accept(), connect(), send(), recv()
Advantages:
Works over networks (remote communication)
Flexible and scalable
Disadvantages:
More complex to implement
Requires handling IP addresses, ports, protocols
Use Cases:
Web servers and clients
Chat and messaging apps
Network services
6. Summary Comparison Table
| IPC Method      | Speed     | OS Involvement | Complexity | Network Support | Sync Required |
|-----------------|-----------|----------------|------------|-----------------|---------------|
| Shared Memory   | Very fast | Low            | High       | No              | Yes           |
| Message Passing | Moderate  | Medium         | Low        | Yes (indirect)  | No            |
| Pipes           | Moderate  | Medium         | Low        | No              | No            |
| Sockets         | Variable  | High           | High       | Yes             | No            |
7. Key Points to Remember
Shared memory is efficient but needs synchronization.
Message passing is safer but slower.
Pipes are good for parent-child communication.
Sockets are used for network-based or remote process communication.
Topic: Race Conditions, Critical Section Problem, and Mutual Exclusion
1. Process Synchronization – Introduction
Definition:
Process synchronization is a mechanism that ensures that two or more concurrent processes
do not execute critical section code at the same time, which can lead to unexpected results.
It is essential in multitasking environments, especially when shared resources (memory,
files, variables) are involved.
2. Race Condition
Definition:
A race condition occurs when multiple processes or threads access and manipulate shared
data concurrently, and the final outcome depends on the order of execution.
Why It Happens:
No control over the sequence of operations.
Processes race to access or modify shared data.
Example (Pseudo Code):
// Two processes updating the same variable
balance = balance + 100;
If two threads execute this simultaneously, the final result may be incorrect.
Real-Life Example:
Two ATMs withdrawing from the same bank account at the same time.
Solution:
Proper synchronization to control access to shared resources.
3. Critical Section Problem
Definition:
A critical section is a part of a program that accesses shared resources (data, file, memory). If
multiple processes enter their critical sections at the same time, it may lead to inconsistent
results.
Conditions of the Critical Section Problem:
To solve the critical section problem, any solution must satisfy the following three conditions:
1. Mutual Exclusion:
Only one process can enter the critical section at a time.
2. Progress:
If no process is in its critical section, only the processes trying to enter may take
part in deciding which one enters next, and that decision cannot be postponed indefinitely.
3. Bounded Waiting:
A process should not wait indefinitely to enter its critical section.
4. Mutual Exclusion
Definition:
Mutual exclusion ensures that only one process at a time can access a critical section or shared
resource.
Methods to Achieve Mutual Exclusion:
| Method             | Description                                                        |
|--------------------|--------------------------------------------------------------------|
| Software Solutions | Algorithms such as Peterson's, Dekker's, etc.                      |
| Hardware Solutions | Disable interrupts, atomic instructions (e.g., Test-and-Set, Swap) |
| Semaphores         | Integer variables for signaling; wait() and signal() operations    |
| Mutex Locks        | Simple locking mechanisms to protect critical sections             |
| Monitors           | High-level constructs (object-oriented style) for synchronization  |
5. Software Solution Example: Peterson’s Algorithm
Used for two processes (P0 and P1) to ensure mutual exclusion:
// Shared variables
bool flag[2];
int turn;
Process P0:
flag[0] = true;
turn = 1;
while (flag[1] && turn == 1); // wait
// critical section
flag[0] = false;
Process P1:
flag[1] = true;
turn = 0;
while (flag[0] && turn == 0); // wait
// critical section
flag[1] = false;
6. Hardware Solution Example: Test-and-Set Instruction
An atomic instruction used to achieve synchronization:
boolean test_and_set(boolean *target) {
    boolean temp = *target;   // read the old value
    *target = true;           // set the flag
    return temp;              // both steps execute as one atomic operation
}
If test_and_set() returns false, the process enters the critical section.
If true, the process waits.
7. Semaphores
Definition:
A semaphore is a variable used to control access to a shared resource.
Two types:
Binary Semaphore (0 or 1) → behaves like a mutex
Counting Semaphore (range > 1)
Operations:
wait(S) – Decrements the value of semaphore S. If S < 0, the process is blocked.
signal(S) – Increments the value of semaphore S. If S ≤ 0, wakes up a blocked process.
8. Deadlock vs Starvation (Related Concepts)
| Concept    | Meaning                                                                                       |
|------------|-----------------------------------------------------------------------------------------------|
| Deadlock   | Processes wait forever due to a circular wait for resources                                   |
| Starvation | A process is indefinitely delayed from entering its critical section due to unfair scheduling |
9. Summary
| Term               | Key Idea                                               |
|--------------------|--------------------------------------------------------|
| Race Condition     | Multiple processes accessing shared data unpredictably |
| Critical Section   | Code that accesses shared resources                    |
| Mutual Exclusion   | Only one process in the critical section at a time     |
| Semaphores/Mutexes | Tools to enforce synchronization                       |
10. Real-Life Analogy
Race Condition Example:
Two people editing the same Google Doc offline and syncing it later – changes might overwrite
each other.
Critical Section Example:
Only one person can use a restroom (shared resource) at a time – others must wait.
Topic: Synchronization Tools
1. What is Synchronization in Operating Systems?
Definition:
Synchronization is the process of coordinating the execution of processes so that they do not
interfere with each other when accessing shared resources like memory, files, or I/O devices.
Goal:
Prevent problems such as:
Race conditions
Data inconsistency
Deadlocks and starvation
2. Importance of Synchronization Tools
Ensure mutual exclusion in critical sections.
Maintain data consistency.
Coordinate interprocess communication (IPC).
Avoid process interference in concurrent systems.
3. Common Synchronization Tools
A. Semaphores
Definition:
A semaphore is an integer variable used to signal and control access to shared resources.
Types:
1. Binary Semaphore (also called Mutex)
   - Only two values: 0 or 1
   - Used for mutual exclusion
2. Counting Semaphore
   - Range: unrestricted integer
   - Used when a resource has multiple instances
Operations:
wait(S):
    while S <= 0;   // busy wait
    S = S - 1;

signal(S):
    S = S + 1;
Use Case Example:
Controlling access to a printer shared by multiple users.
B. Mutex Locks (Mutual Exclusion Locks)
Definition:
A mutex is a locking mechanism used to ensure only one thread accesses a critical section at a
time.
Usage:
A thread locks the mutex before entering the critical section.
It unlocks it after completing the task.
Advantages:
Simple to implement
Efficient for short critical sections
Disadvantages:
Can cause deadlocks if not managed properly
C. Monitors
Definition:
A monitor is a high-level synchronization construct that allows safe access to shared variables
using methods and condition variables.
Only one process may be active in the monitor at a time.
Languages like Java support monitors natively through synchronized methods/blocks;
C++ offers the building blocks (std::mutex, std::condition_variable) rather than a built-in monitor.
Key Concepts:
wait() – Process releases control and waits
signal() – Wakes up one waiting process
Advantages:
Simplifies complex synchronization
Reduces programmer errors
D. Condition Variables (Used with Monitors)
A condition variable allows a process to wait until a particular condition becomes true.
Used with monitors for advanced coordination.
synchronized (object) {
    while (!condition) {
        object.wait();
    }
    // critical section
    object.notify(); // or notifyAll();
}
E. Spinlocks
Definition:
A spinlock is a lock where a process continuously checks (spins) in a loop while waiting for the
lock to become available.
Features:
Avoids context-switch overhead
Wasteful on single-processor systems
Use Case:
Useful in multiprocessor systems where locks are held briefly
F. Hardware-based Tools
Some CPUs provide special atomic instructions to support synchronization:
1. Test-and-Set
2. Compare-and-Swap
3. Exchange
These operations are atomic and can implement semaphores or locks at a low level.
4. Summary Table of Synchronization Tools
| Tool                | Type           | Used For                  | Key Feature                  |
|---------------------|----------------|---------------------------|------------------------------|
| Semaphore           | Software       | General synchronization   | wait/signal operations       |
| Mutex Lock          | Software       | Mutual exclusion          | Lock/unlock only             |
| Monitor             | Software (OOP) | Safe access via methods   | Built-in sync methods        |
| Spinlock            | Hardware/Soft  | Short critical sections   | Busy-wait mechanism          |
| Condition Variable  | Software       | Wait for condition        | Used inside monitors         |
| Atomic Instructions | Hardware       | Low-level synchronization | No need for high-level tools |
5. Real-Life Analogy
Semaphore: Like a traffic light that signals when a car (process) can go.
Mutex: Like a restroom key — only one person can use it at a time.
Monitor: Like a room with a single door that only allows one person in at once, and you
can wait inside until you're called.
6. Conclusion
Synchronization tools are essential for safe multitasking.
Choosing the right tool depends on use case, system architecture, and performance
requirements.
Incorrect use of these tools can lead to deadlocks, starvation, or race conditions.