Unit-III: Threads, Process Synchronization & Deadlocks
Textbook Reference:
Operating System Concepts – 9th Edition
Silberschatz, Galvin and Gagne ©2013
Process
A unit of resource allocation and a unit of protection.
A virtual address space that holds the process image.
Protected access to:
processors
other processes
files
I/O resources
[Figure: basic thread operations – Spawn, Block, Unblock, and Finish move a thread between the Ready and Blocked states]
[Figure: RPC using one thread per server – threads A and B of Process 1 each issue an RPC request to a separate server]
[Figure: multithreading on a uniprocessor – threads A and B of Process 1 interleave as I/O requests are issued, requests complete, and time quanta expire]
[Figure: thread implementations – (a) pure user-level threads, with the threads library in user space; (b) pure kernel-level threads, with thread management in kernel space; (c), (d) combined approaches]
Disadvantages of ULTs
In a typical OS, many system calls are blocking.
As a result, when a ULT executes a blocking system call, not only is that thread blocked, but all of the threads within the process are blocked.
In a pure ULT strategy, a multithreaded application cannot take advantage of multiprocessing.
Jacketing:
• Converts a blocking system call into a non-blocking system call (a sketch of the idea follows).
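A minimal sketch of the jacketing idea, assuming a hypothetical ULT-library routine thread_yield() that runs another ready user-level thread: the jacket routine first polls the descriptor with the standard select() call, and only issues the real read() when it cannot block; otherwise it yields.

    #include <sys/select.h>
    #include <unistd.h>

    extern void thread_yield(void);   /* hypothetical ULT library call: run another ready thread */

    /* Jacketed read: avoids blocking the whole process on a slow descriptor. */
    ssize_t jacketed_read(int fd, void *buf, size_t count)
    {
        for (;;) {
            fd_set readfds;
            struct timeval timeout = {0, 0};     /* zero timeout: poll and return immediately */

            FD_ZERO(&readfds);
            FD_SET(fd, &readfds);

            /* Ask the kernel (without blocking) whether data is available. */
            if (select(fd + 1, &readfds, NULL, NULL, &timeout) > 0)
                return read(fd, buf, count);     /* safe: will not block now */

            thread_yield();                      /* let another user-level thread run */
        }
    }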
Disadvantages of KLTs:
The transfer of control from one thread to another within the same process requires a
mode switch to the kernel.
Many-to-One – many user-level threads mapped to a single kernel thread.
One-to-One – each user-level thread maps to a kernel thread (see the sketch below).
Many-to-Many – many user-level threads multiplexed onto a smaller or equal number of kernel threads.
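A minimal illustration of the one-to-one model, assuming a Linux-style Pthreads implementation where each pthread_create() call produces a kernel-backed thread (a sketch, not from the textbook slides):

    #include <pthread.h>
    #include <stdio.h>

    static void *worker(void *arg)
    {
        printf("hello from thread %ld\n", (long) arg);
        return NULL;
    }

    int main(void)
    {
        pthread_t tid[3];
        for (long i = 0; i < 3; i++)
            pthread_create(&tid[i], NULL, worker, (void *) i);   /* one kernel thread per call */
        for (int i = 0; i < 3; i++)
            pthread_join(tid[i], NULL);
        return 0;
    }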
Objectives
To present the concept of process synchronization.
To introduce the critical-section problem, whose solutions can be used to ensure
the consistency of shared data
To present both software and hardware solutions to the critical-section problem
To examine several classical process-synchronization problems
To explore several tools that are used to solve process synchronization problems
Producer:
while (true) {
    /* produce an item in next_produced */
    while (counter == BUFFER_SIZE)
        ; /* do nothing */
    buffer[in] = next_produced;
    in = (in + 1) % BUFFER_SIZE;
    counter++;
}

Consumer:
while (true) {
    while (counter == 0)
        ; /* do nothing */
    next_consumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    counter--;
    /* consume the item in next_consumed */
}
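Why this is unsafe (the standard textbook illustration): counter++ and counter-- are each compiled into several machine instructions, and an unlucky interleaving loses an update. For example, with counter == 5:

    producer: register1 = counter          (register1 = 5)
    producer: register1 = register1 + 1    (register1 = 6)
    consumer: register2 = counter          (register2 = 5)
    consumer: register2 = register2 - 1    (register2 = 4)
    producer: counter = register1          (counter = 6)
    consumer: counter = register2          (counter = 4)

The final value of counter is 4 (or 6, depending on which store runs last), although the correct value is 5. This is a race condition, and it motivates the critical-section problem.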
Algorithm for process Pi using only the shared variable turn (strict alternation):
do {
    while (turn == j)
        ;   /* busy wait until it is Pi's turn */
    critical section
    turn = j;
    remainder section
} while (true);
Peterson's solution: the two processes share int turn and boolean flag[2].
The variable turn indicates whose turn it is to enter the critical section.
The flag array is used to indicate if a process is ready to enter the critical section: flag[i] = true implies that process Pi is ready.
Process Pi:
do {
    flag[i] = true;
    turn = j;
    while (flag[j] && turn == j);
    critical section
    flag[i] = false;
    remainder section
} while (true);

Process Pj:
do {
    flag[j] = true;
    turn = i;
    while (flag[i] && turn == i);
    critical section
    flag[j] = false;
    remainder section
} while (true);
do {
acquire lock
critical section
release lock
remainder section
} while (TRUE);
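Here the lock is a mutex lock. The textbook defines its acquire() and release() operations with busy waiting (a spinlock) over a boolean available; both calls must execute atomically:

    acquire() {
        while (!available)
            ;   /* busy wait until the lock is released */
        available = false;
    }

    release() {
        available = true;
    }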
Definition:
boolean test_and_set (boolean *target)
{
boolean rv = *target;
*target = TRUE;
return rv;
}
1. Executed atomically
2. Returns the original value of the passed parameter
3. Sets the value of the passed parameter to TRUE
Properties of compare_and_swap():
1. Executed atomically
2. Returns the original value of the passed parameter value
3. Sets the variable value to new_value, but only if value == expected; that is, the swap takes place only under this condition
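The corresponding definition from the textbook (executed atomically as one uninterruptible operation):

    int compare_and_swap(int *value, int expected, int new_value)
    {
        int temp = *value;

        if (*value == expected)
            *value = new_value;    /* swap only when the current value matches expected */
        return temp;
    }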
Bounded-waiting mutual exclusion with test_and_set():
do {
waiting[i] = true;
key = true;
while (waiting[i] && key)
key = test_and_set(&lock);
waiting[i] = false;
/* critical section */
j = (i + 1) % n;
while ((j != i) && !waiting[j])
j = (j + 1) % n;
if (j == i)
lock = false;
else
waiting[j] = false;
/* remainder section */
} while (true);
Semaphore
Synchronization tool that provides more sophisticated ways (than mutex locks) for processes to
synchronize their activities.
Semaphore S – integer variable
Can only be accessed via two indivisible (atomic) operations
wait() and signal()
Originally called P() and V()
Definition of the wait() operation
wait(S) {
while (S <= 0)
; // busy wait
S--;
}
Definition of the signal() operation
signal(S) {
S++;
}
Note that this definition busy-waits: applications may spend a lot of time in critical sections, so spinning here is not a good solution.
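The blocking implementation that follows uses the textbook's semaphore representation, an integer value plus a list of waiting processes, together with the OS primitives block() (suspend the calling process) and wakeup(P) (resume blocked process P):

    typedef struct {
        int value;
        struct process *list;   /* processes blocked on this semaphore */
    } semaphore;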
wait(semaphore *S) {
S->value--;
if (S->value < 0) {
add this process to S->list;
block();
}
}
signal(semaphore *S) {
S->value++;
if (S->value <= 0) {
remove a process P from S->list;
wakeup(P);
}
}
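The bounded-buffer (producer-consumer) code below assumes n buffer slots and the textbook initialization of the shared semaphores:

    semaphore mutex = 1;    /* mutual exclusion on buffer accesses */
    semaphore empty = n;    /* counts empty slots */
    semaphore full  = 0;    /* counts full slots */

The structure of the producer process: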
do {
...
/* produce an item in next_produced */
...
wait(empty);
wait(mutex);
...
/* add next produced to the buffer */
...
signal(mutex);
signal(full);
} while (true);
The structure of the consumer process:
do {
wait(full);
wait(mutex);
...
/* remove an item from buffer to next_consumed */
...
signal(mutex);
signal(empty);
...
/* consume the item in next consumed */
...
} while (true);
The structure of a writer process:
do {
wait(rw_mutex);
...
/* writing is performed */
...
signal(rw_mutex);
} while (true);
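For completeness, the companion reader process from the textbook (shared data: semaphore rw_mutex = 1, semaphore mutex = 1, int read_count = 0):

    do {
        wait(mutex);
        read_count++;
        if (read_count == 1)
            wait(rw_mutex);      /* first reader locks out writers */
        signal(mutex);
        ...
        /* reading is performed */
        ...
        wait(mutex);
        read_count--;
        if (read_count == 0)
            signal(rw_mutex);    /* last reader lets writers back in */
        signal(mutex);
    } while (true);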
The structure of philosopher i (semaphore solution):
do {
    wait(chopstick[i]);
    wait(chopstick[(i + 1) % 5]);
    // eat
    signal(chopstick[i]);
    signal(chopstick[(i + 1) % 5]);
    // think
} while (TRUE);
What is the problem with this algorithm?
Deadlock handling
Allow at most 4 philosophers to be sitting simultaneously at the table.
Allow a philosopher to pick up the chopsticks only if both are available (the pickup must be done in a critical section).
Use an asymmetric solution: an odd-numbered philosopher picks up first the left chopstick and then the right chopstick, while an even-numbered philosopher picks up first the right chopstick and then the left chopstick (a sketch follows).
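One way to write the asymmetric remedy in the same semaphore style (a sketch; chopstick[i] is taken to be philosopher i's left chopstick):

    do {
        if (i % 2 == 1) {                     /* odd-numbered philosopher */
            wait(chopstick[i]);               /* left first */
            wait(chopstick[(i + 1) % 5]);     /* then right */
        } else {                              /* even-numbered philosopher */
            wait(chopstick[(i + 1) % 5]);     /* right first */
            wait(chopstick[i]);               /* then left */
        }
        // eat
        signal(chopstick[i]);
        signal(chopstick[(i + 1) % 5]);
        // think
    } while (TRUE);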
Monitors
A high-level abstraction that provides a convenient and effective mechanism for process
synchronization
Abstract data type; internal variables are only accessible by code within the monitor's procedures
Only one process may be active within the monitor at a time
But not powerful enough to model some synchronization schemes
monitor monitor-name
{
    // shared variable declarations
    procedure P1 (…) { …. }

    initialization_code() {
        for (int i = 0; i < 5; i++)
            state[i] = THINKING;
    }
}
Each philosopher i invokes the operations pickup() and putdown() in the following sequence:
    DiningPhilosophers.pickup(i);
    EAT
    DiningPhilosophers.putdown(i);
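For reference, the textbook's monitor solution: each philosopher is THINKING, HUNGRY, or EATING, and waits on condition variable self[i] until neither neighbor is eating.

    monitor DiningPhilosophers
    {
        enum {THINKING, HUNGRY, EATING} state[5];
        condition self[5];

        void pickup(int i) {
            state[i] = HUNGRY;
            test(i);
            if (state[i] != EATING)
                self[i].wait();          /* wait until both chopsticks are free */
        }

        void putdown(int i) {
            state[i] = THINKING;
            test((i + 4) % 5);           /* let the left neighbor eat if possible */
            test((i + 1) % 5);           /* let the right neighbor eat if possible */
        }

        void test(int i) {
            if ((state[(i + 4) % 5] != EATING) &&
                (state[i] == HUNGRY) &&
                (state[(i + 1) % 5] != EATING)) {
                state[i] = EATING;
                self[i].signal();
            }
        }

        initialization_code() {
            for (int i = 0; i < 5; i++)
                state[i] = THINKING;
        }
    }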
Monitor implementation using semaphores
Variables:
    semaphore mutex;      // (initially = 1)
    semaphore next;       // (initially = 0)
    int next_count = 0;   // number of processes suspended on next
Each procedure F will be replaced by:
    wait(mutex);
    …
    body of F;
    …
    if (next_count > 0)
        signal(next);
    else
        signal(mutex);
Mutual exclusion within the monitor is ensured.
For each condition variable x, we have:
    semaphore x_sem;   // (initially = 0)
    int x_count = 0;
The operation x.wait() can be implemented as:
    x_count++;
    if (next_count > 0)
        signal(next);
    else
        signal(mutex);
    wait(x_sem);
    x_count--;
The operation x.signal() can be implemented as:
    if (x_count > 0) {
        next_count++;
        signal(x_sem);
        wait(next);
        next_count--;
    }
A process that needs to access the resource must observe the following sequence:
    R.acquire(t);
    ...
    access the resource;
    ...
    R.release();
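Here R is an instance of the textbook's single-resource allocator monitor, which uses a conditional (priority) wait x.wait(time) so the resource is granted to the process requesting the shortest duration:

    monitor ResourceAllocator
    {
        boolean busy;
        condition x;

        void acquire(int time) {
            if (busy)
                x.wait(time);    /* priority wait: smallest time is woken first */
            busy = TRUE;
        }

        void release() {
            busy = FALSE;
            x.signal();
        }

        initialization_code() {
            busy = FALSE;
        }
    }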
Deadlocks
System Model
Deadlock Characterization
Methods for Handling Deadlocks
Deadlock Prevention
Deadlock Avoidance
Deadlock Detection
Recovery from Deadlock
A waiting process is never again able to change state, because the resources it has
requested are held by other waiting processes. This situation is called a deadlock.
P = {P1, P2, …, Pn}, the set consisting of all the processes in the system
R = {R1, R2, …, Rm}, the set consisting of all resource types in the system
request edge – directed edge Pi → Rj: process Pi requests an instance of Rj
assignment edge – directed edge Rj → Pi: process Pi is holding an instance of Rj
No Preemption –
If a process that is holding some resources requests another resource that cannot be
immediately allocated to it, then all resources currently being held are released
Preempted resources are added to the list of resources for which the process is waiting
Process will be restarted only when it can regain its old resources, as well as the new ones
that it is requesting
we preempt the desired resources from the waiting process and allocate them to the
requesting process. If the resources are neither available nor held by a waiting process, the
requesting process must wait.
Circular Wait – impose a total ordering of all resource types, and require that each
process requests resources in an increasing order of enumeration
Deadlock example – two threads acquire the mutexes first_mutex and second_mutex in opposite orders:
/* thread one runs in this function */
void *do_work_one(void *param)
{
    pthread_mutex_lock(&first_mutex);
    pthread_mutex_lock(&second_mutex);
    /* Do some work */
    pthread_mutex_unlock(&second_mutex);
    pthread_mutex_unlock(&first_mutex);
    pthread_exit(0);
}
/* thread two runs in this function */
void *do_work_two(void *param)
{
pthread_mutex_lock(&second_mutex);
pthread_mutex_lock(&first_mutex);
/* Do some work */
pthread_mutex_unlock(&first_mutex);
pthread_mutex_unlock(&second_mutex);
pthread_exit(0);
}
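Applying the circular-wait prevention rule to this example: if every thread is required to acquire first_mutex before second_mutex (one global lock order), the circular wait cannot form. A sketch of thread two rewritten to obey that order (same declarations as above):

    /* thread two, rewritten to respect the lock order first_mutex -> second_mutex */
    void *do_work_two(void *param)
    {
        pthread_mutex_lock(&first_mutex);
        pthread_mutex_lock(&second_mutex);
        /* Do some work */
        pthread_mutex_unlock(&second_mutex);
        pthread_mutex_unlock(&first_mutex);
        pthread_exit(0);
    }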
Simplest and most useful model requires that each process declare the
maximum number of resources of each type that it may need
The deadlock-avoidance algorithm dynamically examines the resource-
allocation state to ensure that there can never be a circular-wait condition
Resource-allocation state is defined by the number of available and allocated
resources, and the maximum demands of the processes
Claim edge Pi → Rj indicates that process Pi may request resource Rj; represented by
a dashed line.
Claim edge converts to request edge when a process requests a resource.
Request edge converted to an assignment edge when the resource is allocated to the
process.
When a resource is released by a process, assignment edge reconverts to a claim
edge.
Resources must be claimed a priori in the system.
When a process gets all its resources it must return them in a finite amount of
time
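The algorithms below use the textbook's Banker's-algorithm data structures for n processes and m resource types:

    Available[m]      – Available[j] = number of available instances of Rj
    Max[n][m]         – Max[i][j] = maximum demand of Pi for Rj
    Allocation[n][m]  – Allocation[i][j] = instances of Rj currently allocated to Pi
    Need[n][m]        – Need[i][j] = Max[i][j] − Allocation[i][j]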
Safety algorithm:
1. Let Work and Finish be vectors of length m and n. Initialize Work = Available and Finish[i] = false for all i.
2. Find an i such that Finish[i] == false and Needi ≤ Work. If no such i exists, go to step 4.
3. Work = Work + Allocationi; Finish[i] = true; go to step 2.
4. If Finish[i] == true for all i, then the system is in a safe state.
Resource-request algorithm for process Pi:
Requesti = request vector for process Pi. If Requesti[j] == k, then process Pi wants
k instances of resource type Rj.
1. If Requesti ≤ Needi, go to step 2. Otherwise, raise an error condition, since the process has
exceeded its maximum claim.
2. If Requesti ≤ Available, go to step 3. Otherwise, Pi must wait, since the resources are not
available.
3. Pretend to allocate the requested resources to Pi by modifying the state as follows:
    Available = Available – Requesti;
    Allocationi = Allocationi + Requesti;
    Needi = Needi – Requesti;
The content of the matrix Need is defined to be Max – Allocation:

    Need (A B C)
    P0: 7 4 3
    P1: 1 2 2
    P2: 6 0 0
    P3: 0 1 1
    P4: 4 3 1

The system is in a safe state since the sequence <P1, P3, P4, P2, P0> satisfies the
safety criteria.
Example: P1 requests (1, 0, 2). Check that Requesti ≤ Available, that is, (1, 0, 2) ≤ (3, 3, 2) ⇒ true.
Executing the safety algorithm shows that the sequence <P1, P3, P4, P0, P2> satisfies the safety
requirement, so the request can be granted immediately.
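A compact C sketch of the safety algorithm over these data structures (the sizes N and M and the function name is_safe are illustrative assumptions, not from the textbook):

    #include <stdbool.h>
    #include <string.h>

    #define N 5   /* number of processes, as in the example above */
    #define M 3   /* number of resource types (A, B, C) */

    /* Returns true if the state described by available/allocation/need is safe. */
    bool is_safe(int available[M], int allocation[N][M], int need[N][M])
    {
        int work[M];
        bool finish[N] = { false };
        memcpy(work, available, sizeof(work));      /* Work = Available */

        int done = 0;
        while (done < N) {
            bool progress = false;
            for (int i = 0; i < N; i++) {
                if (finish[i])
                    continue;
                bool fits = true;                   /* check Need_i <= Work */
                for (int j = 0; j < M; j++)
                    if (need[i][j] > work[j]) { fits = false; break; }
                if (fits) {
                    for (int j = 0; j < M; j++)     /* Pi finishes and releases its allocation */
                        work[j] += allocation[i][j];
                    finish[i] = true;
                    done++;
                    progress = true;
                }
            }
            if (!progress)
                return false;                       /* no process can proceed: unsafe */
        }
        return true;                                /* Finish[i] == true for all i */
    }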
Detection algorithm
Recovery scheme
Sequence <P0, P2, P3, P1, P4> will result in Finish[i] = true for all i
State of the system?
We can reclaim the resources held by process P0, but there are insufficient resources to fulfill
the other processes' requests.
A deadlock exists, consisting of processes P1, P2, P3, and P4.
Rollback – return to some safe state, restart process for that state