UNIT 2


351CS43 OPERATING SYSTEMS

UNIT - II
Process Synchronization: Critical-Section Problem – Synchronization Hardware –
Semaphores – Classical Problems of Synchronization – Critical Region – Monitors.
Deadlocks: Characterization – Methods for Handling Deadlocks – Deadlock Prevention –
Avoidance – Detection – Recovery.

Process Synchronization

The Critical Section Problem

Consider a system consisting of n processes {P0, P1, ..., Pn-1}. Each process has a segment
of code, called a critical section, in which the process may be changing common variables,
updating a table, writing a file, and so on. The important feature of the system is that, when
one process is executing in its critical section, no other process is to be allowed to execute in
its critical section. That is, no two processes are executing in their critical sections at the
same time. The critical-section problem is to design a protocol that the processes can use to
cooperate. Each process must request permission to enter its critical section. The section of
code implementing this request is the entry section. The critical section may be followed by
an exit section. The remaining code is the remainder section.

A solution to the critical-section problem must satisfy the following three requirements:

 Mutual exclusion. If process Pi is executing in its critical section, then no other
processes can be executing in their critical sections.
 Progress. If no process is executing in its critical section and some processes wish to
enter their critical sections, then only those processes that are not executing in their
remainder sections can participate in the decision on which will enter its critical
section next, and this selection cannot be postponed indefinitely.
 Bounded waiting. There exists a bound, or limit, on the number of times that other
processes are allowed to enter their critical sections after a process has made a
request to enter its critical section and before that request is granted.


Two general approaches are used to handle critical sections in operating systems: (1)
preemptive kernels and (2) nonpreemptive kernels. A preemptive kernel allows a process to
be preempted while it is running in kernel mode. A nonpreemptive kernel does not allow a
process running in kernel mode to be preempted.

Synchronization Hardware

Hardware features can make any programming task easier and improve system efficiency. In
this section, we present some simple hardware instructions that are available on many
systems and show how they can be used effectively in solving the critical-section problem.

Many modern computer systems therefore provide special hardware instructions that allow
us either to test and modify the content of a word or to swap the contents of two words
atomically—that is, as one uninterruptible unit. We can use these special instructions to solve
the critical-section problem in a relatively simple manner. Rather than discussing one
specific instruction for one specific machine, we abstract the main concepts behind these
types of instructions.

Uniprocessors could simply disable interrupts:
 Currently running code would execute without preemption.
 This approach is generally too inefficient on multiprocessor systems.

Modern machines provide special atomic hardware instructions (atomic = non-interruptible):

 Either test a memory word and set its value,
 Or swap the contents of two memory words.

Semaphores

A semaphore S is an integer variable that, apart from initialization, is accessed only through
two standard atomic operations: wait ( ) and signal ( ). The wait ( ) operation was originally
termed P(“to test”); signal ( ) was originally called V ( “to increment”). The definition of
wait () is as follows:
wait (S) {
while (S <= 0)
; // no-op (busy wait)
S--;
}
The definition of signal () is as follows:
signal (S) {
S++;
}

All the modifications to the integer value of the semaphore in the wait ( ) and signal ( )
operations must be executed indivisibly. That is, when one process modifies the semaphore
value, no other process can simultaneously modify that same semaphore value. In addition, in
the case of wait (S), the testing of the integer value of S (S <= 0) and its possible
modification (S--) must also be executed without interruption.

Usage
Counting semaphores can be used to control access to a given resource consisting of a finite
number of instances. The semaphore is initialized to the number of resources available. Each
process that wishes to use a resource performs a wait ( ) operation on the semaphore (thereby
decrementing the count). When a process releases a resource, it performs a signal ( )
operation (incrementing the count). When the count for the semaphore goes to 0, all
resources are being used. After that, processes that wish to use a resource will block until the
count becomes greater than 0.

We can also use semaphores to solve various synchronization problems. For example,
consider two concurrently running processes: P1 with a statement S1 and P2 with a statement
S2. Suppose we require that S2 be executed only after S1 has completed. We can implement
this scheme readily by letting P1 and P2 share a common semaphore synch, initialized to 0,
and by inserting the statements
S1;
signal (synch);
in process P1, and the statements
wait (synch);
S2;
in process P2. Because synch is initialized to 0, P2 will execute S2 only after P1 has invoked
signal (synch), which is after statement S1 has been executed.

Implementation

The main disadvantage of the semaphore definition given here is that it requires busy
waiting. While a process is in its critical section, any other process that tries to enter its
critical section must loop continuously in the entry code. This continual looping is clearly a
problem in a real multiprogramming system, where a single CPU is shared among many
processes. Busy waiting wastes CPU cycles that some other process might be able to use
productively. This type of semaphore is also called a spin lock because the process "spins"
while waiting for the lock.

To overcome the need for busy waiting, we can modify the definition of the wait ( ) and
signal ( ) semaphore operations. When a process executes the wait ( ) operation and finds that
the semaphore value is not positive, it must wait. However, rather than engaging in busy
waiting, the process can block itself. The block operation places a process into a waiting
queue associated with the semaphore, and the state of the process is switched to the waiting
state. Then control is transferred to the CPU scheduler, which selects another process to
execute.

A process that is blocked, waiting on a semaphore S, should be restarted when some other
process executes a signal ( ) operation. The process is restarted by a wakeup ( ) operation,
which changes the process from the waiting state to the ready state. The process is then
placed in the ready queue.

To implement semaphores under this definition, we define a semaphore as a “C” struct:


typedef struct {
int value;
struct process *list;
} semaphore;

Each semaphore has an integer value and a list of processes, list. When a process must wait
on a semaphore, it is added to the list of processes. A signal ( ) operation removes one
process from the list of waiting processes and awakens that process.

Classical Problems of Synchronization

The Bounded-Buffer Problem

It is commonly used to illustrate the power of synchronization primitives.

We assume that the pool consists of n buffers, each capable of holding one item. The mutex
semaphore provides mutual exclusion for accesses to the buffer pool and is initialized to the
value 1. The empty and full semaphores count the number of empty and full buffers. The
semaphore empty is initialized to the value n; the semaphore full is initialized to the value 0.

do {
....
// produce an item in next_p
....
wait (empty);
wait (mutex);
....
// add next_p to buffer
...
signal (mutex);
signal (full);
} while (TRUE);

The code for the producer process is shown in the figure above; the code for the consumer
process is shown in the figure below. Note the symmetry between the producer and the
consumer; we can interpret this code as the producer producing full buffers for the consumer
or as the consumer producing empty buffers for the producer.


do {
wait (full);
wait (mutex);
....
// remove an item from buffer to next_c
...
signal (mutex);
signal (empty);
...
// consume the item in next_c
...
} while (TRUE);

The Readers-Writers Problem

A database is to be shared among several concurrent processes. Some of these processes may
want only to read the database, whereas others may want to update (that is, to read and write)
the database. We distinguish between these two types of processes by referring to the former
as readers and to the latter as writers. Obviously, if two readers access the shared data
simultaneously, no adverse effects will result. However, if a writer and some other thread
(either a reader or a writer) access the database simultaneously, chaos may ensue.

To ensure that these difficulties do not arise, we require that the writers have exclusive access
to the shared database. This synchronization problem is referred to as the readers-writers
problem. Since it was originally stated, it has been used to test nearly every new
synchronization primitive. The readers-writers problem has several variations, all
involving priorities. The simplest one, referred to as the first readers-writers problem,
requires that no reader be kept waiting unless a writer has already obtained permission to
use the shared object. In other words, no reader should wait for other readers to finish simply
because a writer is waiting. The second readers-writers problem requires that, once a
writer is ready, that writer performs its write as soon as possible. In other words, if a writer is
waiting to access the object, no new readers may start reading.

In the solution to the first readers-writers problem, the reader processes share the following
data structures:
semaphore mutex, wrt;
int readcount;

The semaphores mutex and wrt are initialized to 1; readcount is initialized to 0. The
semaphore wrt is common to both reader and writer processes. The mutex semaphore is used
to ensure mutual exclusion when the variable readcount is updated. The readcount variable
keeps track of how many processes are currently reading the object. The semaphore wrt
functions as a mutual-exclusion semaphore for the writers. It is also used by the first or last
reader that enters or exits its critical section.

do {
wait (wrt);
...
// writing is performed
...
signal (wrt);
} while (TRUE);

The code for a writer process is shown in the figure above; the code for a reader process is
shown in the figure below. Note that, if a writer is in the critical section and n readers are
waiting, then one reader is queued on wrt, and n-1 readers are queued on mutex. Also
observe that, when a writer executes signal (wrt), we may resume the execution of either the
waiting readers or a single waiting writer. The selection is made by the scheduler.

do {
wait (mutex);
readcount++;
if (readcount == 1)
wait (wrt);
signal (mutex);
...
// reading is performed
...
wait (mutex);
readcount--;
if (readcount == 0)
signal (wrt);
signal (mutex);
} while (TRUE);

The Dining-Philosophers Problem

Consider five philosophers who spend their lives thinking and eating. The philosophers share
a circular table surrounded by five chairs, each belonging to one philosopher. In the center of
the table is a bowl of rice, and the table is laid with five single chopsticks (below Figure).
When a philosopher thinks, she does not interact with her colleagues. From time to time, a
philosopher gets hungry and tries to pick up the two chopsticks that are closest to her (the
chopsticks that are between her and her left and right neighbors). A philosopher may pick up
only one chopstick at a time. Obviously she cannot pick up a chopstick that is already in the
hand of a neighbor. When a hungry philosopher has both her chopsticks at the same time, she
eats without releasing her chopsticks. When she is finished eating, she puts down both of her
chopsticks and starts thinking again.


The dining-philosophers problem is considered a classic synchronization problem neither
because of its practical importance nor because computer scientists dislike philosophers but
because it is an example of a large class of concurrency-control problems. It is a simple
representation of the need to allocate several resources among several processes in a
deadlock-free and starvation-free manner.

One simple solution is to represent each chopstick with a semaphore. A philosopher tries to
grab a chopstick by executing a wait ( ) operation on that semaphore; she releases her
chopsticks by executing the signal ( ) operation on the appropriate semaphores. Thus, the
shared data are

semaphore chopstick[5];
where all the elements of chopstick are initialized to 1. The structure of philosopher i is
shown in below figure.

Although this solution guarantees that no two neighbors are eating simultaneously, it
nevertheless must be rejected because it could create a deadlock. Suppose that all five
philosophers become hungry simultaneously and each grabs her left chopstick. All the
elements of chopstick will now be equal to 0. When each philosopher tries to grab her right
chopstick, she will be delayed forever.
do {
wait (chopstick [i]);
wait(chopstick[(i+1) % 5]);
...
// eat
...
signal (chopstick[i]);
signal (chopstick[(i+1) % 5]);
...
// think
...
} while (TRUE);


Monitors
Some languages (like Modula) have special language constructs for dealing with mutual
exclusion. Such an environment is called a monitor.

• A monitor is a language device which removes some of the pain from synchronization.
The monitor construct ensures that only one process at a time can be active within the
monitor. The programmer does not need to code this synchronization constraint explicitly;
they only have to create a monitor.
• A procedure or function defined under the umbrella of a monitor can only access those
shared memory locations declared within that monitor and vice-versa.
• Wait and signal operations can be defined to wait for specific condition variables. A
process can thus wait until another process sends a signal or semaphore which changes the
condition variable.

The monitor construct ensures that only one process at a time can be active within the
monitor. Consequently, the programmer does not need to code this synchronization
constraint explicitly. However, the monitor construct, as defined so far, is not sufficiently
powerful for modeling some synchronization schemes. For this purpose, we need to define
additional synchronization mechanisms. These mechanisms are provided by the condition
construct. A programmer who needs to write a tailor-made synchronization scheme can
define one or more variables of type condition:

Condition x, y;
The only operations that can be invoked on a condition variable are wait ( ) and signal ( ).
The operation
x.wait ( );
means that the process invoking this operation is suspended until another process invokes
x.signal( );

The x.signal ( ) operation resumes exactly one suspended process. If no process is
suspended, then the signal ( ) operation has no effect; that is, the state of x is the same as if
the operation had never been executed (Figure 6.18). Contrast this operation with the signal
( ) operation associated with semaphores, which always affects the state of the semaphore.


Now suppose that, when the x.signal() operation is invoked by a process P, there is a
suspended process Q associated with condition x. Clearly, if the suspended process Q is
allowed to resume its execution, the signaling process P must wait. Otherwise, both P and Q
would be active simultaneously within the monitor. Note, however, that both processes can
conceptually continue with their execution. Two possibilities exist:

1. Signal and wait. P either waits until Q leaves the monitor or waits for another condition.
2. Signal and continue. Q either waits until P leaves the monitor or waits for another
condition.

There are reasonable arguments in favor of adopting either option. On the one hand, since P
was already executing in the monitor, the signal-and-continue method seems more
reasonable. On the other hand, if we allow thread P to continue, then by the time Q is
resumed, the logical condition for which Q was waiting may no longer hold. A compromise
between these two choices was adopted in the language Concurrent Pascal. When thread P
executes the signal operation, it immediately leaves the monitor. Hence, Q is immediately
resumed.

Deadlocks
In a multiprogramming environment, several processes may compete for a finite number of
resources. A process requests resources; and if the resources are not available at that time, the
process enters a waiting state. Sometimes, a waiting process is never again able to change
state, because the resources it has requested are held by other waiting processes. This
situation is called a deadlock.

System Model


If a process requests an instance of a resource type, the allocation of any instance of the type
will satisfy the request. If it will not, then the instances are not identical, and the resource
type classes have not been defined properly. For example, a system may have two printers.
These two printers may be defined to be in the same resource class if no one cares which
printer prints which output. However, if one printer is on the ninth floor and the other is in
the basement, then people on the ninth floor may not see both printers as equivalent, and
separate resource classes may need to be defined for each printer.
Under the normal mode of operation, a process may utilize a resource in only the following
sequence:

 Request. If the request cannot be granted immediately (for example, if the resource
is being used by another process), then the requesting process must wait until it can
acquire the resource.
 Use. The process can operate on the resource (for example, if the resource is a
printer, the process can print on the printer).
 Release. The process releases the resource.

Deadlock Characterization
Necessary Conditions
A deadlock situation can arise if the following four conditions hold simultaneously in a
system:

 Mutual exclusion. At least one resource must be held in a nonsharable mode; that is,
only one process at a time can use the resource. If another process requests that
resource, the requesting process must be delayed until the resource has been released.

 Hold and wait. A process must be holding at least one resource and waiting to
acquire additional resources that are currently being held by other processes.
 No preemption. Resources cannot be preempted; that is, a resource can be released
only voluntarily by the process holding it, after that process has completed its task.
 Circular wait. A set {P0, P1, ..., Pn} of waiting processes must exist such that P0 is
waiting for a resource held by P1, P1 is waiting for a resource held by P2, ..., Pn-1 is
waiting for a resource held by Pn, and Pn is waiting for a resource held by P0.

Resource-Allocation Graph
Deadlocks can be described more precisely in terms of a directed graph called a system
resource-allocation graph. This graph consists of a set of vertices V and a set of edges E. The
set of vertices V is partitioned into two different types of nodes: P = {P1, P2, ..., Pn}, the set
consisting of all the active processes in the system, and R = {R1, R2, ..., Rm}, the set
consisting of all resource types in the system.
A directed edge from process Pi to resource type Rj is denoted by Pi → Rj; it signifies that
process Pi has requested an instance of resource type Rj and is currently waiting for that
resource. A directed edge from resource type Rj to process Pi is denoted by Rj → Pi; it
signifies that an instance of resource type Rj has been allocated to process Pi. A directed
edge Pi → Rj is called a request edge; a directed edge Rj → Pi is called an assignment edge.


Process states:

o Process P1 is holding an instance of resource type R2 and is waiting for an instance of
resource type R1.
o Process P2 is holding an instance of R1 and an instance of R2 and is waiting for an instance
of R3.
o Process P3 is holding an instance of R3.

Given the definition of a resource-allocation graph, it can be shown that, if the graph contains
no cycles, then no process in the system is deadlocked. If the graph does contain a cycle, then
a deadlock may exist.

Methods for handling Deadlocks


Generally speaking, we can deal with the deadlock problem in one of three ways:
 We can use a protocol to prevent or avoid deadlocks, ensuring that the system will
never enter a deadlock state.
 We can allow the system to enter a deadlock state, detect it, and recover.
 We can ignore the problem altogether and pretend that deadlocks never occur in the
system.

Deadlock Prevention
Mutual Exclusion
The mutual-exclusion condition must hold for non-sharable resources. For example, a printer
cannot be simultaneously shared by several processes. Sharable resources, in contrast, do not
require mutually exclusive access and thus cannot be involved in a deadlock. Read-only files
are a good example of a sharable resource. If several processes attempt to open a read-only
file at the same time, they can be granted simultaneous access to the file. A process never
needs to wait for a sharable resource. In general, however, we cannot prevent deadlocks by
denying the mutual-exclusion condition, because some resources are intrinsically
non-sharable.

Hold and Wait


To ensure that the hold-and-wait condition never occurs in the system, we must guarantee
that, whenever a process requests a resource, it does not hold any other resources. One
protocol that can be used requires each process to request and be allocated all its resources
before it begins execution. We can implement this provision by requiring that system calls
requesting resources for a process precede all other system calls.

No Preemption
The third necessary condition for deadlocks is that there be no preemption of resources that
have already been allocated. To ensure that this condition does not hold, we can use the
following protocol. If a process is holding some resources and requests another resource that
cannot be immediately allocated to it (that is, the process must wait), then all resources
currently being held are preempted. In other words, these resources are implicitly released.
The preempted resources are added to the list of resources for which the process is waiting.
The process will be restarted only when it can regain its old resources, as well as the new
ones that it is requesting.

Circular Wait
The fourth and final condition for deadlocks is the circular-wait condition. One way to
ensure that this condition never holds is to impose a total ordering of all resource types and to
require that each process requests resources in an increasing order of enumeration.
To illustrate, we let R = {R1, R2, R3, ..., Rm} be the set of resource types. We assign to each
resource type a unique integer number, which allows us to compare two resources and to
determine whether one precedes another in our ordering. Formally, we define a one-to-one
function F: R  N, where N is the set of natural numbers.

Deadlock Detection

Single Instance of Each Resource Type


If all resources have only a single instance, then we can define a deadlock-detection
algorithm that uses a variant of the resource-allocation graph, called a wait-for graph. We
obtain this graph from the resource-allocation graph by removing the resource nodes and
collapsing the appropriate edges.
More precisely, an edge from Pi to Pj in a wait-for graph implies that process Pi is waiting
for process Pj to release a resource that Pi needs. An edge Pi → Pj exists in a wait-for graph if
and only if the corresponding resource-allocation graph contains two edges Pi → Rq and
Rq → Pj, for some resource Rq.


Several Instances of a Resource Type

The wait-for graph scheme is not applicable to a resource-allocation system with multiple
instances of each resource type. We turn now to a deadlock-detection algorithm that is
applicable to such a system.
 Available. A vector of length m indicates the number of available resources of each
type.
 Allocation. An n x m matrix defines the number of resources of each type currently
allocated to each process.
 Request. An n x m matrix indicates the current request of each process. If
Request[i][j] equals k, then process Pi is requesting k more instances of resource type
Rj.

Recovery from Deadlock

Process Termination

To eliminate deadlocks by aborting a process, we use one of two methods. In both methods,
the system reclaims all resources allocated to the terminated processes.

 Abort all deadlocked processes. This method clearly will break the deadlock cycle,
but at great expense; the deadlocked processes may have computed for a long time,
and the results of these partial computations must be discarded and probably will
have to be recomputed later.
 Abort one process at a time until the deadlock cycle is eliminated. This method
incurs considerable overhead, since, after each process is aborted, a deadlock-detection
algorithm must be invoked to determine whether any processes are still
deadlocked.

Resource Preemption
To eliminate deadlocks using resource preemption, we successively preempt some resources
from processes and give these resources to other processes until the deadlock cycle is broken.

If preemption is required to deal with deadlocks, then three issues need to be addressed:
 Selecting a victim. Which resources and which processes are to be preempted? As in
process termination, we must determine the order of preemption to minimize cost.
Cost factors may include such parameters as the number of resources a deadlocked
process is holding and the amount of time the process has thus far consumed during
its execution.
 Rollback. If we preempt a resource from a process, what should be done with that
process? Clearly, it cannot continue with its normal execution; it is missing some
needed resource. We must roll back the process to some safe state and restart it from
that state.
 Starvation. How do we ensure that starvation will not occur? That is, how can we
guarantee that resources will not always be preempted from the same process?

In a system where victim selection is based primarily on cost factors, it may happen that the
same process is always picked as a victim. As a result, this process never completes its
designated task, a starvation situation that must be dealt with in any practical system. Clearly,
we must ensure that a process can be picked as a victim only a (small) finite number of times.
The most common solution is to include the number of rollbacks in the cost factor.

Spooling
Spooling is a way of processing data serially. Print jobs are spooled to the printer, because
they must be printed in the right order (it would not help the user if the lines of his/her file
were liberally mixed together with parts of someone else's file). During a spooling operation,
only one job is performed at a time and other jobs wait in a queue to be processed. Spooling
is a form of batch processing.

Spooling comes from the need to copy data onto a spool of tape for storage. It has since been
dubbed Simultaneous Peripheral Operation On-Line, which is a pretty lousy attempt to make
something more meaningful out of the word ‘spool’!

System Call
An important task of an operating system is to provide black-box functions for the most
frequently needed operations, so that users do not have to waste their time programming very
low level code which is irrelevant to their purpose. These ready-made functions comprise
frequently used code and are called system calls.
For example, controlling devices requires very careful and complex programming. Users
should not have to write code to position the head of the disk drive at the right place just to
save a file to the disk. This is a very basic operation which everyone requires and thus it
becomes the responsibility of the OS. Another example is mathematical functions or graphics
primitives.

Kernel
The part of the OS which handles all of the details of sharing and device handling is called
the kernel or core. The kernel is not something which can be used directly, although its
services can be accessed through system calls. What is needed is a user interface or command
line interface (CLI) which allows users to log onto the machine and manipulate files, compile
programs and execute them using simple commands. Since this is a layer of software which
wraps the kernel in more acceptable clothes, it is called a shell around the kernel.

It is only by making layers of software, in a hierarchy, that very complex programs can be
written and maintained. The idea of layers and hierarchies returns again and again.
