Unit 3 Notes

21CSC202J - OPERATING SYSTEMS

Unit-3 – CPU Scheduling


Basic Concepts, Scheduling Criteria, Scheduling Algorithms, Thread Scheduling, Multiple-
Processor Scheduling, Real-Time CPU Scheduling. Deadlocks: System Model, Deadlock
Characterization, Methods for Handling Deadlocks, Deadlock Prevention, Deadlock
Avoidance, Deadlock Detection, Recovery from Deadlock.

Basic Concepts
Whenever the CPU becomes idle, the operating system selects one of the processes in the ready
queue for execution.
The selection process is carried out by the short-term scheduler (or CPU scheduler).
CPU scheduling can be preemptive or non-preemptive.
• Non-preemptive Scheduling: In non-preemptive scheduling, once the CPU has been allocated to a process, the process keeps the CPU until it releases it, either by terminating or by switching to the waiting state. Hence, the scheduler waits until the process completes its CPU burst, and only then can it allocate the CPU to another process.
• Preemptive Scheduling: In preemptive scheduling, the CPU is allocated to a process for a limited time slice. When the burst time of the process exceeds the time slice, the process is moved back to the ready queue and resumes execution at its next turn.

CPU Scheduling Algorithms


1. First-Come, First-Served Scheduling
2. Shortest Job First Scheduling
3. Priority Scheduling
4. Round Robin Scheduling
5. Multilevel queue Scheduling
6. Multilevel feedback Scheduling
7. Real Time scheduling: Rate Monotonic Scheduling and Deadline Scheduling

Scheduling Criteria

• Turnaround time: The interval from the time of submission of a process to the time of its completion, i.e., the total time required to complete a particular process or task.
• Waiting time: The total time a process spends waiting in the ready queue before it gets the CPU.
• Response time: The time from the submission of a request until the first response is produced, not the time it takes to output the complete response.
• CPU utilization: The CPU should be kept as busy as possible. CPU utilization may
range from 0 to 100 percent. In a real system, it should range from 40 percent (for a
lightly loaded system) to 90 percent (for a heavily used system).

• Throughput: It is the number of processes completed per time unit. For long processes, this rate may be one process per hour; for short transactions, throughput might be ten processes per second.

❖ First-Come, First-Served Scheduling (FCFS)


• The process that requests the CPU first is allocated the CPU first.
• It is a non-preemptive Scheduling technique.
• The implementation of the FCFS policy is easily managed with a FIFO queue

Example:
Process Burst Time
P1 24
P2 3
P3 3
• If the processes arrive in the order PI, P2, P3, and are served in FCFS order, we get the
result shown in the following Gantt chart

Gantt Chart

| P1             | P2 | P3 |
0               24   27   30

Average waiting time = (0+24+27) / 3 = 17 ms


Average Turnaround time = (24+27+30) / 3 = 27 ms
• The FCFS algorithm is particularly troublesome for time-sharing systems, where it is important that each user get a share of the CPU at regular intervals.
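
These figures are easy to verify programmatically. Below is a minimal Python sketch, assuming all three processes arrive at time 0 in the order shown; it is an illustration, not scheduler code.

# Minimal FCFS sketch: processes are served strictly in arrival order.
processes = [("P1", 24), ("P2", 3), ("P3", 3)]   # (name, burst), arrival order

t = 0
waiting, turnaround = {}, {}
for name, burst in processes:
    waiting[name] = t             # time spent in the ready queue before running
    t += burst
    turnaround[name] = t          # completion time (every arrival time is 0)

print(sum(waiting.values()) / len(processes))     # 17.0 ms
print(sum(turnaround.values()) / len(processes))  # 27.0 ms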

❖ Shortest Job First Scheduling


• The CPU is assigned to the process that has the smallest next CPU burst.
• If two processes have the same length next CPU burst, FCFS scheduling is used to break
the tie.

Example:

Process Burst Time


P1 6
P2 8
P3 7
P4 3
Gantt Chart

| P4 | P1 | P3 | P2 |
0    3    9    16   24

Average waiting time is (3 + 16 + 9 + 0)/4 = 7 ms


Average turnaround time = (3+9+16+24) / 4 = 13 ms
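
Since all four processes arrive at time 0, non-preemptive SJF reduces to sorting by burst length and then serving FCFS. A minimal Python sketch of that idea:

# Non-preemptive SJF with all arrivals at t = 0: sort by burst, serve in order.
processes = [("P1", 6), ("P2", 8), ("P3", 7), ("P4", 3)]

t, waiting = 0, {}
for name, burst in sorted(processes, key=lambda p: p[1]):
    waiting[name] = t
    t += burst

print(waiting)                                 # {'P4': 0, 'P1': 3, 'P3': 9, 'P2': 16}
print(sum(waiting.values()) / len(processes))  # 7.0 ms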
SJF can be preemptive or non-preemptive.
Example 2:
Process Arrival Time Burst Time
P1 0 8
P2 1 4
P3 2 9
P4 3 5

❖ Preemptive SJF Scheduling (Shortest Remaining Time First)


• It is a preemptive scheduling technique.
• Preemptive SJF is known as shortest remaining time first (SRTF)

Gantt Chart

| P1 | P2 | P4 | P1 | P3 |
0    1    5    10   17   26

Waiting time:
P1 : 0 + (10 - 1) = 9
P2 : 1 - 1 = 0
P3 : 17 - 2 = 15
P4 : 5 - 3 = 2
AWT = (9 + 0 + 15 + 2) / 4 = 6.5 ms
❖ Non-preemptive SJF Scheduling

| P1 | P2 | P4 | P3 |
0    8    12   17   26

AWT = [0 + (8 - 1) + (12 - 3) + (17 - 2)] / 4 = 7.75 ms
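
Both averages can be checked with a unit-time simulation of the preemptive (SRTF) schedule. This is a minimal sketch using the arrival and burst times of Example 2; at every time unit it simply re-picks the ready process with the shortest remaining time.

# SRTF sketch: simulate one time unit at a time. Input: (name, arrival, burst).
procs = [("P1", 0, 8), ("P2", 1, 4), ("P3", 2, 9), ("P4", 3, 5)]

remaining = {name: burst for name, _, burst in procs}
finish, t = {}, 0
while remaining:
    ready = [(remaining[n], n) for n, a, _ in procs if a <= t and n in remaining]
    if not ready:
        t += 1
        continue
    _, n = min(ready)        # shortest remaining time runs for one unit
    remaining[n] -= 1
    t += 1
    if remaining[n] == 0:
        del remaining[n]
        finish[n] = t

for name, arrival, burst in procs:
    print(name, finish[name] - arrival - burst)  # waiting: P1 9, P2 0, P3 15, P4 2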

❖ Priority Scheduling
A priority is associated with each process, and the CPU is allocated to the process with the highest priority (smallest integer = highest priority).
Example :
Process Burst Time Priority
P1 10 3
P2 1 1
P3 2 4
P4 1 5
P5 5 2

Gantt Chart

| P2 | P5 | P1 | P3 | P4 |
0    1    6    16   18   19

AWT = (6 + 0 + 16 + 18 + 1) / 5 = 8.2 ms

• Priority Scheduling can be preemptive or non-preemptive.
• Drawback: Starvation – low priority processes may never execute.
• Solution: Aging – It is a technique of gradually increasing the priority of processes
that wait in the system for a long time.
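
As an illustration of aging, the sketch below computes an effective priority that improves the longer a process waits. The rule used here (one priority level gained per 10 time units of waiting) is an assumption for demonstration only, not a standard formula.

# Aging sketch: smaller number = higher priority (as in the example above).
def effective_priority(base_priority, wait_time, step=10):
    # Every `step` time units of waiting raises the priority by one level.
    return max(1, base_priority - wait_time // step)

print(effective_priority(5, 0))    # 5: a fresh low-priority process
print(effective_priority(5, 40))   # 1: after waiting long enough, it runs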
❖ Round-Robin Scheduling
• The round-robin (RR) scheduling algorithm is designed especially for
timesharing systems.
• It is similar to FCFS scheduling, but preemption is added to switch between
processes.
• A small unit of time, called a time quantum (or time slice), is defined.
• The ready queue is treated as a circular queue.
Example:
Process Burst Time
P1 24
P2 3
P3 3

Time Quantum = 4 ms.

Gantt Chart

| P1 | P2 | P3 | P1 | P1 | P1 | P1 | P1 |
0    4    7    10   14   18   22   26   30

Waiting time:
P1 = (0 - 0) + (10 - 4) = 6
P2 = 4 - 0 = 4
P3 = 7 - 0 = 7
• The average waiting time is (6 + 4 + 7) / 3 = 17/3 = 5.66 milliseconds.
• The performance of the RR algorithm depends heavily on the size of the time quantum.
• If the time quantum is very large (effectively infinite), the RR policy is the same as the FCFS policy.
• If the time quantum is very small, the RR approach is called processor sharing and appears to the users as though each of the n processes has its own processor running at 1/n the speed of the real processor.
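
The round-robin schedule above can be reproduced with a FIFO ready queue. A minimal Python sketch, assuming all processes arrive at time 0:

# Round-robin sketch: run each process for at most q units, requeue if unfinished.
from collections import deque

def round_robin(processes, q):
    burst = dict(processes)
    queue = deque(processes)                  # (name, remaining burst)
    t, waiting = 0, {}
    while queue:
        name, rem = queue.popleft()
        run = min(q, rem)
        t += run
        if rem > run:
            queue.append((name, rem - run))   # preempted: back of the queue
        else:
            waiting[name] = t - burst[name]   # finish time - burst (arrival = 0)
    return waiting

w = round_robin([("P1", 24), ("P2", 3), ("P3", 3)], q=4)
print(w, sum(w.values()) / len(w))   # {'P2': 4, 'P3': 7, 'P1': 6} 5.66... ms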

Example 2:
Consider the following set of processes, with the length of the CPU-burst time
given in milliseconds:
Process Burst Time Priority
P1 10 3
P2 1 1
P3 2 3
P4 1 4
P5 5 2
The processes have arrived in the order P1, P2, P3, P4, P5, all at time 0. Draw Gantt charts illustrating the execution of these processes using FCFS, non-preemptive SJF, non-preemptive priority scheduling, and Round Robin scheduling (time quantum = 1). Also find the average turnaround time and waiting time for each scheduling algorithm.

❖ Multilevel Queue Scheduling

• It partitions the ready queue into several separate queues.


• The processes are permanently assigned to one queue, generally based on some
property of the process, such as memory size, process priority, or process type.
• There must be scheduling between the queues, which is commonly implemented as fixed-priority preemptive scheduling.
• For example, the foreground queue may have absolute priority over the background queue.
Example: a multilevel queue scheduling algorithm with five queues:
1. System processes
2. Interactive processes
3. Interactive editing processes
4. Batch processes
5. Student processes
• Each queue has absolute priority over lower-priority queues.

❖ Multilevel Feedback Queue Scheduling
• It allows a process to move between queues.
• The idea is to separate processes with different CPU-burst characteristics.
• If a process uses too much CPU time, it will be moved to a lower-priority
queue.
• This scheme leaves I/O-bound and interactive processes in the higher-priority
queues.
• Similarly, a process that waits too long in a lower priority queue may be
moved to a higher-priority queue.
• This form of aging prevents starvation.

Example:
• Consider a multilevel feedback queue scheduler with three queues, numbered from 0 to 2.
• The scheduler first executes all processes in queue 0.
• Only when queue 0 is empty will it execute processes in queue 1.
• Similarly, processes in queue 2 will be executed only if queues 0 and 1 are empty.
• A process that arrives for queue 1 will preempt a process in queue 2.
• A process that arrives for queue 0 will, in turn, preempt a process in queue 1.

• A multilevel feedback queue scheduler is defined by the following parameters:
1. The number of queues
2. The scheduling algorithm for each queue
3. The method used to determine when to upgrade a process to a higher
priority queue
4. The method used to determine when to demote a process to a lower-
priority queue
5. The method used to determine which queue a process will enter when
that process needs service
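
A minimal sketch of the demotion half of such a scheduler follows (promotion/aging is omitted for brevity). The quanta used here (8 for queue 0, 16 for queue 1, FCFS in queue 2) are an illustrative configuration, not a fixed rule.

# Three-queue feedback sketch: exhaust your quantum and you are demoted.
from collections import deque

queues = [deque(), deque(), deque()]
quantum = [8, 16, None]                   # None = run to completion (FCFS)

def schedule_step():
    for level, q in enumerate(queues):    # queue 0 has absolute priority
        if q:
            name, remaining = q.popleft()
            slice_ = remaining if quantum[level] is None else min(quantum[level], remaining)
            remaining -= slice_
            if remaining > 0:             # used its whole quantum: demote
                queues[min(level + 1, 2)].append((name, remaining))
            return name, slice_
    return None, 0                        # every queue is empty

queues[0].append(("P1", 30))
while True:
    name, ran = schedule_step()
    if name is None:
        break
    print(name, "ran for", ran)           # P1 ran for 8, then 16, then 6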

Thread Scheduling

Thread scheduling involves scheduling at two boundaries:


• Scheduling of Kernel-Level Threads by the system scheduler.
• Scheduling of User-Level Threads to Kernel-Level Threads by using Lightweight
process (LWP).
Lightweight process (LWP):
A lightweight process is a thread-like structure that acts as an interface for the user-level threads to access the physical CPU resources.
The number of lightweight processes depends on the type of application: for an I/O-bound application the number of LWPs depends on the number of user-level threads, while for a CPU-bound application each thread is connected to a separate kernel-level thread.
The first boundary of thread scheduling involves more than specifying the scheduling policy and the priority; it requires two controls to be specified for the user-level threads:

Contention scope – Contention scope defines the extent to which contention takes place.
Contention refers to the competition among the ULTs to access the KLTs.
Contention scope can be further classified into Process Contention Scope (PCS) and System
Contention Scope (SCS).
Process Contention Scope (PCS): Process Contention Scope is when the contention takes place among threads belonging to the same process.
System contention scope (SCS): System Contention Scope refers to the contention that takes
place among all the threads in the system.
Allocation domain – The allocation domain is a set of multiple (or single) resources for which
a thread is competing.
Pthreads identifies the following contention scope values:
• PTHREAD_SCOPE_PROCESS schedules threads using PCS scheduling.
• PTHREAD_SCOPE_SYSTEM schedules threads using SCS scheduling.

Multiple-Processor Scheduling

When multiple processors are available, then the scheduling gets more complicated, because
now there is more than one CPU which must be kept busy and in effective use at all times.
Load sharing revolves around balancing the load between multiple processors.

Multiprocessor systems may be heterogeneous (different kinds of CPUs) or homogeneous (all the same kind of CPU). Even in the latter case there may be special scheduling constraints, such as devices which are connected via a private bus to only one of the CPUs. These notes restrict the discussion to homogeneous systems.

Approaches to Multiple-Processor Scheduling

One approach to multiprocessor scheduling is asymmetric multiprocessing, in which one processor is the master, controlling all activities and running all kernel code, while the others run only user code. This approach is relatively simple, as there is no need to share critical system data.

Another approach is symmetric multiprocessing (SMP), where each processor schedules its own jobs, either from a common ready queue or from separate ready queues for each processor. Virtually all modern operating systems support SMP, including Windows XP, Windows 2000, Solaris, Linux, and Mac OS X.

Processor Affinity

Processors contain cache memory, which speeds up repeated accesses to the same memory
locations. If a process were to switch from one processor to another each time it got a time
slice, the data in the cache (for that process) would have to be invalidated and re-loaded from
main memory, thereby obviating the benefit of the cache.
Therefore SMP systems attempt to keep processes on the same processor, via processor affinity.
Soft affinity occurs when the system attempts to keep processes on the same processor but
makes no guarantees. Linux and some other OSes support hard affinity, in which a process
specifies that it is not to be moved between processors.
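
On Linux, for example, hard affinity is exposed to Python programs through os.sched_setaffinity. A minimal sketch (the call is Linux-specific, and the CPU numbers are illustrative):

# Pin the calling process (pid 0) to CPUs 0 and 1; the scheduler will not
# migrate it elsewhere. Requires Linux.
import os

os.sched_setaffinity(0, {0, 1})
print(os.sched_getaffinity(0))   # -> {0, 1}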

Main memory architecture can also affect process affinity, if particular CPUs have faster access
to memory on the same chip or board than to other memory loaded elsewhere. (Non-Uniform
Memory Access, NUMA.) As shown below, if a process has an affinity for a particular CPU,
then it should preferentially be assigned memory storage in "local" fast access areas.

Figure : NUMA and CPU scheduling


Load Balancing

Obviously an important goal in a multiprocessor system is to balance the load between


processors, so that one processor won't be sitting idle while another is overloaded. Systems
using a common ready queue are naturally self-balancing, and do not need any special
handling. Most systems, however, maintain separate ready queues for each processor.

Balancing can be achieved through either push migration or pull migration:


Push migration involves a separate process that runs periodically (e.g. every 200 milliseconds) and moves processes from heavily loaded processors onto less loaded ones. Pull migration involves idle processors taking processes from the ready queues of other processors. Push and pull migration are not mutually exclusive.

Note that moving processes from processor to processor to achieve load balancing works
against the principle of processor affinity, and if not carefully managed, the savings gained by
balancing the system can be lost in rebuilding caches. One option is to only allow migration
when imbalance surpasses a given threshold.

Multicore Processors
Traditional SMP required multiple CPU chips to run multiple kernel threads concurrently.
Recent trends are to put multiple CPUs (cores) onto a single chip, which appear to the system
as multiple processors.

Compute cycles can be blocked by the time needed to access memory whenever the needed data is not already present in the cache (a cache miss). As the figure shows, as much as half of the CPU cycles can be lost to the resulting memory stall.

Figure : Memory stall.

By assigning multiple kernel threads to a single processor, memory stall can be avoided (or reduced) by running one thread on the processor while the other thread waits for memory.

Figure : Multithreaded multicore system

A dual-threaded dual-core system has four logical processors available to the operating system.
The UltraSPARC T1 CPU has 8 cores per chip and 4 hardware threads per core, for a total of
32 logical processors per chip.
There are two ways to multi-thread a processor:
• Coarse-grained multithreading switches between threads only when one thread blocks,
say on a memory read. Context switching is similar to process switching, with considerable
overhead.
• Fine-grained multithreading switches between threads at a much finer granularity, say at the boundary of an instruction cycle. The architecture is designed to support thread switching, so the overhead is relatively minor.
Note that for a multithreaded multicore system, there are two levels of scheduling.

At the first level, the OS schedules which kernel thread(s) to assign to which logical processors, and when to make context switches, using algorithms like those described above.

On a lower level, the hardware schedules logical processors on each physical core using some
other algorithm.
The UltraSPARC T1 uses a simple round-robin method to schedule the 4 logical processors
(kernel threads) on each physical core.
The Intel Itanium is a dual-core chip which uses a 7-level priority scheme (urgency) to
determine which thread to schedule when one of 5 different events occurs.

Virtualization and Scheduling


Virtualization adds another layer of complexity to scheduling.
Typically there is one host operating system operating on "real" processor(s) and a number of
guest operating systems operating on virtual processors.
The Host OS creates some number of virtual processors and presents them to the guest OSes
as if they were real processors.
The guest OSes don't realize their processors are virtual, and make scheduling decisions on the
assumption of real processors.
As a result, interactive and especially real-time performance can be severely compromised on
guest systems. The time-of-day clock will also frequently be off.

Real Time CPU scheduling:

A real-time system is one whose correctness depends on timing as well as functionality.


Real-time scheduling: each process or job has a deadline and its execution must be completed within that deadline; if a result is delayed, a huge loss may occur.
Real-time computing is divided into two types.
1. Hard real-time systems: A hard real-time system must generate accurate responses to events within the specified time; otherwise there may be a huge loss. In a hard real-time system, a task must be serviced by its deadline. A hard real-time system is a purely deterministic, time-constrained system. Examples: flight control systems, missile guidance systems, weapons defense systems, medical systems, etc.
2. Soft real-time systems: In a soft real-time system, if an operation is not performed within its time limit, the output/result is merely degraded. Soft real-time systems provide no guarantee as to when a critical real-time process will be scheduled; they guarantee only that the process will be given preference over noncritical processes. Examples: weather monitoring systems, electronic games, multimedia systems, web browsing, online transaction systems.

Rate Monotonic Scheduling


• The rate-monotonic scheduling algorithm schedules periodic tasks using a static
priority policy with preemption.
• If a lower-priority process is running and a higher-priority process becomes
available to run, it will preempt the lower-priority process.

• The shorter the period, the higher the priority; the longer the period, the lower the assigned priority.
• Each time a periodic process acquires the CPU, the duration of its CPU burst is assumed to be the same.
• Example: Consider the following 3 process

Process Capacity Period


P1 3 20
P2 2 5
P3 2 10

Solution:
Step 1: LCM of the periods (20, 5, 10) = 20

For P1: in every period of 20, it must execute 3 time slots (capacity)
For P2: in every period of 5, it must execute 2 time slots (capacity)
For P3: in every period of 10, it must execute 2 time slots (capacity)

Step 2: Assigning priorities

P1 : priority 3, because its period (20) is the longest
P2 : priority 1, because its period (5) is the shortest
P3 : priority 2, because its period (10) is in between
Order of execution is P2, P3, P1

Slot:    1   2   3   4   5   6   7   8   9   10  11  12  13  14  15  16  17  18  19  20
Process: P2  P2  P3  P3  P1  P2  P2  P1  P1  --  P2  P2  P3  P3  --  P2  P2  --  --  --

Explanation:
Time slots 1, 2 -- P2 executes its 2 units of capacity
Time slots 3, 4 -- P3 executes its 2 units of capacity
Time slot 5 -- P1 executes 1 unit; then the next period of P2 begins, so the higher-priority job P2 must run, and it preempts P1

Time slots 6, 7 -- P2 executes its 2 units again

P3 does not require any further slots yet, since in its period of 10 it has to execute only 2 units
Time slots 8, 9 -- P1 executes its remaining 2 units
Time slot 10 -- idle
The next period of P2 begins at time 10:
Time slots 11, 12 -- P2 executes its 2 units again
Time slots 13, 14 -- P3 executes its 2 units again

Time slot 15 -- idle, since P1 has already completed its 3 units for the period of 20
Time slots 16, 17 -- P2 executes its 2 units
Time slots 18, 19, 20 -- idle

Link : https://www.youtube.com/watch?v=xgW8VhEOpFg
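
The table above can be reproduced with a short simulation. The following Python sketch is a minimal illustration, not a full scheduler: each task releases a new job at every multiple of its period, static priorities follow the shortest period, and one time unit is simulated per step over one hyperperiod (the LCM of the periods).

# Rate-monotonic sketch. Tasks: (name, capacity, period).
tasks = [("P1", 3, 20), ("P2", 2, 5), ("P3", 2, 10)]

def simulate(tasks, horizon, key):
    order = sorted(tasks, key=key)            # static priority order
    remaining = {name: 0 for name, _, _ in tasks}
    timeline = []
    for t in range(horizon):
        for name, cap, period in tasks:
            if t % period == 0:               # a new job is released
                remaining[name] = cap
        for name, _, _ in order:              # run the highest-priority ready job
            if remaining[name] > 0:
                remaining[name] -= 1
                timeline.append(name)
                break
        else:
            timeline.append("--")             # idle slot
    return timeline

# RMS: priority keyed on the period (shorter period = higher priority).
print(simulate(tasks, 20, key=lambda task: task[2]))
# ['P2','P2','P3','P3','P1','P2','P2','P1','P1','--',
#  'P2','P2','P3','P3','--','P2','P2','--','--','--']

Slot numbers here are 0-indexed, so entry t of the timeline corresponds to time slot t+1 in the table above.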

Deadline Monotonic Scheduling


It is a fixed-priority algorithm in which priorities are assigned to each task based on its relative deadline: the task with the shortest deadline is assigned the highest priority. It is a preemptive scheduling algorithm, meaning that if a task of higher priority arrives, the running task is preempted and the higher-priority task is assigned the CPU.
The priority of a task is inversely proportional to its deadline, i.e., the task with the shortest deadline gets the highest priority. The deadline is the time limit within which the task has to be completed.
Example –
Process Capacity Deadline Period
P1 3 7 20
P2 2 4 5
P3 2 9 10

Solution:
Step 1: LCM of the periods (20, 5, 10) = 20

For P1: in every period of 20, it must execute 3 time slots (capacity)
For P2: in every period of 5, it must execute 2 time slots (capacity)
For P3: in every period of 10, it must execute 2 time slots (capacity)

Step 2: Assigning priorities
P1 : priority 2, because its deadline (7) is the middle value
P2 : priority 1, because its deadline (4) is the shortest
P3 : priority 3, because its deadline (9) is the longest
The complete execution is shown below.
Order of execution is P2, P1, P3

Slot:    1   2   3   4   5   6   7   8   9   10  11  12  13  14  15  16  17  18  19  20
Process: P2  P2  P1  P1  P1  P2  P2  P3  P3  --  P2  P2  P3  P3  --  P2  P2  --  --  --
Advantages:
Optimal for static-priority scheduling.
Performs well when tasks have long periods but short deadlines.
Good performance in case of overload.
Disadvantages:
Implementation is complex.
It is time-consuming.

https://www.youtube.com/watch?v=E6KGDpY_XoI
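
The same unit-time simulation idea applies to deadline-monotonic scheduling; the only change is that the static priority is keyed on the relative deadline rather than the period. A minimal standalone sketch with the task data from the example:

# Deadline-monotonic sketch. Tasks: (name, capacity, deadline, period).
tasks = [("P1", 3, 7, 20), ("P2", 2, 4, 5), ("P3", 2, 9, 10)]
order = sorted(tasks, key=lambda task: task[2])   # shortest deadline first

remaining = {name: 0 for name, _, _, _ in tasks}
timeline = []
for t in range(20):                               # one hyperperiod (LCM = 20)
    for name, cap, _, period in tasks:
        if t % period == 0:
            remaining[name] = cap                 # new job released
    for name, _, _, _ in order:
        if remaining[name] > 0:
            remaining[name] -= 1
            timeline.append(name)
            break
    else:
        timeline.append("--")
print(timeline)
# ['P2','P2','P1','P1','P1','P2','P2','P3','P3','--',
#  'P2','P2','P3','P3','--','P2','P2','--','--','--']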

Deadlocks
A deadlock is a situation in which a set of blocked/waiting processes each hold a resource and wait to acquire a resource held by another process in the set. These waiting processes are never again able to change state, because the resources they have requested are held by other waiting processes.

System Model

For the purposes of deadlock discussion, a system can be modeled as a collection of limited
resources, which can be partitioned into different categories, to be allocated to a number of
processes, each having different needs.
Resource categories may include memory, printers, CPUs, open files, tape drives, CD-ROMS,
etc.
In normal operation a process must request a resource before using it, and release it when it is
done, in the following sequence:
• Request - If the request cannot be immediately granted, then the process must wait until the resource(s) it needs become available. Examples: the system calls open(), malloc(), new(), and request().
• Use - The process uses the resource, e.g. prints to the printer or reads from the file.
• Release - The process relinquishes the resource, so that it becomes available for other processes. Examples: close(), free(), delete(), and release().

Deadlock Characterization
Necessary conditions for deadlock
A deadlock situation can arise if the following four conditions hold simultaneously in a system.
1. Mutual exclusion. At least one resource must be held in a nonsharable mode; that is, only
one process at a time can use the resource. If another process requests that resource, the
requesting process must be delayed until the resource has been released.

2. Hold and wait. A process must be holding at least one resource and waiting to acquire
additional resources that are currently being held by other processes.

3. No preemption. Resources cannot be preempted; that is, a resource can be released only
voluntarily by the process holding it, after that process has completed its task.

4. Circular wait. A set {P0, P1, ..., Pn} of waiting processes must exist such that P0 is waiting
for a resource held by P1, P1 is waiting for a resource held by P2, ..., Pn−1 is waiting for
a resource held by Pn, and Pn is waiting for a resource held by P0.
Resource-Allocation Graph
• It is a directed graph with a set of vertices V and a set of edges E.
• V is partitioned into two types:

1. Processes P = {P1, P2, ..., Pn}
2. Resource types R = {R1, R2, ..., Rm}
• Pi -> Rj (request edge): Pi has requested an instance of Rj.
• Rj -> Pi (assignment edge): an instance of Rj has been allocated to Pi.
• Pi is denoted as a circle and Rj as a square.
• Rj may have more than one instance, each represented as a dot within the square.
Example sets P, R and E:
P = {P1, P2, P3}
R = {R1, R2, R3, R4}
E = {P1 -> R1, P2 -> R3, R1 -> P2, R2 -> P1, R3 -> P3}

• Resource instances
One instance of resource type R1, two instances of resource type R2, one instance of resource type R3, and three instances of resource type R4.

Process states

Process P1 is holding an instance of resource type R2 and is waiting for an instance of resource type R1. Process P2 is holding an instance of R1 and an instance of R2, and is waiting for an instance of resource type R3. Process P3 is holding an instance of R3.

Resource-Allocation Graph with a deadlock: if P3 also requests an instance of R2, two cycles form and processes P1, P2, and P3 are deadlocked:
P1 -> R1 -> P2 -> R3 -> P3 -> R2 -> P1
P2 -> R3 -> P3 -> R2 -> P2
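
Those cycles can be found mechanically with a depth-first search over the graph's edges. The sketch below hard-codes the request and assignment edges of the deadlocked graph as an adjacency map. Note the usual caveat: a cycle guarantees deadlock only for single-instance resources; here R2 has two instances, but both are held by processes on the cycles, so the deadlock is real.

# Cycle detection in the resource-allocation graph via DFS.
edges = {
    "P1": ["R1"], "R1": ["P2"], "P2": ["R3"], "R3": ["P3"],
    "P3": ["R2"], "R2": ["P1", "P2"],
}

def has_cycle(graph):
    visited, on_stack = set(), set()
    def dfs(node):
        visited.add(node)
        on_stack.add(node)
        for nxt in graph.get(node, []):
            if nxt in on_stack:                  # back edge: a cycle exists
                return True
            if nxt not in visited and dfs(nxt):
                return True
        on_stack.discard(node)
        return False
    return any(dfs(n) for n in graph if n not in visited)

print(has_cycle(edges))   # True: P1 -> R1 -> P2 -> R3 -> P3 -> R2 -> P1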

Methods for handling Deadlocks

1. Deadlock Prevention
2. Deadlock Avoidance
3. Deadlock Detection and Recovery

Deadlock Prevention:

• This ensures that the system never enters the deadlock state.
• Deadlock prevention is a set of methods for ensuring that at least one of the
necessary conditions cannot hold.
• By ensuring that at least one of these conditions cannot hold, we can prevent the
occurrence of a deadlock.

1. Denying Mutual exclusion

• The mutual-exclusion condition must hold for non-sharable resources.
• A printer, for example, cannot be simultaneously shared by several processes.
• Sharable resources do not require mutually exclusive access; read-only files are an example.
• If several processes attempt to open a read-only file at the same time, they can be granted simultaneous access to the file.
• A process never needs to wait for a sharable resource.

2. Denying Hold and wait

• Whenever a process requests a resource, it must not hold any other resources.
• One technique requires each process to request and be allocated all its resources before it begins execution.
• Another technique requires that, before a process can request any additional resources, it must release all the resources it is currently allocated.
• These techniques have two main disadvantages:
First, resource utilization may be low, since many of the resources may be allocated but unused for a long time.
Second, starvation is possible: a process that needs several popular resources may have to wait indefinitely.

3. Denying No preemption

• If a process is holding some resources and requests another resource that cannot be immediately allocated to it (that is, the process must wait), then all resources it currently holds are preempted (i.e., preemption is allowed).
• These resources are implicitly released.
• The process will be restarted only when it can regain its old resources.

4. Denying Circular wait

• Impose a total ordering of all resource types, and require each process to request resources in an increasing order of enumeration.
• Let R = {R1, R2, ..., Rm} be the set of resource types.
• Assign each resource type a unique integer.
• For example, if the set of resource types R includes tape drives, disk drives and printers:

F(tape drive) = 1,
F(disk drive) = 5,
F(printer) = 12.

• Each process can then request resources only in an increasing order of enumeration, as sketched below.
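
As a concrete illustration, the ordering rule maps directly onto lock acquisition. This is a hypothetical Python sketch; the resource names and F values are taken from the example above.

# Deny circular wait: every thread acquires locks in the fixed global order F.
import threading

F = {"tape_drive": 1, "disk_drive": 5, "printer": 12}
locks = {name: threading.Lock() for name in F}

def acquire_in_order(*names):
    # Sorting by F guarantees all threads lock resources in the same order,
    # so no circular chain of waits can ever form.
    for name in sorted(names, key=F.get):
        locks[name].acquire()

def release_all(*names):
    for name in names:
        locks[name].release()

acquire_in_order("printer", "tape_drive")   # locks tape_drive first, then printer
release_all("printer", "tape_drive")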

Deadlock Avoidance:

• Deadlock avoidance requires that the OS be given, in advance, additional information about which resources a process will request and use during its lifetime. With this information it can decide, for each request, whether or not the process should wait.
• To decide whether the current request can be satisfied or must be delayed, the system must consider the resources currently available, the resources currently allocated to each process, and the future requests and releases of each process.
• Safe State
A state is safe if the system can allocate resources to each process in some order and still avoid a deadlock.

A deadlocked state is an unsafe state.
Not all unsafe states are deadlocks.
An unsafe state may lead to a deadlock.
• Two algorithms are used for deadlock avoidance namely;

1. Resource Allocation Graph Algorithm - single instance of a resource type.


2. Banker’s Algorithm – several instances of a resource type.

Resource allocation graph algorithm

• Claim edge: a claim edge Pi -> Rj indicates that process Pi may request resource Rj at some time; it is represented by a dashed directed edge.
• When process Pi requests resource Rj, the claim edge Pi -> Rj is converted to a request edge. Similarly, when resource Rj is released by Pi, the assignment edge Rj -> Pi is reconverted to a claim edge Pi -> Rj.
• The request can be granted only if converting the request edge Pi -> Rj to an assignment edge Rj -> Pi does not form a cycle.
• If no cycle exists, then the allocation of the resource will leave the system in a safe state.
• If a cycle is found, then the allocation will put the system in an unsafe state.

Banker's algorithm
• Available: indicates the number of available resources of each type.
• Max: if Max[i, j] = k, then process Pi may request at most k instances of resource type Rj.
• Allocation: if Allocation[i, j] = k, then process Pi is currently allocated k instances of resource type Rj.
• Need: if Need[i, j] = k, then process Pi may need k more instances of resource type Rj.
Need[i, j] = Max[i, j] - Allocation[i, j]

Safety algorithm
1. Initialize Work := Available and Finish[i] := false for i = 1, 2, ..., n.
2. Find an i such that both
   a. Finish[i] = false
   b. Needi <= Work
   If no such i exists, go to step 4.
3. Work := Work + Allocationi;
   Finish[i] := true;
   go to step 2.
4. If Finish[i] = true for all i, then the system is in a safe state.
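
A minimal Python sketch of the safety algorithm; the Work/Finish loop mirrors the four steps above. The five-process, three-resource data set is the classic textbook illustration, assumed here purely for demonstration.

# Safety algorithm: repeatedly find a process whose Need fits in Work.
def is_safe(available, allocation, need):
    n, m = len(allocation), len(available)
    work = list(available)                   # step 1
    finish = [False] * n
    progressed = True
    while progressed:
        progressed = False
        for i in range(n):                   # step 2
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                for j in range(m):           # step 3: Pi finishes, reclaim it
                    work[j] += allocation[i][j]
                finish[i] = True
                progressed = True
    return all(finish)                       # step 4

available  = [3, 3, 2]
allocation = [[0,1,0], [2,0,0], [3,0,2], [2,1,1], [0,0,2]]
max_claim  = [[7,5,3], [3,2,2], [9,0,2], [2,2,2], [4,3,3]]
need = [[mc - al for mc, al in zip(mrow, arow)]
        for mrow, arow in zip(max_claim, allocation)]
print(is_safe(available, allocation, need))  # True (e.g. order P1,P3,P4,P0,P2)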

Resource Request Algorithm

Let Requesti be the request vector from process Pi.

➢ If Requesti <= Needi, go to step 2; otherwise, raise an error condition, since the process has exceeded its maximum claim.
➢ If Requesti <= Available, go to step 3; otherwise, Pi must wait, since the resources are not available.
➢ Pretend to have allocated the requested resources:
Available := Available - Requesti;
Allocationi := Allocationi + Requesti;
Needi := Needi - Requesti;

• Now apply the safety algorithm to check whether this new state is safe.
• If it is safe, then the request from process Pi can be granted.
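
The resource-request algorithm can be sketched on top of the is_safe() function above: the allocation is made tentatively and rolled back if the resulting state is unsafe.

# Resource-request sketch for process i (uses is_safe from the previous sketch).
def request_resources(i, request, available, allocation, need):
    m = len(request)
    if any(request[j] > need[i][j] for j in range(m)):
        raise ValueError("process exceeded its maximum claim")   # step 1
    if any(request[j] > available[j] for j in range(m)):
        return False                     # step 2: Pi must wait
    for j in range(m):                   # step 3: pretend to allocate
        available[j] -= request[j]
        allocation[i][j] += request[j]
        need[i][j] -= request[j]
    if is_safe(available, allocation, need):
        return True                      # the new state is safe: grant it
    for j in range(m):                   # unsafe: roll the allocation back
        available[j] += request[j]
        allocation[i][j] -= request[j]
        need[i][j] += request[j]
    return False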

Deadlock detection

(i) Single instance of each resource type


• If all resources have only a single instance, then we can define a deadlock-detection algorithm that uses a variant of the resource-allocation graph, called a wait-for graph.

Figure: Resource-Allocation Graph

Figure: Corresponding Wait-for Graph

(ii) Several Instance of a resource type

Available: the number of available resources of each type.
Allocation: the number of resources of each type currently allocated to each process.
Request: the current request of each process. If Request[i, j] = k, then process Pi is requesting k more instances of resource type Rj.

1. Initialize Work := Available. For i = 1, 2, ..., n, if Allocationi is not 0, then Finish[i] := false; otherwise Finish[i] := true.
2. Find an index i such that both
   a. Finish[i] = false
   b. Requesti <= Work
   If no such i exists, go to step 4.
3. Work := Work + Allocationi;
   Finish[i] := true;
   go to step 2.
4. If Finish[i] = false for some i, then the system is in a deadlocked state; moreover, process Pi is deadlocked.
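
This detection algorithm has the same shape as the Banker's safety algorithm. A sketch follows; the two differences are that Finish[i] starts true for processes holding no resources, and the comparison uses Request rather than Need.

# Detection sketch: returns the indices of deadlocked processes (empty if none).
def find_deadlocked(available, allocation, request):
    n, m = len(allocation), len(available)
    work = list(available)
    finish = [all(a == 0 for a in allocation[i]) for i in range(n)]
    progressed = True
    while progressed:
        progressed = False
        for i in range(n):
            if not finish[i] and all(request[i][j] <= work[j] for j in range(m)):
                for j in range(m):       # optimistically assume Pi completes
                    work[j] += allocation[i][j]
                finish[i] = True
                progressed = True
    return [i for i in range(n) if not finish[i]]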

Deadlock Recovery

1. Process Termination
1. Abort all deadlocked processes.
2. Abort one deadlocked process at a time until the deadlock cycle is eliminated. After each process is aborted, a deadlock-detection algorithm must be invoked to determine whether any process is still deadlocked.

2. Resource Preemption
Preempt some resources from processes and give these resources to other processes until the deadlock cycle is broken. Three issues must be addressed:
i. Selecting a victim: which resources and which processes are to be preempted?
ii. Rollback: if we preempt a resource from a process, it cannot continue with its normal execution, since it is missing a needed resource. We must roll the process back to some safe state and restart it from that state.
iii. Starvation: how can we guarantee that resources will not always be preempted from the same process?
