Unit 3 Notes
Basic Concepts
Whenever the CPU becomes idle, the operating system selects one of the processes in the ready
queue for execution.
The selection process is carried out by the short-term scheduler (or CPU scheduler).
CPU scheduling can be preemptive or non-preemptive.
• Non-preemptive Scheduling: Once the CPU has been allocated to a process, the process
keeps the CPU until it releases it, either by terminating or by switching to the waiting
state. The scheduler therefore waits until the process completes its CPU burst, and only
then can it allocate the CPU to another process.
• Preemptive Scheduling: The CPU is allocated to a process for a limited time slice. If the
remaining burst time of the process exceeds the time slice, the process is moved back to
the ready queue and resumes execution at its next turn.
Scheduling Criteria
• Turnaround time: The interval from the time of submission of a process to the time of
its completion, i.e., the total time required to complete a particular process or task.
• Waiting time: The total time that a process spends waiting in the ready queue before
it gets the CPU.
• Response time: The amount of time it takes to start responding to a request, not the
time it takes to output the full response.
• CPU utilization: The CPU should be kept as busy as possible. CPU utilization may
range from 0 to 100 percent. In a real system, it should range from 40 percent (for a
lightly loaded system) to 90 percent (for a heavily used system).
Example:
Process Burst Time
P1 24
P2 3
P3 3
• If the processes arrive in the order P1, P2, P3 and are served in FCFS order, we get the
result shown in the following Gantt chart:
| P1 (0–24) | P2 (24–27) | P3 (27–30) |
Waiting times are P1 = 0, P2 = 24, P3 = 27, so the average waiting time is
(0 + 24 + 27)/3 = 17 ms.
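The FCFS waiting times can be reproduced with a minimal sketch, assuming (as in the example) that all processes arrive at time 0:

```python
def fcfs_waits(bursts):
    """FCFS waiting times when all processes arrive at time 0.
    bursts: list of (name, burst_time) in arrival order."""
    t, waits = 0, {}
    for name, burst in bursts:
        waits[name] = t      # each process waits until all earlier ones finish
        t += burst
    return waits

waits = fcfs_waits([("P1", 24), ("P2", 3), ("P3", 3)])
print(waits)                      # {'P1': 0, 'P2': 24, 'P3': 27}
print(sum(waits.values()) / 3)    # 17.0 ms
```

Note how the long first burst (the "convoy effect") inflates the average: serving the short jobs first would reduce it sharply.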
Example (with arrival times):
Average waiting time:
P1: 0 + (10 – 1) = 9
P2: 1 – 1 = 0
P3: 17 – 2 = 15
P4: 5 – 3 = 2
AWT = (9 + 0 + 15 + 2) / 4 = 6.5 ms
❖ Non-preemptive Scheduling
❖ Priority Scheduling
A priority is associated with each process, and the CPU is allocated to the process with the
highest priority (by convention, the smallest integer denotes the highest priority).
Example :
Process Burst Time Priority
P1 10 3
P2 1 1
P3 2 4
P4 1 5
P5 5 2
Execution order: P2, P5, P1, P3, P4. Waiting times: P1 = 6, P2 = 0, P3 = 16, P4 = 18, P5 = 1.
AWT = (6 + 0 + 16 + 18 + 1) / 5 = 8.2 ms
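The priority-scheduling example above can be checked with a minimal sketch, assuming all processes arrive at time 0 and a smaller number means higher priority:

```python
def priority_schedule(procs):
    """Non-preemptive priority scheduling, all arrivals at t = 0.
    procs: list of (name, burst, priority); smaller priority number runs first."""
    order = sorted(procs, key=lambda p: p[2])   # highest priority first
    t, waits = 0, {}
    for name, burst, _ in order:
        waits[name] = t        # waiting time = start time, since arrival is 0
        t += burst
    return waits

waits = priority_schedule([("P1", 10, 3), ("P2", 1, 1), ("P3", 2, 4),
                           ("P4", 1, 5), ("P5", 5, 2)])
print(waits)                      # {'P2': 0, 'P5': 1, 'P1': 6, 'P3': 16, 'P4': 18}
print(sum(waits.values()) / 5)    # 8.2
```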
❖ Round Robin (RR) Scheduling
Each process gets the CPU for a fixed time quantum; when the quantum expires, the process
is preempted and moved to the back of the ready queue. For the earlier processes
(P1 = 24, P2 = 3, P3 = 3) with a time quantum of 4 ms, the waiting times are:
P1 = (0 – 0) + (10 – 4) = 6
P2 = 4
P3 = 7
• The average waiting time is (6 + 4 + 7)/3 = 17/3 = 5.66 milliseconds.
• The performance of the RR algorithm depends heavily on the size of the time quantum.
• If the time quantum is very large (effectively infinite), the RR policy is the same as the
FCFS policy.
• If the time quantum is very small, the RR approach is called processor sharing and
appears to the users as though each of the n processes has its own processor running
at 1/n the speed of the real processor.
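The RR waiting times quoted above can be verified with a minimal simulation, assuming all processes arrive at time 0:

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate RR for processes that all arrive at time 0.
    bursts: list of (name, burst). Returns waiting time per process."""
    remaining = dict(bursts)
    queue = deque(name for name, _ in bursts)   # FIFO ready queue
    t, finish = 0, {}
    while queue:
        p = queue.popleft()
        run = min(quantum, remaining[p])        # run one quantum (or less)
        t += run
        remaining[p] -= run
        if remaining[p] == 0:
            finish[p] = t                       # record completion time
        else:
            queue.append(p)                     # preempted: back of the queue
    # waiting time = completion time - burst time (arrival is 0)
    return {name: finish[name] - burst for name, burst in bursts}

waits = round_robin([("P1", 24), ("P2", 3), ("P3", 3)], quantum=4)
print(waits)                               # {'P1': 6, 'P2': 4, 'P3': 7}
print(round(sum(waits.values()) / 3, 2))   # 5.67
```

Rerunning with `quantum=30` reproduces the FCFS waits (0, 24, 27), illustrating the "very large quantum" point above.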
Example:
• Consider a multilevel feedback queue scheduler with three queues, numbered 0 to 2.
• The scheduler first executes all processes in queue 0.
• Only when queue 0 is empty will it execute processes in queue 1.
• Similarly, processes in queue 2 will be executed only if queues 0 and 1 are empty.
• A process that arrives for queue 1 will preempt a process in queue 2.
• A process that arrives for queue 0 will, in turn, preempt a process in queue 1.
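The queue-selection rule described above can be sketched as follows; the process names are hypothetical:

```python
from collections import deque

# Three ready queues; index 0 is the highest priority.
queues = [deque(), deque(["p_batch"]), deque(["p_background"])]

def pick_next(queues):
    """Return (queue_index, process) from the highest-priority
    non-empty queue, or None if every queue is empty."""
    for i, q in enumerate(queues):
        if q:
            return i, q.popleft()
    return None

# Queue 0 is empty, so a queue-1 process is chosen first.
print(pick_next(queues))          # (1, 'p_batch')

# A new arrival in queue 0 takes precedence over lower queues.
queues[0].append("p_interactive")
print(pick_next(queues))          # (0, 'p_interactive')
```

A full scheduler would also demote a process to a lower queue when it exhausts its quantum; only the selection/preemption order is shown here.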
Thread Scheduling
Multiple-Processor Scheduling
When multiple processors are available, scheduling becomes more complicated, because
there is more than one CPU that must be kept busy and in effective use at all times.
Load sharing revolves around balancing the load between the processors.
Processor Affinity
Processors contain cache memory, which speeds up repeated accesses to the same memory
locations. If a process were to switch from one processor to another each time it got a time
slice, the data in the cache (for that process) would have to be invalidated and re-loaded from
main memory, thereby obviating the benefit of the cache.
PREPARED BY DR J FARITHA BANU 10
Therefore SMP systems attempt to keep processes on the same processor, via processor affinity.
Soft affinity occurs when the system attempts to keep processes on the same processor but
makes no guarantees. Linux and some other OSes support hard affinity, in which a process
specifies that it is not to be moved between processors.
Main memory architecture can also affect processor affinity: in a Non-Uniform Memory
Access (NUMA) system, particular CPUs have faster access to memory on the same chip or
board than to memory located elsewhere. If a process has an affinity for a particular CPU,
it should preferentially be allocated memory in that CPU's "local" fast-access area.
Note that moving processes from processor to processor to achieve load balancing works
against the principle of processor affinity, and if not carefully managed, the savings gained by
balancing the system can be lost in rebuilding caches. One option is to only allow migration
when imbalance surpasses a given threshold.
Compute cycles can be blocked by the time needed to access memory whenever the needed
data is not already present in the cache (a cache miss). As much as half of the CPU cycles
can be lost to such memory stalls.
By assigning multiple kernel threads to a single processor, memory stall can be avoided ( or
reduced ) by running one thread on the processor while the other thread waits for memory.
A dual-threaded dual-core system has four logical processors available to the operating system.
The UltraSPARC T1 CPU has 8 cores per chip and 4 hardware threads per core, for a total of
32 logical processors per chip.
There are two ways to multi-thread a processor:
• Coarse-grained multithreading switches between threads only when one thread blocks,
say on a memory read. Context switching is similar to process switching, with considerable
overhead.
• Fine-grained multithreading occurs on smaller regular intervals, say on the boundary of
instruction cycles. However the architecture is designed to support thread switching, so
the overhead is relatively minor.
Note that for a multi-threaded multi-core system there are two levels of scheduling:
1. The operating system schedules which kernel thread(s) to run on which logical
processors, and when to make context switches, using algorithms as described above.
2. Each core decides which of its hardware threads to run at any instant, for example
by simple round-robin among the threads that are ready.
Solution:
Step1: LCM of period (20,5,10) = 20
(Gantt chart for time slots 1–20: P2 runs in slots 1–2, P3 in slots 3–4, P1 in slot 5,
then P2 resumes from slot 6 onward, as explained below.)
Explanation:
Time slots 1, 2: P2 executes 2 units of its capacity.
Time slots 3, 4: P3 executes 2 units of its capacity.
Time slot 5: P1 executes 1 unit of its capacity; after that, the next period starts, so the
higher-priority job P2 must continue, and it therefore preempts P1.
Link : https://www.youtube.com/watch?v=xgW8VhEOpFg
Solution:
Step1: LCM of period (20,5,10) = 20
https://www.youtube.com/watch?v=E6KGDpY_XoI
Deadlocks
A deadlock involves a set of blocked/waiting processes, each holding a resource and
waiting to acquire a resource held by another process in the set. These waiting processes
can never again change state, because the resources they have requested are held by other
waiting processes. This situation is called a deadlock.
System Model
Deadlock Characterization
Necessary Conditions for Deadlock
A deadlock situation can arise if the following four conditions hold simultaneously in a system.
1. Mutual exclusion. At least one resource must be held in a nonsharable mode; that is, only
one process at a time can use the resource. If another process requests that resource, the
requesting process must be delayed until the resource has been released.
2. Hold and wait. A process must be holding at least one resource and waiting to acquire
additional resources that are currently being held by other processes.
3. No preemption. Resources cannot be preempted; that is, a resource can be released only
voluntarily by the process holding it, after that process has completed its task.
4. Circular wait. A set {P0, P1, ..., Pn} of waiting processes must exist such that P0 is waiting
for a resource held by P1, P1 is waiting for a resource held by P2, ..., Pn−1 is waiting for
a resource held by Pn, and Pn is waiting for a resource held by P0.
Resource-Allocation Graph
• It is a directed graph with a set of vertices V and a set of edges E.
• V is partitioned into two types: P = {P1, P2, ..., Pn}, the set of processes, and
R = {R1, R2, ..., Rm}, the set of resource types.
• A request edge Pi → Rj means process Pi has requested an instance of Rj; an
assignment edge Rj → Pi means an instance of Rj has been allocated to Pi.
• Resource instances (in the example graph):
One instance of resource type R1, two instances of resource type R2, one instance
of resource type R3, and three instances of resource type R4.
Process states (in the example graph):
Process P1 is holding an instance of resource type R2 and is waiting for an instance
of resource type R1.
Resource-Allocation Graph with a Deadlock
Methods for Handling Deadlocks
1. Deadlock Prevention
2. Deadlock Avoidance
3. Deadlock Detection and Recovery
Deadlock Prevention
• Deadlock prevention is a set of methods for ensuring that at least one of the four
necessary conditions cannot hold.
• By ensuring that at least one of these conditions cannot hold, the system never
enters a deadlock state.
2. Denying Hold and Wait
• We must guarantee that whenever a process requests a resource, it does not hold
any other resources.
• One protocol requires each process to request and be allocated all its resources
before it begins execution.
• An alternative protocol requires a process to release all the resources it currently
holds before it can request any additional resources.
• These protocols have two main disadvantages:
First, resource utilization may be low, since many of the resources may be
allocated but unused for a long period.
Second, starvation is possible: a process that must wait for popular resources
may never get all the resources it needs.
3. Denying No Preemption
• If a process is holding some resources and requests another resource that cannot
be immediately allocated to it (that is, the process must wait), then all resources
it currently holds are preempted (i.e., preemption is allowed).
• These resources are implicitly released.
• The process will be restarted only when it can regain its old resources.
4. Denying Circular Wait
• Impose a total ordering of all resource types and require each process to request
resources in increasing order of enumeration.
• Let R = {R1, R2, ..., Rm} be the set of resource types.
• Assign each resource type a unique integer number via a function F.
• For example, if the set of resource types R includes tape drives, disk drives, and
printers:
F(tape drive) = 1,
F(disk drive) = 5,
F(printer) = 12.
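A minimal sketch of the resource-ordering idea, using Python locks as stand-ins for the resource types; the lock objects and helper names are hypothetical:

```python
import threading

# Hypothetical lock objects standing in for the resource types above,
# each tagged with the F-number from the text: F(tape)=1, F(disk)=5, F(printer)=12.
tape, disk, printer = threading.Lock(), threading.Lock(), threading.Lock()
f_number = {id(tape): 1, id(disk): 5, id(printer): 12}

def acquire_in_order(*locks):
    """Acquire the given locks in increasing F-number.
    Because every thread acquires in the same global order,
    no circular wait (and hence no deadlock) can form."""
    ordered = sorted(locks, key=lambda l: f_number[id(l)])
    for lock in ordered:
        lock.acquire()
    return ordered

def release_all(locks):
    for lock in locks:
        lock.release()

# Even if the caller names the printer first, the disk is acquired
# first, since F(disk) = 5 < F(printer) = 12.
held = acquire_in_order(printer, disk)
print([f_number[id(l)] for l in held])   # [5, 12]
release_all(held)
```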
Deadlock Avoidance:
▪ A deadlock state is an unsafe state.
▪ Not all unsafe states are deadlocks.
▪ An unsafe state may lead to a deadlock.
• Two algorithms are used for deadlock avoidance: the resource-allocation-graph
algorithm (when each resource type has a single instance) and the banker's
algorithm (when resource types have multiple instances).
• Claim edge: a claim edge Pi → Rj indicates that process Pi may request resource
Rj at some future time; it is represented by a dashed directed edge.
• When process Pi requests resource Rj, the claim edge Pi → Rj is converted to a
request edge.
• Similarly, when resource Rj is released by Pi, the assignment edge Rj → Pi is
reconverted to a claim edge Pi → Rj.
• The request can be granted only if converting the request edge Pi → Rj to an
assignment edge Rj → Pi does not form a cycle in the graph.
• If no cycle exists, then the allocation of the resource will leave the system in a
safe state.
• If a cycle is found, then the allocation would put the system in an unsafe state,
so Pi must wait.
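The cycle check behind the claim-edge rule can be sketched with a simple depth-first search; the graph below is a hypothetical two-process, two-resource deadlock, not an example from the notes:

```python
def has_cycle(graph):
    """DFS cycle detection on a directed resource-allocation graph.
    graph: dict mapping a node to a list of its successor nodes."""
    WHITE, GRAY, BLACK = 0, 1, 2          # unvisited / on stack / done
    color = {n: WHITE for n in graph}

    def visit(n):
        color[n] = GRAY
        for m in graph.get(n, []):
            if color.get(m, WHITE) == GRAY:
                return True               # back edge found: a cycle exists
            if color.get(m, WHITE) == WHITE and visit(m):
                return True
        color[n] = BLACK
        return False

    return any(color[n] == WHITE and visit(n) for n in graph)

# P1 holds R1 and requests R2; P2 holds R2 and requests R1: a cycle,
# so the request edge must not be converted to an assignment edge.
g = {"P1": ["R2"], "R2": ["P2"], "P2": ["R1"], "R1": ["P1"]}
print(has_cycle(g))   # True
```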
Banker's algorithm
The following data structures are used, for n processes and m resource types:
• Available: indicates the number of available resources of each type.
• Max: if Max[i, j] = k, then process Pi may request at most k instances of resource
type Rj.
• Allocation: if Allocation[i, j] = k, then process Pi is currently allocated k instances
of resource type Rj.
• Need: if Need[i, j] = k, then process Pi may need k more instances of resource
type Rj, where
Need[i, j] = Max[i, j] − Allocation[i, j]
Safety algorithm
1. Initialize Work := Available and Finish[i] := false for i = 1, 2, ..., n.
2. Find an i such that both
i. Finish[i] = false
ii. Need_i <= Work
If no such i exists, go to step 4.
3. Work := Work + Allocation_i;
Finish[i] := true;
go to step 2.
4. If Finish[i] = true for all i, then the system is in a safe state.
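The safety algorithm above can be sketched directly in code; the snapshot data below (5 processes, 3 resource types) is a hypothetical example, not taken from the notes:

```python
def is_safe(available, allocation, need):
    """Banker's safety algorithm: return a safe sequence of process
    indices, or None if the state is unsafe."""
    n, m = len(allocation), len(available)
    work = list(available)                # Work := Available
    finish = [False] * n                  # Finish[i] := false
    sequence = []
    while len(sequence) < n:
        for i in range(n):                # step 2: find a runnable process
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                # step 3: pretend Pi runs to completion and releases everything
                work = [work[j] + allocation[i][j] for j in range(m)]
                finish[i] = True
                sequence.append(i)
                break
        else:
            return None                   # step 4 failed: no safe sequence
    return sequence

# Hypothetical snapshot.
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
maximum    = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
need = [[maximum[i][j] - allocation[i][j] for j in range(3)] for i in range(5)]
print(is_safe([3, 3, 2], allocation, need))   # [1, 3, 0, 2, 4]
```

The returned list is one safe sequence; other orders may also be safe, since step 2 may find several candidates.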
• When process Pi makes a request, the system pretends to grant it (updating
Available, Allocation, and Need) and then applies the safety algorithm to check
whether the resulting state is safe.
• If the new state is safe, the request from process Pi is granted; otherwise, Pi must
wait and the old state is restored.
Deadlock detection
Deadlock Recovery
1. Process Termination
i. Abort all deadlocked processes.
ii. Abort one deadlocked process at a time until the deadlock cycle is eliminated.
After each process is aborted, a deadlock-detection algorithm must be invoked
to determine whether any processes are still deadlocked.
2. Resource Preemption
Preempt some resources from processes and give them to other processes until
the deadlock cycle is broken. Three issues must be addressed:
i. Selecting a victim: deciding which resources and which processes are to be preempted.
ii. Rollback: if we preempt a resource from a process, it cannot continue with
its normal execution, since it is missing a needed resource. We must roll the process back
to some safe state and restart it from that state.
iii. Starvation: how can we guarantee that resources will not always be
preempted from the same process?