
BITS, PILANI – K. K. BIRLA GOA CAMPUS

Operating Systems
by
Dr. Shubhangi
Dept. of CS and IS
OPERATING SYSTEMS

LECTURE 1: CPU SCHEDULING


Basic Concepts
 Multiprogramming in a uniprocessor system for maximum CPU utilization
 Process and thread scheduling
 CPU burst/service time = total processor time needed in one CPU–I/O cycle
 I/O burst/service time = total I/O time needed in one CPU–I/O cycle
Alternating Sequence of CPU and I/O Bursts
[Figure: process execution alternates between CPU bursts and I/O bursts.]
Processes

 Processes can be described as either:
 CPU-bound process – spends more time doing computations; few very long CPU bursts
 I/O-bound process – spends more time doing I/O than computations; many short CPU bursts
Schedulers
 Long-term scheduler (or job scheduler)
 Selects which processes should be brought into the
ready queue
 Controls the degree of multiprogramming
 More processes, smaller percentage of time each
process is executed
 Short-term scheduler (or CPU scheduler)
 Selects which process should be executed next.
Executes most frequently
 Invoked when an event occurs
 Clock interrupts, I/O interrupts, OS calls, Signals

Addition of Medium Term Scheduling
 Part of the swapping function
 Based on the need to manage the degree of
multiprogramming

Schedulers (Cont)
 Short-term scheduler is invoked very frequently (milliseconds) ⇒ must be fast
 CPU scheduler: selects from among the processes in memory that are ready to execute, and allocates the CPU to one of them
 Long-term scheduler is invoked very infrequently (seconds, minutes) ⇒ may be slow
Dispatcher
 Dispatcher module gives control of the CPU
to the process selected by the short-term
scheduler; this involves:
 switching context
 switching to user mode
 jumping to the proper location in the user program
to restart that program

 Dispatch latency – the time it takes for the dispatcher to stop one process and start another running
 CPU scheduling decisions may take place when a process:
1. Switches from the running to the waiting state
2. Switches from the running to the ready state
3. Switches from the waiting to the ready state
4. Terminates
 Scheduling under 1 and 4 is nonpreemptive
 All other scheduling is preemptive
Long Term Scheduler
[Diagram: processes in the NEW state are admitted by the long-term scheduler to the ready queue; the short-term scheduler/dispatcher hands the highest-priority ready process to the CPU; the medium-term scheduler swaps processes between the Ready/Ready-Suspend and Blocked/Blocked-Suspend states; processes waiting for I/O sit in the blocked queue; a process leaves the system when it terminates.]
Characterization of Scheduling Policies
 The selection function: determines which process in the
ready queue is selected next for execution.
 The decision mode: specifies the instants in time at
which the selection function is exercised.
 Nonpreemptive:
 Once a process is in the running state, it will continue until it
terminates or blocks itself for I/O.
 Preemptive:
 Currently running process may be interrupted and moved to
the Ready state by the OS.
 Allows for better service since any one process cannot
monopolize the processor for very long.

Issues with Non-Preemptive
Scheduling
 Shopping Mall Queue
(more waiting time)
 Not recommended for
Real Time Systems

Issues with Preemptive Scheduling

 Race condition
 A process may be preempted while the kernel is changing shared data
 Solution: disable interrupts around such sections
 Note: such sections should be small and occur infrequently
Summary
 3 types of schedulers
 Long term
 Short term (and the dispatcher, dispatch latency)
 Medium term
 4 events on which a process transits between ready and running
 2 types of short-term scheduling – non-preemptive and preemptive – and the issues with each
Summary
 Only one decision point in non-preemptive schedulers
 Completion of the running process
 2 decision points in preemptive schedulers
 Arrival of a new process in the ready queue
 Completion of the CPU burst of the running process
 2 questions to be addressed
 Which process to select next from the ready queue for execution
 For how long to run it, i.e., when to invoke the scheduler again
Scheduling Criteria
 CPU utilization: the percentage of time that the processor is busy.
 Throughput: the number of processes completed per unit time.
 Turnaround time: the interval from the time of submission to the time of completion (execution + waiting for resources/processor).
 Waiting time: the sum of the periods spent waiting in the ready queue.
 Response time: the time from submission of a request until the first response is produced.
 Deadlines: maximize the number of processes meeting their deadlines (important for real-time systems).
Scheduling Algorithm Optimization Criteria

 Max CPU utilization

 Max throughput

 Min turnaround time

 Min waiting time

 Min response time

CPU Scheduling Criteria
 User-oriented
 Response Time
 Elapsed time between the submission of a request until
there is output.

 System-oriented
 Effective and efficient utilization of the processor

 Performance-related
 Quantitative
 Measurable such as response time and throughput
Scheduling Algorithms

First-Come-First-Served (FCFS)
 Selection function: the process that has been waiting the longest in the ready queue – hence the name First-Come First-Served (FCFS).
 Each arriving process joins the ready queue.
 When the current process finishes its execution, the oldest process in the ready queue is selected.
 Decision mode: nonpreemptive.
 A short process may have to wait a very long time before it can execute.
 Favors CPU-bound processes:
 I/O-bound processes have to wait until the CPU-bound process completes.
First-Come, First-Served (FCFS) Scheduling
Process Burst Time
P1 24
P2 3
P3 3
 Suppose that the processes arrive in the order: P1 , P2 , P3
The Gantt Chart for the schedule is:

P1 P2 P3

0 24 27 30
 Waiting time for P1 = 0; P2 = 24; P3 = 27
 Average waiting time: (0 + 24 + 27)/3 = 17
 Turn around time P1 = 24; P2 = 27; P3 = 30
 Average Turn around time = (24 + 27 + 30)/3 = 27

A twist in FCFS Scheduling example
Suppose that the processes arrive in the order
P2 , P3 , P1
 The Gantt chart for the schedule is:

P2 P3 P1

0 3 6 30
 Waiting time for P1 = 6; P2 = 0; P3 = 3
 Average waiting time: (6 + 0 + 3)/3 = 3
 Much better than previous case
 Convoy effect: short processes wait behind a long process
FCFS with I/O
Process Arrival Time Execution Time
A 0 7
B 2 9
C 4 6
D 6 8

 Assume process A goes for I/O for 5 units of time after


every 3 unit execution in CPU
 Assume B goes for I/O for 2 units after 5 units of
execution in CPU
 Process C is a CPU bound process with no I/O
 Process D goes for I/O for 1 unit after 4 units of
execution in CPU.

FCFS Algorithm
 At each scheduling decision point t, do the following:
 Update the ready queue (delete the completed process and insert processes that have arrived up to time t)
 Select the next ready process from the queue having the earliest arrival time
 Compute the Next Decision Point: NDP = t + completion time of the selected process
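A minimal sketch of this decision-point loop in Python (the tuple layout, the function name, and the assumption of a single CPU burst per process are illustrative, not from the slides):

# Minimal FCFS sketch: processes are (name, arrival_time, burst) tuples.
# Assumes one CPU burst per process (no I/O), purely for illustration.
def fcfs(processes):
    pending = sorted(processes, key=lambda p: p[1])   # order by arrival time
    t = 0                                             # current decision point
    schedule = []                                     # (start, end, name)
    for name, arrival, burst in pending:
        t = max(t, arrival)          # CPU may sit idle until the next arrival
        schedule.append((t, t + burst, name))
        t += burst                   # next decision point = t + completion time
    return schedule

# The P1/P2/P3 example from the slides: bursts 24, 3, 3, all arriving at time 0.
for start, end, name in fcfs([("P1", 0, 24), ("P2", 0, 3), ("P3", 0, 3)]):
    print(f"{name}: {start} -> {end}")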
Worked example: FCFS with I/O

Ready-queue entries as (AT, P#, ET), where AT = arrival time and ET = remaining execution time:
(0, A, 3), (2, B, 5), (4, C, 6), (6, D, 4), (8, A, 3), (10, B, 4), (19, D, 4), (26, A, 1)

CPU and I/O breakdown: A: 3 + 5 + 3 + 5 + 1; B: 5 + 2 + 4; C: 6; D: 4 + 1 + 4

Scheduler invocations (t, process selected, next decision point = t + completion time):
0   A   0 + 3 = 3
3   B   3 + 5 = 8
8   C   8 + 6 = 14
14  D   14 + 4 = 18
18  A   18 + 3 = 21
21  B   21 + 4 = 25
25  D   25 + 4 = 29
29  A   29 + 1 = 30

Turnaround time of process A = completion time − arrival time = 30 − 0 = 30.
Waiting time of process A:
From ready queue and schedule: 0 + (18 − 8) + (29 − 26) = 13
From schedule and I/O burst info: 0 + (18 − 3) + (29 − 21) − 10 = 13
From turnaround time: TA − CPU burst − I/O burst = 30 − 7 − 10 = 13
Process   AT   CT   TA = CT−AT   EP   BP   WP = TA−EP−BP
A         0    30   30           7    10   13
B         2    25   23           9    2    12
C         4    14   10           6    0    4
D         6    29   23           8    1    14
Average WP = (13 + 12 + 4 + 14) / 4 = 10.75
(EP = CPU execution time, BP = blocked/I/O time, WP = waiting time)
FCFS Drawbacks
 A process that does not perform any I/O will monopolize the processor (convoy effect).
 Favors CPU-bound processes:
 I/O-bound processes have to wait until the CPU-bound process completes.
 They may have to wait even when their I/O has completed (poor device utilization).
 We could have kept the I/O devices busy by giving a bit more priority to I/O-bound processes.
Shortest-Job-First (SJF) Scheduling
 Associate with each process the length of its next CPU burst. Use these lengths to schedule the process with the shortest time.
 SJF is optimal – it gives the minimum average waiting time for a given set of processes.
 The difficulty is knowing the length of the next CPU request.
 Nonpreemptive policy:
 The process with the shortest expected processing time is selected next.
 Short processes jump ahead of longer processes.
Example of SJF
Process Arrival Time Burst Time
P1 0.0 6
P2 1.0 8
P3 2.0 7
P4 3.0 3
 SJF scheduling chart

P1 P4 P3 P2

0 6 9 16 24

 Average waiting time = (0 + 15 + 7 + 3) / 4 = 6.25

SJF Algorithm
 At each scheduling decision point t, do the following:
 Update the ready queue (delete the completed process and insert processes that have arrived up to time t)
 Select the next ready process from the queue having the smallest CPU burst
 Compute the Next Decision Point: NDP = t + completion time of the selected process
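A minimal nonpreemptive SJF sketch in Python following this loop (the tuple layout and tie-breaking behaviour are illustrative assumptions):

# Nonpreemptive SJF sketch: processes are (name, arrival_time, burst) tuples.
def sjf(processes):
    remaining = sorted(processes, key=lambda p: p[1])     # ordered by arrival
    t, schedule = 0, []
    while remaining:
        ready = [p for p in remaining if p[1] <= t]
        if not ready:                      # CPU idle until the next arrival
            t = remaining[0][1]
            continue
        pick = min(ready, key=lambda p: p[2])              # smallest CPU burst
        remaining.remove(pick)
        schedule.append((t, t + pick[2], pick[0]))
        t += pick[2]                       # NDP = t + completion time
    return schedule

# Example from the slides: average waiting time works out to 6.25.
print(sjf([("P1", 0, 6), ("P2", 1, 8), ("P3", 2, 7), ("P4", 3, 3)]))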
Shortest-Job-First (SJF) Scheduling
 If the estimated time for a process is not correct, the operating system may abort it
 Possibility of starvation for longer processes
Determining Length of Next CPU Burst
 To estimate the length of the next CPU burst
 Generally predicted as an exponential average of the measured lengths of previous CPU bursts:
t_n = actual length of the n-th CPU burst
τ_{n+1} = predicted value for the next CPU burst
α, 0 ≤ α ≤ 1
Define: τ_{n+1} = α·t_n + (1 − α)·τ_n
Examples of Exponential Averaging
 α = 0
 τ_{n+1} = τ_n
 Recent history does not count
 α = 1
 τ_{n+1} = t_n
 Only the actual last CPU burst counts
 If we expand the formula, we get:
τ_{n+1} = α·t_n + (1 − α)·α·t_{n−1} + … + (1 − α)^j·α·t_{n−j} + … + (1 − α)^{n+1}·τ_0
 Since both α and (1 − α) are less than or equal to 1, each successive term has less weight than its predecessor
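A small sketch of the prediction loop, assuming α = 0.5 and an illustrative burst history (the function name and values are not from the slides):

# Exponential averaging: tau_{n+1} = alpha * t_n + (1 - alpha) * tau_n.
def predict_bursts(bursts, tau0, alpha=0.5):
    tau = tau0
    predictions = [tau0]
    for t in bursts:                   # t = measured length of each CPU burst
        tau = alpha * t + (1 - alpha) * tau
        predictions.append(tau)        # prediction for the following burst
    return predictions

# Illustrative history: initial guess tau_0 = 10, measured bursts 6, 4, 6, 4, 13.
print(predict_bursts([6, 4, 6, 4, 13], tau0=10))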
Shortest Job First Drawbacks
 Possibility of starvation for longer processes as long as there is a steady supply of shorter processes.
 Lack of preemption is not suited to a time-sharing environment:
 A CPU-bound process gets lower priority (as it should), but a process doing no I/O could still monopolize the CPU if it is the first one to enter the system.
 SJF implicitly incorporates priorities: shorter jobs are given preference.
Shortest Remaining Time
 Preemptive version of the shortest-process-next policy
 Must estimate processing time

Process Arrival Time Execution Time
A 0 3
B 2 6
C 4 4
D 6 5
E 8 2
SRTF Algorithm
 At each scheduling decision point t, do the following:
 Update the ready queue (delete the completed process / update the remaining execution time of the running process, and insert processes that have arrived up to time t)
 Select the next ready process from the queue having the smallest remaining CPU burst
 Compute the Next Decision Point NDP as whichever of the two events occurs first (i.e., completion of the selected process or arrival of a new process):
NDP = min(t + remaining completion time, next arrival time)
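A minimal preemptive SRTF sketch in Python that re-evaluates at every completion or arrival, run on the A–E table above (the tuple layout and function name are illustrative assumptions):

# SRTF sketch: at each decision point run the ready process with the least
# remaining time until it finishes or a new process arrives, whichever is first.
def srtf(processes):                          # processes: (name, arrival, burst)
    remaining = {name: burst for name, _, burst in processes}
    arrivals = sorted(processes, key=lambda p: p[1])
    t, schedule = 0, []                       # schedule: (start, end, name) slices
    while remaining:
        ready = [n for n, a, _ in arrivals if a <= t and n in remaining]
        if not ready:                         # CPU idle until the next arrival
            t = min(a for n, a, _ in arrivals if n in remaining)
            continue
        name = min(ready, key=lambda n: remaining[n])
        future = [a for n, a, _ in arrivals if a > t and n in remaining]
        ndp = min([t + remaining[name]] + future)   # completion vs. next arrival
        schedule.append((t, ndp, name))
        remaining[name] -= ndp - t
        if remaining[name] == 0:
            del remaining[name]
        t = ndp
    return schedule

print(srtf([("A", 0, 3), ("B", 2, 6), ("C", 4, 4), ("D", 6, 5), ("E", 8, 2)]))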
Priority Scheduling
 A priority number (integer) is associated with each process
 The CPU is allocated to the process with the highest priority (smallest integer ⇒ highest priority)
 Preemptive or nonpreemptive
 SJF is priority scheduling in which the priority is the predicted next CPU burst time
 Problem: starvation – low-priority processes may never execute
 Solution: aging – as time progresses, increase the priority of the process
Decision Mode
 Nonpreemptive
 Once a process is in the running state, it will continue
until it terminates or blocks itself for I/O

 Preemptive
 Currently running process may be interrupted and
moved to the Ready state by the operating system
 Allows for better service since any one process cannot
monopolize the processor for very long

Priority – Preemptive & Non-preemptive Scheduling

Process Arrival Time Execution Time Priority
P1 0 3 5
P2 2 2 3
P3 3 5 2
P4 4 4 4
P5 6 1 1
Non Preemptive priority Algorithm
 At each scheduling decision point t, do the
following:
 Update ready queue (delete completed process
and insert processes arrived till time t)
 Select the next ready process from the queue
having highest priority (smallest integer number)
 Compute Next Decision Point NDP
= t + completion time of selected process

Preemptive priority Algorithm
 At each scheduling decision point t, do the
following:
 Update ready queue (delete completed
process/update execution time of running process
and insert processes arrived till time t)
 Select the next ready process from the queue
having highest priority (smallest integer number)
 Compute the Next Decision Point NDP as whichever of the two events occurs first (i.e., completion of the selected process or arrival of a new process):
NDP = min(t + remaining completion time, next arrival time)
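A minimal preemptive priority sketch (smaller integer = higher priority) run on the P1–P5 table above; the tuple layout and function name are illustrative assumptions:

# Preemptive priority sketch: at each decision point pick the ready process with
# the smallest priority number; a newly arriving process may preempt it.
def preemptive_priority(processes):          # (name, arrival, burst, priority)
    rem = {n: b for n, _, b, _ in processes}
    prio = {n: pr for n, _, _, pr in processes}
    arr = {n: a for n, a, _, _ in processes}
    t, schedule = 0, []
    while rem:
        ready = [n for n in rem if arr[n] <= t]
        if not ready:                        # CPU idle until the next arrival
            t = min(arr[n] for n in rem)
            continue
        name = min(ready, key=lambda n: prio[n])
        future = [arr[n] for n in rem if arr[n] > t]
        ndp = min([t + rem[name]] + future)  # completion vs. next arrival
        schedule.append((t, ndp, name))
        rem[name] -= ndp - t
        if rem[name] == 0:
            del rem[name]
        t = ndp
    return schedule

print(preemptive_priority([("P1", 0, 3, 5), ("P2", 2, 2, 3), ("P3", 3, 5, 2),
                           ("P4", 4, 4, 4), ("P5", 6, 1, 1)]))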
Round Robin (RR)
 Each process gets a small unit of CPU time (time
quantum), usually 10-100 milliseconds. After this
time has elapsed, the process is preempted and
added to the end of the ready queue.
 Uses preemption based on a clock.
 If there are n processes in the ready queue and
the time quantum is q, then each process gets 1/n
of the CPU time in chunks of at most q time units
at once. No process waits more than (n-1)q time
units.
Round Robin (RR)
 Clock interrupt is generated at periodic
intervals
 When an interrupt occurs, the currently
running process is placed in the ready queue
 Next ready job is selected

 Known as time slicing


 Performance
 q large, i.e., longer than the longest running process ⇒ behaves like FIFO
 q small, i.e., shorter than a typical interaction ⇒ q must still be large with respect to the context-switch time, otherwise the switching overhead dominates

Time Quantum and Context Switch Time
Example of RR with Time Quantum = 4
Process Burst Time
P1 24
P2 3
P3 3

 The Gantt chart is:

P1 P2 P3 P1 P1 P1 P1 P1
0 4 7 10 14 18 22 26 30
 Typically, higher average turnaround than SJF, but better
response
RR with I/O, Time Quantum = 4
Process Arrival Time Execution Time
A 0 7
B 2 9
C 4 6
D 6 8

 Assume process A goes for I/O for 5 units of time after


every 3 unit execution in CPU
 Assume B goes for I/O for 2 units after 5 units of
execution in CPU
 Process C is a CPU bound process with no I/O
 Process D goes for I/O for 1 unit after 4 units of
execution in CPU.
Round Robin Algorithm
 At each scheduling decision point t, do the following:
 Update the ready queue (delete the completed process / insert the preempted process with t as its arrival time and its updated execution time, and insert processes that have arrived up to time t)
 Select the next ready process from the queue in FCFS manner
 Compute the Next Decision Point NDP as whichever of the two events occurs first (i.e., time-quantum expiry or completion of the selected process):
NDP = min(t + q, t + remaining completion time)
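A minimal round-robin sketch in Python (no I/O) following this loop; the deque-based ready queue and tuple layout are illustrative assumptions:

from collections import deque

# Round-robin sketch with time quantum q: a preempted process rejoins the tail
# of the ready queue with its remaining execution time.
def round_robin(processes, q):               # processes: (name, burst), arriving at 0
    ready = deque(processes)
    t, schedule = 0, []
    while ready:
        name, rem = ready.popleft()
        run = min(q, rem)                    # NDP = min(t + q, t + completion time)
        schedule.append((t, t + run, name))
        t += run
        if rem - run > 0:
            ready.append((name, rem - run))  # preempted: back of the queue
    return schedule

# Example from the slides with quantum 4: P1(24), P2(3), P3(3).
for start, end, name in round_robin([("P1", 24), ("P2", 3), ("P3", 3)], q=4):
    print(f"{name}: {start} -> {end}")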
RR analysis
 Effective in general-purpose time-sharing systems and transaction-processing systems
 Relative treatment of CPU-bound and I/O-bound processes:
 CPU-bound processes use their complete quantum and immediately rejoin the ready queue, whereas
 I/O-bound processes use less than a quantum of CPU time and then wait in the blocked queue; this results in poor performance and poor device utilization for I/O-bound processes.
Virtual Round Robin
 When a process comes back from I/O:
   if its previous time quantum is still pending, then
     it joins the auxiliary queue with the remaining time quantum
   else
     it joins the back of the ready queue
 The auxiliary queue has higher priority than the ready queue, so the scheduler first selects a process from the auxiliary queue in FIFO manner. If the auxiliary queue is empty, it selects a process from the ready queue in RR manner.
Virtual Round Robin
VRR with I/O, Time Quantum = 4
Process Arrival Time Execution Time
A 0 7
B 2 9
C 4 6
D 6 8

 Assume process A goes for I/O for 5 units of time after


every 3 unit execution in CPU
 Assume B goes for I/O for 2 units after 5 units of
execution in CPU
 Process C is a CPU bound process with no I/O
 Process D goes for I/O for 1 unit after 4 units of
execution in CPU.
Virtual Round Robin Algorithm
 At each scheduling decision point t, do the following:
 Update the queues (delete the completed process; add a process returning from I/O to the auxiliary queue if its remaining time slice > 0, else insert the preempted process with t as its arrival time and its updated execution time; insert processes that have arrived up to time t)
 Select the next ready process from the auxiliary queue in FIFO manner, with the remaining time quantum from its previous time slice
 If the auxiliary queue is empty, select the next ready process from the ready queue in FIFO manner
 Compute the Next Decision Point NDP as whichever of the two events occurs first (i.e., time-quantum expiry or completion of the selected process): NDP = min(t + q, t + remaining completion time)
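A minimal sketch of just the VRR selection rule (the queue contents and the function vrr_select are illustrative; the I/O simulation itself is omitted):

from collections import deque

# VRR selection: the auxiliary queue (processes back from I/O with unused
# quantum) is always served before the ordinary ready queue.
def vrr_select(aux_queue, ready_queue, q):
    if aux_queue:
        name, rem, q_left = aux_queue.popleft()
        return name, rem, q_left             # run with the leftover quantum
    if ready_queue:
        name, rem = ready_queue.popleft()
        return name, rem, q                  # run with a full quantum
    return None                              # nothing ready: CPU idles

# Example: B returned from I/O with 2 units of its quantum unused, so it is
# selected before C even though C is waiting in the ready queue.
aux = deque([("B", 4, 2)])
ready = deque([("C", 6)])
print(vrr_select(aux, ready, q=4))           # -> ('B', 4, 2)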
Multilevel Queue
 Ready queue is partitioned into separate queues:
foreground (interactive)
background (batch)

 Each queue has its own scheduling algorithm


 foreground – RR

 background – FCFS
Multilevel Queue Scheduling
Multilevel Queue
 Scheduling must be done between the queues
 Fixed-priority scheduling (i.e., serve all from foreground, then from background). Possibility of starvation.
 Time slice – each queue gets a certain amount of CPU time which it can schedule amongst its processes, e.g., 80% to foreground in RR and 20% to background in FCFS
Multilevel Queue Algorithm
 At each scheduling decision point t, do the following:
 Update the ready queues (delete the completed process; if the running process was preempted by a higher-priority process, just update its execution time; insert processes that have arrived up to time t)
 Select the next ready process from the highest-priority queue in FIFO manner
 If the highest-priority queue is empty, select the next ready process from the next-priority nonempty ready queue in FIFO manner
 Compute the Next Decision Point NDP as whichever of the three events occurs first (i.e., arrival of a new process in a higher-priority queue at time at, time-quantum expiry, or completion of the selected process): NDP = min(at, t + q, t + remaining completion time)
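A minimal sketch of the fixed-priority selection between two queues, matching the foreground/background split described earlier (the queue names and the select_next function are illustrative):

from collections import deque

# Fixed-priority multilevel queue: serve the foreground (RR) queue whenever it
# is non-empty; only when it is empty serve the background (FCFS) queue.
def select_next(foreground, background):
    if foreground:
        return "foreground", foreground.popleft()
    if background:
        return "background", background.popleft()
    return None, None

fg = deque(["editor", "shell"])              # interactive processes (RR queue)
bg = deque(["batch_job"])                    # batch processes (FCFS queue)
print(select_next(fg, bg))                   # -> ('foreground', 'editor')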
Multilevel Feedback Queue
 A process can move between the various queues;
aging can be implemented this way.
 Multilevel-feedback-queue scheduler defined by
the following parameters:
 number of queues
 scheduling algorithms for each queue
 method used to determine when to upgrade a process
 method used to determine when to demote a process
 method used to determine which queue a process will
enter when that process needs service
Example of Multilevel Feedback Queues
Multilevel Feedback Queue Algorithm
 At each scheduling decision point t, do the following:
 Update the ready queues (delete the completed process; add a process preempted due to quantum expiry to the next lower-priority queue and assign it a new time quantum; if the process was preempted by a higher-priority process, just update its execution time; insert processes that have arrived up to time t)
 Select the next ready process from the highest-priority queue in FIFO manner, with the remaining time quantum from its previous time slice
 If the highest-priority queue is empty, select the next ready process from the next-priority nonempty ready queue in FIFO manner
 Compute the Next Decision Point NDP as whichever of the three events occurs first (i.e., arrival of a new process in a higher-priority queue at time at, time-quantum expiry, or completion of the selected process): NDP = min(at, t + q, t + remaining completion time)
Example of Multilevel Feedback Queue
 Three queues:
 Q0 – RR with time quantum 8 milliseconds
 Q1 – RR time quantum 16 milliseconds
 Q2 – FCFS
 Scheduling
 A new job enters queue Q0 which is served FCFS.
When it gains CPU, job receives 8 milliseconds. If
it does not finish in 8 milliseconds, job is moved to
queue Q1.
 At Q1 job is again served FCFS and receives 16
additional milliseconds. If it still does not complete,
it is preempted and moved to queue Q2.
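A minimal sketch of the demotion rule in this three-queue example (the job representation and the run_one helper are illustrative assumptions):

from collections import deque

# Three-level feedback: Q0 (RR, q = 8), Q1 (RR, q = 16), Q2 (FCFS).
# A job that exhausts its quantum is demoted to the next lower queue.
QUANTA = [8, 16, None]                       # None = FCFS, run to completion

def run_one(queues):
    for level, q in enumerate(QUANTA):
        if queues[level]:
            name, rem = queues[level].popleft()
            run = rem if q is None else min(q, rem)
            rem -= run
            if rem > 0:                      # did not finish: demote one level
                queues[min(level + 1, 2)].append((name, rem))
            return name, run
    return None

queues = [deque([("J1", 30)]), deque(), deque()]
print(run_one(queues))                       # J1 runs 8 ms in Q0, then moves to Q1
print(run_one(queues))                       # J1 runs 16 ms in Q1, then moves to Q2
print(run_one(queues))                       # J1 runs its final 6 ms in Q2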
Fair-Share Scheduling
 User’s application runs as a collection of processes
(threads)
 User is concerned about the performance of the
application
 Need to make scheduling decisions based on process
sets
 Scheduling decisions could then be made that attempt to
give each group similar service.
 Each user is assigned a weight of some sort that defines
that user’s share of system resources as a fraction of the
total usage of those resources.
 Objective: monitor resource usage
Example
Consider three groups (1, 2, 3) containing three, two, and four users respectively; the available CPU cycles will be distributed as follows:
 100% / 3 groups = 33.3% per group
 Group 1: 33.3% / 3 users = 11.1% per user
 Group 2: 33.3% / 2 users = 16.7% per user
 Group 3: 33.3% / 4 users = 8.3% per user
Priority calculation
The following formulas apply for process j in group k:
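The formulas themselves were an image in the original slide. A reconstruction of the fair-share priority computation as given in Stallings' Operating Systems, which these slides appear to follow, is:

CPU_j(i) = CPU_j(i − 1) / 2
GCPU_k(i) = GCPU_k(i − 1) / 2
P_j(i) = Base_j + CPU_j(i) / 2 + GCPU_k(i) / (4 × W_k)

where CPU_j(i) is the processor-utilization measure of process j through interval i, GCPU_k(i) is the utilization of group k through interval i, Base_j is the base priority of process j, and W_k is the weight assigned to group k (0 < W_k ≤ 1, with the group weights summing to 1). Lower values of P_j(i) mean higher priority.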

Example
 Consider process A is in one group and process
B and process C are in a second group, with
each group having a weight of 0.5.
 Assume that all processes are processor bound
and are usually ready to run.
 All processes have a base priority of 60.
 Processor utilization is measured as follows:
 The processor is interrupted 60 times per second;
 During each interrupt, the processor usage field of the
currently running process is incremented, as is the
corresponding group processor field.
 Once per second, priorities are recalculated.
Traditional UNIX Scheduling
 Time sharing interactive environment
 Goal
 Good response time for interactive users while
ensuring that low priority background processes
do not starve

 Multilevel feedback using round robin within each of the priority queues
 If a running process does not block or complete within 1 second, it is preempted
 Priorities are recomputed once per second
Traditional UNIX Scheduling
 Priority is based on process type and execution history
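The formula itself was an image in the original slide. The standard form of the traditional UNIX priority recomputation (per Stallings, which the definitions below match) is:

CPU_j(i) = CPU_j(i − 1) / 2
P_j(i) = Base_j + CPU_j(i) / 2 + nice_j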

where CPU_j(i) = measure of processor utilization by process j through interval i
P_j(i) = priority of process j at the beginning of interval i; lower values equal higher priorities
Base_j = base priority of process j
nice_j = user-controllable adjustment factor
 Base priority divides all processes into fixed bands of
priority levels
 CPU and nice components are restricted to prevent a
process from migrating out of its assigned band
Bands
 Decreasing order of priority
 Swapper
 Block I/O device control
 File manipulation
 Character I/O device control
 User processes

