Unit 2-Part2

CPU scheduling decisions may occur when a process switches between running, waiting, and ready states or terminates. Scheduling that occurs when a process switches from running to waiting or terminates is non-preemptive; all other scheduling is preemptive. Common CPU scheduling algorithms include first come first serve (FCFS), shortest job first (SJF), round robin, and priority-based scheduling.


Schedulers

! CPU scheduling decisions may take place when a process:


1. Switches from running to waiting state
2. Switches from running to ready state
3. Switches from waiting to ready
4. Terminates
! Scheduling techniques under “1” and “4” are non-preemptive
! All other scheduling techniques are preemptive
" Consider access to shared data
" Consider preemption while in kernel mode
" Consider interrupts occurring during crucial OS activities

Dept. of Computer Science and Engineering 28


Preemptive vs Non-preemptive CPU Scheduling
1. In preemptive scheduling, the CPU is allocated to a process for a limited time.
! In non-preemptive scheduling, the CPU is allocated to a process till it terminates or
switches to the waiting state.
2. The executing process in preemptive scheduling is interrupted in the middle of
execution when a higher-priority one arrives, whereas
! the executing process in non-preemptive scheduling is not interrupted in the middle of
execution and runs till its execution completes.
3. In preemptive scheduling, there is the overhead of switching the process between the ready
and running states and of maintaining the ready queue.
! Non-preemptive scheduling has no overhead of switching the process from
the running state to the ready state.



Preemptive vs Non-preemptive CPU Scheduling
4. In preemptive scheduling, if high-priority processes frequently arrive in the ready queue,
a low-priority process may have to wait for a long time and may starve.
! In non-preemptive scheduling, if the CPU is allocated to a process with a large
burst time, processes with small burst times may have to starve.
5. Preemptive scheduling attains flexibility by allowing critical processes to access the CPU
as soon as they arrive in the ready queue, no matter which process is executing currently.
! Non-preemptive scheduling is rigid: even if a critical process enters the ready queue,
the process running on the CPU is not disturbed.
6. Preemptive scheduling has to maintain the integrity of shared data, so it carries an
associated cost, which is not the case with non-preemptive scheduling.



Preemptive vs Non-preemptive CPU Scheduling
Parameter | Preemptive Scheduling | Non-preemptive Scheduling
Basic | Resources (CPU cycles) are allocated to a process for a limited time. | Once resources (CPU cycles) are allocated to a process, the process holds them till it completes its burst time or switches to the waiting state.
Interrupt | A process can be interrupted in between. | A process cannot be interrupted until it terminates itself or its time is up.
Starvation | If a high-priority process frequently arrives in the ready queue, a low-priority process may starve. | If a process with a long burst time is running on the CPU, later-arriving processes with less CPU burst time may starve.
Overhead | Has the overhead of scheduling the processes. | No scheduling overhead.
Flexibility | Flexible | Rigid
Cost | Cost associated | No cost associated
CPU Utilization | High | Low
Examples | Round Robin, Shortest Remaining Time First | First Come First Serve, Shortest Job First
Dispatcher
! Dispatcher module of the OS gives control of the CPU to the process selected by the
short-term scheduler, involving:
" Switching context
" Switching to user mode
" Jumping to the proper location in the user program to restart that program

! Dispatch latency – time it takes for the dispatcher to stop one process and start another
running



CPU–I/O Burst Cycle
! The success of CPU scheduling depends on an observed
property of processes:
! Process execution consists of a cycle of CPU execution
(CPU Burst) and I/O wait (I/O Burst).
! Processes alternate between these two states.
! Process execution begins with a CPU burst.
! That is followed by an I/O burst, which is followed by
another CPU burst, then another I/O burst, and so on.
! Eventually, the final CPU burst ends with a system request to
terminate execution.
! The durations of CPU bursts have been measured extensively.
! Generally vary from process to process and from computer
to computer.



Different Scheduling Algorithms
" First Come First Serve (FCFS): Simplest scheduling algorithm that schedules according to arrival
times of processes.
! First come first serve scheduling algorithm states that the process that requests the CPU first is
allocated the CPU first.
! It is implemented by using the FIFO queue.
! When a process enters the ready queue, its PCB is linked onto the tail of the queue.
! When the CPU is free, it is allocated to the process at the head of the queue.
! The running process is then removed from the queue.
! FCFS is a non-preemptive scheduling algorithm.
! Note: First come first serve suffers from convoy effect.
" Shortest Job First (SJF): Process which have the shortest burst time are scheduled first.
! If two processes have the same bust time then FCFS is used to break the tie.
! It is a non-preemptive scheduling algorithm.



Different Scheduling Algorithms
" Longest Job First (LJF): It is similar to SJF scheduling algorithm.
" But, in this scheduling algorithm, we give priority to the process having the longest burst
time.
" This is non-preemptive in nature i.e., when any process starts executing, can’t be
interrupted before complete execution.
" Shortest Remaining Time First (SRTF): It is preemptive mode of SJF algorithm in which
jobs are schedule according to shortest remaining time.
" Longest Remaining Time First (LRTF): It is preemptive mode of LJF algorithm in which we
give priority to the process having largest burst time remaining.
" Priority Based scheduling: In this scheduling, processes are scheduled according to their
priorities, i.e., highest priority process is scheduled first.
" If priorities of two processes match, then schedule according to arrival time.
" Here, starvation of process is possible.



Different Scheduling Algorithms
" Round Robin Scheduling: Each process is assigned a fixed time(Time Quantum/Time Slice) in cyclic way.
! It is designed especially for the time-sharing system.
! The ready queue is treated as a circular queue.
! The CPU scheduler goes around the ready queue, allocating the CPU to each process for a time interval of up
to one time quantum.
! To implement Round Robin scheduling, we keep the ready queue as a FIFO queue of processes.
! New processes are added to the tail of the ready queue.
! The CPU scheduler picks the first process from the ready queue, sets a timer to interrupt after one time quantum,
and dispatches the process.
! One of two things will then happen.
4 The process may have a CPU burst of less than one time quantum.
– In this case, the process itself will release the CPU voluntarily.
– The scheduler will then proceed to the next process in the ready queue.
4 Otherwise, if the CPU burst of the currently running process is longer than one time quantum, the timer will go
off and cause an interrupt to the operating system.
4 A context switch will be executed, and the process will be put at the tail of the ready queue.
4 The CPU scheduler will then select the next process in the ready queue.
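The mechanism described above can be sketched in Python (an illustration, not part of the original slides); it assumes the simple case where all processes arrive at time 0, with process indices standing in for PCBs:

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate Round Robin for processes that all arrive at time 0.
    Returns a dict mapping process index -> completion time."""
    queue = deque(range(len(bursts)))    # ready queue kept as a FIFO queue
    remaining = list(bursts)             # remaining burst time per process
    clock = 0
    completion = {}
    while queue:
        i = queue.popleft()              # pick the first process from the queue
        run = min(quantum, remaining[i]) # run for at most one time quantum
        clock += run
        remaining[i] -= run
        if remaining[i] == 0:
            completion[i] = clock        # burst finished: CPU released voluntarily
        else:
            queue.append(i)              # quantum expired: back to the tail
    return completion
```

With the slide's later example (bursts 24, 3, 3, quantum 4), P2 completes at 7, P3 at 10, and P1 at 30.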



Different Scheduling Algorithms
! Highest Response Ratio Next (HRRN): In this scheduling, the process with the highest response ratio is
scheduled next.
! This algorithm avoids starvation.
4 Response Ratio = (Waiting Time + Burst time) / Burst time
! Multilevel Queue Scheduling: According to the priority of process, processes are placed in the
different queues.
! Generally, high-priority processes are placed in the top-level queue.
! Only after completion of the processes in the top-level queue are the lower-level queued processes
scheduled.
! It can suffer from starvation.
! Multi level Feedback Queue Scheduling: It allows the process to move in between queues.
! The idea is to separate processes according to the characteristics of their CPU bursts.
! If a process uses too much CPU time, it is moved to a lower-priority queue.
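A minimal sketch of the feedback idea (my illustration, with hypothetical quantum values q0 and q1 for two queues): a process that uses its full quantum at the top level is demoted to the lower-priority queue, as described above.

```python
from collections import deque

def mlfq(bursts, q0=4, q1=8):
    """Two-level feedback queue sketch: level 0 is round robin with a short
    quantum; a process that exhausts it is demoted to level 1 (longer quantum).
    All processes assumed to arrive at time 0; returns completion times."""
    high, low = deque(range(len(bursts))), deque()
    remaining = list(bursts)
    clock, completion = 0, {}
    while high or low:
        # Level 0 has absolute priority over level 1.
        if high:
            i, q, demote_to = high.popleft(), q0, low
        else:
            i, q, demote_to = low.popleft(), q1, low
        run = min(q, remaining[i])
        clock += run
        remaining[i] -= run
        if remaining[i] == 0:
            completion[i] = clock
        else:
            demote_to.append(i)   # used its full quantum: move down (or stay low)
    return completion
```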



Scheduling Criteria
! CPU utilization – keep the CPU as busy as possible (i.e., Max CPU utilization)
! Throughput – number of processes that complete their execution per time unit (i.e., Max throughput)
! Turnaround time – amount of time to execute a particular process (i.e., Min turnaround time)
! Waiting time – amount of time a process has been waiting in the ready queue (i.e., Min waiting time)
" The sum of the time spent in the ready queue during the life of the process.
! Response time – amount of time it takes from when a request was submitted until the first response is
produced, not output (for time-sharing environment) (i.e., Min response time)
" The time from first submission of the process until the first running
! Burst time: It is also called as execution time or running time.
" Burst time is the amount of time required by a process for executing on CPU.
" Burst time of a process can not be known in advance before executing the process.
" It can be known only after the process has executed.
! Service Time: The amount of CPU time that a process will need before it either finishes or voluntarily yields
the CPU, such as to wait for input/output.
" Service time is also used to mean the amount of time after which a process can start execution:
" it is the summation of the burst times of the previous processes (processes that came before)
Different Time With Respect To A Process
! Arrival Time: Time at which the process arrives in the ready queue.
! Completion Time: Time at which process completes its execution.
! Burst Time: Time required by a process for CPU execution.
! Turn Around Time: Time Difference between completion time and arrival time.
! Turn Around Time = Completion Time – Arrival Time
! Turn Around time = Burst time + Waiting time
! Waiting Time(W.T): Time Difference between turn around time and burst time.
! Waiting Time = Turn Around Time – Burst Time
! Waiting Time = Service Time - Arrival Time
! Response Time: Response time is the amount of time after which a process gets the CPU for the first
time after entering the ready queue.
! Response Time = Time at which process first gets the CPU – Arrival time
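The formulas above can be checked with a small helper (a sketch; the argument names are illustrative):

```python
def process_times(arrival, burst, completion, first_cpu):
    """Derive the standard per-process metrics from the formulas above."""
    turnaround = completion - arrival   # TAT = Completion Time - Arrival Time
    waiting = turnaround - burst        # WT  = Turn Around Time - Burst Time
    response = first_cpu - arrival      # RT  = first dispatch time - Arrival Time
    return turnaround, waiting, response
```

For example, a process arriving at 3 with burst 5 that first gets the CPU at 11 and completes at 16 has TAT 13, WT 8, RT 8.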



Objectives of Scheduling Algorithm
! Max CPU utilization: Keep CPU as busy as possible
! Fair allocation of CPU.
! Max throughput: Number of processes that complete their execution per time unit
! Min turnaround time: Time taken by a process to finish execution
! Min waiting time: Time a process waits in ready queue
! Min response time: Time when a process produces first response



First- Come, First-Served (FCFS) Scheduling
! Given n processes with their burst times, the task is to find average waiting time and average turn
around time using FCFS scheduling algorithm.
! First in, first out (FIFO), also known as first come, first served (FCFS), is the simplest scheduling
algorithm.
! FIFO simply queues processes in the order that they arrive in the ready queue.
! In this, the process that comes first is executed first, and the next process starts only after the previous
one has fully executed.
! Here we are considering that the arrival time for all processes is 0.



First- Come, First-Served (FCFS) Scheduling
! Completion Time: Time at which process completes its execution.
! Turn Around Time: Time Difference between completion time and arrival time.
Turn Around Time = Completion Time – Arrival Time
Turn Around time = Burst time + Waiting time
! Waiting Time(W.T): Time Difference between turn around time and burst time.
Waiting Time = Turn Around Time – Burst Time



First- Come, First-Served (FCFS) Scheduling
Algorithm:
1. Input the processes along with their burst time (bt).
2. Find waiting time (wt) for all processes.
3. As the first process that comes need not wait, the waiting time for process 1 will be 0, i.e., wt[0] = 0.
4. Find waiting time for all other processes i.e. for process i -> wt[i] = bt[i-1] + wt[i-1] .
5. Find turnaround time = waiting_time + burst_time for all processes.
6. Find average waiting time = total_waiting_time / no_of_processes.
7. Similarly, find average turnaround time = total_turn_around_time / no_of_processes.
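The seven steps above could be coded as follows (a sketch, assuming all processes arrive at time 0 as stated earlier):

```python
def fcfs(burst):
    """FCFS waiting and turnaround times for processes arriving at time 0."""
    n = len(burst)
    wait = [0] * n                                        # step 3: wt[0] = 0
    for i in range(1, n):
        wait[i] = wait[i - 1] + burst[i - 1]              # step 4: wt[i] = bt[i-1] + wt[i-1]
    turnaround = [wait[i] + burst[i] for i in range(n)]   # step 5
    avg_wait = sum(wait) / n                              # step 6
    avg_turnaround = sum(turnaround) / n                  # step 7
    return wait, turnaround, avg_wait, avg_turnaround
```

With the bursts 24, 3, 3 used later in the slides, the waiting times are 0, 24, 27 and the average is 17.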



First- Come, First-Served (FCFS) Scheduling
! Note: In this example, we have assumed arrival times as 0, so turn around time and completion time
are same.



First- Come, First-Served (FCFS) Scheduling
• Service Time: Service time means amount of time
after which a process can start execution.
• It is summation of burst time of previous
processes (Processes that came before)
• To find the waiting time: the time taken by all the processes
before the current process starts
• (i.e., the burst times of all previous processes) – the arrival
time of the current process
wait_time[i] = (bt[0] + bt[1] + … + bt[i-1]) – arrival_time[i]

Process Wait Time = Service Time − Arrival Time

P0 → 0 − 0 = 0
P1 → 5 − 1 = 4
P2 → 8 − 2 = 6
P3 → 16 − 3 = 13
Average Wait Time: (0 + 4 + 6 + 13) / 4 = 5.75



First- Come, First-Served (FCFS) Scheduling
Process Burst Time
P1 24
P2 3
P3 3
! Suppose that the processes arrive in the order: P1 , P2 , P3

! The Gantt Chart for the schedule is:

P1 P2 P3
0 24 27 30

! Waiting time for P1 = 0; P2 = 24; P3 = 27


! Average waiting time: (0 + 24 + 27)/3 = 17



FCFS Scheduling (Cont.)
Suppose that the processes arrive in the order:
P2 , P3 , P1
! The Gantt chart for the schedule is:

P2 P3 P1
0 3 6 30

! Waiting time for P1 = 6; P2 = 0; P3 = 3


! Average waiting time: (6 + 0 + 3)/3 = 3
! Much better than previous case
! Convoy effect - short process behind long process
! Consider one CPU-bound and many I/O-bound processes



Convoy Effect in Operating Systems
! Convoy Effect: a phenomenon associated with the First Come First Serve (FCFS) algorithm, in which
the whole operating system slows down due to a few slow processes.

! FCFS algorithm is non-preemptive in nature, that is, once CPU time has been allocated to a process,
other processes can get CPU time only after the current process has finished.
! This property of FCFS scheduling leads to the situation called Convoy Effect.
! To avoid Convoy Effect, preemptive scheduling algorithms like Round Robin Scheduling can be used
! as the smaller processes don’t have to wait much for CPU time
4 making their execution faster and leading to less resources sitting idle



Shortest-Job-First (SJF) Scheduling

! Associate with each process the length of its next CPU burst
! Use these lengths to schedule the process with the shortest time
! SJF is optimal – gives minimum average waiting time for a given set of processes
! The difficulty is knowing the length of the next CPU request
! Could ask the user
! Two schemes:
! Non-preemptive
! Preemptive – Also known as the Shortest-Remaining-Time-First (SRTF).
! Non-preemptive SJF is optimal if all the processes are ready simultaneously– gives
minimum average waiting time for a given set of processes.
! SRTF is optimal if the processes may arrive at different times



SJF (non-preemptive)
Process Arrival Time Burst Time
P1 0.0 7
P2 2.0 4
P3 4.0 1
P4 5.0 4
! SJF (non-preemptive)

P1

0 7
! At time 0, P1 is the only process, so it gets the CPU and runs to completion



SJF (non-preemptive)
Process Arrival Time Burst Time
P1 0.0 7
P2 2.0 4
P3 4.0 1
P4 5.0 4
" Once P1 has completed the queue now holds P2, P3 andP4

P1 P3 P2 P4

0 3 7 8 12 16

" P3 gets the CPU first since it is the shortest. P2 then P4 get the CPU in turn (based on arrival
time)
" Avg waiting time = (0+6+3+7)/4 = 4



Example of SJF
Process Arrival Time Burst Time
P1 0.0 6
P2 2.0 8
P3 4.0 7
P4 5.0 3
! SJF scheduling chart

P4 P1 P3 P2
0 3 9 16 24

! Average waiting time = (3 + 16 + 9 + 0) / 4 = 7



Shortest-Job-First (SJF) Scheduling
" The SJF scheduling algorithm is provably optimal, in that it gives the minimum average waiting time for
a given set of processes.
" Moving a short process before a long one decreases the waiting time of the short process more than it
increases the waiting time of the long process.
" Consequently, the average waiting time decreases.
" The real difficulty with the SJF algorithm is knowing the length of the next CPU request.
" For long-term (job) scheduling in a batch system, we can use the process time limit that a user specifies
when he submits the job.
" In this situation, users are motivated to estimate the process time limit accurately,
" since a lower value may mean faster response but too low a value will cause a time-limit-exceeded
error and require resubmission.
" SJF scheduling is used frequently in long-term scheduling.



Shortest-Job-First (SJF) Scheduling
" Although the SJF algorithm is optimal, it cannot be implemented at the level of short-term CPU
scheduling.
" With short-term scheduling, there is no way to know the length of the next CPU burst.
" One approach to this problem is to try to approximate SJF scheduling.
" We may not know the length of the next CPU burst, but we may be able to predict its value.
" We expect that the next CPU burst will be similar in length to the previous ones.
" The next CPU burst is generally predicted as an exponential average of the measured lengths of
previous CPU bursts.



Determining Length of Next CPU Burst
Exponential Average:
! Can only estimate the length – should be similar to the previous one
! Then pick process with shortest predicted next CPU burst
! Can be done by using the length of previous CPU bursts, using exponential averaging

1. tₙ = actual length of the nth CPU burst

2. τₙ₊₁ = predicted value for the next CPU burst
3. α, 0 ≤ α ≤ 1
4. Define: τₙ₊₁ = α tₙ + (1 − α) τₙ
! Commonly, α set to ½
! Preemptive version called shortest-remaining-time-first
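The exponential average can be applied iteratively; this sketch assumes an initial guess τ₀ supplied by the caller:

```python
def predict_bursts(actual, tau0, alpha=0.5):
    """Exponential averaging: tau_{n+1} = alpha * t_n + (1 - alpha) * tau_n.
    Returns the successive predictions tau_1, tau_2, ..."""
    tau, predictions = tau0, []
    for t in actual:
        tau = alpha * t + (1 - alpha) * tau  # blend last actual burst with old guess
        predictions.append(tau)
    return predictions
```

With α = 1/2, τ₀ = 10, and actual bursts 6, 4, 6, 4, 13, 13, 13, the successive guesses are 8, 6, 6, 5, 9, 11, 12.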



Prediction of the Length of the Next CPU Burst
" Problem with SJF: It is very difficult to know exactly the length of the next CPU burst.
" Idea: Based on the observations in the recent past, we can try to predict.
" Exponential averaging:
" nth CPU burst = tn;
" the average of all past bursts tn, using a weighting factor 0<=a<=1,
" the next CPU burst is: tn+1 = a tn + (1- a) tn.

[Figure: predicted (τᵢ) vs actual (tᵢ) CPU burst lengths plotted over time, for α = 1/2 and initial guess τ₀ = 10]

CPU burst (tᵢ): 6 4 6 4 13 13 13 …
"guess" (τᵢ): 10 8 6 6 5 9 11 12 …
Examples of Exponential Averaging
" α = 0
" τₙ₊₁ = τₙ
" Recent history does not count
" α = 1
" τₙ₊₁ = tₙ
" Only the actual last CPU burst counts
" If we expand the formula, we get:
τₙ₊₁ = α tₙ + (1 − α) α tₙ₋₁ + …
+ (1 − α)ʲ α tₙ₋ⱼ + …
+ (1 − α)ⁿ⁺¹ τ₀

" Since both α and (1 − α) are less than or equal to 1, each successive term has less weight than
its predecessor



Shortest-Remaining-Job-First (SRJF) Scheduling
" The SJF algorithm can be either preemptive or non-preemptive.
" The choice arises when a new process arrives at the ready queue while a previous process
is still executing.
" The next CPU burst of the newly arrived process may be shorter than what is left of the
currently executing process.
" A preemptive SJF algorithm will preempt the currently executing process, whereas
" A non-preemptive SJF algorithm will allow the currently running process to finish its
CPU burst.
" Preemptive SJF scheduling is also called as shortest-remaining-time-first scheduling.



Example of Shortest-remaining-time-first
" Now we add the concepts of varying arrival times and preemption to the analysis
Process Arrival Time Burst Time
P1 0 8
P2 1 4
P3 2 9
P4 3 5
" Preemptive SJF Gantt Chart

P1 P2 P4 P1 P3
0 1 5 10 17 26

" Average waiting time = [(10-1)+(1-1)+(17-2)+5-3)]/4 = 26/4 = 6.5 msec



Shortest Remaining Time First (SRTF)
Process | Arrival Time | CPU Burst Time | CT | TAT = CT − AT | WT = TAT − BT | RT
P1 | 0 | 7 | 7 | 7 | 0 | 0
P2 | 3 | 5 | 16 | 13 | 8 | 8
P3 | 5 | 4 | 11 | 6 | 2 | 2
Avg. CT = 11.33 | Avg. TAT = 8.66 | Avg. WT = 3.33 | Avg. RT = 3.33

Ready queue order: P1 (t=0), P2 (t=3), P3 (t=5)

Gantt chart: P1 | P3 | P2
0 7 11 16



Priority Scheduling
! A priority number (integer) is associated with each process
! The CPU is allocated to the process with the highest priority (smallest integer ≡ highest
priority)
! Preemptive
! Non-preemptive
! SJF is priority scheduling where priority is the inverse of predicted next CPU burst time
! Problem ≡ Starvation – low-priority processes may never execute
! Solution ≡ Aging – as time progresses, increase the priority of the process
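Non-preemptive priority scheduling for simultaneously arriving processes can be sketched as below (my illustration; smallest number = highest priority, matching the convention above):

```python
def priority_schedule(burst, priority):
    """Non-preemptive priority scheduling, all processes arriving at time 0;
    the smallest priority number is the highest priority.
    Returns per-process waiting times."""
    order = sorted(range(len(burst)), key=lambda i: priority[i])
    clock, wait = 0, [0] * len(burst)
    for i in order:
        wait[i] = clock       # time spent in the ready queue before dispatch
        clock += burst[i]     # run the dispatched process to completion
    return wait
```

With the example on a later slide (bursts 10, 1, 2, 1, 5 and priorities 3, 1, 4, 5, 2) the waiting times are 6, 0, 16, 18, 1, averaging 8.2.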



Real-Time CPU Scheduling
" Can present obvious challenges
" Soft real-time systems – no guarantee as to
when critical real-time process will be
scheduled
" Hard real-time systems – task must be
serviced by its deadline
" Two types of latencies affect performance
1. Interrupt latency – time from arrival of
interrupt to start of routine that services
interrupt
2. Dispatch latency – time for scheduler to take
current process off CPU and switch to another
(context switching)



Real-Time CPU Scheduling (Cont.)
" Dispatch latency – time for scheduler to take
current process off CPU and switch to another
" Conflict phase of dispatch latency:
1. Preemption of any process running in kernel
mode
2. Release by low-priority processes of the resources
needed by the high-priority process

[Figure: dispatch latency timeline – the response interval from event to response spans interrupt processing, dispatch latency (conflict phase, then dispatch), and finally real-time process execution]


Priority-based Scheduling
" For real-time scheduling, scheduler must support preemptive, priority-based scheduling
" But only guarantees soft real-time
4 i.e., no guarantee as to when the critical real-time process will be scheduled
" For hard real-time: the scheduler must provide the ability to meet the deadlines
" Processes have new characteristics: periodic ones require CPU at constant intervals
" Has processing time t, deadline d, period p
" 0≤t≤d≤p
" Rate of periodic task is 1/p



Example of Priority Scheduling

Process Burst Time Priority

P1 10 3
P2 1 1
P3 2 4
P4 1 5
P5 5 2
! Priority scheduling Gantt Chart:

P2 P5 P1 P3 P4
0 1 6 16 18 19

! Average waiting time = (0+1+6+16+18)/5 = 8.2 msec



Round Robin (RR)
! Each process gets a small unit of CPU time (time quantum q), usually 10-100 milliseconds.
! After this time has elapsed, the process is preempted and added to the end of the ready queue.
! If there are n processes in the ready queue and the time quantum is q,
! Then each process gets 1/n of the CPU time in chunks of at most q time units at once.
! No process waits more than (n-1)q time units.
! Timer interrupts every quantum to schedule next process
! Performance
! q large ⇒ FIFO
! q small ⇒ q must be large with respect to context-switch time, otherwise overhead is too high



Example of RR with Time Quantum = 4
Process Burst Time
P1 24
P2 3
P3 3
! The Gantt chart is:

P1 P2 P3 P1 P1 P1 P1 P1
0 4 7 10 14 18 22 26 30

! Typically, higher average turnaround than SJF, but better response
! q should be large compared to context-switch time
! q usually 10 ms to 100 ms, context switch < 10 μsec



Round Robin, Time quantum = 4

Process | Arrival Time | CPU Burst Time | CT | TAT = CT − AT | WT = TAT − BT | RT
P1 | 0 | 7 | 11 | 11 | 4 | 0
P2 | 3 | 5 | 16 | 13 | 8 | 1
P3 | 5 | 4 | 15 | 10 | 6 | 6
Avg. CT = 14 | Avg. TAT = 11.33 | Avg. WT = 6 | Avg. RT = 2.33

Ready queue order: P1 (t=0), P2 (t=3), P1 (t=4), P3 (t=5), P2 (t=8)

Gantt chart: P1 | P2 | P1 | P3 | P2
0 4 8 11 15 16



Highest Response Ratio Next
§ Criteria – Response Ratio
§ Mode – Non-Preemptive
§ A Response Ratio is calculated for each of the available jobs, and
§ The Job with the highest response ratio is given priority over the others.
§ Response Ratio = (W + S)/S
§ Here, W is the waiting time of the process so far and S is the Burst time of the process.
§ Performance of HRRN –
§ Shorter processes are favoured.
§ Aging without service increases the ratio, so longer jobs can eventually get past shorter jobs.
Advantages
• Its performance is better than SJF Scheduling.
• It limits the waiting time of longer jobs and also supports shorter jobs.

Disadvantages
• It can't be implemented practically.
• This is because the burst time of all the processes can not be known in advance.
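Selecting the next job by response ratio can be sketched as below (an illustration; it assumes the waiting times so far are already known):

```python
def hrrn_pick(waiting, burst):
    """Return the index of the job with the highest response ratio (W + S) / S."""
    ratios = [(w + s) / s for w, s in zip(waiting, burst)]
    return ratios.index(max(ratios))
```

For example, a job that has waited 6 with burst 3 (ratio 3.0) is picked over one that has waited 2 with burst 4 (ratio 1.5), even though the latter is shorter.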



Highest Response Ratio Next
§ Response Ratio = (W + S)/S
§ Here, W is the waiting time of the process so far and S is the Burst time of the process

Average Waiting time = ([0-0]+[4-2]+[9-4]+[15-6]+[12-8])/5

= (0+2+5+9+4)/5 = 4



Multilevel Queue Scheduling
! Scheduling algorithms have been created for situations in which processes are easily classified into different
groups.
! A multi-level queue scheduling algorithm partitions the ready queue into several separate queues.
! Ready queue is partitioned into separate queues, e.g:
! foreground (interactive)
! background (batch)
! The processes are permanently assigned to one queue, generally based on some property of the process,
! such as memory size, process priority, or process type.
! Each queue has its own scheduling algorithm.
! A common division is made between foreground processes and background processes.
! These two types of processes have different response-time requirements, and so might have different
scheduling needs.
! For example: separate queues might be used for foreground and background processes.
4 The foreground queue might be scheduled by Round Robin algorithm,
– while the background queue is scheduled by an FCFS algorithm
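The fixed-priority relationship between the two queues can be sketched as a selection function (my illustration with hypothetical queue contents; the RR/FCFS behavior inside each queue is omitted):

```python
from collections import deque

def pick_next(foreground, background):
    """Fixed-priority scheduling between two queues: the foreground queue has
    absolute priority over the background queue."""
    if foreground:                 # serve all foreground work first
        return foreground.popleft()
    if background:                 # background runs only when foreground is empty
        return background.popleft()
    return None                    # nothing ready
```

A background job is dispatched only once the foreground queue is empty, which is why this design can starve background processes.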



Multilevel Queue Scheduling
" In addition, there must be scheduling among the queues, which is commonly implemented as fixed-
priority preemptive scheduling.
" The foreground queue may have absolute priority over the background queue.
" Process permanently in a given queue
" Each queue has its own scheduling algorithm:
" foreground – RR
" background – FCFS
" Scheduling must be done between the queues:
" Fixed priority scheduling (i.e., serve all from foreground then from background).
4 Possibility of starvation.
" Time slice – each queue gets a certain amount of CPU time which it can schedule amongst its
processes; i.e., 80% to foreground in RR and 20% to background in FCFS



Multilevel Queue Scheduling
! Let us consider an example of a multilevel queue-
scheduling algorithm with five queues:
! System Processes
! Interactive Processes
! Interactive Editing Processes
! Batch Processes
! Student Processes

Each queue has absolute priority over lower-priority queues.


For example, no process in the batch queue, could run unless the queues for system processes, interactive
processes, and interactive editing processes were all empty.
If an interactive editing process entered the ready queue while a batch process was running, the batch process will
be preempted.



Multiple-Processor Scheduling
" CPU scheduling more complex when multiple CPUs are available
" Homogeneous processors within a multiprocessor
" Asymmetric multiprocessing – only one processor accesses the system data structures,
" Alleviating/reducing the need for data sharing
" Symmetric multiprocessing (SMP) –
" Each processor is self-scheduling, where all processors use a common ready queue, or
" Each processor has its own private queue of ready processes
" Most common
" Processor affinity – process has affinity (liking) for the processor on which it is currently running
" soft affinity
" hard affinity
" Variations including processor sets



NUMA and CPU Scheduling
" Non-Uniform Memory Access (NUMA) is
a computer memory design used
in multiprocessing,
" where the memory access time depends on the
memory location relative to the processor. CPU CPU

" Under NUMA, a processor can access its own local


slo
wa
memory faster than non-local memory (memory local fast access cce fast access
ss
to another processor or memory shared between
processors). memory memory
" The benefits of NUMA are limited to particular
workloads, notably on servers where the data is often
associated strongly with certain tasks or users computer

" Note that memory-placement algorithms can also


consider process affinity



Multiple-Processor Scheduling – Load Balancing
" In Symmetric multiprocessing (SMP), the scheduler should efficiently manage all CPUs loaded
" Load balancing attempts to keep workload evenly distributed
" Push migration – periodic task checks load on each processor, and pushes task from overloaded
CPU to other CPUs
" Pull migration – idle processors pulls waiting task from busy processor

Multicore Processors
§ Recent trend to place multiple processor cores on same physical chip
§ Faster and consumes less power
§ Multiple threads per core also growing
§ Takes advantage of a memory stall to make progress on another thread while the memory retrieval
happens

