5a Processor Scheduling

The document discusses various processor scheduling policies and algorithms. It covers preemptive vs non-preemptive scheduling, priorities, scheduling objectives like throughput and waiting time, and algorithms like FIFO, Round Robin, and Shortest Process First. The key aspects are scheduling policies aim to satisfy performance criteria and there are tradeoffs between different algorithms.


Processor Scheduling


 When a system has a choice of processes to execute, it must have a strategy, called a processor scheduling policy, for deciding which processes to run at a given time.
 A scheduling policy should attempt to satisfy certain performance criteria, such as maximizing the number of processes that complete per unit time, minimizing the amount of time each process waits before executing, preventing indefinite postponement of processes, or maximizing CPU utilization.
Preemptive and Non-preemptive Scheduling
 A scheduling discipline is non-preemptive if, once the system has assigned a processor to a process, the system cannot remove the processor from that process.
 Hence under a non-preemptive scheduling discipline, each process, once given a processor, runs to completion or until it voluntarily relinquishes its processor.
 A scheduling discipline is preemptive if the system can remove the processor from the process it is running.
 Hence under a preemptive discipline, the processor may execute a portion of a process's code and then perform a context switch.
Priorities
 Schedulers often use priorities to determine how to schedule and dispatch processes. Priorities may be statically assigned or may change dynamically.
 Static priority
Static priorities remain fixed, so static-priority-based mechanisms are relatively easy to implement and incur relatively low overhead. However, such mechanisms are not responsive to changes in the environment.
 Dynamic priority
Dynamic-priority mechanisms are responsive to change. For example, the system may want to increase the priority of a process holding a key resource needed by a higher-priority process. Dynamic-priority schemes are more complex to implement and have greater overhead than static schemes.
Scheduling Objectives
 A system designer must consider a number of factors when developing a scheduling discipline, such as the type of system and the users' needs.
 Depending on the system, the user and designer might expect the scheduler to:
 Maximize throughput
A scheduling discipline should attempt to service the maximum number of processes per unit time.
 Maximize the number of interactive processes receiving acceptable response times
 Maximize resource utilization
The scheduling mechanism should keep the resources of the system busy.
 Avoid indefinite postponement
A process should not experience an unbounded wait time before or while receiving service.
 Enforce priorities
If the system assigns priorities to processes, the scheduling mechanism should favor higher-priority processes.
 Minimize overhead
Overhead often results in wasted resources, but a certain portion of system resources effectively invested as overhead can greatly improve overall system performance.
 Ensure predictability
By minimizing the statistical variance in process response times, a system can guarantee that processes will receive predictable service levels.
 Despite the differences in goals among systems, many scheduling disciplines exhibit similar properties:
 Fairness
A scheduling discipline is fair if all similar processes are treated the same and no process suffers indefinite postponement due to scheduling issues.
 Predictability
A given process should always run in about the same amount of time under similar system loads.
 Scalability
System performance should degrade gracefully (i.e., it should not immediately collapse) under heavy load conditions.
Scheduling Criteria
 Different CPU scheduling algorithms have different properties, and the choice of a particular algorithm may favor one class of processes over another.
 In choosing which algorithm to use in a particular situation, we must consider the properties of the various algorithms.
 Many criteria have been suggested for comparing CPU scheduling algorithms. The criteria include the following:
 CPU utilization
We want to keep the CPU as busy as possible. Conceptually, CPU utilization can range from 0 to 100%. In a real system, it should range from about 40% (for a lightly loaded system) to 90% (for a heavily loaded system).
 Throughput
If the CPU is busy executing processes, then work is being done. One measure of work is the number of processes that are completed per unit time, called throughput.
 Turnaround time
From the point of view of a particular process, the important criterion is how long it takes to execute the process. The interval from the time of submission of a process to the time of completion is the turnaround time. Turnaround time is the sum of the periods spent waiting to get into memory, waiting in the ready queue, executing on the CPU, and doing I/O.
 Waiting time
The CPU scheduling algorithm does not affect the amount of time during which a process executes or does I/O; it affects only the amount of time that a process spends waiting in the ready queue. Waiting time is the sum of the periods spent waiting in the ready queue.
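Putting these definitions together for a process that does no I/O and needs no time to get into memory: turnaround time is completion time minus submission time, and waiting time is turnaround time minus the CPU burst. A tiny sketch with made-up numbers:

```python
# Hypothetical numbers: a process submitted at time 0 with a 5-unit
# CPU burst, no I/O, finishing at time 12.
arrival, burst, completion = 0, 5, 12

turnaround = completion - arrival   # submission to completion: 12
waiting = turnaround - burst        # time spent in the ready queue: 7
print(turnaround, waiting)
```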
 Response time
In an interactive system, turnaround time may not be the best criterion. Often, a process can produce some output fairly early and can continue computing new results while previous results are being output to the user. Thus, another measure is the time from the submission of a request until the first response is produced. This measure, called response time, is the time it takes to start responding, not the time it takes to output the response.
 It is desirable to maximize CPU utilization and throughput and to minimize turnaround time, waiting time, and response time.
 Researchers have suggested that for interactive systems, it is more important to minimize the variance in response time than to minimize the average response time.
 A system with reasonable and predictable response time may be considered more desirable than a system that is faster on average but highly variable.
Scheduling Algorithms
 We will now discuss the scheduling algorithms that determine at runtime which process runs next.
 These algorithms decide when and for how long each process runs; they make choices about preemptibility, priorities, running time, time to completion, fairness, and other process characteristics.
First In First Out (FIFO) Scheduling
 FIFO is the simplest scheduling algorithm.
 Processes are dispatched according to their arrival time at the ready queue.
 FIFO is non-preemptive: once a process has the processor, the process runs to completion.
 FIFO is fair in that it schedules processes according to their arrival times, so all processes are treated equally.
 It can also be viewed as somewhat unfair, because long processes make short processes wait, and unimportant processes make important processes wait.
 FIFO is not useful for scheduling interactive processes because it cannot guarantee short response times.
FIFO Scheduling

Ready queue: P3 P2 P1 → Processor → Completion
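The dispatch behavior above can be sketched as a small simulation. This is an illustrative sketch, not code from the slides; the process names, arrival times, and burst times are made up.

```python
def fifo_schedule(processes):
    """Return {name: waiting_time} under FIFO scheduling.
    processes: list of (name, arrival_time, burst_time)."""
    clock = 0
    waiting = {}
    for name, arrival, burst in sorted(processes, key=lambda p: p[1]):
        start = max(clock, arrival)      # CPU may be idle until arrival
        waiting[name] = start - arrival  # time spent in the ready queue
        clock = start + burst            # non-preemptive: run to completion
    return waiting

# A long process arriving first makes the short ones wait:
print(fifo_schedule([("P1", 0, 24), ("P2", 1, 3), ("P3", 2, 3)]))
# → {'P1': 0, 'P2': 23, 'P3': 25}
```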
Round Robin (RR) Scheduling
 In Round Robin scheduling, processes are dispatched FIFO but are given a limited amount of processor time, called a time slice or quantum.
 If a process does not complete before its quantum expires, the system preempts it and gives the processor to the next waiting process.
 The system then places the preempted process at the back of the ready queue.
RR Scheduling

Ready queue: P1 P3 P2 → Processor → Completion
(on preemption, the running process returns to the tail of the ready queue)

 In the diagram above, process P1 is dispatched to a processor, where it either executes to completion, in which case it exits the system, or runs until its time slice expires, at which point it is preempted and placed at the tail of the ready queue.
 Round Robin is effective for interactive environments in which the system needs to guarantee reasonable response times.
 The system can minimize preemption overhead through efficient context-switching mechanisms and by keeping waiting processes in main memory.
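The quantum-expiry behavior can be sketched as follows (a minimal sketch with hypothetical burst times; all processes are assumed to arrive at time 0, and context-switch overhead is ignored):

```python
from collections import deque

def rr_schedule(processes, quantum):
    """Return {name: completion_time} under Round Robin.
    processes: list of (name, burst_time), all ready at time 0."""
    ready = deque(processes)
    clock = 0
    completion = {}
    while ready:
        name, remaining = ready.popleft()
        run = min(quantum, remaining)
        clock += run
        if remaining > run:
            ready.append((name, remaining - run))  # preempted: back of queue
        else:
            completion[name] = clock               # finished: exits system
    return completion

print(rr_schedule([("P1", 5), ("P2", 3), ("P3", 1)], quantum=2))
# → {'P3': 5, 'P2': 8, 'P1': 9}
```

Note how the short process P3 finishes early even though it entered the queue last: with a small quantum, no process can monopolize the processor.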
Selfish Round Robin (SRR) Scheduling
 Kleinrock discusses a variant of Round Robin called Selfish Round Robin (SRR) that uses aging to gradually increase process priorities over time.
 In this scheme, as each process enters the system, it first resides in a holding queue, where it ages until its priority reaches the level of the processes in the active queue.
 At that point, it is placed in the active queue and is scheduled round robin with the other processes in that queue.
 The scheduler dispatches only processes in the active queue, meaning that older processes are favored over those that have just entered the system.
Shortest Process First (SPF) Scheduling
 Shortest Process First (SPF) is a non-preemptive scheduling discipline in which the scheduler selects the waiting process with the smallest estimated run-time-to-completion.
 SPF reduces average waiting time over FIFO.
 However, waiting times have a larger variance (they are more unpredictable) than under FIFO, especially for large processes.
 SPF favors short processes at the expense of longer ones.
 SPF selects processes for service in a manner that ensures the next one will complete and leave the system as soon as possible.
 This tends to reduce the number of waiting processes, and also the number of small processes waiting behind large processes.
 As a result, SPF can minimize the average waiting time of processes as they pass through the system.
 A key problem with SPF is that it requires precise knowledge of how long a process will run, and this information is not usually available.
 SPF, like FIFO, is non-preemptive and thus is not suitable for environments in which reasonable response times must be guaranteed.
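The average-waiting-time advantage of SPF over FIFO can be illustrated with a small sketch (hypothetical burst times, all processes assumed ready at time 0, and perfect runtime estimates, which, as noted above, are rarely available in practice):

```python
def spf_schedule(processes):
    """Non-preemptive SPF: dispatch in order of shortest burst.
    processes: list of (name, burst_time), all ready at time 0.
    Returns {name: waiting_time}."""
    clock = 0
    waiting = {}
    for name, burst in sorted(processes, key=lambda p: p[1]):
        waiting[name] = clock    # waited while earlier picks ran
        clock += burst
    return waiting

jobs = [("P1", 6), ("P2", 8), ("P3", 7), ("P4", 3)]
w = spf_schedule(jobs)
print(w, sum(w.values()) / len(w))
# → {'P4': 0, 'P1': 3, 'P3': 9, 'P2': 16} 7.0
# FIFO in submission order would give (0 + 6 + 14 + 21) / 4 = 10.25
```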
Priority Scheduling
 SPF is a special case of the general priority scheduling algorithm.
 A priority is associated with each process, and the CPU is allocated to the process with the highest priority.
 Equal-priority processes are scheduled in FCFS order.
 An SPF algorithm is simply a priority algorithm in which the priority is the inverse of the (predicted) next CPU burst: the larger the CPU burst, the lower the priority, and vice versa.
 Priority scheduling can be either preemptive or non-preemptive.
 When a process arrives at the ready queue, its priority is compared with the priority of the currently running process.
 A preemptive priority scheduling algorithm will preempt the CPU if the priority of the newly arrived process is higher than that of the currently running process.
 A non-preemptive priority scheduling algorithm will simply put the new process at the head of the ready queue.
 A major problem with priority scheduling algorithms is indefinite blocking, or starvation.
 A process that is ready to run but waiting for the CPU can be considered blocked. A priority scheduling algorithm can leave some low-priority processes waiting indefinitely.
 In a heavily loaded system, a steady stream of higher-priority processes can prevent a low-priority process from ever getting the CPU.
 A solution to the problem of indefinite blockage of low-priority processes is aging.
 Aging involves gradually increasing the priority of processes that wait in the system for a long time.
 For example, if priorities range from 10 (low) to 100 (high), we could increase the priority of a waiting process every 5 minutes. Eventually, even a process with an initially low priority will have the highest priority in the system and will be executed.
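The aging idea can be sketched as a periodic priority bump. The 10–100 range follows the example above; the increment, process names, and interval are illustrative assumptions:

```python
def age_priorities(ready, increment=1, ceiling=100):
    """One aging step (hypothetical scheme): raise each waiting
    process's priority, capped at the ceiling (10 = low, 100 = high)."""
    return {name: min(prio + increment, ceiling)
            for name, prio in ready.items()}

# Each aging interval (e.g. every 5 minutes), waiting processes age;
# a low-priority process eventually reaches the top priority.
queue = {"P_low": 10, "P_high": 95}
for _ in range(90):
    queue = age_priorities(queue)
print(queue)   # → {'P_low': 100, 'P_high': 100}
```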
Highest Response Ratio Next (HRRN) Scheduling
 Brinch Hansen developed the Highest Response Ratio Next (HRRN) policy, which corrects some of the weaknesses of SPF, particularly its excessive bias against longer processes and excessive favoritism toward short processes.
 HRRN is a non-preemptive scheduling discipline in which each process's priority is a function not only of its service time but also of its time spent waiting for service.
 Once a process obtains the processor, it runs to completion.
 HRRN calculates dynamic priorities according to the formula:

Priority = (time waiting + service time) / service time
 Because the service time appears in the denominator, shorter processes receive preference.
 However, because the waiting time appears in the numerator, longer processes that have been waiting will also receive favorable treatment.
 This technique is similar to aging and prevents the scheduler from indefinitely postponing processes.
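The response-ratio formula above can be sketched directly (the process data here are hypothetical):

```python
def hrrn_next(ready, clock):
    """Pick the waiting process with the highest response ratio.
    ready: list of (name, arrival_time, service_time)."""
    def ratio(p):
        name, arrival, service = p
        # Priority = (time waiting + service time) / service time
        return ((clock - arrival) + service) / service

    return max(ready, key=ratio)

# At time 10: a short newcomer vs. a long process that has waited.
ready = [("short", 9, 2), ("long", 0, 10)]
print(hrrn_next(ready, clock=10))
# "short" has ratio (1 + 2)/2 = 1.5; "long" has (10 + 10)/10 = 2.0,
# so the long, long-waiting process is chosen, as the text describes.
```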
Shortest Remaining Time (SRT) Scheduling
 Shortest Remaining Time (SRT) scheduling is the preemptive counterpart of SPF; it attempts to increase throughput by servicing small processes.
 SRT was effective for job-processing systems that received a stream of incoming jobs, but it is no longer useful in most of today's operating systems.
 In SRT, the scheduler selects the process with the smallest estimated run-time-to-completion.
 In SPF, once a process begins executing, it runs to completion.
 In SRT, a newly arrived process with a shorter estimated runtime preempts a running process with a longer run-time-to-completion.
 Again, SRT requires estimates of future process behavior to be effective.
 The algorithm must maintain information about the elapsed service time of the running process and perform occasional preemptions.
 Newly arrived processes with short runtimes execute almost immediately.
 Longer processes, however, have an even longer mean waiting time and variance of waiting times than in SPF.
 These factors contribute to a larger overhead in SRT than in SPF.
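The preemption-on-arrival behavior can be sketched as follows (hypothetical processes with integer arrival and burst times; the runtime estimates are assumed perfect):

```python
import heapq

def srt_schedule(processes):
    """Sketch of SRT: always run the ready process with the smallest
    remaining time, re-evaluating at each arrival.
    processes: list of (name, arrival_time, burst_time).
    Returns {name: completion_time}."""
    events = sorted(processes, key=lambda p: (p[1], p[0]))
    ready = []                      # heap of (remaining_time, name)
    clock, i, completion = 0, 0, {}
    while i < len(events) or ready:
        while i < len(events) and events[i][1] <= clock:
            name, _, burst = events[i]
            heapq.heappush(ready, (burst, name))
            i += 1
        if not ready:
            clock = events[i][1]    # CPU idle until the next arrival
            continue
        remaining, name = heapq.heappop(ready)
        # Run until the next arrival (a possible preemption) or completion.
        next_arrival = events[i][1] if i < len(events) else float("inf")
        run = min(remaining, next_arrival - clock)
        clock += run
        if remaining - run == 0:
            completion[name] = clock
        else:
            heapq.heappush(ready, (remaining - run, name))  # preempted
    return completion

print(srt_schedule([("P1", 0, 8), ("P2", 1, 4), ("P3", 2, 2)]))
# → {'P3': 4, 'P2': 7, 'P1': 14}: each shorter arrival preempts P1.
```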
Multilevel Feedback Queues
 When a process obtains a processor, the scheduler cannot determine the precise amount of processor time the process will require.
 I/O-bound processes normally use the processor only briefly before generating an I/O request.
 Processor-bound processes might use the processor for a long time if the system makes it available on a non-preemptible basis.
 A scheduling algorithm should typically favor short processes, favor I/O-bound processes to achieve good interactive response times, and determine the nature of a process as quickly as possible so that it can schedule the process accordingly.
 Multilevel feedback queues help accomplish these goals.
 A new process enters the queuing network at the tail of the highest queue.
 The process progresses through that queue in FIFO order until it obtains a processor.
 If the process completes its execution, or if it relinquishes the processor to wait for an I/O completion, it exits the queuing network.
 If a process's quantum expires before the process voluntarily relinquishes the processor, the system places the process at the tail of the next lower-level queue.
 As long as the process uses the full quantum provided at each level, it continues to move to the tail of the next lower queue.
 Usually there is a bottom-level queue through which the process circulates round robin until it completes.
 The process that next gets the processor is the one at the head of the highest non-empty queue in the multilevel feedback queue.
 In this system, a process in a lower queue can suffer indefinite postponement if a higher queue always contains at least one process.
 In many multilevel feedback schemes, the scheduler increases the quantum size as the process moves to each lower-level queue.
 Thus, the longer a process has been in the queuing network, the larger the quantum it receives each time it obtains the processor.
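The demotion scheme above can be sketched as follows. This is a simplified illustration with hypothetical processes and quanta: all processes arrive at time 0, I/O is ignored, quanta grow per level as described, and the bottom queue recirculates round robin.

```python
from collections import deque

def mlfq_run(processes, quanta):
    """Sketch of multilevel feedback queues.
    processes: list of (name, burst_time); quanta: quantum per level.
    Returns {name: completion_time}."""
    queues = [deque() for _ in quanta]
    for p in processes:
        queues[0].append(p)          # new processes enter the top queue
    clock, completion = 0, {}
    while any(queues):
        # Dispatch from the head of the highest non-empty queue.
        level = next(i for i, q in enumerate(queues) if q)
        name, remaining = queues[level].popleft()
        run = min(quanta[level], remaining)
        clock += run
        if remaining - run == 0:
            completion[name] = clock  # finished: exits the network
        else:
            # Used its full quantum: demote one level (or recirculate
            # round robin at the bottom level).
            nxt = min(level + 1, len(quanta) - 1)
            queues[nxt].append((name, remaining - run))
    return completion

print(mlfq_run([("io_ish", 1), ("cpu_bound", 10)], quanta=[2, 4, 8]))
# The brief process finishes in the top queue; the processor-bound
# process sinks through the levels, receiving ever larger quanta.
```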
The End