
CPU Scheduling

Course Instructor: Nausheen Shoaib


Basic Concepts
 Maximum CPU utilization obtained with multiprogramming
 CPU–I/O Burst Cycle – process execution consists of a cycle of CPU execution and I/O wait
 CPU burst followed by I/O burst
 CPU burst distribution is of main concern
Histogram of CPU-burst Times
Preemptive vs. Non-Preemptive Scheduling
 Non-preemptive: once the CPU is allocated to a process, the process keeps it until it terminates or switches to the waiting state
 Preemptive: the CPU can be taken away from a running process, e.g. when a higher-priority process becomes ready or the time quantum expires
Dispatcher
 Dispatcher module gives control of the CPU to the process selected by the short-term scheduler; this involves:
◦ switching context
◦ switching to user mode
◦ jumping to the proper location in the user program to restart that program
 Dispatch latency – time it takes for the dispatcher to stop one process and start another running
Scheduling Criteria
 CPU utilization – keep the CPU as busy as possible
 Throughput – number of processes that complete their execution per time unit
 Turnaround time – amount of time to execute a particular process
◦ Turnaround time (TAT) = Completion time – Arrival time
 Waiting time – amount of time a process has been waiting in the ready queue
 Response time – amount of time from when a request was submitted until the first response is produced, not the final output (for time-sharing environments)
(A small worked example of these formulas follows this slide.)
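To make the criteria concrete, here is a minimal sketch (not from the slides) of how turnaround and waiting time fall out of the definitions above; the helper name compute_metrics and the sample numbers are illustrative, and waiting time is taken as turnaround time minus the CPU burst (valid when the process does no I/O).

# Turnaround time (TAT) = completion time - arrival time;
# waiting time = TAT - CPU burst time (time in the system minus time executing).

def compute_metrics(arrival, burst, completion):
    tat = completion - arrival
    waiting = tat - burst
    return tat, waiting

# Hypothetical process: arrives at t = 0, needs 3 time units, finishes at t = 7.
print(compute_metrics(arrival=0, burst=3, completion=7))   # -> (7, 4)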
Scheduling Algorithm Optimization Criteria

 Max CPU utilization
 Max throughput
 Min turnaround time
 Min waiting time
 Min response time
First-Come, First-Served (FCFS) Scheduling

Process   Burst Time
P1        24
P2        3
P3        3
 Suppose that the processes arrive in the order: P1, P2, P3
 The Gantt chart for the schedule is:

P1   P2   P3

0   24   27   30

 Waiting time for P1 = 0; P2 = 24; P3 = 27
 Average waiting time: (0 + 24 + 27)/3 = 17
Example FCFS
 Covered in class
FCFS Scheduling (Cont.)

 Suppose that the processes arrive in the order: P2, P3, P1
 The Gantt chart for the schedule is:

P2   P3   P1

0   3   6   30

 Waiting time for P1 = 6; P2 = 0; P3 = 3
 Average waiting time: (6 + 0 + 3)/3 = 3
 Much better than the previous case (average of 17)
 Convoy effect – short processes stuck waiting behind a long process
◦ Consider one CPU-bound process and many I/O-bound processes
(A minimal FCFS sketch in code follows this slide.)
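To make the two FCFS results above concrete, here is a minimal Python sketch (not part of the original slides); the function name fcfs is illustrative, and all processes are assumed to arrive at time 0.

# Minimal FCFS sketch: processes run in arrival order, no preemption.
# Each process is (name, burst); all are assumed to arrive at time 0.

def fcfs(processes):
    clock = 0
    waiting = {}
    for name, burst in processes:
        waiting[name] = clock          # time spent waiting before this burst starts
        clock += burst
    return waiting

for order in ([("P1", 24), ("P2", 3), ("P3", 3)],
              [("P2", 3), ("P3", 3), ("P1", 24)]):
    w = fcfs(order)
    print(w, "average:", sum(w.values()) / len(w))
# First order -> average 17; second order -> average 3, as in the slides.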
Shortest-Job-First (SJF) Scheduling

 Associate with each process the length of its next CPU burst
◦ Use these lengths to schedule the process with the shortest time
 SJF is optimal – gives minimum average waiting time for a given set of processes
◦ The difficulty is knowing the length of the next CPU request
◦ Could ask the user
Example of SJF
Process   Arrival Time   Burst Time
P1        0.0            6
P2        2.0            8
P3        4.0            7
P4        5.0            3
 SJF scheduling chart (the schedule below treats all processes as available at time 0):

P4   P1   P3   P2

0   3   9   16   24

 Average waiting time = (3 + 16 + 9 + 0) / 4 = 7
(A sketch in code follows the next slide.)
Example of SJF
 Covered in class
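Here is a minimal non-preemptive SJF sketch (not from the slides) that reproduces the average above; the function name sjf is illustrative and, as in the chart, arrival times are ignored so every process is treated as available at time 0.

# Minimal non-preemptive SJF sketch: all processes assumed available at time 0,
# so they simply run in order of increasing burst length.

def sjf(processes):                          # processes: list of (name, burst)
    clock, waiting = 0, {}
    for name, burst in sorted(processes, key=lambda p: p[1]):
        waiting[name] = clock
        clock += burst
    return waiting

w = sjf([("P1", 6), ("P2", 8), ("P3", 7), ("P4", 3)])
print(w, "average:", sum(w.values()) / len(w))   # average 7, as in the slide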
Determining Length of Next CPU Burst
 Can only estimate the length – it should be similar to the previous ones
◦ Then pick the process with the shortest predicted next CPU burst
 Can be done by using the lengths of previous CPU bursts, with exponential averaging:
   τ(n+1) = α · t(n) + (1 − α) · τ(n),   0 ≤ α ≤ 1
   where t(n) = length of the nth (most recent) CPU burst and τ(n) = the previous prediction (past history)
 If α = 0, then τ(n+1) = τ(n): recent history has no effect
 If α = 1, then τ(n+1) = t(n): only the most recent CPU burst matters
(A short numeric sketch of this averaging follows this slide.)
 Preemptive version called shortest-remaining-time-first
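A minimal numeric sketch of the exponential-averaging formula above; the initial guess of 10, the observed burst lengths, and α = 0.5 are illustrative choices, not values from the slides.

# Exponential averaging for predicting the next CPU burst:
# tau_next = alpha * t_actual + (1 - alpha) * tau_previous

def predict_next(tau_prev, t_actual, alpha=0.5):
    return alpha * t_actual + (1 - alpha) * tau_prev

# Illustrative numbers: initial guess tau = 10, observed bursts 6, 4, 6, 4.
tau = 10
for t in [6, 4, 6, 4]:
    tau = predict_next(tau, t)
    print(f"observed {t}, new prediction {tau}")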
Shortest-remaining-time-first

 Now we add the concepts of varying arrival times and preemption to the analysis:
 the process with the smallest amount of time remaining until completion is selected to execute
(A small preemptive sketch in code follows the next slide.)
Example of SRTF
 Covered in class
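Here is a minimal SRTF sketch (not the in-class example); the function name srtf and the process data are illustrative, and the simulation advances one time unit at a time, so ties are broken arbitrarily.

# Minimal SRTF (preemptive SJF) sketch: at every time unit the process with the
# smallest remaining burst among those that have arrived is run; a new arrival
# with a shorter remaining time therefore preempts the running process.

def srtf(processes):                        # processes: dict name -> (arrival, burst)
    remaining = {n: b for n, (a, b) in processes.items()}
    completion, clock = {}, 0
    while remaining:
        ready = [n for n in remaining if processes[n][0] <= clock]
        if not ready:                       # CPU idle until the next arrival
            clock += 1
            continue
        current = min(ready, key=lambda n: remaining[n])
        remaining[current] -= 1
        clock += 1
        if remaining[current] == 0:
            completion[current] = clock
            del remaining[current]
    # waiting time = turnaround time - burst time
    return {n: completion[n] - a - b for n, (a, b) in processes.items()}

w = srtf({"P1": (0, 8), "P2": (1, 4), "P3": (2, 9), "P4": (3, 5)})
print(w, "average:", sum(w.values()) / len(w))   # average 6.5 with this data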
Priority Scheduling
 A priority number (integer) is associated with each process
 The CPU is allocated to the process with the highest priority (smallest integer = highest priority)
◦ Preemptive
◦ Non-preemptive
 Problem: Starvation – low-priority processes may never execute
 Solution: Aging – as time progresses, increase the priority of the process
Example of Priority Scheduling

Process   Burst Time   Priority
P1        10           3
P2        1            1
P3        2            4
P4        1            5
P5        5            2
 Priority scheduling Gantt chart:

P2   P5   P1   P3   P4

0   1   6   16   18   19

 Average waiting time = (0 + 1 + 6 + 16 + 18)/5 = 8.2 msec
(A sketch in code follows the next slide.)


Example Priority
 Covered in class
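A minimal non-preemptive priority sketch (not from the slides) that reproduces the 8.2 msec average; the function name priority_schedule is illustrative, and all processes are assumed to arrive at time 0.

# Non-preemptive priority scheduling: all processes present at time 0,
# run in order of priority number (smallest number = highest priority).

def priority_schedule(processes):            # processes: list of (name, burst, priority)
    clock, waiting = 0, {}
    for name, burst, _prio in sorted(processes, key=lambda p: p[2]):
        waiting[name] = clock
        clock += burst
    return waiting

procs = [("P1", 10, 3), ("P2", 1, 1), ("P3", 2, 4), ("P4", 1, 5), ("P5", 5, 2)]
w = priority_schedule(procs)
print(w, "average:", sum(w.values()) / len(w))   # average 8.2, as in the slide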
Round Robin (RR)
 Each process gets a small unit of CPU time (time
quantum q), usually 10-100 milliseconds. After this
time has elapsed, the process is preempted and added to
the end of the ready queue.
 If there are n processes in the ready queue and the time
quantum is q, then each process gets 1/n of the CPU time
in chunks of at most q time units at once. No process
waits more than (n-1)q time units.
 Timer interrupts every quantum to schedule next process
 Performance
◦ q large ⇒ RR behaves like FIFO (FCFS)
◦ q small ⇒ too many context switches; q must be large with respect to the context-switch time, otherwise overhead is too high
Example of RR with Time Quantum = 4

Process   Burst Time
P1        24
P2        3
P3        3
 The Gantt chart is:

P1   P2   P3   P1   P1   P1   P1   P1

0   4   7   10   14   18   22   26   30

 Completion times: P1 = 30; P2 = 7; P3 = 10
 Assuming all processes arrive at time 0: TAT for P1 = 30 − 0 = 30; P2 = 7 − 0 = 7; P3 = 10 − 0 = 10
 Avg. TAT = (30 + 7 + 10)/3 ≈ 15.67
(A round-robin sketch in code follows the next slide.)
Example RR

 Covered in class
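A minimal round-robin sketch (not from the slides) that reproduces the completion times above; the function name round_robin is illustrative, and all processes are assumed to arrive at time 0.

# Round robin with time quantum q: the ready queue is cycled, each process runs
# for at most q units, and unfinished work goes to the back of the queue.
from collections import deque

def round_robin(processes, q):               # processes: list of (name, burst)
    queue = deque(processes)
    clock, completion = 0, {}
    while queue:
        name, remaining = queue.popleft()
        ran = min(q, remaining)
        clock += ran
        if remaining - ran == 0:
            completion[name] = clock
        else:
            queue.append((name, remaining - ran))
    return completion

print(round_robin([("P1", 24), ("P2", 3), ("P3", 3)], q=4))
# -> {'P2': 7, 'P3': 10, 'P1': 30}, matching the Gantt chart above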
Multilevel Queue
 Each process is assigned to one queue based on memory size, priority, or process type
 The ready queue is partitioned into separate queues, e.g.:
◦ foreground (interactive)
◦ background (batch)
 Each queue has its own scheduling algorithm:
◦ foreground – RR
◦ background – FCFS
 Scheduling must also be done between the queues:
◦ Fixed-priority scheduling (i.e., serve all from foreground, then from background); possibility of starvation
◦ Time slice – each queue gets a certain amount of CPU time which it can schedule amongst its processes; e.g., 80% to foreground in RR and 20% to background in FCFS
(A small sketch of fixed-priority dispatch between queues follows this slide.)
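The sketch below illustrates the fixed-priority option between two queues (foreground is always served first); the function pick_next, the quantum of 4, and the process names are illustrative, not part of the slides.

# Fixed-priority dispatch between a foreground (RR) and a background (FCFS) queue:
# the background queue only runs when the foreground queue is empty.
from collections import deque

def pick_next(foreground, background, quantum=4):
    """Return (queue_name, process, time_slice) for the next dispatch."""
    if foreground:                         # fixed priority: foreground first
        return "foreground", foreground.popleft(), quantum
    if background:                         # background runs FCFS, to completion
        return "background", background.popleft(), None
    return None                            # both queues empty -> CPU idle

fg = deque(["editor", "shell"])            # hypothetical interactive processes
bg = deque(["batch_report"])               # hypothetical batch process
print(pick_next(fg, bg))                   # -> ('foreground', 'editor', 4)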
Multilevel Queue Scheduling
Multilevel Feedback Queue

 The idea is to separate processes according to CPU burst time
 If a process uses too much CPU time, it is moved to a lower-priority queue
 If a process waits too long (aging) in a low-priority queue, it is moved to a higher-priority queue
 A multilevel-feedback-queue scheduler is defined by the following parameters:
◦ number of queues
◦ scheduling algorithm for each queue
◦ method used to determine when to upgrade a process
◦ method used to determine when to demote a process
◦ method used to determine which queue a process will enter when that process needs service
Example of Multilevel Feedback Queue
 Three queues:
◦ Q0 – RR with time quantum 8 milliseconds
◦ Q1 – RR with time quantum 16 milliseconds
◦ Q2 – FCFS
 Scheduling
◦ A new job enters queue Q0, which is served FCFS
 When it gains the CPU, the job receives 8 milliseconds
 If it does not finish in 8 milliseconds, the job is moved to queue Q1
◦ At Q1 the job is again served FCFS and receives 16 additional milliseconds
 If it still does not complete, it is preempted and moved to queue Q2
(A sketch of this three-queue policy in code follows this slide.)
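A minimal sketch of the three-queue policy above, assuming all jobs are present at time 0 and that queues are only re-checked at quantum boundaries; the function name mlfq, the job names, and the burst lengths are illustrative.

# Q0: 8 ms quantum; unfinished jobs drop to Q1 (16 more ms);
# still unfinished jobs drop to Q2 (FCFS, run to completion).
from collections import deque

def mlfq(jobs):                                  # jobs: list of (name, burst_ms)
    q0 = deque(jobs)                             # new jobs enter Q0
    q1, q2 = deque(), deque()
    clock, finish = 0, {}

    def run(job, limit):
        nonlocal clock
        name, remaining = job
        ran = remaining if limit is None else min(limit, remaining)
        clock += ran
        if remaining - ran == 0:
            finish[name] = clock
            return None
        return (name, remaining - ran)           # leftover work to be demoted

    while q0 or q1 or q2:
        if q0:
            leftover = run(q0.popleft(), 8)      # Q0: 8 ms quantum
            if leftover: q1.append(leftover)     # demote to Q1
        elif q1:
            leftover = run(q1.popleft(), 16)     # Q1: 16 additional ms
            if leftover: q2.append(leftover)     # demote to Q2
        else:
            run(q2.popleft(), None)              # Q2: FCFS, runs to completion
    return finish

print(mlfq([("A", 5), ("B", 30), ("C", 12)]))    # -> {'A': 5, 'C': 41, 'B': 47}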
Multiple-Processor Scheduling

 CPU scheduling more complex when multiple CPUs are available
 Homogeneous processors within a multiprocessor
 Asymmetric multiprocessing – only one processor accesses the system data structures, alleviating the need for data sharing
 Symmetric multiprocessing (SMP) – each processor is self-scheduling; all processes in a common ready queue, or each has its own private queue of ready processes
Multiple-Processor Scheduling
 Processor affinity – a process has an affinity for the processor on which it is currently running
 Soft affinity: the operating system attempts to keep a process running on the same processor but does not guarantee that it will do so
 Hard affinity: a process can specify a subset of processors on which it may run
(A short Linux example of setting hard affinity follows.)
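As an illustration of hard affinity, the sketch below pins the calling process to CPUs 0 and 1 using Python's os.sched_setaffinity; this call is Linux-only, and the CPU set {0, 1} assumes the machine has at least two CPUs.

# Hard affinity in practice (Linux-only): restrict the calling process to a
# subset of CPUs via the sched_setaffinity system call exposed by Python.
import os

pid = 0                                    # 0 means "the calling process"
print("allowed CPUs before:", os.sched_getaffinity(pid))

os.sched_setaffinity(pid, {0, 1})          # restrict this process to CPUs 0 and 1
print("allowed CPUs after: ", os.sched_getaffinity(pid))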


Multiple-Processor Scheduling – Load Balancing
 If SMP, need to keep all CPUs loaded for efficiency
 Load balancing attempts to keep the workload evenly distributed
 Push migration – a periodic task checks the load on each processor and, if an imbalance is found, pushes tasks from the overloaded CPU to other CPUs
 Pull migration – an idle processor pulls a waiting task from a busy processor
Real-Time CPU Scheduling

 Soft real-time systems – no guarantee as to when a critical real-time process will be scheduled
 Hard real-time systems – a task must be serviced by its deadline
 Two types of latencies affect performance:
1. Interrupt latency – time from the arrival of an interrupt to the start of the routine that services the interrupt
2. Dispatch latency – time for the scheduler to take the current process off the CPU and switch to another
