OS UNIT-I


Operating System

UNIT -I
Computer System and Operating System
Overview:
• Overview of Computer System hardware.
• Operating System Objectives and functions.
• Operating System Services.
• System Calls.
• System Programs.

What is an Operating System?
• A program that acts as an intermediary between a user of a
computer and the computer hardware

Operating system goals:


• Execute user programs and make solving user problems
easier
• Make the computer system convenient to use
• Use the computer hardware in an efficient manner

Computer System Overview

• An Operating System makes the computing power available
to users by controlling the hardware.
• Let us review the aspects of computer hardware that are
important for the OS.
Basic Components
• Processor (CPU)
• Main Memory (real memory, primary memory)
• Holds data and code
• I/O modules (I/O controllers, I/O channels, I/O
processors...)
• Hardware (with registers called I/O ports) that
moves data between the CPU and peripherals like:
• Secondary memory devices (eg: hard disks)
• Keyboard, display...
• Communications equipment
• System interconnection (ie: Buses)
• Communication among processors, memory, and
I/O modules
I/O Module Structure

• Data to/from the system bus is buffered in data register(s)
• Status/Control register(s) hold
• current status information
• current control information from the CPU
• I/O logic interacts with the CPU via the
control bus
• Contains logic specific to the
interface of each device
CPU Registers (fast memory on CPU)
Control & Status Registers

• Generally not available to user programs


• Some used by CPU to control its
operation
• Some used by OS to control program
execution
User-visible Registers

• Available to system (OS) and user


programs
Examples of Control & Status
Registers
Program Counter (PC)
• Contains the address of the next instruction to
be fetched
Instruction Register (IR)
• Contains the instruction most recently fetched
Program Status Word (PSW)
A register or group of registers containing:
• condition codes and status info bits
• Interrupt enable/disable bit
• Supervisor(OS)/user mode bit
User-Visible Registers
• Data Registers
• Can be assigned by the user program to perform operations on data
• Address Registers
• Contain memory address of data and instructions
• May contain a portion of an address that is used to calculate the
complete address
User-Visible Registers
• Examples of Address Registers
• Index/Offset
• involves adding an index to a base value to get an
address
• Segment pointer
• when memory is divided into segments, memory is
referenced by a segment and an offset
• Stack pointer
• points to top of stack
User-Visible Registers
• Condition Codes or Flags
• Bits set by the processor hardware as a result of
operations
• Can be accessed by a program but not changed
directly
• Examples
• Sign flag
• Zero flag
• Overflow flag
The Basic Instruction Cycle

• The CPU fetches the next instruction (with operands) from


memory.
• Then the CPU executes the instruction
• Program counter (PC) holds address of the instruction to be
fetched next
• Program counter is automatically incremented after each fetch
Then CPU must wait for I/O to complete!

• A WRITE call transfers control to the printer driver
(I/O program)
• The I/O program prepares the I/O module for printing
(4)
• The CPU has to WAIT for the I/O command to complete
• This wait can be long for a printer
• The I/O program finishes at (5) and reports the status of
the operation
Interrupts
• Computers now permit I/O modules to INTERRUPT the
CPU.

• For this, the I/O module simply asserts an interrupt request
line on the control bus
• The CPU then transfers control to an Interrupt Handler
Routine (normally part of the OS)
Instruction Cycle with
Interrupts!

• CPU checks for interrupts after each instruction


• If no interrupts, then fetch the next instruction
for the current program
• If an interrupt is pending, then suspend
execution of the current program, and execute
the interrupt handler
Interrupt Handler
• A program that determines the nature of
the interrupt and performs whatever
actions are needed
• Control is transferred to this program
• Control must be transferred back to the
interrupted program so that it can be
resumed from the point of interruption
• This point of interruption can occur
anywhere in the program
• Thus: must save the state of the program
(content of PC + PSW + registers + ...)
Simple Interrupt Processing
Interrupts improve CPU usage
• I/O pgm prepares the I/O
module and issues the I/O
command (eg: to printer)
• I/O pgm branches to user pgm
• User code gets executed during
I/O operation (eg: printing): no
waiting
• User pgm gets interrupted (x)
when I/O operation is done and
branches to interrupt handler to
examine status of I/O module
• Execution of user code resumes
Classes of Interrupts
• I/O
• signals normal completion of operation or
error
• Program Exception
• overflows
• try to execute illegal instruction
• reference outside user’s memory space
• Timer
• preempts a pgm to perform another task
• Hardware failure (eg: memory parity error)
Multiple interrupts: sequential
order

• Disable interrupts during an interrupt


• Interrupts remain pending until the processor enables
interrupts
• After interrupt handler routine completes, the processor
checks for additional interrupts
Multiple Interrupts: priorities

• Higher priority interrupts cause lower-priority interrupts to


wait
• Causes a lower-priority interrupt handler to be interrupted
• Example: when input arrives from communication line, it
needs to be absorbed quickly to make room for more input
Multiprogramming
• When a program reads a value from an I/O device it will
need to wait for the I/O operation to complete

• Interrupts are mostly effective when a single CPU is


shared among several concurrently active processes.

• The CPU can then switch to execute another program


when a program waits for the result of the read
operation.
I/O communication techniques
• 3 Techniques are possible for I/O operation
• Programmed I/O
• Does not use interrupts: CPU has to wait for completion of
each I/O operation
• Interrupt-driven I/O
• CPU can execute code during I/O operation: it gets
interrupted when I/O operation is done.
• Direct Memory Access
• A block of data is transferred directly from/to memory
without going through CPU
Programmed
I/O
• I/O module performs the action, on behalf
of the processor

• But the I/O module does not interrupt the


CPU when I/O is done

• Processor is kept busy checking status of


I/O module
Interrupt-Driven I/O

• Processor is interrupted when I/O


module ready to exchange data

• Processor is free to do other work

• No needless waiting

• Consumes a lot of processor time


because every word read or written
passes through the processor
Direct Memory
Access
• CPU issues request to a DMA module
(separate module or incorporated into I/O
module)

• DMA module transfers a block of data directly


to or from memory (without going through
CPU)

• An interrupt is sent when the task is


complete

• The CPU is only involved at the beginning


and end of the transfer
Memory Hierarchy
Cache Memory
• Small cache of expensive but very fast
memory interacting with slower but much
larger memory

• Invisible to OS and user programs but interact


with other memory management hardware

• The processor first checks whether the referenced word is
in the cache

• If not found in cache, a block of memory


containing the word is moved to the cache
The Hit Ratio

• Hit ratio = fraction of access where


data is in cache

• T1 = access time for fast memory

• T2 = access time for slow memory

• T2 >> T1

• When hit ratio is close to 1 the


average access time is close to T1
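The resulting average access time can be sketched directly. This is a minimal illustration, not from the slides: it assumes a miss costs the cache check (T1) plus the slower-memory access (T2), and the timings are made up for the example.

```python
def avg_access_time(hit_ratio, t1, t2):
    """Average access time: a hit costs t1; a miss costs t1 + t2
    (check the cache first, then fetch from slower memory)."""
    return hit_ratio * t1 + (1 - hit_ratio) * (t1 + t2)

# Illustrative (assumed) timings: t1 = 10 ns cache, t2 = 100 ns main memory.
print(avg_access_time(0.95, 10, 100))  # 15.0 -- already close to t1
print(avg_access_time(1.0, 10, 100))   # 10.0 -- exactly t1 when hit ratio is 1
```

As the hit ratio approaches 1, the miss term vanishes and the average approaches T1, matching the claim above.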
Locality of reference
• Memory references, for both instructions and data, tend to
cluster over a long period of time.

• Example: once a loop is entered, there is frequent access to


a small set of instructions.

• Hence: once a word gets referenced, it is likely that nearby


words will get referenced often in the near future.

• Thus, the hit ratio will be close to 1 even for a small cache.
Disk Cache (same
principles)
• A portion of main memory used as a buffer to
temporarily hold data for the disk.

• Locality of reference also applies here: once a record


gets referenced, it is likely that nearby records will get
referenced often in the near future.

• If a record referenced is not in the disk cache, the


sector containing the record is moved into the disk
cache.
Cache
Design
• Mapping function determines which cache location
the block will occupy

• Replacement algorithm chooses, within the constraints of the
mapping function, which block to replace (LRU is common)
Cache Design
• Write policy
• Determines when a modified block of cache is written back to main memory
• Main memory must be current for access by I/O modules
and multiple processors
• A memory write can take place
• every time the block is updated (write-through)
• only when the block is replaced (write-back)
I/O Communication Techniques
• Programmed I/O
• Interrupt-driven I/O
• Direct Memory Access
Programmed I/O
• I/O module performs action, not processor

• Sets bits in the I/O status register

• No interrupts occur

• Processor is kept busy checking status

• Processor extracts data from main memory

• The I/O module accepts processor commands:
• Control
• Test
• Read/Write
Interrupt-driven I/O

• Processor issues an I/O command to the module
• Processor does other work
• Processor is interrupted when the I/O is
finished
• Processor performs the data transfer
• All data passes through the processor
Direct Memory
Access
• Processor sends to the DMA module:
• whether a read or write is requested
• the address of the I/O device
• the location in memory to access
• the number of words to transfer
• Processor continues other work
• When the DMA transfer is finished, the module sends an interrupt
• DMA causes bus contention between the processor and the
I/O (DMA) module
Computer System Structure
• Computer system can be divided into four components:
• Hardware – provides basic computing resources
• CPU, memory, I/O devices
• Operating system
• Controls and coordinates use of hardware among various
applications and users
• Application programs – define the ways in which the
system resources are used to solve the computing
problems of the users
• Word processors, compilers, web browsers, database
systems, video games
• Users
• People, machines, other computers
Four Components of a Computer System
OS Objectives and
functions
• OS is a layer of software whose job is to manage all devices
and provide user programs with a simpler interface to the
hardware

• Objectives of OS

• Convenience
• Efficiency
• Ability to evolve
• Management of Systems Resources
•Convenience: An operating system makes a machine easier to use. It lets
users start the tasks they want to accomplish quickly, without having to
configure the system first.

•Efficiency: An operating system helps utilise the capacity of the system's
resources in the best way possible.

•Ability to Evolve: An operating system should be built so that new
features can be tested and introduced without a complete restructuring of
the whole interface.

•Management of System Resources: It ensures that the system's
resources are distributed fairly among the various processes.
Functions of
OS
• Process Management
• Memory Management
• File Management
• Device Management
• Secondary Memory Management
• Security and Protection
• Networking
• Command interpretation
Functions of Operating Systems

1. Security
The operating system helps keep user data safe with protection mechanisms
such as passwords. It also prevents unauthorized access to user data and
programs.
2. Control over System Performance
The operating system acts as a watchdog over system performance so that it
can be improved. It records the response time between a user's request for a
service and the system's response, giving a complete overview of the
system's health. This data can help improve the performance of the system
by providing the information required for troubleshooting problems.
3. Job Accounting
The operating system keeps a track record of the time and resources consumed
by various tasks and users. That can further assist whenever there is a demand
for specific information.

4. Error Detecting Aids
The operating system is in charge of detecting any errors or bugs that can
occur while performing a task. A well-secured OS also acts as a shield,
preventing breaches of the computer system from external sources.

5. Coordination Between Other Software and Users


Operating systems also connect and assign interpreters, editors, compilers, and
other software to a variety of computer programming users.
6. Memory Management
The operating system manages Basic Memory or Primary Memory. The main
memory is made up of a large array of bytes where each byte is given a specific
address. The main memory is fast storage and can be directly accessed by the
CPU.
In order to run a program, it must first be loaded into the main memory.
The operating system performs the following functions of memory control:
o It keeps track of primary memory: which bytes of memory are being used
by which user program, and which memory addresses are allocated or
free.
o In multiprogramming, the operating system decides the order in which
memory is granted to processes. It provides memory to a process when the
process demands it, and deallocates the memory when the process
completes or is terminated.
7. Processor Management/Scheduling
In a multitasking environment, the operating system performs a function called
process scheduling, in which the OS decides the order in which different
processes will have access to the processor, and how much power each process
will have.
An operating system carries out the following activities in order to manage the
processor:
o Keeps track of the status of processes and the program which performs this
task is called by the name of the traffic controller.
o Allocates the processor to a process, and deallocates the processor when
the process no longer requires it.
8. Device Management
An operating system manages communication between the device and the
system with the help of specific drivers. It performs certain activities for
managing devices such as keeping track of all such devices that are connected
with the system. It specifically appoints a program that is responsible for
communicating with every device which is known as the Input/Output
controller.

This controller decides which process gets access to a certain device and for
how long. It allocates devices efficiently and deallocates them when they are
no longer needed.
9. File Management
A file management system is an organised collection of data into directories so
that it can be easily navigated and used. These may contain other directories and
files as well. An operating system carries out certain tasks for file management
activities like tracking where the information is stored, the status of every file
and user settings and many more. All these facilities are collectively known as
file management systems.
Services Provided by an OS
a) Program Creation: O.S provides editors, debuggers etc.. These are actually part of
O.S but are accessible through O.S

b) Program Execution: To execute a program,


1)it must be loaded into memory
2) I/O devices , files and other resources must be initialized. The O.S handles all these
tasks.

c) Access to I/O devices: Each I/O device has its own instructions and control signals.
O.S takes care of these details and provides a simple interface to programmer.

d) Controlled access to files: i) Provides protection mechanisms ii) Provides a common
interface to access files stored on different secondary memories (hard disk/magnetic tape)
and having different file systems.
Services Provided by an OS
e) System access: protects data and resources from unauthorized users.
Resolves conflicts in case of contention for resources.

f) Error detection and response: Many internal and external hardware
errors and software errors occur while a computer is running. The O.S
handles these errors by ending the program, retrying the operation that
caused the error, or printing an error message to the user.

g) Accounting: O.S. collects usage statistics of various resources for


improving performance and future enhancements.
Services Provided by an OS
OS as User/Computer
Interface
• OS provides services in the following areas:
• Program Development- Editors/Debuggers assist programmer
in creating programs. These are provided as application
program development tool.
• Program execution – load program-run program-execute
program
• Access to I/O devices - OS provides a uniform interface for
I/O devices which are hidden from end users.
• Controlled Access to files - OS needs to understand I/O,
structure of file and also provide protection to users in
multiuser environment.
Services Provided by an OS
OS as User/Computer Interface
• System Access: Provide access to system as whole and to specific
system resources. Resolve conflicts for resource contention.
• Error detection – OS needs to be constantly aware of possible errors
• May occur in the CPU and memory hardware, in I/O devices, in
user program
• For each type of error, OS should take the appropriate action to
ensure correct and consistent computing
• Debugging facilities can greatly enhance the user’s and
programmer’s abilities to efficiently use the system
• Accounting - To keep track of which users use how much and what
kinds of computer resources
System Calls
• System calls provide the interface between a running
program and the operating system.
• Generally available as assembly-language instructions.
• Languages defined to replace assembly language for
systems programming allow system calls to be made
directly (e.g., C, C++)
• Three general methods are used to pass parameters
between a running program and the operating
system.
• Pass parameters in registers.
• Store the parameters in a table in memory, and the table
address is passed as a parameter in a register.
• Push (store) the parameters onto the stack by the
program, and pop off the stack by operating system.
Passing of Parameters As A Table
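As a concrete illustration of the system-call interface, Python's `os` module exposes thin wrappers over such calls. The sketch below assumes a POSIX-like system where file descriptor 1 is standard output:

```python
import os

# os.write(fd, data) is a thin wrapper over the write() system call:
# the file descriptor and buffer are the parameters handed to the OS,
# and the return value is the number of bytes actually written.
n = os.write(1, b"hello from a system call\n")
print(n)  # 25
```

The parameters (fd, buffer address, length) are exactly what gets passed to the kernel by one of the three mechanisms above: in registers, in a memory table, or on the stack.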
Types of System Calls

• Process control
• File management
• Device management
• Information maintenance
• Communications
Types of System Calls
• Process control
• create process, terminate process
• end, abort
• load, execute
• get process attributes, set process attributes
• wait for time
• wait event, signal event
• allocate and free memory
• Dump memory if error
• Debugger for determining bugs, single step execution
• Locks for managing access to shared data between
processes
Types of System Calls
• File management
• create file, delete file
• open, close file
• read, write, reposition
• get and set file attributes
• Device management
• request device, release device
• read, write, reposition
• get device attributes, set device attributes
• logically attach or detach devices
Types of System Calls (Cont.)
• Information maintenance
• get time or date, set time or date
• get system data, set system data
• get and set process, file, or device attributes
• Communications
• create, delete communication connection
• send, receive messages (in the message-passing model, addressed to a
host name or process name, e.g. from client to server)
• in the shared-memory model, create and gain access to memory regions
• transfer status information
• attach and detach remote devices
System Programs
• System programs provide a convenient environment for program
development and execution. They can be divided into:
• File management
• Status information
• File modification
• Programming language support
• Program loading and execution
• Communications
• Application programs
• Most users’ view of the operating system is defined by system programs,
not the actual system calls.
• File management
These programs create, delete, copy, rename, print, dump, list, and generally
manipulate files and directories.

• Status information
Some programs simply ask the system for the date, time, amount of available memory
or disk space, number of users, or similar status information.

Others are more complex, providing detailed performance, logging, and debugging
information.

Typically, these programs format and print the output to the terminal or other output
devices or files or display it in a window of the graphical user interface (GUI).

Some systems also support a registry, which is used to store and retrieve configuration
information.
Programming-language support
Compilers, assemblers, debuggers, and interpreters for common
programming languages (such as C, C++, Java, and PERL) are often provided
with the operating system or available as a separate download.

Program loading and execution


Once a program is assembled or compiled, it must be loaded into memory to
be executed. The system may provide absolute loaders, relocatable loaders,
linkage editors, and overlay loaders.

Debugging systems for either higher-level languages or machine language


are needed as well.

Communications
These programs provide the mechanism for creating virtual connections
among processes, users, and computer systems.

They allow users to send messages to one another’s screens, to browse Web
pages, to send e-mail messages, to log in remotely, or to transfer files from
one machine to another.

Background services
All general-purpose systems have methods for launching certain system-
program processes at boot time. Some of these processes terminate after
completing their tasks, while others continue to run until the system is halted.

Constantly running system-program processes are known as services,


subsystems, or daemons.
CPU Scheduling:
• Basic Concepts
• Scheduling Criteria.
• Scheduling Algorithms and evaluation.

Operating Systems-IT508-IV-II 67
Two Suspend States
Scheduling and Process State Transitions
Nesting of Scheduling Functions
Basic Concepts

• Maximum CPU utilization obtained with


multiprogramming
• CPU–I/O Burst Cycle – Process execution
consists of a cycle of CPU execution and
I/O wait
• CPU burst distribution
Scheduling
Criteria
Scheduling Criteria
 In choosing which algorithm to use in a particular situation, we
must consider the properties of the various algorithms.

 Many criteria have been suggested for comparing CPU-scheduling


algorithms. Which characteristics are used for comparison can
make a substantial difference in which algorithm is judged to be
best. The criteria include the following:

 CPU utilization. We want to keep the CPU as busy as possible.


Conceptually, CPU utilization can range from 0 to 100 percent. In a
real system, it should range from 40 percent (for a lightly loaded
system) to 90 percent (for a heavily used system).

 Throughput. If the CPU is busy executing processes, then work is


being done. One measure of work is the number of processes that
are completed per time unit, called throughput. For long
processes, this rate may be one process per hour; for short
transactions, it may be ten processes per second.
Turnaround time. From the point of view of a particular process, the
important criterion is how long it takes to execute that process. The
interval from the time of submission of a process to the time of
completion is the turnaround time. Turnaround time is the sum of the
periods spent waiting to get into memory, waiting in the ready queue,
executing on the CPU, and doing I/O.

Waiting time. The CPU-scheduling algorithm does not affect the


amount of time during which a process executes or does I/O; it affects
only the amount of time that a process spends waiting in the ready
queue. Waiting time is the sum of the periods spent waiting in the
ready queue.

Response time. In an interactive system, turnaround time may not be


the best criterion. Often, a process can produce some output fairly
early and can continue computing new results while previous results
are being output to the user. Thus, another measure is the time from
the submission of a request until the first response is produced. This
measure, called response time, is the time it takes to start
responding, not the time it takes to output the response. The
turnaround time is generally limited by the speed of the output device.
Scheduling Criteria

 CPU utilization – Percent of time that the CPU is busy executing a


process.

 Throughput – Number of processes that are completed per time


unit.

 Response time – Amount of time it takes from when a request was


submitted until the first response occurs (but not the time it takes
to output the entire response).

 Waiting time – The amount of time before a process starts after


first entering the ready queue (or the sum of the amount of time a
process has spent waiting in the ready queue).

 Turnaround time – Amount of time to execute a particular process


from the time of submission through the time of completion.
Optimization Criteria
• It is desirable to
Maximize CPU utilization
Maximize throughput
Minimize turnaround time
Minimize start time
Minimize waiting time
Minimize response time
• In most cases, we strive to optimize the average measure of
each metric.
• In other cases, it is more important to optimize the minimum
or maximum values rather than the average.
Single Processor Scheduling
Algorithms
Scheduling Algorithms

First Come, First Served (FCFS)


Shortest Job First (SJF)
Priority
Round Robin (RR)
Multilevel Queue Scheduling
Multilevel Feedback Queue
First Come First Served (FCFS) Scheduling
First-Come First-Served (FCFS)
Scheduling
 In FCFS, the process that arrived first and requests the CPU first is
assigned the CPU first.

 In FCFS, the ready queue can be maintained as a FIFO queue.

 When processes arrive they are added to the tail of the queue.

 When the CPU becomes free, the process in the head of the queue is
taken for execution.

 This is a non - preemptive algorithm. Once a process is assigned the


CPU, the process continues to use the CPU till its CPU burst time is
completed.

 When its CPU burst gets over, the process by itself relinquishes the CPU.
First-Come, First-Served (FCFS) Scheduling

Process Burst Time

P1 24
P2 3
P3 3
 With FCFS, the process that requests the CPU first is allocated the CPU first
 Case #1: Suppose that the processes arrive in the order: P1, P2, P3

The Gantt Chart for the schedule is:


P1 P2 P3

0 24 27 30

 Waiting time for P1 = 0; P2 = 24; P3 = 27
 Average waiting time: (0 + 24 + 27)/3 = 17
 Average turn-around time: (24 + 27 + 30)/3 = 27
FCFS Scheduling
(Cont.)
Case #2:
Suppose that the processes arrive in the order: P2 , P3 , P1
The Gantt chart for the schedule is:

P2 P3 P1

0 3 6 30

Waiting time for P1 = 6; P2 = 0; P3 = 3


Average waiting time: (6 + 0 + 3)/3 = 3

(Much better than Case #1)


Average turn-around time: (3 + 6 + 30)/3 = 13
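Both cases can be reproduced with a short sketch that serves processes in the given order, assuming all of them arrive at time 0:

```python
def fcfs(bursts):
    """FCFS with all processes arriving at time 0, served in list order.
    Returns (average waiting time, average turnaround time)."""
    t = 0
    waits, turnarounds = [], []
    for burst in bursts:
        waits.append(t)        # waited from time 0 until its turn
        t += burst
        turnarounds.append(t)  # finished at time t
    n = len(bursts)
    return sum(waits) / n, sum(turnarounds) / n

print(fcfs([24, 3, 3]))  # case #1: (17.0, 27.0)
print(fcfs([3, 3, 24]))  # case #2: (3.0, 13.0)
```

Reordering the same three bursts changes the average waiting time from 17 to 3, which is the convoy effect discussed next.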
FCFS Scheduling (Cont.)

• Case #1 is an example of the convoy effect


• All the other processes wait for one long-running process to finish using the CPU.

• This problem results in lower CPU and device utilization.

• Case #2 shows that higher utilization might be possible if the


short processes were allowed to run first.

• The FCFS scheduling algorithm is non- preemptive.

• Once the CPU has been allocated to a process, that process keeps the CPU
until it releases it either by terminating or by requesting I/O.

• It is a troublesome algorithm for time-sharing systems


Shortest Job First (SJF)
Scheduling
Shortest-Job-First (SJF)
Scheduling
The SJF algorithm associates with each process the length of its next CPU
burst.

When the CPU becomes available, it is assigned to the process that has the
smallest next CPU burst (in the case of matching bursts, FCFS is used).

Two schemes:

 Non-preemptive – once the CPU is given to the process, it cannot be


preempted until it completes its CPU burst.

 Preemptive – if a new process arrives with a CPU burst length less than
the remaining time of the current executing process, preempt. This
scheme is known as Shortest-Remaining-Time-First (SRTF).
Example #1: Non-Preemptive SJF (simultaneous arrival)

Process Arrival Time Burst Time


P1 0.0 6
P2 0.0 4
P3 0.0 1
P4 0.0 5
• SJF (non-preemptive, simultaneous arrival)
P3 P2 P4 P1

0 1 5 10 16

• Average waiting time = (0 + 1 + 5 + 10)/4 = 4


• Average turn-around time = (1 + 5 + 10 + 16)/4 = 8
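With simultaneous arrival, non-preemptive SJF amounts to serving the bursts in ascending order; a sketch reproducing the figures above:

```python
def sjf_simultaneous(bursts):
    """Non-preemptive SJF with all processes arriving at time 0:
    serve the shortest burst first.
    Returns (average waiting time, average turnaround time)."""
    t = 0
    waits, turnarounds = [], []
    for burst in sorted(bursts):  # shortest job first
        waits.append(t)
        t += burst
        turnarounds.append(t)
    n = len(bursts)
    return sum(waits) / n, sum(turnarounds) / n

print(sjf_simultaneous([6, 4, 1, 5]))  # (4.0, 8.0)
```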
Example #2: Non-Preemptive SJF

Process Arrival Time Burst Time


P1 0.0 7
P2 2.0 4
P3 4.0 1
P4 5.0 4
• SJF (non-preemptive, varied arrival times)
P1 P3 P2 P4

0 7 8 12 16

• Average waiting time


= ( (0 – 0) + (8 – 2) + (7 – 4) + (12 – 5) )/4
= (0 + 6 + 3 + 7)/4 = 4
• Average turn-around time:
= ( (7 – 0) + (12 – 2) + (8 - 4) + (16 – 5))/4
= (7 + 10 + 4 + 11)/4 = 8
Waiting time: sum of time that a process has spent waiting in the ready queue
Example #3: Preemptive SJF (Shortest-remaining-time-first)

Process Arrival Time Burst Time


P1 0.0 7
P2 2.0 4
P3 4.0 1
P4 5.0 4
• SJF (preemptive, varied arrival times)
P1 P2 P3 P2 P4 P1

0 2 4 5 7 11 16

• Average waiting time


= ( [(0 – 0) + (11 - 2)] + [(2 – 2) + (5 – 4)] + (4 - 4) + (7 – 5) )/4
= (9 + 1 + 0 + 2)/4
=3
• Average turn-around time = (16 + 7 + 5 + 11)/4 = 9.75
Waiting time : sum of time that a process has spent waiting in the ready queue
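Example #3 can be reproduced with a unit-time simulation that, at every step, runs the arrived, unfinished process with the least remaining time. This is a sketch: it advances the clock one unit at a time and breaks ties by process index (which happens to match the chart above).

```python
def srtf(arrivals, bursts):
    """Preemptive SJF (shortest-remaining-time-first).
    Returns the average waiting time."""
    n = len(bursts)
    remaining = list(bursts)
    finish = [0] * n
    t, done = 0, 0
    while done < n:
        ready = [i for i in range(n) if arrivals[i] <= t and remaining[i] > 0]
        if not ready:
            t += 1            # CPU idle until the next arrival
            continue
        i = min(ready, key=lambda j: remaining[j])  # least remaining time
        remaining[i] -= 1
        t += 1
        if remaining[i] == 0:
            finish[i] = t
            done += 1
    # waiting = turnaround - burst
    waits = [finish[i] - arrivals[i] - bursts[i] for i in range(n)]
    return sum(waits) / n

print(srtf([0, 2, 4, 5], [7, 4, 1, 4]))  # 3.0
```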
Determining Length of Next CPU Burst

To estimate the length of next CPU burst using the length of previous CPU bursts

Priority
Scheduling
Priority Scheduling
• The SJF algorithm is a special case of the general priority scheduling
algorithm
• A priority number (integer) is associated with each process
• The CPU is allocated to the process with the highest priority (smallest
integer = highest priority)
• Priority scheduling can be either preemptive or non-preemptive
• A preemptive approach will preempt the CPU if the priority of the
newly-arrived process is higher than the priority of the currently
running process
• A non-preemptive approach will simply put the new process (with
the highest priority) at the head of the ready queue
• SJF is a priority scheduling algorithm where priority is the predicted
next CPU burst time
• The main problem with priority scheduling is starvation, that is, low
priority processes may never execute
• A solution is aging; as time progresses, the priority of a process in the
ready queue is increased
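Selection by smallest priority number, plus aging, can be sketched as follows. The boost amount and the dictionary-based process records are arbitrary illustrative choices, not from the slides:

```python
def pick_next(ready):
    """Non-preemptive priority pick: smallest number = highest priority."""
    return min(ready, key=lambda p: p["priority"])

def age(ready, boost=1):
    """Aging: lower every waiting process's priority number (i.e. raise
    its priority) so that no process starves forever."""
    for p in ready:
        p["priority"] = max(0, p["priority"] - boost)

ready = [{"name": "A", "priority": 5}, {"name": "B", "priority": 1}]
print(pick_next(ready)["name"])  # B -- it has the smaller number
age(ready)  # while B runs, A's priority number improves toward 0
print(ready[0]["priority"])      # 4
```

After enough aging rounds, A's priority number reaches 0 and it is guaranteed to be selected, which is exactly how aging defeats starvation.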
Round Robin (RR)
Scheduling
Round Robin (RR) Scheduling
• In the round robin algorithm, each process gets a small unit of CPU
time (a time quantum), usually 10-100 milliseconds. After this time
has elapsed, the process is preempted and added to the end of the
ready queue.
• If there are n processes in the ready queue and the time quantum is
q, then each process gets 1/n of the CPU time in chunks of at most q
time units at once. No process waits more than (n-1)q time units.
• Performance of the round robin algorithm
• q large  FCFS
• q small  q must be greater than the context switch time;
otherwise, the overhead is too high
• One rule of thumb is that 80% of the CPU bursts should be shorter
than the time quantum
Example of RR with Time Quantum = 20
Process Burst Time
P1 53
P2 17
P3 68
P4 24
• The Gantt chart is:

P1 P2 P3 P4 P1 P3 P4 P1 P3 P3

0 20 37 57 77 97 117 121 134 154 162

• Typically, higher average turnaround than SJF, but better response time
• Average waiting time
= ( [(0 – 0) + (77 - 20) + (121 – 97)] + (20 – 0) + [(37 – 0) + (97 - 57) + (134 – 117)] +
[(57 – 0) + (117 – 77)] ) / 4
= ( (0 + 57 + 24) + 20 + (37 + 40 + 17) + (57 + 40) ) / 4
= (81 + 20 + 94 + 97)/4
= 292 / 4 = 73
• Average turn-around time = (134 + 37 + 162 + 121) / 4 = 113.5
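The quantum-20 example can be checked with a queue-based sketch. It assumes all processes arrive at time 0 and that a preempted process rejoins the tail of the ready queue:

```python
from collections import deque

def round_robin(bursts, quantum):
    """Round robin with all processes arriving at time 0.
    Returns the average waiting time."""
    queue = deque(enumerate(bursts))  # (pid, remaining burst)
    finish = {}
    t = 0
    while queue:
        pid, rem = queue.popleft()
        run = min(quantum, rem)       # run one quantum, or less if it finishes
        t += run
        rem -= run
        if rem > 0:
            queue.append((pid, rem))  # preempted: back to the tail
        else:
            finish[pid] = t
    # waiting = completion time - burst (everything arrived at time 0)
    waits = [finish[pid] - bursts[pid] for pid in range(len(bursts))]
    return sum(waits) / len(bursts)

print(round_robin([53, 17, 68, 24], 20))  # 73.0
```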
Time Quantum and Context Switches
Turnaround Time Varies With The Time
Quantum

As can be seen from this graph, the average turnaround time of a set
of processes does not necessarily improve as the time quantum size
increases. In general, the average turnaround time can be improved
if most processes finish their next CPU burst in a single time quantum.
Multi-level Queue
Scheduling
Multi-level Queue Scheduling
• Multi-level queue scheduling is used when processes can be classified into groups
• For example, foreground (interactive) processes and background (batch)
processes
• The two types of processes have different response-time requirements and
so may have different scheduling needs.
• Also, foreground processes may have priority (externally defined) over
background processes
• A multi-level queue scheduling algorithm partitions the ready queue into several
separate queues.
• The processes are permanently assigned to one queue, generally based on some
property of the process such as memory size, process priority, or process type
• Each queue has its own scheduling algorithm
• The foreground queue might be scheduled using an RR algorithm
• The background queue might be scheduled using an FCFS algorithm
• In addition, there needs to be scheduling among the queues, which is commonly
implemented as fixed-priority pre-emptive scheduling
• The foreground queue may have absolute priority over the background queue
Multi-level Queue Scheduling
• One example of a multi-level queue are the five queues shown below.
• Each queue has absolute priority over lower priority queues.
• For example, no process in the batch queue can run unless the queues
above it are empty.
• However, this can result in starvation for the processes in the lower
priority queues.
Multilevel Queue Scheduling
• Another possibility is to time slice among the queues

• Each queue gets a certain portion of the CPU time, which it


can then schedule among its various processes

• The foreground queue can be given 80% of the CPU time


for RR scheduling

• The background queue can be given 20% of the CPU time


for FCFS scheduling
Multi-level Feedback Queue
Scheduling
Multilevel Feedback Queue Scheduling
• In multi-level feedback queue scheduling, a process can move between the various
queues; aging can be implemented this way.

• A multilevel-feedback-queue scheduler is defined by the following parameters:

• Number of queues

• Scheduling algorithms for each queue

• Method used to determine when to promote a process

• Method used to determine when to demote a process

• Method used to determine which queue a process will enter when that process
needs service
Example of Multilevel Feedback Queue
Scheduling
• A new job enters queue Q0 (RR)
and is placed at the end. When it
gains the CPU, the job receives 8
milliseconds. If it does not finish
in 8 milliseconds, the job is
moved to the end of queue Q1.

• A Q1 (RR) job receives 16
milliseconds. If it still does not
complete, it is preempted and
moved to queue Q2 (FCFS).
