OS UNIT 3


Concept of Scheduling in Operating Systems​


Scheduling is the process of determining which tasks or processes are given access to
system resources (like CPU, memory, and I/O devices) at any given time. It ensures that
system resources are used efficiently while also achieving objectives like fairness,
responsiveness, and throughput.

Types of Schedulers

Schedulers are responsible for selecting processes from the process queue and allocating
system resources. There are three main types of schedulers:

1. Long-Term Scheduler (Job Scheduler)

●​ Function: Controls the admission of processes into the system (i.e., decides which
processes are brought into memory for processing).
●​ When it runs: Infrequently, as it is responsible for the decision to add a new process
to the ready queue.
●​ Objective: Controls the degree of multiprogramming (the number of processes in
memory).
●​ Role: Ensures a balance between I/O-bound and CPU-bound processes to avoid
bottlenecks.
●​ Example: When a user runs a program, the long-term scheduler determines if the
program should be brought into main memory.
●​ Key Metric: Degree of multiprogramming (number of processes in memory).

2. Short-Term Scheduler (CPU Scheduler)

●​ Function: Selects which process from the ready queue will execute next on the
CPU.
●​ When it runs: Very frequently, since it must select a process whenever the CPU
becomes idle (context switch).
●​ Objective: Minimize context-switching overhead and maximize CPU utilization.
●​ Role: Ensures a fair distribution of CPU time among processes.
●​ Example: When a process goes from "ready" to "running," the short-term scheduler
makes this decision.
●​ Key Metric: Response time, CPU utilization, and throughput.

3. Medium-Term Scheduler

●​ Function: Temporarily removes processes from memory (swaps them out) to reduce
the degree of multiprogramming.
●​ When it runs: Periodically, when the system is overloaded, and processes must be
swapped to disk.
●​ Objective: Reduce the load on the system and ensure smooth execution of
higher-priority tasks.
●​ Role: Suspends and resumes processes to optimize the use of CPU and memory.
●​ Example: If a process is waiting for I/O, it might be swapped out to disk to make
room for other processes.
●​ Key Metric: Swapping overhead, system responsiveness.

Dispatcher

The dispatcher is responsible for the actual context switch from one process to another, as
directed by the short-term scheduler.

●​ Role: It loads the selected process onto the CPU by saving the state of the current
process and restoring the state of the new process.
●​ Components of Dispatching:
○​ Context Switching: Save the state of the old process and load the state of
the new process.
○​ Switching to User Mode: The process runs in user mode instead of kernel
mode.
○​ Jump to the Program Counter (PC): The PC is set to the instruction where
the process was paused.
●​ Dispatch Latency: The time it takes for the dispatcher to stop one process and start
another.
●​ Goal: Minimize dispatch latency to improve system responsiveness.

Performance Criteria for Scheduling

To evaluate the efficiency of a scheduling algorithm, several criteria are used:

1.​ CPU Utilization: Keep the CPU as busy as possible (should be close to 100%).
2.​ Throughput: Number of processes that complete execution per unit time. Higher
throughput is preferred.
3.​ Turnaround Time: Total time taken from process submission to completion. It
includes waiting time, execution time, and I/O time.
4.​ Waiting Time: The total time a process spends in the ready queue.
5.​ Response Time: Time from process submission until the first response is produced
(important in interactive systems).
6.​ Fairness: Ensure that each process gets a fair share of the CPU without indefinite
postponement.
7.​ Context-Switching Overhead: Minimize context-switching since it leads to overhead
(CPU cycles spent switching processes instead of executing them).
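These criteria can be computed directly from a completed schedule. A minimal Python sketch, using hypothetical arrival, burst, and completion times (all in ms):

```python
# Hypothetical figures: three processes with known arrival, burst,
# and completion times under some schedule (all values in ms).
processes = {
    "P1": {"arrival": 0, "burst": 5, "completion": 5},
    "P2": {"arrival": 1, "burst": 3, "completion": 8},
    "P3": {"arrival": 2, "burst": 8, "completion": 16},
}

metrics = {}
for name, p in processes.items():
    turnaround = p["completion"] - p["arrival"]  # submission -> completion
    waiting = turnaround - p["burst"]            # time spent in the ready queue
    metrics[name] = {"turnaround": turnaround, "waiting": waiting}

print(metrics)
avg_waiting = sum(m["waiting"] for m in metrics.values()) / len(metrics)
print("average waiting time:", avg_waiting)  # (0 + 4 + 6) / 3
```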
CPU Scheduling Algorithms

CPU scheduling algorithms determine how processes are assigned to the CPU for
execution. Each algorithm has its own advantages, disadvantages, and specific use cases.

1. First-Come, First-Served (FCFS) Scheduling

●​ Type: Non-preemptive
●​ How it works: Processes are scheduled in the order of their arrival (like a queue).
●​ Implementation: Uses a FIFO (First In, First Out) queue.
●​ Advantages:
○​ Simple and easy to implement.
○​ Fair in the sense that requests are served in order of arrival.
●​ Disadvantages:
○​ Convoy Effect: Short processes may wait behind long ones.
○​ Poor average waiting time and response time.
●​ Example: Suppose three processes arrive:
○​ P1 (CPU time: 24 ms), P2 (CPU time: 3 ms), P3 (CPU time: 3 ms).
○​ They are executed in order of arrival, causing P2 and P3 to wait a long time
for P1 to finish.
●​ Use Case: Suitable for batch systems where process completion time isn't critical.
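The convoy effect in this example can be reproduced with a short simulation; the sketch below assumes all three processes arrive at time 0:

```python
# Minimal FCFS sketch for the example above (all processes arrive at time 0).
from collections import deque

ready_queue = deque([("P1", 24), ("P2", 3), ("P3", 3)])  # (name, burst in ms)

clock = 0
waiting = {}
while ready_queue:
    name, burst = ready_queue.popleft()  # FIFO: serve in arrival order
    waiting[name] = clock                # time spent waiting before first run
    clock += burst                       # run to completion (non-preemptive)

print(waiting)                                   # {'P1': 0, 'P2': 24, 'P3': 27}
print("average waiting:", sum(waiting.values()) / len(waiting))  # 17.0
```

The short jobs P2 and P3 wait 24 ms and 27 ms respectively behind the long P1: the convoy effect in numbers.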

2. Shortest Job First (SJF) Scheduling

●​ Type: Can be preemptive or non-preemptive.


●​ How it works: The process with the shortest CPU burst time is scheduled first.
●​ Implementation: Uses a priority queue, where shorter jobs have higher priority.
●​ Advantages:
○ Provably minimizes average waiting time (and hence average turnaround time) when burst times are known.
○ Short processes are not delayed behind long ones.
●​ Disadvantages:
○​ Starvation: Long processes may never get executed if shorter processes
keep arriving.
○​ Requires knowledge of CPU burst time, which is difficult to predict.
●​ Example: If P1 (CPU time: 6 ms), P2 (CPU time: 8 ms), P3 (CPU time: 2 ms) arrive,
P3 is executed first, followed by P1, then P2.
●​ Use Case: Used in batch systems where job runtimes are known in advance.
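A non-preemptive SJF sketch for the same example, assuming all processes arrive at time 0:

```python
# Non-preemptive SJF: pick the shortest remaining job from the ready queue.
processes = [("P1", 6), ("P2", 8), ("P3", 2)]  # (name, burst in ms)

clock = 0
waiting = {}
for name, burst in sorted(processes, key=lambda p: p[1]):  # shortest burst first
    waiting[name] = clock  # everything before us delayed this job
    clock += burst

print(waiting)  # {'P3': 0, 'P1': 2, 'P2': 8}
print("average waiting:", sum(waiting.values()) / len(waiting))
```

The execution order is P3, P1, P2, matching the example; the average waiting time of 10/3 ≈ 3.33 ms is the minimum achievable for these bursts.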

3. Priority Scheduling

●​ Type: Can be preemptive or non-preemptive.


●​ How it works: Each process is assigned a priority, and the CPU is allocated to the
process with the highest priority (lowest priority number = highest priority).
●​ Implementation: Uses a priority queue.
●​ Advantages:
○​ Can be used to implement process importance or urgency.
○​ Useful in real-time systems where tasks have different urgencies.
●​ Disadvantages:
○​ Starvation: Lower-priority processes may never execute if high-priority
processes continue to arrive (solution: use aging, which increases the priority
of a process that has waited too long).
●​ Example: Suppose 3 processes have priorities P1 (priority 1), P2 (priority 2), and P3
(priority 3). P1 will execute first, followed by P2, then P3.
●​ Use Case: Used in real-time systems where priority-based execution is essential
(e.g., emergency system responses).
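A minimal non-preemptive sketch using Python's `heapq`; the burst times are hypothetical, since the example specifies only priorities:

```python
# Non-preemptive priority scheduling: lower number = higher priority,
# matching the convention above. Burst times are hypothetical.
import heapq

heap = [(1, "P1", 10), (2, "P2", 5), (3, "P3", 8)]  # (priority, name, burst)
heapq.heapify(heap)

order = []
while heap:
    priority, name, burst = heapq.heappop(heap)  # highest priority first
    order.append(name)

print(order)  # ['P1', 'P2', 'P3']
```

Aging could be sketched on top of this by periodically decreasing the priority number of entries that have waited too long.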

4. Round-Robin (RR) Scheduling

●​ Type: Preemptive.
●​ How it works: Each process is assigned a time quantum (time slice) during which it
can execute. If the process is not finished, it is moved to the back of the queue.
●​ Implementation: Uses a circular queue (processes are rotated in the queue).
●​ Advantages:
○​ Fairness: All processes get equal CPU time.
○​ Response Time: Good for time-sharing systems (like multitasking).
●​ Disadvantages:
○​ Context-Switch Overhead: If the time quantum is too short, context switches
happen too frequently.
○​ Performance depends on time quantum: If too large, it behaves like FCFS;
if too small, it increases context switch overhead.
●​ Example: Assume 3 processes (P1, P2, P3) have burst times of 5 ms each, and the
time quantum is 2 ms.
○ Round 1: P1, P2, and P3 each run for 2 ms (3 ms remaining each).
○ Round 2: each runs another 2 ms (1 ms remaining each).
○ Round 3: each runs its final 1 ms, finishing all processes at 15 ms.
●​ Use Case: Used in interactive systems, like multitasking operating systems.
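The example above can be traced with a short simulation of the circular queue:

```python
# Round-robin sketch for the example: quantum = 2 ms, three 5 ms bursts.
from collections import deque

quantum = 2
queue = deque([("P1", 5), ("P2", 5), ("P3", 5)])  # (name, remaining burst)

clock = 0
finish = {}
while queue:
    name, remaining = queue.popleft()
    ran = min(quantum, remaining)
    clock += ran
    if remaining - ran > 0:
        queue.append((name, remaining - ran))  # unfinished: back of the queue
    else:
        finish[name] = clock

print(finish)  # {'P1': 13, 'P2': 14, 'P3': 15}
```

Each process finishes in its third visit to the CPU, as the corrected trace in the example shows.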

5. Multilevel Queue (MLQ) Scheduling

●​ Type: Can be preemptive or non-preemptive.


●​ How it works: Processes are divided into multiple queues based on their type
(foreground, background, etc.). Each queue has its own scheduling algorithm.
●​ Implementation: Multiple queues, each with its own scheduling policy (e.g.,
foreground may use RR, and background may use FCFS).
●​ Advantages:
○​ Separates long-running processes from interactive ones.
○​ Provides flexibility in scheduling.
●​ Disadvantages:
○​ Starvation: If high-priority queues are always full, low-priority queues may
never be served.
○​ Complex to implement.
●​ Example: There are two queues — foreground (interactive) uses RR, and
background (batch jobs) uses FCFS. Processes are permanently assigned to a
queue.
●​ Use Case: Used in systems with a mix of batch and interactive jobs, like
time-sharing operating systems.
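A sketch of two fixed queues with strict priority between them; the job names and burst times are hypothetical:

```python
# MLQ sketch: the foreground queue (RR, quantum 2 ms) is always served
# first; the background queue (FCFS) runs only when foreground is empty.
from collections import deque

foreground = deque([("F1", 3), ("F2", 2)])  # interactive jobs, RR
background = deque([("B1", 6)])             # batch job, FCFS

order = []
while foreground or background:
    if foreground:                           # strict priority between queues
        name, remaining = foreground.popleft()
        ran = min(2, remaining)              # RR time quantum
        if remaining - ran > 0:
            foreground.append((name, remaining - ran))
        order.append((name, ran))
    else:
        name, burst = background.popleft()   # FCFS: run to completion
        order.append((name, burst))

print(order)  # [('F1', 2), ('F2', 2), ('F1', 1), ('B1', 6)]
```

Note that B1 is starved until the foreground queue drains, illustrating the starvation drawback above.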

6. Multilevel Feedback Queue (MLFQ) Scheduling

●​ Type: Preemptive.
●​ How it works: Similar to MLQ but with the ability to move processes between
queues. If a process uses too much CPU time, it is moved to a lower-priority queue.
●​ Implementation: Multiple queues with different priority levels. New jobs start in the
highest-priority queue, and if they consume too much CPU time, they move to a
lower-priority queue.
●​ Advantages:
○​ Dynamic Adjustment: Processes are moved to queues based on execution
behavior (aging).
○​ Provides a balance between short and long processes.
●​ Disadvantages:
○​ Complexity: More complex to implement than other algorithms.
○​ Starvation: Processes in low-priority queues may wait indefinitely (solution:
aging).
●​ Example: A system with 3 queues:
○​ Queue 1 (highest priority) uses RR with time quantum = 4 ms.
○​ Queue 2 (medium priority) uses RR with time quantum = 8 ms.
○​ Queue 3 (lowest priority) uses FCFS.
○​ A new process enters Queue 1. If it exceeds 4 ms, it moves to Queue 2, and
if it exceeds 8 ms, it moves to Queue 3.
●​ Use Case: Used in systems where fairness and responsiveness are crucial, like
general-purpose operating systems (e.g., Windows, Linux).
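The three-queue example can be sketched by tracing a single hypothetical 15 ms job as it is demoted:

```python
# MLFQ demotion sketch for the three-queue example: a job needing 15 ms
# starts in Queue 1 (quantum 4), drops to Queue 2 (quantum 8), then Queue 3.
from collections import deque

quanta = {1: 4, 2: 8}                 # Queue 3 is FCFS (runs to completion)
queues = {1: deque(), 2: deque(), 3: deque()}
queues[1].append(("P1", 15))          # (name, remaining burst in ms)

trace = []
for level in (1, 2, 3):
    while queues[level]:
        name, remaining = queues[level].popleft()
        ran = remaining if level == 3 else min(quanta[level], remaining)
        remaining -= ran
        trace.append((name, level, ran))
        if remaining > 0:
            queues[level + 1].append((name, remaining))  # demote

print(trace)  # [('P1', 1, 4), ('P1', 2, 8), ('P1', 3, 3)]
```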
Comparison Table of CPU Scheduling Algorithms

Algorithm        | Preemptive? | Starvation?                | Fairness | Efficiency | Use Case
-----------------|-------------|----------------------------|----------|------------|--------------------------
FCFS             | No          | Yes (convoy effect)        | Low      | Low        | Batch systems
SJF              | Both        | Yes (long jobs starve)     | Medium   | High       | Batch systems
Priority         | Both        | Yes (low priority starves) | Low      | High       | Real-time systems
Round Robin      | Yes         | No                         | High     | Medium     | Time-sharing systems
Multilevel Queue | Both        | Yes (low queue starves)    | Medium   | High       | Mixed interactive systems
MLFQ             | Yes         | No (with aging)            | High     | High       | General-purpose OS

Deadlock Concepts & Handling Techniques

A deadlock is a situation in operating systems where a group of processes are waiting for
resources that are held by each other, causing a permanent blocking condition. In simpler
terms, no process can move forward because each one is waiting for another to release a
resource.

Deadlock Necessary Conditions

For a deadlock to occur, four conditions must be met simultaneously:


1.​ Mutual Exclusion: At least one resource must be held in a non-shareable mode.
2.​ Hold and Wait: A process holding at least one resource is waiting for additional
resources held by other processes.
3.​ No Preemption: A resource allocated to a process cannot be forcibly taken away.
4.​ Circular Wait: A set of processes are waiting for each other in a circular chain (P1 →
P2 → P3 → ... → Pn → P1).

If any one of these conditions is eliminated, a deadlock cannot occur.

Deadlock Handling Techniques

Deadlocks can be handled using four major approaches:

1.​ Deadlock Prevention (proactively prevent deadlock).


2.​ Deadlock Avoidance (carefully avoid deadlock).
3.​ Deadlock Detection and Recovery (allow deadlock but recover from it).
4.​ Ignoring Deadlock (do nothing, used in some systems like Unix and Linux).

1. Deadlock Prevention

The goal is to break at least one of the four necessary conditions to prevent deadlock.​
Techniques to Prevent Deadlock

1.​ Mutual Exclusion:


○​ Make resources shareable (where possible) to avoid the "non-shareable
resource" problem.
○​ Example: Read-only files can be shared.
2.​ Hold and Wait:
○​ Require processes to request all required resources at once before execution
begins.
○​ Drawback: It may lead to low resource utilization and starvation.
3.​ No Preemption:
○​ If a process is holding some resources and requests others that are not
available, it must release its current resources and re-request them later.
○​ Drawback: This can result in frequent context switches and lower system
performance.
4.​ Circular Wait:
○​ Impose an ordering on resource types and require processes to request
resources in increasing order of enumeration.
○​ Example: Printers (R1), Scanners (R2), and Disk Drives (R3) are numbered,
and processes must request them in that order.

Pros: Prevents deadlocks entirely.​


Cons: Can result in inefficient resource utilization and low system throughput.
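The circular-wait technique maps directly onto lock ordering in code. A sketch using Python threads, with a hypothetical global enumeration of two resources:

```python
# Breaking circular wait by global lock ordering: every thread acquires
# locks in ascending enumeration order, so no wait cycle can form.
import threading

R1, R2 = threading.Lock(), threading.Lock()
ORDER = {id(R1): 1, id(R2): 2}  # hypothetical global numbering of resources

def acquire_in_order(*locks):
    for lock in sorted(locks, key=lambda l: ORDER[id(l)]):
        lock.acquire()

def release_all(*locks):
    for lock in locks:
        lock.release()

results = []

def worker(name, *locks):
    acquire_in_order(*locks)   # both threads end up locking R1 before R2
    results.append(name)
    release_all(*locks)

t1 = threading.Thread(target=worker, args=("P1", R1, R2))
t2 = threading.Thread(target=worker, args=("P2", R2, R1))  # reversed request
t1.start(); t2.start(); t1.join(); t2.join()
print(sorted(results))  # ['P1', 'P2'] -- both finish; no deadlock
```

Without the sorting step, the reversed request order in the second thread could produce exactly the two-process deadlock described later in this unit.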
2. Deadlock Avoidance

In deadlock avoidance, the system makes decisions at runtime to ensure that a system
never enters an unsafe state where a deadlock can occur.​
Key Concept: Use a safe state to avoid deadlock.

●​ A state is "safe" if the system can allocate resources to each process in some order
and still avoid deadlock.
●​ If a system moves from a safe state to an unsafe state, deadlock may occur.

Common Algorithm for Deadlock Avoidance

1.​ Banker's Algorithm (proposed by Edsger Dijkstra):


○​ Works like a bank that never loans out more money than it has.
○​ Before allocating resources to a process, the system checks if the resulting
state will still be "safe."
○​ If not, it waits.

How Banker's Algorithm Works:

●​ Each process declares its maximum resource needs before execution.


●​ The system ensures it can fulfill the worst-case needs of every process.
●​ If a resource request leads to an unsafe state, the process is made to wait.

Pros: Ensures that the system never enters a deadlock state.​


Cons: Requires advance knowledge of resource requirements, and runtime checks are
computationally expensive.
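The safety check at the heart of the Banker's Algorithm can be sketched as follows; the resource figures are hypothetical:

```python
# Minimal sketch of the Banker's safety check: a state is safe if some
# execution order lets every process finish with the resources available.
def is_safe(available, allocation, need):
    work = available[:]                   # resource instances currently free
    finished = [False] * len(allocation)
    while True:
        progressed = False
        for i, done in enumerate(finished):
            if not done and all(n <= w for n, w in zip(need[i], work)):
                # Process i can run to completion and release its allocation.
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                progressed = True
        if not progressed:
            return all(finished)

available  = [2, 1]                  # free instances of two resource types
allocation = [[1, 0], [1, 1]]        # what each process currently holds
need       = [[1, 1], [2, 1]]        # max demand minus current allocation

print(is_safe(available, allocation, need))  # True: run P0, then P1
```

A request is granted only if the state that would result still passes this check; otherwise the requesting process waits.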

3. Deadlock Detection and Recovery

Instead of preventing deadlocks, we allow them to happen and then detect and recover from
them.

1.​ Deadlock Detection


○​ The system periodically checks for deadlocks.
○​ Use Resource-Allocation Graph (RAG):
■​ Nodes represent processes and resources.
■​ If the graph contains a cycle, a deadlock may exist.
○​ For single-instance resources, a cycle in the RAG (equivalently, in the derived wait-for graph) is both necessary and sufficient for deadlock.
○​ For multiple-instance resources, a cycle alone does not imply deadlock, so a matrix-based detection algorithm (similar to the Banker's safety check) is used.
2.​ Deadlock Recovery
○​ Once a deadlock is detected, recovery must be performed.
○​ Techniques for Recovery:
■​ Terminate Processes: Kill one or more processes in the cycle to
break the circular wait.
■​ Kill the process with the lowest priority, the shortest remaining
time, or the least number of resources used.
■​ Resource Preemption: Temporarily take resources away from a
process and allocate them to another.
■​ Roll back the affected process to an earlier state and restart it
later.

Pros: Avoids the need for deadlock prevention and avoidance overhead.​
Cons: Detecting deadlocks can be computationally expensive. Recovery may result in
process termination or rollback, leading to loss of progress.
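Cycle detection on a wait-for graph can be sketched with a depth-first search; the sample graphs model the two-process example from this unit:

```python
# Deadlock detection on a wait-for graph (single-instance resources):
# an edge P -> Q means P is waiting for a resource Q holds; cycle = deadlock.
def has_cycle(graph):
    visiting, done = set(), set()

    def dfs(node):
        if node in visiting:
            return True            # back edge: cycle found
        if node in done:
            return False
        visiting.add(node)
        if any(dfs(nbr) for nbr in graph.get(node, [])):
            return True
        visiting.remove(node)
        done.add(node)
        return False

    return any(dfs(node) for node in graph)

# P1 waits for P2 and P2 waits for P1: the circular wait described below.
print(has_cycle({"P1": ["P2"], "P2": ["P1"]}))  # True  -> deadlock
print(has_cycle({"P1": ["P2"], "P2": []}))      # False -> no deadlock
```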

4. Ignore Deadlock (Ostrich Approach)

●​ In this approach, the system assumes that deadlocks will not happen or happen
very rarely.
●​ If deadlocks do occur, the system may crash or be rebooted manually.
●​ Example: Unix and Linux often use this approach since deadlocks are rare.
●​ Use Case: Suitable for desktop or embedded systems where deadlock frequency is
low.

Comparison of Deadlock Handling Techniques


Handling Technique   | Key Idea                                   | How it Works                                  | Overhead | Drawbacks
---------------------|--------------------------------------------|-----------------------------------------------|----------|--------------------------------
Prevention           | Eliminate at least one of the 4 conditions | Imposes constraints on resource allocation    | High     | Poor utilization, starvation
Avoidance            | Ensure a safe state                        | Uses the Banker's Algorithm (run-time checks) | High     | Requires max resource info
Detection & Recovery | Allow, detect, recover                     | Periodically detect cycles and kill processes | Moderate | May require process termination
Ignore               | Do nothing                                 | No prevention, no detection                   | None     | System crash possible

Example of Deadlock
Problem: Two processes (P1, P2) share two resources (R1, R2).

1.​ P1 requests R1 and gets it.


2.​ P2 requests R2 and gets it.
3.​ P1 requests R2 but is blocked since P2 has it.
4.​ P2 requests R1 but is blocked since P1 has it.

Conditions Met:

●​ Mutual Exclusion: R1 and R2 are held exclusively by P1 and P2, respectively.


●​ Hold and Wait: P1 holds R1 and waits for R2; P2 holds R2 and waits for R1.
●​ No Preemption: Resources cannot be forcibly taken.
●​ Circular Wait: P1 → P2 → P1, causing a circular wait.

Summary
Technique            | Key Idea                  | Action                        | Example
---------------------|---------------------------|-------------------------------|---------------------------------------------
Prevention           | Break one of 4 conditions | Request all resources at once | Force users to declare all resources upfront
Avoidance            | Avoid unsafe states       | Use Banker's Algorithm        | Allocate resources only if a safe state exists
Detection & Recovery | Detect then recover       | Kill processes or roll back   | Use RAG to detect cycles and recover
Ignore               | Do nothing                | Assume deadlocks are rare     | Used in Unix/Linux systems
