OS UNIT 3
Types of Schedulers
Schedulers are responsible for selecting processes from the process queue and allocating
system resources. There are three main types of schedulers:
1. Long-Term Scheduler (Job Scheduler)
● Function: Controls the admission of processes into the system (i.e., decides which
processes are brought into memory for processing).
● When it runs: Infrequently, as it is responsible for the decision to add a new process
to the ready queue.
● Objective: Controls the degree of multiprogramming (the number of processes in
memory).
● Role: Ensures a balance between I/O-bound and CPU-bound processes to avoid
bottlenecks.
● Example: When a user runs a program, the long-term scheduler determines if the
program should be brought into main memory.
● Key Metric: Degree of multiprogramming (number of processes in memory).
2. Short-Term Scheduler (CPU Scheduler)
● Function: Selects which process from the ready queue will execute next on the
CPU.
● When it runs: Very frequently, since it must select a new process whenever the CPU
becomes idle or a context switch occurs.
● Objective: Minimize context-switching overhead and maximize CPU utilization.
● Role: Ensures a fair distribution of CPU time among processes.
● Example: When a process goes from "ready" to "running," the short-term scheduler
makes this decision.
● Key Metric: Response time, CPU utilization, and throughput.
3. Medium-Term Scheduler
● Function: Temporarily removes processes from memory (swaps them out) to reduce
the degree of multiprogramming.
● When it runs: Periodically, when the system is overloaded and processes must be
swapped out to disk.
● Objective: Reduce the load on the system and ensure smooth execution of
higher-priority tasks.
● Role: Suspends and resumes processes to optimize the use of CPU and memory.
● Example: If a process is waiting for I/O, it might be swapped out to disk to make
room for other processes.
● Key Metric: Swapping overhead, system responsiveness.
Dispatcher
The dispatcher is responsible for the actual context switch from one process to another, as
directed by the short-term scheduler.
● Role: It loads the selected process onto the CPU by saving the state of the current
process and restoring the state of the new process.
● Components of Dispatching:
○ Context Switching: Save the state of the old process and load the state of
the new process.
○ Switching to User Mode: The process runs in user mode instead of kernel
mode.
○ Jump to the Program Counter (PC): The PC is set to the instruction where
the process was paused.
● Dispatch Latency: The time it takes for the dispatcher to stop one process and start
another.
● Goal: Minimize dispatch latency to improve system responsiveness.
Scheduling Criteria
1. CPU Utilization: Keep the CPU as busy as possible (ideally close to 100%).
2. Throughput: Number of processes that complete execution per unit time. Higher
throughput is preferred.
3. Turnaround Time: Total time taken from process submission to completion. It
includes waiting time, execution time, and I/O time (see the worked example after this list).
4. Waiting Time: The total time a process spends in the ready queue.
5. Response Time: Time from process submission until the first response is produced
(important in interactive systems).
6. Fairness: Ensure that each process gets a fair share of the CPU without indefinite
postponement.
7. Context-Switching Overhead: Minimize context-switching since it leads to overhead
(CPU cycles spent switching processes instead of executing them).
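Using the definitions above, a small worked example (made-up times for a single process, with no I/O) shows how these metrics relate:

```python
# Worked example with made-up numbers: a process arrives at t = 0 ms, first
# gets the CPU at t = 5 ms, runs a 10 ms burst, and completes at t = 15 ms.
arrival, start, burst, completion = 0, 5, 10, 15

turnaround = completion - arrival   # submission -> completion = 15 ms
waiting = turnaround - burst        # time spent in the ready queue = 5 ms (no I/O assumed)
response = start - arrival          # submission -> first response = 5 ms

print(f"turnaround={turnaround} ms, waiting={waiting} ms, response={response} ms")
```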
CPU Scheduling Algorithms
CPU scheduling algorithms determine how processes are assigned to the CPU for
execution. Each algorithm has its own advantages, disadvantages, and specific use cases.
1. First-Come, First-Served (FCFS)
● Type: Non-preemptive
● How it works: Processes are scheduled in the order of their arrival (like a queue).
● Implementation: Uses a FIFO (First In, First Out) queue.
● Advantages:
○ Simple and easy to implement.
○ Fair in the sense that requests are served in order of arrival.
● Disadvantages:
○ Convoy Effect: Short processes may wait behind long ones.
○ Poor average waiting time and response time.
● Example: Suppose three processes arrive:
○ P1 (CPU time: 24 ms), P2 (CPU time: 3 ms), P3 (CPU time: 3 ms).
○ They are executed in order of arrival, causing P2 and P3 to wait a long time
for P1 to finish (the convoy effect; see the sketch after this list).
● Use Case: Suitable for batch systems where process completion time isn't critical.
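To make the convoy effect concrete, here is a minimal FCFS sketch in Python (burst times taken from the example above; all processes assumed to arrive at time 0). It compares waiting times when the long job runs first versus last.

```python
# Minimal FCFS sketch: processes run to completion in arrival order.
def fcfs_waiting_times(bursts):
    """Return per-process waiting times when jobs run in the given order."""
    waiting, elapsed = [], 0
    for burst in bursts:
        waiting.append(elapsed)   # each job waits for everything queued before it
        elapsed += burst
    return waiting

order_1 = [("P1", 24), ("P2", 3), ("P3", 3)]   # long job first: convoy effect
order_2 = [("P2", 3), ("P3", 3), ("P1", 24)]   # short jobs first

for order in (order_1, order_2):
    names = [name for name, _ in order]
    waits = fcfs_waiting_times([burst for _, burst in order])
    print(names, "waiting times:", waits, "average:", sum(waits) / len(waits))
# Order P1,P2,P3 gives waits 0, 24, 27 (average 17 ms);
# order P2,P3,P1 gives waits 0, 3, 6 (average 3 ms).
```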
2. Round Robin (RR) Scheduling
● Type: Preemptive.
● How it works: Each process is assigned a time quantum (time slice) during which it
can execute. If the process is not finished, it is moved to the back of the queue.
● Implementation: Uses a circular queue (processes are rotated in the queue).
● Advantages:
○ Fairness: All processes get equal CPU time.
○ Response Time: Good for time-sharing systems (like multitasking).
● Disadvantages:
○ Context-Switch Overhead: If the time quantum is too short, context switches
happen too frequently.
○ Performance depends on time quantum: If too large, it behaves like FCFS;
if too small, it increases context switch overhead.
● Example: Assume 3 processes (P1, P2, P3) have burst times of 5 ms each, and the
time quantum is 2 ms (see the sketch after this list).
○ Pass 1: P1, P2, and P3 each run for 2 ms (3 ms remaining each).
○ Pass 2: P1, P2, and P3 each run for 2 ms (1 ms remaining each).
○ Pass 3: P1, P2, and P3 each run their final 1 ms, finishing all processes.
● Use Case: Used in interactive systems, like multitasking operating systems.
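A short Round Robin sketch (burst times and the 2 ms quantum taken from the example above; arrival order P1, P2, P3 assumed) reproduces the rotation described in the passes:

```python
from collections import deque

# Round Robin sketch: each process runs for at most one quantum, then rotates
# to the back of the ready queue until its remaining burst reaches zero.
def round_robin(bursts, quantum):
    queue = deque(bursts.items())        # (name, remaining burst time)
    clock, finish = 0, {}
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)
        clock += run
        remaining -= run
        if remaining > 0:
            queue.append((name, remaining))   # not done: back of the queue
        else:
            finish[name] = clock              # record completion time
    return finish

# Example from above: three 5 ms bursts, 2 ms quantum.
print(round_robin({"P1": 5, "P2": 5, "P3": 5}, quantum=2))
# -> {'P1': 13, 'P2': 14, 'P3': 15}
```

With a much larger quantum this loop degenerates into FCFS; with a very small one, the number of rotations (and therefore context switches) grows, which is exactly the trade-off noted in the disadvantages above.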
3. Multilevel Feedback Queue (MLFQ) Scheduling
● Type: Preemptive.
● How it works: Similar to multilevel queue (MLQ) scheduling, but processes can move
between queues. If a process uses too much CPU time, it is moved to a lower-priority queue.
● Implementation: Multiple queues with different priority levels. New jobs start in the
highest-priority queue, and if they consume too much CPU time, they move to a
lower-priority queue.
● Advantages:
○ Dynamic Adjustment: Processes are moved to queues based on execution
behavior (aging).
○ Provides a balance between short and long processes.
● Disadvantages:
○ Complexity: More complex to implement than other algorithms.
○ Starvation: Processes in low-priority queues may wait indefinitely (solution:
aging).
● Example: A system with 3 queues:
○ Queue 1 (highest priority) uses RR with time quantum = 4 ms.
○ Queue 2 (medium priority) uses RR with time quantum = 8 ms.
○ Queue 3 (lowest priority) uses FCFS.
○ A new process enters Queue 1. If it uses up its 4 ms quantum without finishing, it
moves to Queue 2, and if it then uses up the 8 ms quantum, it moves to Queue 3
(see the sketch after this list).
● Use Case: Used in systems where fairness and responsiveness are crucial, like
general-purpose operating systems (e.g., Windows, Linux).
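The following simplified MLFQ sketch uses the three queues and quanta from the example above; the job names and burst times are hypothetical, and preemption by newly arriving jobs is ignored to keep it short.

```python
from collections import deque

# Simplified MLFQ sketch: new jobs enter the top queue; a job that uses its
# full quantum without finishing is demoted one level. The lowest level runs
# FCFS (modelled as run-to-completion). Quanta follow the example above.
QUANTA = [4, 8, None]  # None means FCFS at the lowest level

def mlfq(jobs):
    queues = [deque(), deque(), deque()]
    for name, burst in jobs:               # hypothetical job set
        queues[0].append((name, burst))    # every new job starts at the highest priority
    clock, finish = 0, {}
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)  # highest non-empty queue
        name, remaining = queues[level].popleft()
        quantum = QUANTA[level]
        run = remaining if quantum is None else min(quantum, remaining)
        clock += run
        remaining -= run
        if remaining == 0:
            finish[name] = clock
        else:
            queues[min(level + 1, 2)].append((name, remaining))  # demote one level
    return finish

# The short job finishes quickly while the long job sinks to the FCFS queue.
print(mlfq([("short", 3), ("medium", 10), ("long", 30)]))
# -> {'short': 3, 'medium': 17, 'long': 43}
```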
Comparison of CPU Scheduling Algorithms
● FCFS: non-preemptive; simple and fair by arrival order, but suffers from the convoy
effect; suited to batch systems.
● Round Robin: preemptive; gives every process an equal time slice and good response
time, but performance depends on the quantum; suited to interactive, time-sharing systems.
● Multilevel Feedback Queue: preemptive; adapts to process behavior and balances short
and long jobs, but is complex and needs aging to avoid starvation; used in
general-purpose operating systems.
Deadlocks
A deadlock is a situation in an operating system where a group of processes is permanently
blocked because each process is waiting for a resource held by another process in the group.
In simpler terms, no process can move forward because each one is waiting for another to
release a resource.
1. Deadlock Prevention
The goal is to break at least one of the four necessary conditions to prevent deadlock.
Techniques to Prevent Deadlock
● Mutual Exclusion: Make resources sharable where possible so they do not have to be
held exclusively.
● Hold and Wait: Require a process to request all of its resources at once, or to release
held resources before requesting new ones.
● No Preemption: Allow the system to take resources back from a process that is waiting.
● Circular Wait: Impose a global ordering on resources and require processes to request
them in that order (see the sketch below).
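As an illustration of breaking circular wait, the sketch below uses two hypothetical Python locks that both threads always acquire in the same global order, so a cycle of "each holds one and waits for the other" cannot form.

```python
import threading

# Prevention sketch: break the circular-wait condition by always acquiring
# locks in a fixed global order (here: lock_a before lock_b).
lock_a = threading.Lock()
lock_b = threading.Lock()

def worker(name):
    # Both threads follow the same acquisition order, so no circular wait can arise.
    with lock_a:
        with lock_b:
            print(f"{name} acquired both locks")

t1 = threading.Thread(target=worker, args=("T1",))
t2 = threading.Thread(target=worker, args=("T2",))
t1.start(); t2.start()
t1.join(); t2.join()
```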
2. Deadlock Avoidance
In deadlock avoidance, the system makes decisions at runtime to ensure that it never
enters an unsafe state in which a deadlock could occur.
Key Concept: Use a safe state to avoid deadlock.
● A state is "safe" if the system can allocate resources to each process in some order
and still avoid deadlock.
● If a system moves from a safe state to an unsafe state, deadlock may occur, so a
resource request is granted only if the resulting state is still safe (a safety-check
sketch follows below).
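A minimal safety-check sketch in the spirit of the Banker's algorithm (single resource type; the allocation and need values are made up) shows how a system can test whether a state is safe before granting a request:

```python
# Safety-check sketch (single resource type, hypothetical values): a state is
# safe if the processes can all finish in *some* order using only the currently
# available resources plus what each finishing process returns.
def is_safe(available, allocation, need):
    work = available
    finished = [False] * len(allocation)
    order = []
    progressed = True
    while progressed:
        progressed = False
        for i, done in enumerate(finished):
            if not done and need[i] <= work:
                work += allocation[i]      # process i finishes and returns its resources
                finished[i] = True
                order.append(i)
                progressed = True
    return all(finished), order

# Hypothetical state: 3 free units; per-process current allocation and remaining need.
safe, order = is_safe(available=3, allocation=[1, 4, 2], need=[2, 3, 5])
print("Safe:", safe, "completion order:", order)   # Safe: True, order [0, 1, 2]
```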
3. Deadlock Detection and Recovery
Instead of preventing deadlocks, we allow them to happen, then detect and recover from
them (see the detection sketch below).
Pros: Avoids the need for deadlock prevention and avoidance overhead.
Cons: Detecting deadlocks can be computationally expensive. Recovery may result in
process termination or rollback, leading to loss of progress.
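As a sketch of how detection might work for single-instance resources, the snippet below builds a hypothetical wait-for graph and searches it for a cycle, which signals a deadlock:

```python
# Detection sketch: model a wait-for graph (process -> processes it waits on)
# and check for a cycle with depth-first search. The edges are hypothetical.
def has_cycle(graph):
    visiting, visited = set(), set()

    def dfs(node):
        visiting.add(node)
        for nxt in graph.get(node, []):
            if nxt in visiting:
                return True               # back edge: a cycle (deadlock) exists
            if nxt not in visited and dfs(nxt):
                return True
        visiting.discard(node)
        visited.add(node)
        return False

    return any(dfs(n) for n in graph if n not in visited)

wait_for = {"P1": ["P2"], "P2": ["P3"], "P3": ["P1"]}   # P1 -> P2 -> P3 -> P1
print("Deadlock detected:", has_cycle(wait_for))        # True
```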
4. Deadlock Ignorance (Ostrich Approach)
● In this approach, the system assumes that deadlocks will not happen or happen
very rarely.
● If deadlocks do occur, the system may crash or be rebooted manually.
● Example: Unix and Linux often use this approach since deadlocks are rare.
● Use Case: Suitable for desktop or embedded systems where deadlock frequency is
low.
Example of Deadlock
Problem: Two processes (P1 and P2) share two resources (R1 and R2). P1 acquires R1 and
then requests R2, while P2 acquires R2 and then requests R1. Each process now holds one
resource and waits for the other, so neither can proceed (a runnable sketch of this
scenario follows below).
Conditions Met:
● Mutual Exclusion: R1 and R2 can each be held by only one process at a time.
● Hold and Wait: Each process holds one resource while waiting for the other.
● No Preemption: Neither resource can be forcibly taken from the process holding it.
● Circular Wait: P1 waits for R2 (held by P2) and P2 waits for R1 (held by P1), forming a cycle.
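The scenario can be reproduced with two Python threads and two locks; acquisition timeouts are added (an assumption, not part of the classic example) so the demo reports the deadlock instead of hanging:

```python
import threading, time

# Sketch of the scenario above: P1 holds R1 and requests R2 while P2 holds R2
# and requests R1. Timeouts let the demo terminate and report the deadlock.
r1, r2 = threading.Lock(), threading.Lock()

def process(name, first, second):
    with first:
        time.sleep(0.1)                   # give the other thread time to grab its lock
        if second.acquire(timeout=1):     # without a timeout, both would block here forever
            second.release()
            print(f"{name}: finished normally")
        else:
            print(f"{name}: could not get second resource (deadlock)")

p1 = threading.Thread(target=process, args=("P1", r1, r2))
p2 = threading.Thread(target=process, args=("P2", r2, r1))
p1.start(); p2.start()
p1.join(); p2.join()
```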
Summary
● Prevention: Key Idea: break one of the four necessary conditions; Action: constrain how
resources are requested (e.g., request everything at once or in a fixed global order).
● Avoidance: Key Idea: never enter an unsafe state; Action: grant a request only if the
resulting state is safe; Example: a Banker's-style safety check.
● Detection & Recovery: Key Idea: detect, then recover; Action: kill processes or roll back;
Example: use a Resource Allocation Graph (RAG) to detect cycles and recover.
● Ignorance: Key Idea: assume deadlocks rarely happen; Action: do nothing (reboot if one
occurs); Example: the approach commonly taken by Unix and Linux.