
Operating Systems - Unit-II

CPU Scheduling; Process Management and Synchronization

February 23, 2024

Unit-II Syllabus

CPU Scheduling
Scheduling Criteria
Scheduling Algorithms
Multiple-Processor Scheduling
System call interface for process management: fork, exit, wait, waitpid, exec
Process Management and Synchronization
The Critical Section Problem
Synchronization Hardware
Semaphores
Classical Problems of Synchronization
Critical Regions
Monitors


Basic Concepts

Maximum CPU utilization obtained with multiprogramming


CPU–I/O Burst Cycle – Process execution consists of a cycle of CPU
execution and I/O wait
CPU burst followed by I/O burst
CPU burst distribution is of main concern


CPU Scheduler

Short-term scheduler selects from among the processes in the ready queue, and allocates the CPU to one of them
Queue may be ordered in various ways
CPU scheduling decisions may take place when a process:
1. Switches from running to waiting state
2. Switches from running to ready state
3. Switches from waiting to ready state
4. Terminates
Scheduling only under conditions 1 and 4 is nonpreemptive
All other scheduling is preemptive
Consider access to shared data
Consider preemption while in kernel mode
Consider interrupts occurring during crucial OS activities


Dispatcher

Dispatcher module gives control of the CPU to the process selected by the short-term scheduler; this involves:
switching context
switching to user mode
jumping to the proper location in the user program to restart that
program
Dispatch latency – time it takes for the dispatcher to stop one process
and start another running


Scheduling Criteria

CPU utilization – keep the CPU as busy as possible
Throughput – number of processes that complete their execution per time unit
Turnaround time – amount of time to execute a particular process
Waiting time – amount of time a process has been waiting in the
ready queue
Response time – amount of time it takes from when a request was
submitted until the first response is produced, not output (for
time-sharing environment)


Scheduling Algorithm Optimization Criteria

Max CPU utilization


Max throughput
Min turnaround time
Min waiting time
Min response time


Scheduling Algorithms

First-Come, First-Served (FCFS) Scheduling
Shortest Job First (SJF) Scheduling
Shortest Remaining Time First Scheduling (preemptive SJF)
Priority Scheduling
Round Robin Scheduling


FCFS Scheduling
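The original Gantt-chart figure is not reproduced here; as a worked illustration, assume three processes arrive at time 0 in the order P1, P2, P3 with CPU burst times of 24, 3, and 3 ms:

    Gantt chart:  | P1 (0-24) | P2 (24-27) | P3 (27-30) |

    Waiting times: P1 = 0, P2 = 24, P3 = 27
    Average waiting time = (0 + 24 + 27) / 3 = 17 ms

If the same processes instead arrive in the order P2, P3, P1:

    Gantt chart:  | P2 (0-3) | P3 (3-6) | P1 (6-30) |

    Average waiting time = (6 + 0 + 3) / 3 = 3 ms

This is the convoy effect: short processes stuck behind a long one inflate the average waiting time.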


SJF Scheduling

Associate with each process the length of its next CPU burst
Use these lengths to schedule the process with the shortest time
SJF is optimal – gives minimum average waiting time for a given set
of processes
The difficulty is knowing the length of the next CPU request
Could ask the user
Preemptive version called shortest-remaining-time-first
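The original example figure is missing; as an illustration with assumed burst times, consider P1 = 6, P2 = 8, P3 = 7, P4 = 3, all arriving at time 0:

    SJF order:  | P4 (0-3) | P1 (3-9) | P3 (9-16) | P2 (16-24) |

    Waiting times: P1 = 3, P2 = 16, P3 = 9, P4 = 0
    Average waiting time = (3 + 16 + 9 + 0) / 4 = 7 ms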


Shortest Remaining Time First Scheduling
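The figure is missing; a worked illustration with assumed arrival and burst times:

    Process   Arrival   Burst
    P1        0         8
    P2        1         4
    P3        2         9
    P4        3         5

    Gantt chart:  | P1 (0-1) | P2 (1-5) | P4 (5-10) | P1 (10-17) | P3 (17-26) |

P1 is preempted at t = 1 because P2's burst (4) is shorter than P1's remaining time (7).

    Average waiting time = [(10-1) + (1-1) + (17-2) + (5-3)] / 4 = 26 / 4 = 6.5 ms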


Priority Scheduling

A priority number (integer) is associated with each process
The CPU is allocated to the process with the highest priority (smallest integer = highest priority)
Preemptive
Nonpreemptive
SJF is priority scheduling where priority is the inverse of the predicted next CPU burst time
Problem – Starvation – low-priority processes may never execute
Solution – Aging – as time progresses, increase the priority of the process
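The example figure is missing; as an illustration with assumed bursts and priorities (all arriving at time 0, smaller number = higher priority):

    Process   Burst   Priority
    P1        10      3
    P2        1       1
    P3        2       4
    P4        1       5
    P5        5       2

    Gantt chart:  | P2 (0-1) | P5 (1-6) | P1 (6-16) | P3 (16-18) | P4 (18-19) |

    Average waiting time = (6 + 0 + 16 + 18 + 1) / 5 = 8.2 ms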


Round Robin Scheduling

Each process gets a small unit of CPU time (time quantum q), usually 10-100 milliseconds. After this time has elapsed, the process is preempted and added to the end of the ready queue.
If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once. No process waits more than (n-1)q time units.
Timer interrupts every quantum to schedule the next process
Performance
q large – RR behaves like FCFS (FIFO)
q small – q must still be large with respect to context-switch time, otherwise the switching overhead is too high
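A worked illustration with assumed values, q = 4 ms and the same P1 = 24, P2 = 3, P3 = 3 used for FCFS:

    Gantt chart:  | P1 (0-4) | P2 (4-7) | P3 (7-10) | P1 (10-14) | P1 (14-18) | P1 (18-22) | P1 (22-26) | P1 (26-30) |

P2 and P3 finish within their first quantum; P1 then runs to completion in 4 ms chunks.

    Waiting times: P1 = 10 - 4 = 6, P2 = 4, P3 = 7
    Average waiting time = (6 + 4 + 7) / 3 ≈ 5.67 ms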


Multilevel Queue Scheduling

Ready queue is partitioned into separate queues, e.g.:
foreground (interactive)
background (batch)
Each process is permanently assigned to a given queue
Each queue has its own scheduling algorithm:
foreground – RR
background – FCFS
Scheduling must be done between the queues:
Fixed priority scheduling (i.e., serve all from foreground, then from background); possibility of starvation
Time slice – each queue gets a certain amount of CPU time which it can schedule amongst its processes, e.g., 80% to foreground in RR and 20% to background in FCFS


Multilevel Feedback Queue Scheduling

A process can move between the various queues; aging can be implemented this way
Multilevel-feedback-queue scheduler defined by the following parameters:
number of queues
scheduling algorithms for each queue
method used to determine when to upgrade a process
method used to determine when to demote a process
method used to determine which queue a process will enter when that process needs service


Thread Scheduling

Distinction between user-level and kernel-level threads
When threads are supported, it is threads that are scheduled, not processes
In the many-to-one and many-to-many models, the thread library schedules user-level threads to run on an LWP
Known as process-contention scope (PCS) since scheduling competition is within the process
Typically done via priority set by the programmer
Kernel thread scheduled onto an available CPU is system-contention scope (SCS) – competition among all threads in the system


PThread Scheduling

API allows specifying either PCS or SCS during thread creation
PTHREAD_SCOPE_PROCESS schedules threads using PCS scheduling
PTHREAD_SCOPE_SYSTEM schedules threads using SCS scheduling
Can be limited by OS – Linux and Mac OS X only allow PTHREAD_SCOPE_SYSTEM
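A minimal sketch of querying and setting the contention scope with the Pthreads API (error handling mostly omitted; pthread_attr_setscope may fail with ENOTSUP on systems that support only one scope):

    #include <pthread.h>
    #include <stdio.h>

    void *runner(void *param)
    {
        /* thread work would go here */
        pthread_exit(0);
    }

    int main(void)
    {
        pthread_t tid;
        pthread_attr_t attr;
        int scope;

        pthread_attr_init(&attr);

        /* query the default contention scope */
        if (pthread_attr_getscope(&attr, &scope) == 0)
            printf("default scope: %s\n",
                   scope == PTHREAD_SCOPE_SYSTEM ? "SCS" : "PCS");

        /* request system contention scope (SCS) */
        pthread_attr_setscope(&attr, PTHREAD_SCOPE_SYSTEM);

        pthread_create(&tid, &attr, runner, NULL);
        pthread_join(tid, NULL);
        return 0;
    }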


Multiple Processor Scheduling

CPU scheduling more complex when multiple CPUs are available


Homogeneous processors within a multiprocessor
Asymmetric multiprocessing – only one processor accesses the
system data structures, alleviating the need for data sharing
Symmetric multiprocessing (SMP) – each processor is
self-scheduling, all processes in common ready queue, or each has its
own private queue of ready processes
Currently, most common
Processor affinity – process has affinity for processor on which it is
currently running
soft affinity
hard affinity
Variations including processor sets


Multiple-Processor Scheduling – Load Balancing

If SMP, need to keep all CPUs loaded for efficiency


Load balancing attempts to keep workload evenly distributed
Push migration – periodic task checks load on each processor, and if imbalance is found pushes tasks from the overloaded CPU to other CPUs
Pull migration – idle processor pulls a waiting task from a busy processor


Multicore Processors

Recent trend to place multiple processor cores on same physical chip


Faster and consumes less power
Multiple threads per core also growing
Takes advantage of memory stall to make progress on another thread
while memory retrieve happens


Multithreaded Multicore System


Real-Time CPU Scheduling

Can present obvious challenges


Soft real-time systems – no guarantee as to when critical real-time
process will be scheduled
Hard real-time systems – task must be serviced by its deadline
Two types of latencies affect performance
Interrupt latency – time from arrival of interrupt to start of routine
that services interrupt
Dispatch latency – time for schedule to take current process off CPU
and switch to another


Real-Time CPU Scheduling

Conflict phase of dispatch latency:


Preemption of any process running in kernel mode
Release by low-priority process of resources needed by high-priority
processes


Linux Scheduling

Prior to kernel version 2.5, ran a variation of the standard UNIX scheduling algorithm
Version 2.5 moved to constant-order O(1) scheduling time
Preemptive, priority based
Two priority ranges: time-sharing and real-time
Real-time range from 0 to 99 and nice value from 100 to 140
Map into global priority with numerically lower values indicating higher priority (higher priority gets larger q)
Task runnable as long as time is left in its time slice (active)
If no time left (expired), not runnable until all other tasks use their slices
All runnable tasks tracked in a per-CPU runqueue data structure
Two priority arrays (active, expired)
Tasks indexed by priority
When no more active, arrays are exchanged
Worked well, but poor response times for interactive processes

Linux Scheduling

Completely Fair Scheduler (CFS)
Scheduling classes
Each has specific priority
Scheduler picks highest-priority task in highest scheduling class
Rather than a quantum based on fixed time allotments, allocation is based on a proportion of CPU time
2 scheduling classes included, others can be added
default
real-time
Quantum calculated based on nice value, from -20 to +19
Lower value is higher priority
Calculates target latency – interval of time during which each task should run at least once
Target latency can increase if, say, the number of active tasks increases


Linux Scheduling

CFS scheduler maintains per-task virtual run time in variable vruntime
Associated with decay factor based on priority of task – lower priority implies higher decay rate
Normal default priority yields virtual run time = actual run time
To decide next task to run, scheduler picks the task with the lowest virtual run time


Windows Scheduling

Windows uses priority-based preemptive scheduling


Highest-priority thread runs next
Dispatcher is scheduler
Thread runs until (1) blocks, (2) uses time slice, (3) preempted by
higher-priority thread
Real-time threads can preempt non-real-time
32-level priority scheme
Variable class is 1-15, real-time class is 16-31
Priority 0 is memory-management thread
Queue for each priority
If no runnable thread, runs idle thread


System Calls

fork()
exec()
wait()
exit()


fork()

A system call that creates a new process identical to the calling one
Makes a copy of the parent's text, data, stack, and heap; the child starts executing on that new copy
fork() returns twice: 0 in the child, and the child's PID in the parent
Uses of fork()
To create a parallel program with multiple processes (e.g., a web server forks a process for each HTTP request)
To launch a new program using the exec() family of functions (e.g., the Linux shell forks an 'ls' process)
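The figure is missing; a minimal runnable sketch of fork():

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/wait.h>

    int main(void)
    {
        pid_t pid = fork();          /* create a child process */

        if (pid < 0) {               /* fork failed */
            perror("fork");
            exit(1);
        } else if (pid == 0) {       /* child: fork() returned 0 */
            printf("child:  pid=%d, parent=%d\n", getpid(), getppid());
        } else {                     /* parent: fork() returned child's PID */
            printf("parent: pid=%d, child=%d\n", getpid(), pid);
            wait(NULL);              /* wait for the child to finish */
        }
        return 0;
    }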


exec()

The exec() family of system calls runs a new program inside the calling process.
When exec() succeeds, the current process image is replaced by the newly loaded program; the process keeps its PID, but its text segment, data segment, heap, and stack are all overwritten.
In other words, exec() does not create a new process, and control never returns to the old program unless the call fails.
exec() is typically used in the child right after fork(), so the parent continues while the child runs the new program.
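A minimal sketch combining fork() and exec(); the child replaces itself with the ls program:

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void)
    {
        pid_t pid = fork();

        if (pid == 0) {
            /* child: replace this process image with /bin/ls */
            execlp("ls", "ls", "-l", (char *)NULL);
            /* reached only if exec failed */
            perror("execlp");
            _exit(1);
        }
        wait(NULL);                  /* parent waits for ls to finish */
        printf("child ls finished\n");
        return 0;
    }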


wait()

After creating a child process, the parent may call wait() to suspend its own execution until a child terminates.
wait() returns the terminated child's PID and, through its argument, the child's exit status; waitpid() does the same for one specific child.
Once the child process has completed its execution, control returns to the parent process.
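A minimal sketch of collecting a child's exit status with waitpid():

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void)
    {
        pid_t pid = fork();

        if (pid == 0) {
            exit(7);                     /* child terminates with status 7 */
        } else {
            int status;
            /* waitpid() waits for this specific child;
               wait(&status) would wait for any child */
            waitpid(pid, &status, 0);
            if (WIFEXITED(status))
                printf("child %d exited with status %d\n",
                       pid, WEXITSTATUS(status));
        }
        return 0;
    }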


exit()

exit() is a system call used to end a program's execution.
It terminates the whole process, including all of its threads, and its exit status is reported back to the parent via wait()/waitpid().
The operating system reclaims the resources used by the process after exit() is called.


Process Synchronization

Processes can execute concurrently


May be interrupted at any time, partially completing execution
Concurrent access to shared data may result in data inconsistency
Maintaining data consistency requires mechanisms to ensure the
orderly execution of cooperating processes
Illustration of the problem
Suppose that we wanted to provide a solution to the consumer-producer
problem that fills all the buffers. We can do so by having an integer
counter that keeps track of the number of full buffers. Initially, counter is
set to 0. It is incremented by the producer after it produces a new buffer
and is decremented by the consumer after it consumes a buffer.


Bounded Buffer – Producer and Consumer
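The code figure is missing; the standard shared-counter producer and consumer pair (item, next_produced, and next_consumed are assumed declarations) looks like this:

    /* shared data */
    #define BUFFER_SIZE 10
    item buffer[BUFFER_SIZE];
    int in = 0, out = 0;
    int counter = 0;

    /* producer */
    while (true) {
        /* produce an item in next_produced */
        while (counter == BUFFER_SIZE)
            ;   /* buffer full: do nothing */
        buffer[in] = next_produced;
        in = (in + 1) % BUFFER_SIZE;
        counter++;
    }

    /* consumer */
    while (true) {
        while (counter == 0)
            ;   /* buffer empty: do nothing */
        next_consumed = buffer[out];
        out = (out + 1) % BUFFER_SIZE;
        counter--;
        /* consume the item in next_consumed */
    }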


Race Condition
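The original illustration is missing; the standard machine-level expansion of counter++ and counter-- shows the race:

    counter++ compiles to:             counter-- compiles to:
      register1 = counter                register2 = counter
      register1 = register1 + 1          register2 = register2 - 1
      counter = register1                counter = register2

One possible interleaving with counter = 5 initially:

    T0: producer  register1 = counter         (register1 = 5)
    T1: producer  register1 = register1 + 1   (register1 = 6)
    T2: consumer  register2 = counter         (register2 = 5)
    T3: consumer  register2 = register2 - 1   (register2 = 4)
    T4: producer  counter = register1         (counter = 6)
    T5: consumer  counter = register2         (counter = 4)

The final value is 4 (or 6 under another order), not the correct 5.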


Critical Section Problem

Consider a system of n processes P0, P1, ..., Pn-1
Each process has a critical section segment of code
Process may be changing common variables, updating a table, writing a file, etc.
When one process is in its critical section, no other process may be executing in its critical section
The critical-section problem is to design a protocol to solve this
Each process must ask permission to enter its critical section in an entry section, may follow the critical section with an exit section, then the remainder section


Critical Section

General structure of process Pi
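The figure is missing; the general structure is:

    do {
        /* entry section */

            /* critical section */

        /* exit section */

            /* remainder section */

    } while (true);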


Algorithm for Process Pi
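The code figure is missing; a sketch of the classic two-process, turn-based attempt this slide shows (j denotes the other process):

    do {
        while (turn == j)
            ;   /* busy wait until it is Pi's turn */

            /* critical section */

        turn = j;   /* hand the turn to Pj */

            /* remainder section */

    } while (true);

This guarantees mutual exclusion but not progress: if Pj never wants to enter, Pi is still blocked waiting for its turn.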


Solution to Critical-Section Problem

1. Mutual Exclusion – If process Pi is executing in its critical section, then no other process can be executing in its critical section
2. Progress – If no process is executing in its critical section and there exist some processes that wish to enter their critical sections, then the selection of the process that will enter the critical section next cannot be postponed indefinitely
3. Bounded Waiting – A bound must exist on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted
Assume that each process executes at a nonzero speed
No assumption concerning the relative speed of the n processes


Critical-Section Handling in OS

Two approaches depending on whether the kernel is preemptive or non-preemptive
Preemptive – allows preemption of a process when running in kernel mode
Non-preemptive – runs until it exits kernel mode, blocks, or voluntarily yields the CPU; essentially free of race conditions in kernel mode


Peterson’s Solution

Good algorithmic description of solving the problem
Two-process solution
Assume that the load and store machine-language instructions are atomic; that is, they cannot be interrupted
The two processes share two variables:
int turn;
boolean flag[2];
The variable turn indicates whose turn it is to enter the critical section
The flag array is used to indicate if a process is ready to enter the critical section: flag[i] = true implies that process Pi is ready


Algorithm for Pi
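The code figure is missing; Peterson's algorithm for process Pi (with j the other process) is:

    do {
        flag[i] = true;    /* Pi is ready to enter */
        turn = j;          /* politely give the turn away */
        while (flag[j] && turn == j)
            ;   /* busy wait */

            /* critical section */

        flag[i] = false;   /* Pi leaves */

            /* remainder section */

    } while (true);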


Peterson's Solution

Provable that the three CS requirements are met:
1. Mutual exclusion is preserved: Pi enters its CS only if either flag[j] = false or turn = i
2. The progress requirement is satisfied
3. The bounded-waiting requirement is met


Synchronization Hardware

Many systems provide hardware support for implementing the critical section code.
All solutions below are based on the idea of locking
Protecting critical regions via locks
Uniprocessors – could disable interrupts
Currently running code would execute without preemption
Generally too inefficient on multiprocessor systems
Operating systems using this are not broadly scalable
Modern machines provide special atomic hardware instructions
Atomic = non-interruptible
Either test memory word and set value
Or swap contents of two memory words


Solution to Critical-section Problem Using Locks
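The figure is missing; a sketch of the lock-based structure it shows:

    do {
        /* acquire lock */

            /* critical section */

        /* release lock */

            /* remainder section */

    } while (true);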


test_and_set Instruction

1. Executed atomically
2. Returns the original value of the passed parameter
3. Sets the new value of the passed parameter to true
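The definition figure is missing; the standard pseudocode (executed atomically by hardware) is:

    boolean test_and_set(boolean *target)
    {
        boolean rv = *target;   /* save the original value */
        *target = true;         /* set to true unconditionally */
        return rv;              /* return the original value */
    }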


Solution using test_and_set()

Shared boolean variable lock, initialized to false
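The code figure is missing; the standard structure is:

    do {
        while (test_and_set(&lock))
            ;   /* busy wait until lock was false */

            /* critical section */

        lock = false;

            /* remainder section */

    } while (true);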


compare_and_swap Instruction

Definition:
1. Executed atomically
2. Returns the original value of the passed parameter value
3. Sets the variable value to the passed parameter new_value, but only if value == expected. That is, the swap takes place only under this condition.
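The definition figure is missing; the standard pseudocode (executed atomically by hardware) is:

    int compare_and_swap(int *value, int expected, int new_value)
    {
        int temp = *value;          /* original value, always returned */
        if (*value == expected)
            *value = new_value;     /* swap only if value == expected */
        return temp;
    }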

Solution using compare_and_swap

Shared integer lock initialized to 0
Solution:
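The code figure is missing; the standard structure is:

    do {
        while (compare_and_swap(&lock, 0, 1) != 0)
            ;   /* busy wait until lock was 0 (free) */

            /* critical section */

        lock = 0;

            /* remainder section */

    } while (true);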


Bounded-waiting Mutual Exclusion with test_and_set
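The figure is missing; the standard algorithm for process Pi (shared data: boolean waiting[n] and boolean lock, both initialized to false) is:

    do {
        waiting[i] = true;
        key = true;
        while (waiting[i] && key)
            key = test_and_set(&lock);
        waiting[i] = false;

            /* critical section */

        j = (i + 1) % n;
        while ((j != i) && !waiting[j])
            j = (j + 1) % n;
        if (j == i)
            lock = false;        /* no one is waiting */
        else
            waiting[j] = false;  /* hand the lock to Pj directly */

            /* remainder section */

    } while (true);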


Bounded-waiting Mutual Exclusion with compare_and_swap


Mutex Locks

Previous solutions are complicated and generally inaccessible to application programmers
OS designers build software tools to solve the critical-section problem
Simplest is the mutex lock
Protect a critical section by first calling acquire() to get the lock, then release() the lock
Boolean variable indicates if the lock is available or not
Calls to acquire() and release() must be atomic
Usually implemented via hardware atomic instructions
But this solution requires busy waiting
This lock is therefore called a spinlock


acquire() and release()
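The figure is missing; the standard definitions over a shared boolean available are:

    acquire() {
        while (!available)
            ;   /* busy wait */
        available = false;
    }

    release() {
        available = true;
    }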


Semaphore
Synchronization tool that provides more sophisticated ways (than mutex locks) for processes to synchronize their activities.
Semaphore S – integer variable
Can only be accessed via two indivisible (atomic) operations: wait() and signal()
Originally called P() and V()
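Their classic busy-waiting definitions:

    wait(S) {
        while (S <= 0)
            ;   /* busy wait */
        S--;
    }

    signal(S) {
        S++;
    }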


Semaphore Usage

Counting semaphore – integer value can range over an unrestricted domain
Binary semaphore – integer value can range only between 0 and 1
Same as a mutex lock
Can solve various synchronization problems
Can implement a counting semaphore S as a binary semaphore



Semaphore Implementation

Must guarantee that no two processes can execute the wait() and
signal() on the same semaphore at the same time
Thus, the implementation becomes the critical section problem where
the wait and signal code are placed in the critical section
Could now have busy waiting in critical section implementation
But implementation code is short
Little busy waiting if critical section rarely occupied
Note that applications may spend lots of time in critical sections and
therefore this is not a good solution


Semaphore Implementation with no Busy waiting

With each semaphore there is an associated waiting queue
Each entry in a waiting queue has two data items:
value (of type integer)
pointer to next record in the list
Two operations:
block – place the process invoking the operation on the appropriate waiting queue
wakeup – remove one of the processes in the waiting queue and place it in the ready queue


Implementation with no Busy waiting (Cont.)
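The code figure is missing; the standard textbook pseudocode (not directly compilable: the queue operations are abstract) is:

    typedef struct {
        int value;
        struct process *list;   /* waiting queue */
    } semaphore;

    wait(semaphore *S) {
        S->value--;
        if (S->value < 0) {
            /* add this process to S->list */
            block();
        }
    }

    signal(semaphore *S) {
        S->value++;
        if (S->value <= 0) {
            /* remove a process P from S->list */
            wakeup(P);
        }
    }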


Deadlock and Starvation


Deadlock – two or more processes are waiting indefinitely for an event that can be caused by only one of the waiting processes
Let S and Q be two semaphores initialized to 1
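The interleaving figure is missing; the classic scenario is:

          P0                  P1
       wait(S);            wait(Q);
       wait(Q);            wait(S);
         ...                 ...
       signal(S);          signal(Q);
       signal(Q);          signal(S);

If P0 executes wait(S) and then P1 executes wait(Q), each process blocks on the semaphore held by the other: deadlock.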

Starvation – indefinite blocking
A process may never be removed from the semaphore queue in which it is suspended
Priority Inversion – scheduling problem when a lower-priority process holds a lock needed by a higher-priority process
Solved via priority-inheritance protocol

Classical Problems of Synchronization

Classical problems used to test newly proposed synchronization schemes
Bounded-Buffer Problem
Readers and Writers Problem
Dining-Philosophers Problem


Bounded-Buffer Problem

n buffers, each can hold one item
Semaphore mutex initialized to the value 1
Semaphore full initialized to the value 0
Semaphore empty initialized to the value n


Bounded Buffer Problem (Cont.)

The structure of the producer and consumer process
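The code figure is missing; the standard semaphore-based structure is:

    /* producer */
    do {
        /* produce an item in next_produced */
        wait(empty);             /* wait for a free slot */
        wait(mutex);             /* lock the buffer */
        /* add next_produced to the buffer */
        signal(mutex);
        signal(full);            /* one more full slot */
    } while (true);

    /* consumer */
    do {
        wait(full);              /* wait for a full slot */
        wait(mutex);
        /* remove an item from buffer into next_consumed */
        signal(mutex);
        signal(empty);           /* one more free slot */
        /* consume the item in next_consumed */
    } while (true);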


Readers-Writers Problem

A data set is shared among a number of concurrent processes
Readers – only read the data set; they do not perform any updates
Writers – can both read and write
Problem – allow multiple readers to read at the same time; only one single writer can access the shared data at a time
Several variations of how readers and writers are treated – all involve some form of priorities
Shared data
Data set
Semaphore rw_mutex initialized to 1
Semaphore mutex initialized to 1
Integer read_count initialized to 0
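The code figure is missing; the standard first-readers-writers structure is:

    /* writer */
    do {
        wait(rw_mutex);
            /* writing is performed */
        signal(rw_mutex);
    } while (true);

    /* reader */
    do {
        wait(mutex);
        read_count++;
        if (read_count == 1)
            wait(rw_mutex);      /* first reader locks out writers */
        signal(mutex);

            /* reading is performed */

        wait(mutex);
        read_count--;
        if (read_count == 0)
            signal(rw_mutex);    /* last reader lets writers in */
        signal(mutex);
    } while (true);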


Readers-Writers Problem Variations

First variation – no reader kept waiting unless a writer has permission to use the shared object
Second variation – once a writer is ready, it performs the write ASAP
Both may have starvation, leading to even more variations
Problem is solved on some systems by the kernel providing reader-writer locks


Dining-Philosophers Problem

Philosophers spend their lives alternating between thinking and eating
They don't interact with their neighbors; occasionally each tries to pick up 2 chopsticks (one at a time) to eat from the bowl
Needs both chopsticks to eat; releases both when done
In the case of 5 philosophers
Shared data
Bowl of rice (data set)
Semaphore chopstick[5] initialized to 1

Dining-Philosophers Problem Algorithm
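The code figure is missing; the standard structure for philosopher i is:

    do {
        wait(chopstick[i]);              /* pick up left chopstick */
        wait(chopstick[(i + 1) % 5]);    /* pick up right chopstick */

            /* eat */

        signal(chopstick[i]);
        signal(chopstick[(i + 1) % 5]);

            /* think */

    } while (true);

If all five philosophers pick up their left chopstick simultaneously, each blocks forever on the right one; the deadlock-handling options on the next slide address this.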


Dining-Philosophers Problem Algorithm (Cont.)

Deadlock handling
Allow at most 4 philosophers to be sitting simultaneously at the table.
Allow a philosopher to pick up the chopsticks only if both are available (picking must be done in a critical section).
Use an asymmetric solution – an odd-numbered philosopher picks up first the left chopstick and then the right chopstick; an even-numbered philosopher picks up first the right chopstick and then the left chopstick.


Problems with Semaphores

Incorrect use of semaphore operations:
signal(mutex) ... wait(mutex)
wait(mutex) ... wait(mutex)
Omitting wait(mutex) or signal(mutex) (or both)
Deadlock and starvation are possible.


Monitors

A high-level abstraction that provides a convenient and effective mechanism for process synchronization
Abstract data type; internal variables are only accessible by code within the monitor's procedures
Only one process may be active within the monitor at a time
But not powerful enough to model some synchronization schemes
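A sketch of the monitor syntax (pseudocode):

    monitor monitor_name
    {
        /* shared variable declarations */

        procedure P1(...) { ... }
        procedure P2(...) { ... }

        initialization_code(...) { ... }
    }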


Condition Variables

condition x, y;
Two operations are allowed on a condition variable:
x.wait() – a process that invokes the operation is suspended until
x.signal()
x.signal() – resumes one of processes (if any) that invoked x.wait()
If no x.wait() on the variable, then it has no effect on the variable


Monitor with Condition Variables


Condition Variables Choices

If process P invokes x.signal(), and process Q is suspended in x.wait(), what should happen next?
Both Q and P cannot execute in parallel. If Q is resumed, then P must wait
Options include:
Signal and wait – P waits until Q either leaves the monitor or waits for another condition
Signal and continue – Q waits until P either leaves the monitor or waits for another condition
Both have pros and cons – the language implementer can decide
Monitors implemented in Concurrent Pascal compromise: P executing signal immediately leaves the monitor, and Q is resumed
Implemented in other languages including Mesa, C#, Java


Monitor Solution to Dining Philosophers
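The code figures are missing; the standard monitor solution is:

    monitor DiningPhilosophers
    {
        enum {THINKING, HUNGRY, EATING} state[5];
        condition self[5];

        void pickup(int i) {
            state[i] = HUNGRY;
            test(i);
            if (state[i] != EATING)
                self[i].wait();    /* wait until both neighbors stop eating */
        }

        void putdown(int i) {
            state[i] = THINKING;
            test((i + 4) % 5);     /* let the left neighbor eat if possible */
            test((i + 1) % 5);     /* let the right neighbor eat if possible */
        }

        void test(int i) {
            if (state[(i + 4) % 5] != EATING &&
                state[i] == HUNGRY &&
                state[(i + 1) % 5] != EATING) {
                state[i] = EATING;
                self[i].signal();
            }
        }

        initialization_code() {
            for (int i = 0; i < 5; i++)
                state[i] = THINKING;
        }
    }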


Each philosopher i invokes the operations pickup() and putdown() in the following sequence:
DiningPhilosophers.pickup(i);
EAT
DiningPhilosophers.putdown(i);
No deadlock, but starvation is possible


Linux Synchronization

Linux:
Prior to kernel Version 2.6, disables interrupts to implement short
critical sections
Version 2.6 and later, fully preemptive
Linux provides:
Semaphores
atomic integers
spinlocks
reader-writer versions of both
On a single-CPU system, spinlocks are replaced by enabling and disabling kernel preemption


Windows Synchronization

Uses interrupt masks to protect access to global resources on uniprocessor systems
Uses spinlocks on multiprocessor systems
A thread holding a spinlock will never be preempted
Also provides dispatcher objects, usable from user-land, which may act as mutexes, semaphores, events, and timers
Events
An event acts much like a condition variable
Timers notify one or more threads when the set time has expired
Dispatcher objects are in either a signaled state (object available) or a non-signaled state (thread will block)
