Process management and Inter process communication

The document provides an overview of process management and inter-process communication in operating systems, defining a process as a program in execution with its own virtual CPU. It discusses concepts such as multiprogramming, process states, process control blocks, context switching, and various scheduling algorithms including First-Come, First-Served and Shortest-Job-First. Additionally, it outlines the roles of different types of schedulers and the criteria for optimizing scheduling performance.

Process Management and Inter-Process Communication
What is a Process?

 A process is a program under execution.
 A process is an abstraction of a running program.
 A process is an instance of an executing program, including the current values of the program counter, registers, and variables.
 Each process has its own virtual CPU.
Process Concept
An operating system executes a variety of programs:
Batch system – jobs
Time-shared systems – user programs or tasks
The textbook uses the terms job and process almost interchangeably
Process – a program in execution; process execution must progress in sequential fashion; the unit of work in a modern time-sharing system
A program is a passive entity stored on disk (an executable file); a process is active
A program becomes a process when its executable file is loaded into memory
Execution of a program is started via GUI mouse clicks, command-line entry of its name, etc.
One program can be several processes
Consider multiple users executing the same program
Process Concept (Cont.)
Multiple parts
The program code, also called text section
Current activity including program counter, processor registers
Stack containing temporary data
Function parameters, return addresses, local variables
Data section containing global variables
Heap containing memory dynamically allocated during run time
Multiprogramming

 The real CPU switches back and forth from process to process.
 This rapid switching back and forth is called multiprogramming.
 The number of processes loaded simultaneously in memory is called the degree of multiprogramming.
Multiprogramming execution

[Figure: processes P1, P2, P3 in memory, each with its own logical program counter; one processor with a single physical program counter]

 There are three processes, one processor (CPU), three logical program counters (one for each process) in memory, and one physical program counter in the processor.
 Here the CPU is free (no process is running).
 There is no data in the physical program counter.
Multiprogramming execution

[Figure: the physical program counter now holds the contents of P1's logical program counter]

 The CPU is allocated to process P1 (process P1 is running).
 The data of process P1 is copied from its logical program counter to the physical program counter.
Multiprogramming execution

[Figure: the physical program counter now holds the contents of P2's logical program counter]

 The CPU switches from process P1 to process P2.
 The CPU is allocated to process P2 (process P2 is running).
 The data of process P1 is copied back to its logical program counter.
 The data of process P2 is copied from its logical program counter to the physical program counter.
Multiprogramming execution

[Figure: the physical program counter now holds the contents of P3's logical program counter]

 The CPU switches from process P2 to process P3.
 The CPU is allocated to process P3 (process P3 is running).
 The data of process P2 is copied back to its logical program counter.
 The data of process P3 is copied from its logical program counter to the physical program counter.
Process Model

[Figure: (a) multiprogramming of four programs in memory; (b) conceptual model of 4 independent, sequential processes, each with its own flow of control (i.e., its own logical program counter) and each running independently of the others; (c) over a long time interval, all the processes have made progress, but at any given instant only one process is actually running]
Process State
As a process executes, it changes state
new: The process is being created
running: Instructions are being executed
waiting: The process is waiting for some event to occur
ready: The process is waiting to be assigned to a processor
terminated: The process has finished execution
Process Control Block (PCB)
Information associated with each process (also called task control block):
Process state – running, waiting, etc.
Program counter – location of the next instruction to execute
CPU registers – contents of all process-centric registers
CPU scheduling information – priorities, scheduling queue pointers
Memory-management information – memory allocated to the process
Accounting information – CPU used, clock time elapsed since start, time limits
I/O status information – I/O devices allocated to the process, list of open files
Context Switch
When CPU switches to another process, the system must save the
state of the old process and load the saved state for the new process via
a context switch
Context of a process represented in the PCB
Context-switch time is overhead; the system does no useful work while
switching
The more complex the OS and the PCB the longer the context switch
Time dependent on hardware support
Some hardware provides multiple sets of registers per CPU
multiple contexts loaded at once
CPU Scheduler

 The short-term scheduler selects from among the processes in the ready queue, and allocates the CPU to one of them
 The queue may be ordered in various ways
 CPU scheduling decisions may take place when a process:
1. Switches from running to waiting state
2. Switches from running to ready state
3. Switches from waiting to ready
4. Terminates
 Scheduling under 1 and 4 is nonpreemptive
 All other scheduling is preemptive
Schedulers
Short-term scheduler (or CPU scheduler) – selects which process should
be executed next and allocates CPU
Sometimes the only scheduler in a system
Short-term scheduler is invoked frequently (milliseconds) ⇒ (must be
fast)
Long-term scheduler (or job scheduler) – selects which processes should
be brought into the ready queue
Long-term scheduler is invoked infrequently (seconds, minutes) ⇒ (may
be slow)
The long-term scheduler controls the degree of multiprogramming
If the degree of multiprogramming is stable, then the average rate of
process creation must be equal to the average departure rate of processes
leaving the system.
The primary distinction between these two schedulers lies in frequency of
execution. The short-term scheduler must select a new process for the
CPU frequently.
Schedulers
Processes can be described as either:
I/O-bound process – spends more time doing I/O than computations,
many short CPU bursts
CPU-bound process – spends more time doing computations; few very
long CPU bursts
Medium-term scheduler can be added if degree of multiple
programming needs to decrease
Remove process from memory, store on disk, bring back in from disk
to continue execution: swapping
Dispatcher

 The dispatcher module gives control of the CPU to the process selected by the short-term scheduler; this involves:
 switching context
 switching to user mode
 jumping to the proper location in the user program to restart that program
 Dispatch latency – the time it takes for the dispatcher to stop one process and start another running
Types of Scheduling Algorithms

 PREEMPTIVE SCHEDULING: Resources are allocated to a process for a limited time. The process can be interrupted in between.
 NON-PREEMPTIVE SCHEDULING: A process cannot be interrupted until it terminates or switches to the waiting state.
Scheduling Criteria

 CPU utilization – keep the CPU as busy as possible
 Throughput – # of processes that complete their execution per time unit
 Turnaround time – amount of time to execute a particular process
 Waiting time – amount of time a process has been waiting in the ready queue
 Response time – amount of time from when a request was submitted until the first response is produced, not output
Scheduling Algorithm Optimization Criteria

 Max CPU utilization
 Max throughput
 Min turnaround time
 Min waiting time
 Min response time
Parameters related to a Process

 Arrival Time (AT): the time at which the process arrives in the ready queue.
 Burst Time (BT): the amount of CPU time required to finish the process.
 Completion Time (CT): the time at which the process terminates.
 Turnaround Time (TAT): the total time taken from arrival to completion.
TAT = CT – AT
 Waiting Time (WT): the time a process spends in the ready queue waiting for its turn on the CPU.
WT = TAT – BT
 Response Time: the time from submission to first response, relevant in interactive programs.
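The two formulas above are simple per-process subtractions; a minimal sketch (helper name `timings` is illustrative, not from the slides) computes them for a whole table at once:

```python
def timings(arrival, burst, completion):
    """Given per-process arrival, burst, and completion times,
    return (turnaround, waiting) lists using TAT = CT - AT, WT = TAT - BT."""
    tat = [c - a for c, a in zip(completion, arrival)]
    wt = [t - b for t, b in zip(tat, burst)]
    return tat, wt

# Example: the FCFS exercise solved later in these slides.
tat, wt = timings([0, 1, 2, 3, 4], [4, 3, 1, 2, 5], [4, 7, 8, 10, 15])
print(tat)  # [4, 6, 6, 7, 11]
print(wt)   # [0, 3, 5, 5, 6]
```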
First-Come, First-Served (FCFS) Scheduling

 Jobs are executed on a first-come, first-served basis.
 It is non-preemptive.
 Easy to understand and implement.
 Its implementation is based on a FIFO queue.
 Poor in performance, as the average waiting time is high.
First-Come, First-Served (FCFS) Scheduling

Solve the given example:

P.No  A.T  B.T
1     0    4
2     1    3
3     2    1
4     3    2
5     4    5
First-Come, First-Served (FCFS) Scheduling

P.No  A.T  B.T
1     0    4
2     1    3
3     2    1
4     3    2
5     4    5

Gantt chart
| P1 | P2 | P3 | P4 | P5 |
0    4    7    8    10   15
First-Come, First-Served (FCFS) Scheduling

Calculate Completion Time, Turnaround Time, and Waiting Time:

P.No  A.T  B.T  CT  TAT (CT – AT)  WT (TAT – B.T)
1     0    4    4   4              0
2     1    3    7   6              3
3     2    1    8   6              5
4     3    2    10  7              5
5     4    5    15  11             6

Gantt chart
| P1 | P2 | P3 | P4 | P5 |
0    4    7    8    10   15

 Avg TAT = (4+6+6+7+11)/5 = 6.8
 Avg WT = (0+3+5+5+6)/5 = 3.8
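The FCFS computation above can be sketched as a few lines of Python (a minimal simulation, not a production scheduler): sort by arrival time and run each process to completion.

```python
def fcfs(procs):
    """FCFS sketch. procs: list of (pid, arrival, burst).
    Returns {pid: (CT, TAT, WT)}."""
    time, out = 0, {}
    for pid, at, bt in sorted(procs, key=lambda p: p[1]):
        time = max(time, at) + bt      # CPU may sit idle until arrival
        ct = time
        tat = ct - at                  # TAT = CT - AT
        out[pid] = (ct, tat, tat - bt) # WT = TAT - BT
    return out

print(fcfs([(1, 0, 4), (2, 1, 3), (3, 2, 1), (4, 3, 2), (5, 4, 5)]))
# {1: (4, 4, 0), 2: (7, 6, 3), 3: (8, 6, 5), 4: (10, 7, 5), 5: (15, 11, 6)}
```

The output reproduces the CT/TAT/WT table above.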
FCFS Scheduling (Cont.)

 Convoy effect – short processes stuck behind a long process.
 Since FCFS is a non-preemptive scheduling algorithm, the CPU is allocated to a process until it finishes, which means the other processes have to wait for their turn.
 This leads to a situation called the convoy effect.
 Example: a VIP motorcade passing while other cars wait behind it.
Convoy effect

P.No  A.T  B.T
1     0    21
2     1    3
3     1    2

Gantt chart
| P1 | P2 | P3 |
0    21   24   26
Convoy effect

P.No  A.T  B.T  C.T  TAT  WT
1     0    21   21   21   0
2     1    3    24   23   20
3     1    2    26   25   23

Gantt chart
| P1 | P2 | P3 |
0    21   24   26

 Avg WT = (0+20+23)/3 = 43/3 ≈ 14.33
Convoy effect

The same processes in a different order:

P.No  A.T  B.T  CT  TAT  WT
1     1    21   26  25   4
2     0    3    3   3    0
3     0    2    5   5    3

Gantt chart
| P2 | P3 | P1 |
0    3    5    26

 Avg WT = (4+0+3)/3 = 7/3 ≈ 2.33

Shortest-Job-First (SJF) Scheduling

 Associate with each process the length of its next CPU burst
 Use these lengths to schedule the process with the shortest time
 SJF is optimal – it gives the minimum average waiting time for a given set of processes
 The difficulty is knowing the length of the next CPU request
Shortest-Job-First (SJF) Scheduling

Solve the given example (non-preemptive):

P.No  A.T  B.T
1     1    7
2     2    5
3     3    1
4     4    2
5     5    8
Shortest-Job-First (SJF) Scheduling

P.No  A.T  B.T
1     1    7
2     2    5
3     3    1
4     4    2
5     5    8

Gantt chart (CPU idle from 0 to 1)
| P1 | P3 | P4 | P2 | P5 |
0    1    8    9    11   16   24
Shortest-Job-First (SJF) Scheduling

P.No  A.T  B.T  CT  TAT  WT
1     1    7    8   7    0
2     2    5    16  14   9
3     3    1    9   6    5
4     4    2    11  7    5
5     5    8    24  19   11

Gantt chart (CPU idle from 0 to 1)
| P1 | P3 | P4 | P2 | P5 |
0    1    8    9    11   16   24

 Avg TAT = (7+14+6+7+19)/5 = 10.6
 Avg WT = (0+9+5+5+11)/5 = 6
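Non-preemptive SJF can be sketched similarly (a minimal simulation; the helper name is illustrative): whenever the CPU is free, pick the ready process with the shortest burst, breaking ties by arrival time.

```python
def sjf(procs):
    """Non-preemptive SJF sketch. procs: list of (pid, arrival, burst).
    Returns {pid: (CT, TAT, WT)}."""
    procs = list(procs)
    time, out = 0, {}
    while procs:
        ready = [p for p in procs if p[1] <= time]
        if not ready:                  # CPU idle: jump to the next arrival
            time = min(p[1] for p in procs)
            continue
        pid, at, bt = min(ready, key=lambda p: (p[2], p[1]))  # shortest burst
        time += bt                     # run to completion (non-preemptive)
        out[pid] = (time, time - at, time - at - bt)
        procs.remove((pid, at, bt))
    return out

print(sjf([(1, 1, 7), (2, 2, 5), (3, 3, 1), (4, 4, 2), (5, 5, 8)]))
# {1: (8, 7, 0), 3: (9, 6, 5), 4: (11, 7, 5), 2: (16, 14, 9), 5: (24, 19, 11)}
```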
Shortest-Remaining-Time-First (SRTF)

Shortest-remaining-time-first scheduling is the preemptive version of SJF.

P.No  A.T  B.T
1     0    7
2     1    5
3     2    3
4     3    1
5     4    2
6     5    1
Shortest-Remaining-Time-First (SRTF)

P.No  A.T  B.T
1     0    7
2     1    5
3     2    3
4     3    1
5     4    2
6     5    1

Gantt chart
| P1 |
0    1
Shortest-Remaining-Time-First (SRTF)

P.No  A.T  Remaining B.T
1     0    7 → 6
2     1    5
3     2    3
4     3    1
5     4    2
6     5    1

Gantt chart
| P1 | P2 |
0    1    2
Shortest-Remaining-Time-First (SRTF)

P.No  A.T  Remaining B.T
1     0    6
2     1    4
3     2    3
4     3    1
5     4    2
6     5    1

Gantt chart
| P1 | P2 | P3 |
0    1    2    3
Shortest-Remaining-Time-First (SRTF)

P.No  A.T  Remaining B.T
1     0    6
2     1    4
3     2    2
4     3    1
5     4    2
6     5    1

Gantt chart
| P1 | P2 | P3 | P4 |
0    1    2    3    4
Shortest-Remaining-Time-First (SRTF)

P.No  A.T  Remaining B.T
1     0    6
2     1    4
3     2    2
5     4    2
6     5    1

Now P3 and P5 have the same remaining time: pick based on A.T, so P3 is picked.

Gantt chart
| P1 | P2 | P3 | P4 | P3 |
0    1    2    3    4    5
Shortest-Remaining-Time-First (SRTF)

P.No  A.T  Remaining B.T
1     0    6
2     1    4
3     2    1
5     4    2
6     5    1

Now P3 and P6 have the same remaining time: pick based on A.T, so P3 is picked.

Gantt chart
| P1 | P2 | P3 | P4 | P3 | P3 |
0    1    2    3    4    5    6
Shortest-Remaining-Time-First (SRTF)

P.No  A.T  Remaining B.T
1     0    6
2     1    4
5     4    2
6     5    1

Gantt chart
| P1 | P2 | P3 | P4 | P3 | P3 | P6 |
0    1    2    3    4    5    6    7
Shortest-Remaining-Time-First (SRTF)

P.No  A.T  Remaining B.T
1     0    6
2     1    4
5     4    2

Gantt chart
| P1 | P2 | P3 | P4 | P3 | P3 | P6 | P5 |
0    1    2    3    4    5    6    7    9
Shortest-Remaining-Time-First (SRTF)

P.No  A.T  Remaining B.T
1     0    6
2     1    4

Gantt chart
| P1 | P2 | P3 | P4 | P3 | P3 | P6 | P5 | P2 | P1 |
0    1    2    3    4    5    6    7    9    13   19
Shortest-Remaining-Time-First (SRTF)

P.No  A.T  B.T  CT  TAT  WT
1     0    7    19  19   12
2     1    5    13  12   7
3     2    3    6   4    1
4     3    1    4   1    0
5     4    2    9   5    3
6     5    1    7   2    1

Gantt chart
| P1 | P2 | P3 | P4 | P3 | P3 | P6 | P5 | P2 | P1 |
0    1    2    3    4    5    6    7    9    13   19

 Avg TAT = (19+12+4+1+5+2)/6 = 43/6 ≈ 7.17
 Avg WT = (12+7+1+0+3+1)/6 = 4
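SRTF can be sketched by simulating one time unit per tick and always running the ready process with the least remaining time, breaking ties by arrival time as the slides do (a minimal illustration, not an efficient scheduler):

```python
def srtf(procs):
    """Preemptive SJF (shortest remaining time first), one time unit per
    tick, ties broken by arrival time. procs: {pid: (arrival, burst)}."""
    remaining = {pid: bt for pid, (at, bt) in procs.items()}
    time, done = 0, {}
    while remaining:
        ready = [pid for pid in remaining if procs[pid][0] <= time]
        if not ready:                  # nothing has arrived yet
            time += 1
            continue
        pid = min(ready, key=lambda p: (remaining[p], procs[p][0]))
        remaining[pid] -= 1
        time += 1
        if remaining[pid] == 0:
            del remaining[pid]
            at, bt = procs[pid]
            done[pid] = (time, time - at, time - at - bt)  # CT, TAT, WT
    return done

table = srtf({1: (0, 7), 2: (1, 5), 3: (2, 3), 4: (3, 1), 5: (4, 2), 6: (5, 1)})
print(table[1])  # (19, 19, 12)
print(table[4])  # (4, 1, 0)
```

The result matches the CT/TAT/WT table above for all six processes.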
Round Robin (RR)

 Each process gets a small unit of CPU time (time quantum q), usually 10-100 milliseconds. After this time has elapsed, the process is preempted and added to the end of the ready queue.
 If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once. No process waits more than (n-1)q time units.
 A timer interrupts every quantum to schedule the next process
 Performance
 q large ⇒ behaves like FIFO
 q small ⇒ q must still be large with respect to the context-switch time, otherwise the overhead is too high
Round Robin (RR) Algorithm
Round Robin (RR) Example
TQ = 2

P.No  A.T  B.T
1     0    4
2     1    5
3     2    2
4     3    1
5     4    6
6     6    3

Always look at both the arrival time (maintain a queue for next-process selection) and the TQ for process selection.

Gantt chart
| P1 | P2 | P3 | P1 | P4 | P5 | P2 | P6 | P5 | P2 | P6 | P5 |
0    2    4    6    8    9    11   13   15   17   18   19   21
Round Robin (RR) Example
TQ = 2

P.No  A.T  B.T  CT  TAT  WT
1     0    4    8   8    4
2     1    5    18  17   12
3     2    2    6   4    2
4     3    1    9   6    5
5     4    6    21  17   11
6     6    3    19  13   10

Gantt chart
| P1 | P2 | P3 | P1 | P4 | P5 | P2 | P6 | P5 | P2 | P6 | P5 |
0    2    4    6    8    9    11   13   15   17   18   19   21

 Avg TAT = (8+17+4+6+17+13)/6 = 65/6 ≈ 10.83
 Avg WT = (4+12+2+5+11+10)/6 = 44/6 ≈ 7.33
Round Robin (RR) Example

What if we have TQ = 4 or TQ = 1?

P.No  A.T  B.T
1     0    4
2     1    5
3     2    2
4     3    1
5     4    6
6     6    3

 TQ = 1 causes more context switches (overhead grows as the quantum decreases)
 TQ = 4 makes RR behave more like FCFS; as the quantum increases, short processes may have to wait much longer behind long ones
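A Round Robin sketch (minimal simulation; one common convention, assumed here, is that processes arriving during a time slice enter the queue before the preempted process re-enters it, which reproduces the Gantt chart above):

```python
from collections import deque

def rr(procs, q):
    """Round Robin sketch. procs: list of (pid, arrival, burst).
    Returns {pid: completion time}."""
    procs = sorted(procs, key=lambda p: p[1])
    remaining = {pid: bt for pid, at, bt in procs}
    queue, time, i, done = deque(), 0, 0, {}
    while len(done) < len(procs):
        while i < len(procs) and procs[i][1] <= time:  # admit arrivals
            queue.append(procs[i][0]); i += 1
        if not queue:                  # CPU idle until next arrival
            time = procs[i][1]; continue
        pid = queue.popleft()
        run = min(q, remaining[pid])   # run at most one quantum
        time += run
        remaining[pid] -= run
        while i < len(procs) and procs[i][1] <= time:  # arrivals during slice
            queue.append(procs[i][0]); i += 1
        if remaining[pid]:
            queue.append(pid)          # preempted: back of the queue
        else:
            done[pid] = time           # completion time
    return done

ct = rr([(1, 0, 4), (2, 1, 5), (3, 2, 2), (4, 3, 1), (5, 4, 6), (6, 6, 3)], q=2)
print(ct)  # {3: 6, 1: 8, 4: 9, 2: 18, 6: 19, 5: 21}
```

These completion times match the TQ = 2 table above; rerunning with q=1 or q=4 shows the trade-offs just discussed.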
Inter-process communication

 A cooperating process is one that can affect or be affected by other processes executing in the system.
 Processes can execute concurrently
 A process may be interrupted at any time, partially completing its execution
 Concurrent access to shared data may result in data inconsistency
 Maintaining data consistency requires mechanisms to ensure the orderly execution of cooperating processes
 Suppose two processes try to access a printer: both want to access shared resources or memory.
 If one process is writing while the other is reading, access can be granted one after another.
 If both write at the same time, data may be lost.
 Process synchronization is required to manage shared resources and achieve better utilization.
Race Condition

 counter++ could be implemented as
register1 = counter
register1 = register1 + 1
counter = register1
 counter-- could be implemented as
register2 = counter
register2 = register2 - 1
counter = register2

 Consider this execution interleaving with "counter = 5" initially:
S0: producer executes register1 = counter {register1 = 5}
S1: producer executes register1 = register1 + 1 {register1 = 6}
S2: consumer executes register2 = counter {register2 = 5}
S3: consumer executes register2 = register2 – 1 {register2 = 4}
S4: producer executes counter = register1 {counter = 6}
S5: consumer executes counter = register2 {counter = 4}
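The interleaving S0-S5 can be replayed deterministically in a few lines; each step copies the shared counter into a private "register", modifies the copy, then writes it back:

```python
counter = 5
register1 = counter          # S0: producer reads 5
register1 = register1 + 1    # S1: producer computes 6
register2 = counter          # S2: consumer reads the stale value 5
register2 = register2 - 1    # S3: consumer computes 4
counter = register1          # S4: counter = 6
counter = register2          # S5: counter = 4 -- the producer's update is lost
print(counter)  # 4, although one ++ and one -- should leave it at 5
```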
Race Condition

 A situation like this, where several processes access and manipulate the same data concurrently and the outcome of the execution depends on the particular order in which the accesses take place, is called a race condition.
Critical Section Problem

 Consider a system of n processes {P0, P1, …, Pn-1}
 Each process has a critical section segment of code
 The process may be changing common variables, updating a table, writing a file, etc.
 When one process is in its critical section, no other may be in its critical section
 The critical-section problem is to design a protocol to solve this
 Each process must ask permission to enter its critical section in an entry section, may follow the critical section with an exit section, then the remainder section
Critical Section

[Figure: general structure of process Pi – a loop of entry section, critical section, exit section, remainder section]
Solution to Critical-Section Problem

1. Mutual Exclusion – If process Pi is executing in its critical section, then no other process can be executing in its critical section.

2. Progress – If no process is executing in its critical section and some processes wish to enter their critical sections, then only the processes that are not executing in their remainder sections can participate in deciding which will enter next.
▪ For example, suppose P1, P2, P3, and P4 exist; P1 is entering its critical section and P2 is in its remainder section.
▪ P2 should not simply get the chance again; P3 or P4 must also be given a chance.

3. Bounded Waiting – A bound must exist on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.
Mutex Locks
Previous solutions are complicated and generally inaccessible
to application programmers
OS designers build software tools to solve critical section
problem
Simplest is mutex lock
Protect a critical section by first acquire() a lock then
release() the lock
Boolean variable indicating if lock is available or not
Calls to acquire() and release() must be atomic
Usually implemented via hardware atomic instructions
But this solution requires busy waiting
This lock therefore called a spinlock
acquire() and release()

acquire() {
    while (!available)
        ;   /* busy wait */
    available = false;
}

release() {
    available = true;
}

do {
    acquire lock
        critical section
    release lock
        remainder section
} while (true);
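The acquire/release pattern above maps directly onto Python's `threading.Lock`, whose `acquire()`/`release()` calls are atomic; a minimal sketch (thread and iteration counts are illustrative) shows the lock keeping a shared counter consistent:

```python
import threading

counter = 0
lock = threading.Lock()

def worker(n):
    global counter
    for _ in range(n):
        lock.acquire()        # entry section
        counter += 1          # critical section: read-modify-write
        lock.release()        # exit section

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000 -- the lock serializes the updates
```

Note that `threading.Lock` blocks rather than spins, so it avoids the busy waiting of the spinlock sketch above.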
Semaphore
 A synchronization tool that provides more sophisticated ways (than mutex locks) for processes to synchronize their activities.
 Semaphore S – integer variable
 Can only be accessed via two indivisible (atomic) operations
 wait() and signal()
 Originally called P() and V()
 Definition of the wait() operation
wait(S) {
    while (S <= 0)
        ;   // busy wait
    S--;
}
 Definition of the signal() operation
signal(S) {
    S++;
}

do {
    wait(S)
        critical section
    signal(S)
        remainder section
} while (TRUE);

Suppose P1, P2, …, Pn want to execute in the critical section.
 Initially S = 1, so P1 can enter; suppose a context switch happens while P1 is in its critical section.
 P2 then gets the CPU but finds S = 0, so it waits.
 When P1 gets the CPU back, it calls signal(), incrementing S.
 Mutual exclusion and progress are guaranteed.
 Bounded waiting is not guaranteed, because one process can get the chance again and again.
Semaphore Usage
Counting semaphore – integer value can range over an unrestricted
domain
Binary semaphore – integer value can range only between 0 and 1
Same as a mutex lock
Counting semaphores can be used to control access to a given
resource consisting of a finite number of instances.
The semaphore is initialized to the number of resources available.
Each process that wishes to use a resource performs a wait() operation
on the semaphore.
When a process releases a resource, it performs a signal() operation
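Python's `threading.Semaphore` implements exactly this counting behavior (`acquire()` is wait(), `release()` is signal()). A minimal sketch, assuming a pool of 3 resource instances and 10 competing threads (both figures illustrative), verifies that no more than 3 threads ever hold a resource at once:

```python
import threading
import time

sem = threading.Semaphore(3)     # initialized to the number of instances
in_use, peak = 0, 0
meter = threading.Lock()         # protects the two counters above

def use_resource():
    global in_use, peak
    sem.acquire()                # wait(): blocks when all 3 are taken
    with meter:
        in_use += 1
        peak = max(peak, in_use)
    time.sleep(0.01)             # simulate holding the resource
    with meter:
        in_use -= 1
    sem.release()                # signal(): return the instance to the pool

threads = [threading.Thread(target=use_resource) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(peak <= 3)  # True: never more than 3 concurrent holders
```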
Deadlocks
System Model

 A system consists of resources
 Resource types R1, R2, . . ., Rm
 CPU cycles, memory space, I/O devices
 Each resource type Ri has Wi instances
 Each process utilizes a resource as follows:
 request
 use
 release
Deadlock Characterization
Deadlock can arise if four conditions hold simultaneously.
Mutual exclusion: only one process at a time
can use a resource
Hold and wait: a process holding at least one
resource is waiting to acquire additional resources
held by other processes
No preemption: a resource can be released only
voluntarily by the process holding it, after that
process has completed its task
Circular wait: there exists a set {P0, P1, …, Pn} of
waiting processes such that P0 is waiting for a
resource that is held by P1, P1 is waiting for a
resource that is held by P2, …, Pn–1 is waiting for a
resource that is held by Pn, and Pn is waiting for a
resource that is held by P0.
Resource-Allocation Graph

 Deadlock can be described more precisely in terms of a directed graph called a system resource-allocation graph.
 A set of vertices V and a set of edges E.
 V is partitioned into two types:
 P = {P1, P2, …, Pn}, the set consisting of all the processes in the system
 R = {R1, R2, …, Rm}, the set consisting of all resource types in the system
 request edge – directed edge Pi → Rj
 assignment edge – directed edge Rj → Pi
Resource-Allocation Graph (Cont.)

[Figure: notation – a circle denotes a process Pi; a rectangle with dots denotes a resource type with 4 instances; an edge Pi → Rj means Pi requests an instance of Rj; an edge Rj → Pi means Pi is holding an instance of Rj]
Example of a Resource Allocation Graph
Resource Allocation Graph With A Deadlock
Graph With A Cycle But No Deadlock
Basic Facts

 If the graph contains no cycles ⇒ no deadlock
 If the graph contains a cycle ⇒
 if only one instance per resource type, then deadlock
 if several instances per resource type, possibility of deadlock
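Checking a resource-allocation graph for a cycle is ordinary cycle detection on a directed graph. A minimal sketch (node names and the adjacency-list encoding are illustrative) uses depth-first search with the classic white/grey/black coloring:

```python
def has_cycle(graph):
    """graph: dict mapping each node to a list of successor nodes
    (request edges P -> R, assignment edges R -> P)."""
    state = {}                       # absent = unvisited, 1 = on path, 2 = done

    def dfs(node):
        state[node] = 1              # grey: on the current DFS path
        for nxt in graph.get(node, ()):
            if state.get(nxt) == 1:
                return True          # back edge: cycle found
            if nxt not in state and dfs(nxt):
                return True
        state[node] = 2              # black: fully explored
        return False

    return any(dfs(n) for n in graph if n not in state)

# P1 holds R2's... no: P1 requests R1, R1 assigned to P2, P2 requests R2,
# R2 assigned to P1 -- a circular wait.
deadlocked = {"P1": ["R1"], "R1": ["P2"], "P2": ["R2"], "R2": ["P1"]}
acyclic = {"P1": ["R1"], "R1": ["P2"], "P2": []}
print(has_cycle(deadlocked), has_cycle(acyclic))  # True False
```

As the slide notes, a cycle proves deadlock only when every resource type has a single instance; with multiple instances it indicates only the possibility of deadlock.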
Methods for Handling Deadlocks

 Ensure that the system will never enter a deadlock state:
 Deadlock prevention
 Deadlock avoidance
 Allow the system to enter a deadlock state and then recover:
 Deadlock detection
 Deadlock recovery
 Ignore the problem and pretend that deadlocks never occur in the system; used by most operating systems, including UNIX
Deadlock Prevention
Ensure that at least one of the necessary conditions does not hold.

 Mutual Exclusion: use shareable resources
 A shareable resource does not require mutually exclusive access and thus cannot be involved in a deadlock
 e.g., read-only files
 Hold and Wait: ensure that whenever a process requests a resource, it does not hold any other resources
 Protocol 1: each process requests and is allocated all its resources before it begins execution
 Protocol 2: allow a process to request resources only when it has none allocated to it
 A process must release all its resources before requesting any additional resources
 Low resource utilization; starvation possible
Deadlock Prevention (Cont.)
No Preemption –
If a process that is holding some resources requests another
resource that cannot be immediately allocated to it, then all
resources currently being held are released
Preempted resources are added to the list of resources for
which the process is waiting
Process will be restarted only when it can regain its old
resources, as well as the new ones that it is requesting
Circular Wait – impose a total ordering of all resource types,
and require that each process requests resources in an
increasing order of enumeration
Deadlock Avoidance
Requires that the system has some additional a priori information available.

 The simplest and most useful model requires that each process declare the maximum number of resources of each type that it may need
 The deadlock-avoidance algorithm dynamically examines the resource-allocation state to ensure that there can never be a circular-wait condition
 The resource-allocation state is defined by the number of available and allocated resources, and the maximum demands of the processes
Safe State

 When a process requests an available resource, the system must decide if immediate allocation leaves the system in a safe state
 The system is in a safe state if there exists a sequence <P1, P2, …, Pn> of ALL the processes in the system such that for each Pi, the resources that Pi can still request can be satisfied by the currently available resources plus the resources held by all the Pj, with j < i
 That is:
 If Pi's resource needs are not immediately available, then Pi can wait until all Pj have finished
 When Pj is finished, Pi can obtain the needed resources, execute, return the allocated resources, and terminate
 When Pi terminates, Pi+1 can obtain its needed resources, and so on
Basic Facts

 If a system is in a safe state ⇒ no deadlocks
 If a system is in an unsafe state ⇒ possibility of deadlock
 Avoidance ⇒ ensure that the system will never enter an unsafe state
Safe, Unsafe, Deadlock State
Avoidance Algorithms

 Single instance of a resource type
 Use a resource-allocation graph
 Multiple instances of a resource type
 Use the banker's algorithm
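The core of the banker's algorithm is the safety check: repeatedly find a process whose remaining need fits in the currently available (work) vector, pretend it finishes and releases its allocation, and see whether every process can finish. A minimal sketch follows; the allocation/need matrices are an assumed example (the classic 5-process, 3-resource-type exercise), not data from these slides:

```python
def is_safe(available, allocation, need):
    """Banker's safety check. Returns a safe sequence of process
    indices, or None if the state is unsafe."""
    work = list(available)
    finished = [False] * len(allocation)
    order = []
    while len(order) < len(allocation):
        for i, done in enumerate(finished):
            if not done and all(n <= w for n, w in zip(need[i], work)):
                # Pretend Pi runs to completion and releases its resources.
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                order.append(i)
                break
        else:
            return None              # no process can proceed: unsafe state
    return order

# Classic example: 5 processes, 3 resource types, Available = [3, 3, 2].
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
need       = [[7, 4, 3], [1, 2, 2], [6, 0, 0], [0, 1, 1], [4, 3, 1]]
print(is_safe([3, 3, 2], allocation, need))  # [1, 3, 0, 2, 4] -- a safe sequence
```

More than one safe sequence may exist; this first-fit scan finds one of them, and returning None signals that granting the request would leave the system unsafe.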
Deadlock Detection

 Allow the system to enter a deadlock state
 Detection algorithm
 Recovery scheme
Recovery from Deadlock: Process Termination

Abort all deadlocked processes

Abort one process at a time until the deadlock cycle is eliminated

In which order should we choose to abort?


1. Priority of the process
2. How long process has computed, and how much longer to
completion
3. Resources the process has used
4. Resources process needs to complete
5. How many processes will need to be terminated
6. Is process interactive or batch?
Recovery from Deadlock: Resource Preemption

 Selecting a victim – minimize cost
 Rollback – return to some safe state, restart the process from that state
 Starvation – the same process may always be picked as victim; include the number of rollbacks in the cost factor
