OS unit II
In a uniprogramming system, whenever a process waits for an I/O operation to be done, the CPU remains idle. This is an overhead since it wastes time and causes the problem of starvation. However, in multiprogramming systems, the CPU doesn't remain idle during the waiting time of a process; it starts executing other processes. The Operating System has to define which process the CPU will be given to.
In Multiprogramming systems, the Operating system schedules the processes on the CPU to have
the maximum utilization of it and this procedure is called CPU scheduling. The Operating System
uses various scheduling algorithms to schedule the processes.
It is the task of the short-term scheduler to schedule the CPU among the processes present in the ready queue. Whenever the running process requests some I/O operation, the short-term scheduler saves the current context of the process (in its PCB) and changes its state from running to waiting. While the process is in the waiting state, the short-term scheduler picks another process from the ready queue and assigns the CPU to it. This procedure is called context switching.
The Operating System maintains a process control block (PCB) during the lifetime of the process. The PCB is deleted when the process is terminated or killed. The PCB stores information such as the process state, program counter, CPU registers and scheduling information, and this information changes with the state of the process.
In multiprogramming, if the long-term scheduler picks more I/O-bound processes, then most of the time the CPU remains idle. The task of the Operating System is to optimize the utilization of resources. If most of the running processes change their state from running to waiting, then there may always be a possibility of deadlock in the system. Hence, to reduce this overhead, the OS needs to schedule the jobs to get the optimal utilization of the CPU and to avoid the possibility of deadlock.
There are various algorithms which the Operating System uses to schedule the processes on the processor in an efficient way. The following algorithms can be used to schedule the jobs.
1. First Come First Serve (FCFS)
It is the simplest algorithm to implement. The process with the minimal arrival time will get the CPU first. The lesser the arrival time, the sooner the process gets the CPU. It is a non-preemptive type of scheduling.
2. Round Robin
In the Round Robin scheduling algorithm, the OS defines a time quantum (slice). All the processes get executed in a cyclic way. Each process gets the CPU for a small amount of time (the time quantum) and then goes back to the ready queue to wait for its next turn. It is a preemptive type of scheduling.
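To make the cyclic behavior concrete, here is a minimal C sketch of round robin scheduling. The three burst times and the quantum of 2 are illustrative values of my own, not taken from the text:

#include <stdio.h>

/* A minimal round robin simulation: each process gets at most
   QUANTUM units of CPU per turn, in cyclic order. */
#define QUANTUM 2

int main(void)
{
    int rem[] = {5, 3, 4};            /* remaining burst time of 3 processes */
    int n = 3, done = 0, time = 0;

    while (done < n) {
        for (int i = 0; i < n; i++) {
            if (rem[i] == 0) continue;              /* already finished */
            int slice = rem[i] < QUANTUM ? rem[i] : QUANTUM;
            time += slice;                          /* CPU runs this process */
            rem[i] -= slice;
            if (rem[i] == 0) {                      /* process completes */
                printf("P%d completes at time %d\n", i + 1, time);
                done++;
            }
        }
    }
    return 0;
}

All processes are assumed to arrive at time 0; with a quantum of 2 the completion order is P2, P3, P1, which shows how short jobs finish early without any single process monopolizing the CPU.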
3. Shortest Job First (SJF)
The job with the shortest burst time will get the CPU first. The lesser the burst time, the sooner the process gets the CPU. It is a non-preemptive type of scheduling.
4. Shortest Remaining Time First (SRTF)
It is the preemptive form of SJF. In this algorithm, the OS schedules the job according to its remaining execution time.
5. Priority Based Scheduling
In this algorithm, a priority is assigned to each process. The higher the priority, the sooner the process will get the CPU. If the priority of two processes is the same, they will be scheduled according to their arrival time.
6. Highest Response Ratio Next (HRRN)
In this scheduling algorithm, the process with the highest response ratio will be scheduled next. This reduces starvation in the system.
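For reference, the response ratio that HRRN computes for each waiting process is built from its waiting time W and burst time S (this is the standard HRRN formula, stated here for completeness):

Response Ratio = (W + S) / S

A process that has waited long relative to its burst time gets a high ratio, which is why long-waiting jobs do not starve under HRRN.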
First Come First Serve paves the way for understanding the other algorithms. This algorithm may have many disadvantages, but those disadvantages led to very new and efficient algorithms. So, it is worth studying the First Come First Serve CPU process scheduling algorithm in detail.
Important Abbreviations
1. CPU ---> Central Processing Unit
2. FCFS ---> First Come First Serve
3. AT ---> Arrival Time
4. BT ---> Burst Time
5. WT ---> Waiting Time
6. TAT ---> Turn Around Time
7. CT ---> Completion Time
8. FIFO ---> First In First Out
First Come First Serve CPU Scheduling Algorithm, shortly known as FCFS, is the first algorithm of CPU process scheduling. In the First Come First Serve algorithm, we allow processes to execute in a linear manner. This means that whichever process enters the ready queue first is executed first. This shows that the First Come First Serve algorithm follows the First In First Out (FIFO) principle.
The First Come First Serve algorithm can be executed in a preemptive and a non-preemptive manner. Before going into examples, let us understand what the preemptive and non-preemptive approaches are in CPU process scheduling.
Pre Emptive Approach
In the preemptive approach to process scheduling, the OS allots the resources to a process for a predetermined period of time. The process transitions from the running state to the ready state or from the waiting state to the ready state during resource allocation. This switching happens because the CPU may give other processes precedence and substitute the currently active process with a higher priority process.
Non Pre Emptive Approach
In the non-preemptive approach to process scheduling, a resource cannot be withdrawn from a process before the process has finished running. Resources are switched only when a running process finishes and transitions to the waiting state.
Convoy Effect
Convoy Effect is a phenomenon which occurs in the scheduling algorithm named First Come First Serve (FCFS). The First Come First Serve scheduling algorithm operates in a non-preemptive way.
Non-preemptive means that if a process or job has started execution, the operating system must complete that process or job; until the remaining burst of the process or job reaches zero, the next process or job does not start its execution. In terms of the operating system, non-preemptive scheduling means that the Central Processing Unit (CPU) will be completely dedicated to the process or job that started first, and a new process or job is executed only after the older process or job finishes.
There may be a few cases in which the Central Processing Unit (CPU) is allotted to a job for too much time. This is because in the non-preemptive First Come First Serve scheduling algorithm, the processes or jobs are chosen in serial order. Due to this, shorter jobs or processes behind larger processes or jobs take too much time to complete their execution, and the waiting time, turnaround time and completion time become very high. So, when the first process is large or its completion time is too high, the convoy effect occurs in the First Come First Serve algorithm.
Let us assume that the longer job takes infinite time to complete. Then the remaining processes have to wait for the same infinite time. Due to this convoy effect created by the longer job, the starvation of the waiting processes increases very rapidly. This is the biggest disadvantage of FCFS CPU process scheduling.
Advantages of FCFS
1. Implementation is simple.
2. It does not cause any harmful side effects while in use.
3. It can follow both a non-preemptive and a preemptive strategy.
4. It runs each job in the order in which it is received.
5. Arrival time is used as the selection criterion for jobs.
Disadvantages of FCFS
o FCFS CPU scheduling has a long waiting time.
o FCFS CPU scheduling favors CPU-bound processes over input/output operations.
o In FCFS there is a chance of occurrence of the convoy effect.
o Because FCFS is so straightforward, it often isn't very effective. Extended waiting periods go hand in hand with this: all other orders are left idle while the CPU is busy processing one time-consuming order.
Now, let us solve an example with the help of the scheduling algorithm named First Come First Serve, in a non-preemptive approach. The arrival and burst times of six processes P1 to P6 are shown in the table below, together with the computed completion, turnaround and waiting times.
S.No  Process ID  Process Name  AT  BT  CT  TAT  WT
1     P1          A             0   9   9   9    0
2     P2          B             1   3   12  11   8
3     P3          C             1   2   14  13   11
4     P4          D             1   4   18  17   13
5     P5          E             2   3   21  19   16
6     P6          F             3   2   23  20   18
The Average Completion Time is:
Average CT = (9 + 12 + 14 + 18 + 21 + 23) / 6 = 97 / 6 = 16.16667
The Average Waiting Time is:
Average WT = (0 + 8 + 11 + 13 + 16 + 18) / 6 = 66 / 6 = 11
The Average Turn Around Time is:
Average TAT = (9 + 11 + 13 + 17 + 19 + 20) / 6 = 89 / 6 = 14.83333
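The same non-preemptive FCFS calculation can be cross-checked with a short C program. The arrival and burst times below are taken from the table above; everything else is an illustrative sketch of my own:

#include <stdio.h>

/* Non-preemptive FCFS: processes run to completion in arrival order.
   AT/BT values are the ones from the example above. */
int main(void)
{
    int at[] = {0, 1, 1, 1, 2, 3};   /* arrival times */
    int bt[] = {9, 3, 2, 4, 3, 2};   /* burst times   */
    int n = 6, time = 0;
    double sum_ct = 0, sum_tat = 0, sum_wt = 0;

    for (int i = 0; i < n; i++) {
        if (time < at[i]) time = at[i];   /* CPU idles until the process arrives */
        time += bt[i];                    /* run the process to completion       */
        int ct = time, tat = ct - at[i], wt = tat - bt[i];
        printf("P%d: CT=%d TAT=%d WT=%d\n", i + 1, ct, tat, wt);
        sum_ct += ct; sum_tat += tat; sum_wt += wt;
    }
    printf("Avg CT=%.2f  Avg TAT=%.2f  Avg WT=%.2f\n",
           sum_ct / n, sum_tat / n, sum_wt / n);
    return 0;
}

Running it reproduces the averages computed above: Avg CT = 16.17, Avg TAT = 14.83, Avg WT = 11.00.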
Now, let us understand how the same problem can be solved in the preemptive approach. We will solve it with the help of the First Come First Serve scheduling algorithm in its preemptive form. In the preemptive approach, at each point we search for the best process that is currently available, which here means the process with the shortest remaining time.
S.No  Process ID  Process Name  AT  BT  CT  TAT  WT
1     P1          A             0   9   23  23   14
2     P2          B             1   3   8   7    4
3     P3          C             1   2   3   2    0
4     P4          D             1   4   15  14   10
5     P5          E             2   3   11  9    6
6     P6          F             3   2   5   2    0
To get rid of the problem of wasting the wake-up signals, Dijkstra proposed an approach which involves storing the wake-up calls. Dijkstra states that, instead of giving the wake-up calls directly to the consumer, the producer can store the wake-up call in a variable, and any of the consumers can read it whenever it needs to do so.
A semaphore is the variable which stores the entire number of wake-up calls being transferred from producer to consumer. It is a variable on which read, modify and update operations happen atomically in kernel mode.
A semaphore cannot be implemented in user mode because a race condition may always arise when two or more processes try to access the variable simultaneously. It always needs support from the operating system to be implemented.
Depending on the demands of the situation, semaphores can be divided into two categories.
1. Counting Semaphore
2. Binary Semaphore or Mutex
Counting Semaphore
There are scenarios in which more than one process needs to execute in the critical section simultaneously. A counting semaphore can be used when we need to have more than one process in the critical section at the same time.
The programming code of the semaphore implementation is shown below. It includes the structure of the semaphore and the logic with which entry into and exit from the critical section are performed.

struct Semaphore
{
    int value;      // number of processes that can be in the critical section simultaneously
    Queue L;        // L contains the set of processes which get blocked
};

Down(Semaphore S)
{
    S.value = S.value - 1;       // the semaphore's value is decreased when a new
                                 // process enters the critical section
    if (S.value < 0)
    {
        put the process (PCB) in L;   // if the value is negative then
                                      // the process goes into the blocked state
        sleep();
    }
    else
        return;
}

Up(Semaphore S)
{
    S.value = S.value + 1;       // the semaphore's value is increased when a process
                                 // exits the critical section
    if (S.value <= 0)
    {
        select a process from L;     // if the value is still zero or negative, at least
                                     // one process is blocked; wake the first of them up
        wakeup();
    }
}
In this mechanism, entry into and exit from the critical section are performed on the basis of the value of the counting semaphore. At any point of time, a positive value of the counting semaphore indicates the number of processes that can still enter the critical section.
A process which wants to enter the critical section first decreases the semaphore value by 1 and then checks whether it has become negative. If it is negative, the process is pushed into the list of blocked processes (i.e. L); otherwise it enters the critical section.
When a process exits the critical section, it increases the counting semaphore by 1 and then checks whether the result is negative or zero. If so, at least one process is waiting in the blocked state; hence, to ensure bounded waiting, the first process in the list of blocked processes is woken up and enters the critical section.
The processes in the blocked list are woken in the order in which they slept. If the value of the counting semaphore is negative, its magnitude states the number of processes in the blocked state, while if it is positive, it states the number of slots available in the critical section.
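The Down and Up operations above map directly onto POSIX counting semaphores, where sem_wait plays the role of Down and sem_post the role of Up. A minimal runnable sketch (my own illustration; the four threads and the initial value of 2 are arbitrary choices) in which at most two threads occupy the critical section at a time:

#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>
#include <unistd.h>

sem_t slots;                      /* counting semaphore, initialized to 2 */

void *worker(void *arg)
{
    long id = (long)arg;
    sem_wait(&slots);             /* Down: blocks if no slot is free      */
    printf("thread %ld in critical section\n", id);
    sleep(1);                     /* pretend to do some work              */
    printf("thread %ld leaving\n", id);
    sem_post(&slots);             /* Up: wakes one blocked thread, if any */
    return NULL;
}

int main(void)
{
    pthread_t t[4];
    sem_init(&slots, 0, 2);       /* value 2 means two slots available    */
    for (long i = 0; i < 4; i++)
        pthread_create(&t[i], NULL, worker, (void *)i);
    for (int i = 0; i < 4; i++)
        pthread_join(t[i], NULL);
    sem_destroy(&slots);
    return 0;
}

Compile with gcc -pthread; the output shows that no more than two threads are ever inside the critical section at once.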
Questions on counting semaphores are asked in GATE. Generally the questions are very simple, involving only subtraction and addition.
A counting semaphore was initialized to 12. Then 10 P (wait) and 4 V (signal) operations were performed on this semaphore. What is the result?

S = 12 (initial value)
After 10 P (wait) operations:  S = 12 - 10 = 2
After 4 V (signal) operations: S = 2 + 4 = 6

Hence, the final value of the counting semaphore is 6.

Binary Semaphore or Mutex
In the counting semaphore, mutual exclusion was not provided, because we had a set of processes which were required to execute in the critical section simultaneously. However, a binary semaphore strictly provides mutual exclusion. Here, instead of having more than one slot available in the critical section, we can have at most one process in the critical section. The semaphore can have only two values, 0 or 1.
struct Bsemaphore
{
    enum value {0, 1};   // value is an enumerated data type which can only be 0 or 1
    Queue L;             // L contains the PCBs of all processes blocked
                         // while performing an unsuccessful Down operation
};

Down(Bsemaphore S)
{
    if (S.value == 1)    // if the slot is available in the critical section
    {
        S.value = 0;     // set the value to 0 so that no other process can read it as 1
    }
    else
    {
        put the process (PCB) in S.L;   // if no slot is available, let the
                                        // process wait in the blocked queue
        sleep();
    }
}

Up(Bsemaphore S)
{
    if (S.L is empty)    // an empty blocked list implies that no process is waiting
    {
        S.value = 1;     // mark the slot as free again
    }
    else
    {
        select a process from S.L;
        wakeup();        // if the list is not empty, wake the first process of the blocked queue
    }
}
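For comparison, the same POSIX API behaves as a binary semaphore (mutex) when it is initialized to 1. In this minimal sketch of my own, the semaphore protects a shared counter so that the two threads' increments never interleave:

#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>

sem_t mutex;                       /* binary semaphore: value is 0 or 1 */
long counter = 0;                  /* shared data protected by mutex    */

void *increment(void *arg)
{
    for (int i = 0; i < 100000; i++) {
        sem_wait(&mutex);          /* Down: value 1 -> 0, or block      */
        counter++;                 /* critical section                  */
        sem_post(&mutex);          /* Up: value 0 -> 1, wake a waiter   */
    }
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    sem_init(&mutex, 0, 1);        /* initial value 1 means unlocked    */
    pthread_create(&a, NULL, increment, NULL);
    pthread_create(&b, NULL, increment, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("counter = %ld\n", counter);   /* always 200000 */
    sem_destroy(&mutex);
    return 0;
}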
Deadlock
Every process needs some resources to complete its execution, and resources are granted in a sequential order.
A deadlock is a situation where each of the computer processes waits for a resource which is assigned to some other process. In this situation, none of the processes gets executed, since the resource it needs is held by some other process which is also waiting for some other resource to be released.
Let us assume that there are three processes P1, P2 and P3, and three different resources R1, R2 and R3. R1 is assigned to P1, R2 is assigned to P2 and R3 is assigned to P3.
After some time, P1 demands R2, which is being used by P2. P1 halts its execution since it can't complete without R2. P2 in turn demands R3, which is being used by P3, so P2 also stops its execution because it can't continue without R3. P3 then demands R1, which is being used by P1, therefore P3 also stops its execution.
In this scenario, a cycle is formed among the three processes. None of the processes is progressing; they are all waiting. The computer becomes unresponsive since all the processes are blocked.
Deadlock vs Starvation
1. Deadlock is a situation where every process is blocked and no process proceeds, whereas starvation is a situation where low priority processes get blocked while high priority processes proceed.
2. In deadlock, the requested resource is held by another process; in starvation, the requested resource is continuously used by higher priority processes.
3. Deadlock happens when mutual exclusion, hold and wait, no preemption and circular wait occur simultaneously; starvation occurs due to uncontrolled priority and resource management.
Necessary Conditions for Deadlock
1. Mutual Exclusion
A resource can only be shared in a mutually exclusive manner. It implies that two processes cannot use the same resource at the same time.
2. Hold and Wait
A process waits for some resources while holding another resource at the same time.
3. No preemption
A process which has been allocated a resource keeps it until it completes; the resource cannot be preempted and given to another process in the meantime.
4. Circular Wait
All the processes must be waiting for resources in a cyclic manner, so that the last process is waiting for a resource which is held by the first process.
Strategies for Handling Deadlock
1. Deadlock Ignorance
Deadlock ignorance is the most widely used approach among all the mechanisms. It is used by many operating systems, mainly for end-user systems. In this approach, the operating system assumes that deadlock never occurs; it simply ignores deadlock. This approach is best suited to a single end-user system where the user uses the system only for browsing and other normal activities.
There is always a tradeoff between correctness and performance. Operating systems like Windows and Linux mainly focus on performance. The performance of the system decreases if it uses the deadlock handling mechanism all the time; if deadlock happens, say, 1 out of 100 times, then it is completely unnecessary to use the mechanism all the time.
In these types of systems, the user simply has to restart the computer in the case of deadlock. Windows and Linux mainly use this approach.
2. Deadlock prevention
Deadlock happens only when mutual exclusion, hold and wait, no preemption and circular wait hold simultaneously. If it is possible to violate one of the four conditions at any time, then deadlock can never occur in the system.
The idea behind the approach is very simple: we have to defeat one of the four conditions. However, there can be a big argument about its physical implementation in the system.
3. Deadlock avoidance
In deadlock avoidance, the operating system checks whether the system is in a safe state or an unsafe state at every step it performs. The process continues as long as the system remains in a safe state. If the system would move to an unsafe state, the OS has to backtrack one step.
In simple words, the OS reviews each allocation so that the allocation doesn't cause a deadlock in the system.
4. Deadlock detection and recovery
This approach lets the processes fall into deadlock and then periodically checks whether deadlock has occurred in the system. If it has, the OS applies some recovery methods to the system to get rid of the deadlock.
Deadlock Prevention
If we simulate deadlock with a table standing on its four legs, then we can also simulate the four necessary conditions with the four legs, which, when they occur simultaneously, cause deadlock. However, if we break one of the legs of the table, the table will definitely fall. The same happens with deadlock: if we can violate one of the four necessary conditions and not let them occur together, then we can prevent the deadlock.
1. Mutual Exclusion
Mutual exclusion, from the resource point of view, is the fact that a resource can never be used by more than one process simultaneously. This is fair enough, but it is the main reason behind deadlock. If a resource could be used by more than one process at the same time, no process would ever be waiting for a resource. However, if we can make resources not behave in a mutually exclusive manner, then the deadlock can be prevented.
Spooling
For a device like a printer, spooling can work. There is memory associated with the printer which stores jobs from each of the processes. Later, the printer collects all the jobs and prints each one of them according to FCFS. By using this mechanism, a process doesn't have to wait for the printer; it can continue whatever it was doing and collect the output later, when it is produced.
Although spooling can be an effective approach to violating mutual exclusion, it suffers from two kinds of problems.
We also cannot force a resource to be used by more than one process at the same time, since that will not be fair and some serious problems may arise in performance. Therefore, we cannot violate mutual exclusion for a process practically.
2. Hold and Wait
The hold and wait condition arises when a process holds a resource while waiting for some other resource to complete its task. Deadlock occurs because there can be more than one process holding one resource and waiting for another in a cyclic order.
However, we have to find some mechanism by which a process either doesn't hold any resource or doesn't wait. That means a process must be assigned all the necessary resources before its execution starts, and a process must not wait for any resource once its execution has started.
!(Hold and wait) = !hold or !wait (the negation of hold and wait is: either you don't hold, or you don't wait)
This can be implemented practically only if a process declares all its required resources initially. However attractive this sounds, it can't be done in a computer system, because a process can't determine all the necessary resources initially.
A process is a set of instructions which are executed by the CPU. Each instruction may demand multiple resources at multiple times. The need cannot be fixed by the OS in advance.
3. No Preemption
Deadlock arises due to the fact that a process can't be stopped once it has started. However, if we take a resource away from a process which is causing deadlock, then we can prevent deadlock. This is not a good approach at all, since if we take away a resource which is being used by the process, then all the work it has done so far can become inconsistent.
Consider a printer being used by some process. If we take the printer away from that process and assign it to some other process, then all the data which has been printed can become inconsistent and ineffective. Moreover, the process can't start printing again from where it left off, which causes performance inefficiency.
4. Circular Wait
To violate circular wait, we can assign a priority number to each resource and require that a process cannot request a resource with a lower priority number than one it already holds. This ensures that resources are always requested in increasing order, so no cycle of waiting processes can be formed.
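In code, this priority-numbering idea usually appears as a fixed global lock ordering: every thread acquires locks in increasing order of their number. A small sketch with two POSIX mutexes (the names r1 and r2 are my own):

#include <pthread.h>

/* r1 is numbered lower than r2, so every thread must lock r1 before r2.
   Since no thread ever holds r2 while waiting for r1, no cycle can form. */
pthread_mutex_t r1 = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t r2 = PTHREAD_MUTEX_INITIALIZER;

void use_both_resources(void)
{
    pthread_mutex_lock(&r1);    /* lower-numbered resource first */
    pthread_mutex_lock(&r2);    /* then the higher-numbered one  */
    /* ... work with both resources ... */
    pthread_mutex_unlock(&r2);  /* release in reverse order      */
    pthread_mutex_unlock(&r1);
}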
Among all the methods, violating circular wait is the only approach that can be implemented practically.

Deadlock Avoidance
In deadlock avoidance, a request for any resource will be granted only if the resulting state of the system doesn't cause a deadlock. The state of the system is continuously checked for safe and unsafe states.
In order to avoid deadlocks, each process must tell the OS the maximum number of resources it may request to complete its execution.
The simplest and most useful approach states that each process should declare the maximum number of resources of each type it may ever need. The deadlock avoidance algorithm examines resource allocations so that a circular wait condition can never arise. The resource allocation state of a system is defined by the instances of available and allocated resources, and by the maximum instances of the resources demanded by the processes.
Table 1: Resources Assigned

Process  R1  R2  R3  R4
A        3   0   2   2
B        0   0   1   1
C        1   1   1   0
D        2   1   4   0

Table 2: Resources Still Needed

Process  R1  R2  R3  R4
A        1   1   0   0
B        0   1   1   2
C        1   2   1   0
D        2   1   1   2

E = (7 6 8 4)
P = (6 2 8 3)
A = (1 4 0 1)
The above tables and the vectors E, P and A describe the resource allocation state of a system. There are 4 processes and 4 types of resources. Table 1 shows the instances of each resource assigned to each process, and Table 2 shows the instances of each resource that each process still needs.
Vector E represents the total instances of each resource in the system, vector P represents the instances of resources that have been assigned to processes, and vector A represents the number of resources that are not in use.
A state of the system is called safe if the system can allocate all the resources requested by all the processes without entering a deadlock. If the system cannot fulfill the requests of all processes, then the state of the system is called unsafe.
The key to the deadlock avoidance approach is that a request for resources must only be approved if the resulting state is also a safe state.
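That check can be coded as the classic safety algorithm over the tables above. The sketch below hard-codes the example's Assigned and Still Needed tables and the available vector A; the code itself is my own illustration, not the book's:

#include <stdio.h>
#include <stdbool.h>

#define P 4   /* processes A..D  */
#define R 4   /* resource types  */

int assigned[P][R] = {{3,0,2,2},{0,0,1,1},{1,1,1,0},{2,1,4,0}};
int needed[P][R]   = {{1,1,0,0},{0,1,1,2},{1,2,1,0},{2,1,1,2}};
int avail[R]       = {1,4,0,1};          /* vector A */

int main(void)
{
    bool finished[P] = {false};
    int done = 0;
    /* Repeatedly look for a process whose remaining need fits within
       avail; let it "run" to completion and reclaim its resources. */
    while (done < P) {
        bool progress = false;
        for (int p = 0; p < P; p++) {
            if (finished[p]) continue;
            bool fits = true;
            for (int r = 0; r < R; r++)
                if (needed[p][r] > avail[r]) { fits = false; break; }
            if (fits) {
                for (int r = 0; r < R; r++)
                    avail[r] += assigned[p][r];    /* reclaim resources */
                finished[p] = true;
                printf("process %c can finish\n", 'A' + p);
                done++;
                progress = true;
            }
        }
        if (!progress) { printf("state is unsafe\n"); return 1; }
    }
    printf("state is safe\n");
    return 0;
}

For this example the program finds the completion order A, B, C, D, so the state is safe.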
Resource Allocation Graph
The resource allocation graph (RAG) is a pictorial representation of the state of a system. As its name suggests, it contains complete information about all the processes which are holding or waiting for resources, as well as information about all the instances of all the resources, whether available or in use.
Vertices are of two types, resource and process, and each is represented by a different shape: a circle represents a process, while a rectangle represents a resource. A resource can have more than one instance, and each instance is represented by a dot inside the rectangle.
Edges in a RAG are also of two types: one represents assignment and the other represents a process waiting for a resource. A resource is shown as assigned to a process if the tail of the arrow is attached to an instance of the resource and the head is attached to the process. A process is shown as waiting for a resource if the tail of the arrow is attached to the process while the head is pointing towards the resource.
Example
Let's consider 3 processes P1, P2 and P3, and two types of resources R1 and R2, each with a single instance.
According to the graph, R1 is being used by P1, P2 is holding R2 and waiting for R1, and P3 is waiting for both R1 and R2.
The graph is deadlock free since no cycle is formed in it.
In the case of a resource allocation graph with multi-instanced resource types, a cycle is a necessary condition for deadlock but not a sufficient one.
The following example contains three processes P1, P2, P3 and three resources R1, R2, R3, each with a single instance. If we analyze the graph, we find that a cycle is formed, and the system satisfies all four conditions of deadlock.
Allocation Matrix
An allocation matrix can be formed by using the resource allocation graph of a system. In the allocation matrix, an entry is made for each assigned resource. For example, in the following matrix, an entry is made in front of P1 and below R3, since R3 is assigned to P1.
Process R1 R2 R3
P1 0 0 1
P2 1 0 0
P3 0 1 0
Request Matrix
In the request matrix, an entry is made for each requested resource. As in the following example, P1 needs R1, therefore an entry is made in front of P1 and below R1.
Process R1 R2 R3
P1 1 0 0
P2 0 1 0
P3 0 0 1
Avail = (0, 0, 0)
Neither do we have any resource available in the system, nor is any process going to release one. Each of the processes needs at least one resource to complete, so they will keep holding the ones they have.
Since we cannot fulfill the demand of even one process using the available resources, the system is deadlocked, as determined earlier when we detected a cycle in the graph.
Deadlock Detection and Recovery
In this approach, the OS doesn't apply any mechanism to avoid or prevent deadlocks; it accepts that a deadlock may well occur. In order to get rid of deadlocks, the OS periodically checks the system for any deadlock. If it finds one, the OS recovers the system using some recovery techniques.
The main task of the OS here is detecting the deadlocks, which it can do with the help of the resource allocation graph. With single-instanced resource types, if a cycle is formed in the graph, then there will definitely be a deadlock. On the other hand, in a multi-instanced resource type graph, detecting a cycle is not enough: we have to apply the safety algorithm to the system by converting the resource allocation graph into the allocation matrix and the request matrix.
In order to recover the system from deadlocks, the OS considers either resources or processes.
For Resource
Preempt the resource: we can snatch one of the resources from its owner (a process) and give it to another process, with the expectation that it will complete its execution and release the resource sooner. Choosing which resource to snatch is, however, going to be a bit difficult.
Rollback to a safe state: the moment we get into deadlock, we roll back all the allocations to get back into the previous safe state.
For Process
Kill a process: killing a process can solve our problem, but the bigger concern is deciding which process to kill. Generally, the operating system kills the process which has done the least amount of work so far.
Kill all processes: this is not an advisable approach, but it can be used if the problem becomes very serious. Killing all processes leads to inefficiency in the system, because all the processes will have to execute again from the start.