SNN - Module 3 - Notes
Sankhya Nayak, CSE, JNNCE
CHAPTER 7: DEADLOCKS
A process in an operating system uses a resource in the following sequence:
1) Requests the resource
2) Uses the resource
3) Releases the resource
Deadlock is a situation where a set of processes are blocked because each process is holding a resource and
waiting for another resource acquired by some other process.
In the above diagram, process 1 holds resource 2 and needs to acquire resource 1. Similarly, process 2 holds resource 1 and needs to acquire resource 2. Processes 1 and 2 are deadlocked: each needs the other's resource to complete its execution, and neither is willing to relinquish the resource it holds.
Deadlock can arise only if the following four conditions hold simultaneously:
• Mutual Exclusion: At least one resource is non-sharable; that is, only a limited number of processes can use it at a time, and if it is requested while it is being used by another process, the requesting process has to wait until the resource is released.
• Hold and Wait: There must be at least one process that is holding at least one resource and waiting for other resources that are being held by other processes.
• No Preemption: No resource can be preempted before the holding process completes its task with
that resource.
• Circular Wait: There exists a set of processes: {P1, P2, ..., Pn} such that
o P1 is waiting for a resource held by P2
o P2 is waiting for a resource held by P3
o ...
o Pn-1 is waiting for a resource held by Pn
o Pn is waiting for a resource held by P1
• If the resource allocation graph contains no cycles then there is no deadlock in the system at that
instance.
• If the resource allocation graph contains a cycle, then a deadlock may exist.
• If there is a cycle, and the cycle involves only resources which have a single instance, then a deadlock
has occurred.
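When every resource has a single instance, checking for deadlock therefore reduces to detecting a cycle in the resource-allocation graph. A minimal sketch (the node and edge names below are illustrative, not from the notes):

```python
# Hypothetical sketch: cycle detection in a resource-allocation graph
# with single-instance resources. Nodes are process/resource names;
# edges are request edges (P -> R) and assignment edges (R -> P).

def has_cycle(graph):
    """Return True if the directed graph (dict: node -> list of successors)
    contains a cycle, using iterative depth-first search."""
    WHITE, GRAY, BLACK = 0, 1, 2          # unvisited / on DFS stack / done
    color = {node: WHITE for node in graph}

    def visit(start):
        stack = [(start, iter(graph.get(start, ())))]
        color[start] = GRAY
        while stack:
            node, it = stack[-1]
            for nxt in it:
                if color.get(nxt, WHITE) == GRAY:   # back edge => cycle
                    return True
                if color.get(nxt, WHITE) == WHITE:
                    color[nxt] = GRAY
                    stack.append((nxt, iter(graph.get(nxt, ()))))
                    break
            else:                                   # all successors explored
                color[node] = BLACK
                stack.pop()
        return False

    return any(color[n] == WHITE and visit(n) for n in list(graph))

# Two processes each holding one resource and requesting the other:
deadlocked = {"P1": ["R1"], "R1": ["P2"], "P2": ["R2"], "R2": ["P1"]}
ok         = {"P1": ["R1"], "R1": ["P2"], "P2": [],     "R2": ["P1"]}
print(has_cycle(deadlocked))  # True
print(has_cycle(ok))          # False
```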
Deadlock Prevention
• This is done by restraining the ways a request can be made. Since deadlock occurs when all the above
four conditions are met, we try to prevent any one of them, thus preventing a deadlock.
Deadlock Avoidance
• When a process requests a resource, the deadlock-avoidance algorithm examines the resource-allocation state. If allocating that resource would send the system into an unsafe state, the request is not granted.
• Therefore, it requires additional information, such as how many resources of each type are required by a process. If granting a request would put the system into an unsafe state, the system refuses it to avoid deadlock.
Deadlock Ignorance
In this method, the system assumes that deadlock never occurs. Since deadlock situations are infrequent, some systems simply ignore them. Operating systems such as UNIX and Windows follow this approach. However, if a deadlock occurs, we can reboot the system and the deadlock is resolved automatically.
Note: The above approach is an example of Ostrich Algorithm. It is a strategy of ignoring potential problems
on the basis that they are extremely rare.
• Mutual Exclusion: In general, we do not have systems with all resources being sharable. Some
resources like printers, processing units are non-sharable. So, it is not possible to prevent deadlocks
by denying mutual exclusion.
• Hold and Wait: One protocol to ensure that hold-and-wait condition never occurs says each process
must request and get all its resources before it begins execution.
Another protocol is “Each process can request resources only when it does not occupy any
resources.”
The second protocol is better. However, both protocols cause low resource utilization and possible starvation. Many resources are allocated but remain unused for long periods of time, and a process that requests several commonly used resources may cause many others to wait indefinitely.
• No Preemption: One protocol is “If a process that is holding some resources requests another
resource and that resource cannot be allocated to it, then it must release all resources that are
currently allocated to it.”
Another protocol is “When a process requests some resources, if they are available, allocate them. If
a resource it requested is not available, then we check whether it is being used or it is allocated to
some other process waiting for other resources. If that resource is not being used, then the OS
preempts it from the waiting process and allocates it to the requesting process. If that resource is
used, the requesting process must wait.” This protocol can be applied to resources whose states can
easily be saved and restored (registers, memory space). It cannot be applied to resources like
printers.
• Circular Wait: One protocol to ensure that the circular wait condition never holds is “Impose a linear
ordering of all resource types.” Then, each process can only request resources in an increasing order
of priority.
For example, set priorities for r1 = 1, r2 = 2, r3 = 3, and r4 = 4. With these priorities, if process P
wants to use r1 and r3, it should first request r1, then r3.
Another protocol is “Whenever a process requests a resource rj, it must have released all resources rk with priority(rk) ≥ priority(rj).”
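The linear-ordering protocol can be sketched with locks; the resource names and ranks below are illustrative, not from the notes:

```python
# Sketch of circular-wait prevention: every process acquires resource
# locks in a fixed global order, so no circular chain of waits can form.
import threading

RANK = {"r1": 1, "r2": 2, "r3": 3, "r4": 4}       # the imposed linear ordering
LOCKS = {name: threading.Lock() for name in RANK}

def acquire_in_order(*names):
    """Acquire the named resource locks in increasing rank only."""
    ordered = sorted(names, key=RANK.__getitem__)
    for name in ordered:
        LOCKS[name].acquire()
    return ordered

def release_all(held):
    for name in reversed(held):
        LOCKS[name].release()

# A process that wants r3 and r1 must request r1 first, then r3:
held = acquire_in_order("r3", "r1")
print(held)          # ['r1', 'r3']
release_all(held)
```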
Avoidance Algorithms
• Claim edge Pi → Rj indicates that process Pi may request resource Rj; represented by a dashed line
• Claim edge converts to request edge when a process requests a resource
• Request edge converted to an assignment edge when the resource is allocated to the process
• When a resource is released by a process, assignment edge reconverts to a claim edge.
• Resources must be claimed a priori in the system
[Figures: a resource-allocation graph with claim edges for P1 and P2, and an unsafe state in which a request by P1 will not be granted.]
Banker's Algorithm
Available : One dimensional array of size m. It indicates the number of available resources of each type. For
example, if Available [i] is k, there are k instances of resource ri.
Max : Two dimensional array of size n*m. It defines the maximum demand of each process from each
resource type.
For example, if Max [i][j] is k, process pi may request at most k instances of resource type rj.
Allocation : Two dimensional array of size n*m. It defines the number of resources of each type currently
allocated to each process.
Need : Two dimensional array of size n*m. It indicates the remaining need of each process, of each resource
type.
If Need [i][j] is k, process pi may need k more instances of resource type rj.
The algorithm for finding out whether or not a system is in a safe state can be described as follows:
1) Let Work and Finish be vectors of length ‘m’ and ‘n’ respectively. Initialize Work = Available and Finish[i] = false for i = 0, 1, ..., n-1.
2) Find an index i such that both Finish[i] = false and Needi ≤ Work. If no such i exists, go to step 4.
3) Work = Work + Allocationi; Finish[i] = true; go to step 2.
4) If Finish[i] = true for all i, then the system is in a safe state.
Resource-Request Algorithm
Let Requesti be the request array for process Pi. If Requesti [j] == k, then process Pi wants k instances of resource type Rj. When a request for resources is made by process Pi, the following actions are taken:
1) If Requesti ≤ Needi, go to step 2. Otherwise, raise an error, since the process has exceeded its maximum claim.
2) If Requesti ≤ Available, go to step 3. Otherwise, Pi must wait, since the resources are not available.
3) Pretend to allocate the requested resources to Pi:
Available = Available - Requesti; Allocationi = Allocationi + Requesti; Needi = Needi - Requesti
Then run the safety algorithm. If the resulting state is safe, the request is granted; if it is unsafe, the allocation is rolled back and Pi must wait.
Example:
• 5 processes P0 through P4;
• resource types:
A (10 instances), B (5 instances), and C (7 instances)
• Snapshot at time T0:
Process    Allocation (A B C)    Max (A B C)
P0         0 1 0                 7 5 3
P1         2 0 0                 3 2 2
P2         3 0 2                 9 0 2
P3         2 1 1                 2 2 2
P4         0 0 2                 4 3 3
Available = (3 3 2)
The content of the Need matrix (Max − Allocation) is:
P0 (7, 4, 3), P1 (1, 2, 2), P2 (6, 0, 0), P3 (0, 1, 1), P4 (4, 3, 1)
The system is in a safe state, with safe sequence <P1, P3, P4, P2, P0>.
Sankhya Nayak,CSE,JNNCE
What will happen if process P1 requests one additional instance of resource type A and two instances of
resource type C?
We must determine whether this new system state is safe. To do so, we again execute Safety algorithm on
the above data structures.
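The safety and resource-request algorithms can be sketched together; the snapshot data below assumes the classic textbook values used in this example:

```python
# Sketch of the Banker's safety and resource-request algorithms,
# using the snapshot above (5 processes; resources A=10, B=5, C=7).

def is_safe(available, alloc, need):
    """Return (safe?, order in which processes can finish)."""
    work, finish = list(available), [False] * len(alloc)
    order = []
    while True:
        progressed = False
        for i, done in enumerate(finish):
            if not done and all(need[i][j] <= work[j] for j in range(len(work))):
                work = [w + a for w, a in zip(work, alloc[i])]  # Pi releases all
                finish[i] = True
                order.append(i)
                progressed = True
        if not progressed:
            return (all(finish), order)

def request(pid, req, available, alloc, need):
    """Grant req to process pid only if the resulting state is safe."""
    if any(r > n for r, n in zip(req, need[pid])):
        raise ValueError("process exceeded its maximum claim")
    if any(r > a for r, a in zip(req, available)):
        return False                      # must wait: resources unavailable
    # tentatively allocate on copies, then test safety
    avail2 = [a - r for a, r in zip(available, req)]
    alloc2 = [row[:] for row in alloc]
    need2 = [row[:] for row in need]
    alloc2[pid] = [a + r for a, r in zip(alloc2[pid], req)]
    need2[pid] = [n - r for n, r in zip(need2[pid], req)]
    safe, _ = is_safe(avail2, alloc2, need2)
    return safe                           # unsafe => implicitly rolled back

alloc = [[0,1,0],[2,0,0],[3,0,2],[2,1,1],[0,0,2]]
maxm  = [[7,5,3],[3,2,2],[9,0,2],[2,2,2],[4,3,3]]
avail = [3,3,2]
need  = [[m-a for m, a in zip(mr, ar)] for mr, ar in zip(maxm, alloc)]

print(is_safe(avail, alloc, need))              # (True, [1, 3, 4, 0, 2])
print(request(1, [1,0,2], avail, alloc, need))  # True: P1's request is granted
```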
Deadlock Detection
• If every resource has a single instance, maintain a wait-for graph and periodically invoke an algorithm that searches for a cycle in it. If there is a cycle, there exists a deadlock.
• An algorithm to detect a cycle in a graph requires an order of n² operations, where n is the number of vertices in the graph.
• For several instances of a resource type, the detection algorithm uses several time-varying data structures similar to the ones used in the Banker's algorithm.
• Available: A vector of length m indicates the number of available resources of each type
• Allocation: An n x m matrix defines the number of resources of each type currently allocated to
each process
• Request: An n x m matrix indicates the current request of each process. If Request [i][j] = k, then
process Pi is requesting k more instances of resource type Rj.
Detection Algorithm
1. Let Work and Finish be vectors of length m and n, respectively. Initialize:
(a) Work = Available
(b) For i = 0, 1, ..., n-1: if Allocationi ≠ 0, then Finish[i] = false; otherwise, Finish[i] = true.
2. Find an index i such that both Finish[i] = false and Requesti ≤ Work. If no such i exists, go to step 4.
3. Work = Work + Allocationi; Finish[i] = true; go to step 2.
4. If Finish[i] = false for some i, then the system is in a deadlocked state; moreover, each process Pi with Finish[i] = false is deadlocked.
Run-time complexity: O(m × n²).
Example: consider a system with 5 processes (P0 .. P4) and 3 resource types: A (7 instances), B (2 instances), C (6 instances). Snapshot:
Process    Allocation (A B C)    Request (A B C)
P0         0 1 0                 0 0 0
P1         2 0 0                 2 0 2
P2         3 0 3                 0 0 0
P3         2 1 1                 1 0 0
P4         0 0 2                 0 0 2
Available = (0 0 0)
Running the detection algorithm marks all processes finished (e.g. in the order P0, P2, P3, P1, P4), so there is no deadlock. If P2 makes one additional request for an instance of C, however, processes P1, P2, P3, and P4 become deadlocked.
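The detection algorithm can be sketched as follows; the Allocation and Request values assume the standard data for this example:

```python
# Sketch of the deadlock-detection algorithm on the snapshot above
# (5 processes; resources A=7, B=2, C=6).

def detect(available, alloc, req):
    """Return the list of deadlocked process indices (empty if none)."""
    n, m = len(alloc), len(available)
    work = list(available)
    # processes holding no resources can be marked finished immediately
    finish = [all(a == 0 for a in alloc[i]) for i in range(n)]
    changed = True
    while changed:
        changed = False
        for i in range(n):
            if not finish[i] and all(req[i][j] <= work[j] for j in range(m)):
                work = [w + a for w, a in zip(work, alloc[i])]  # reclaim Pi's resources
                finish[i] = True
                changed = True
    return [i for i in range(n) if not finish[i]]

alloc = [[0,1,0],[2,0,0],[3,0,3],[2,1,1],[0,0,2]]
req   = [[0,0,0],[2,0,2],[0,0,0],[1,0,0],[0,0,2]]
avail = [0,0,0]

print(detect(avail, alloc, req))   # [] -> no deadlock
# If P2 also requests one instance of C, a deadlock appears:
req[2] = [0,0,1]
print(detect(avail, alloc, req))   # [1, 2, 3, 4] are deadlocked
```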
7.7.1 Process Termination
Abort processes until the deadlock cycle is broken. Many factors determine which process (or processes) to abort:
▪ how many and what type of resources has the process used?
▪ how many more resources will the process need to complete?
▪ how many processes will need to be terminated?
▪ is the process interactive or batch?
7.7.2 Resource Preemption
Incrementally preempt and reallocate resources until the circular wait is broken. During preemption, the following issues need to be addressed:
• selecting a victim: which resources and which processes are preempted? (cost factors as above)
• rollback: what should be done with the process that lost the resource? Clearly it cannot continue normally; it must be rolled back to some safe state and restarted. Since it is generally difficult to determine a safe state, the simplest solution is total rollback: abort the process and restart it.
• starvation: to prevent the same process from always being picked as victim, pick a victim only a small (finite) number of times; include the number of rollbacks in the cost decision.
*********************************************************************************
Additional problems
Example of Banker’s Algorithm
Consider the following snapshot of a system:
                Allocation      Max           Available
Processes       A  B  C         A  B  C       A  B  C
P0              1  1  2         4  3  3       2  1  0
P1              2  1  2         3  2  2
P2              4  0  1         9  0  2
P3              0  2  0         7  5  3
P4              1  1  2         1  1  2
1. Calculate the content of the Need matrix.
2. Is the system in a safe state?
3. Determine the total amount of resources of each type.
Solution:
1. The content of the Need matrix can be calculated using Need = Max − Allocation:
P0 (3, 2, 1), P1 (1, 1, 0), P2 (5, 0, 1), P3 (7, 3, 3), P4 (0, 0, 0)
2. Apply the safety algorithm with Work = Available = (2, 1, 0):
a. P4: Need (0, 0, 0) ≤ (2, 1, 0), so P4 can run. Available = (2, 1, 0) + (1, 1, 2) = (3, 2, 2)
b. P1: Need (1, 1, 0) ≤ (3, 2, 2). Available = (3, 2, 2) + (2, 1, 2) = (5, 3, 4)
c. P2: Need (5, 0, 1) ≤ (5, 3, 4). Available = (5, 3, 4) + (4, 0, 1) = (9, 3, 5)
d. P3: Need (7, 3, 3) ≤ (9, 3, 5). Available = (9, 3, 5) + (0, 2, 0) = (9, 5, 5)
e. P0: Need (3, 2, 1) ≤ (9, 5, 5). Available = (9, 5, 5) + (1, 1, 2) = (10, 6, 7)
All processes can finish, so the system is in a safe state. Safe sequence: P4, P1, P2, P3, P0.
3. Total amount of each resource type = allocated + available: A = 10, B = 6, C = 7.
Another snapshot, with four resource types:
                Max             Allocation      Available
Processes       A  B  C  D      A  B  C  D      A  B  C  D
P0              6  0  1  2      4  0  0  1      3  2  1  1
P1              2  7  5  0      1  1  0  0
P2              2  3  5  6      1  2  5  4
P3              1  6  5  3      0  6  3  3
P4              1  6  5  6      0  2  1  2
1) Total amount of resources of each type (allocated + available):
A = 4+1+1+0+0+3 = 9
B = 0+1+2+6+2+2 = 13
C = 0+0+5+3+1+1 = 10
D = 1+0+4+3+2+1 = 11
2) Need matrix (Max − Allocation):
P0 (2, 0, 1, 1), P1 (1, 6, 5, 0), P2 (1, 1, 0, 2), P3 (1, 0, 2, 0), P4 (1, 4, 4, 4)
3) Yes, the system is in a safe state. Safe sequence: P0, P2, P3, P4, P1.
CHAPTER 8: MAIN MEMORY
Figure 8.1 - A base and a limit register define a logical address space. Figure 8.2 - Hardware address protection with base and limit registers.
• Address binding can happen at three different stages:
o Compile Time - If it is known at compile time where a process will reside in memory, then absolute code can be generated, containing actual physical addresses. However, if the load address changes at some later time, then the program will have to be recompiled. DOS .COM programs use compile-time binding.
o Load Time - If the location at which a program will be loaded is not known at compile time,
then the compiler must generate relocatable code, which references addresses relative to
the start of the program. If that starting address changes, then the program must be
reloaded but not recompiled.
o Execution Time - If a program can be moved around in memory during its execution, then
binding must be delayed until execution time. This requires special hardware, and is the
method implemented by most modern OSes.
• Figure 8.3 shows the various stages of the binding processes and the units involved in each stage:
• Note that user programs never see physical addresses. User programs work entirely in logical
address space, and any memory references or manipulations are done using purely logical addresses.
Only when the address gets sent to the physical memory chips is the physical memory address
generated.
1. With static linking, library modules get fully included in executable modules, wasting both disk space and main memory, because every program that includes a certain routine from the library has to have its own copy of that routine linked into its executable code.
2. With dynamic linking, however, only a stub is linked into the executable module, containing
references to the actual library module linked in at run time.
o This method saves disk space, because the library routines do not need to be fully included in
the executable modules, only the stubs.
o If the code section of the library routines is reentrant, ( meaning it does not modify the code
while it runs, making it safe to re-enter it ), then main memory can be saved by loading only one copy
of dynamically linked routines into memory and sharing the code amongst all processes that are
concurrently using it.
8.2 Swapping
• A process must be loaded into memory to execute.
• If there is not enough memory available to keep all running processes in memory at the same time,
then some processes who are not currently using the CPU may have their memory swapped out to a
fast local disk called the backing store.
• If compile-time or load-time address binding is used, then processes must be swapped back into the
same memory location from which they were swapped out.
• If execution time binding is used, then the processes can be swapped back into any available
location.
• Swapping is a very slow process compared to other operations. For example, if a user process occupied 10 MB and the transfer rate of the backing store were 40 MB per second, then it would take 1/4 second (250 milliseconds) just to do the data transfer. Adding in a latency lag of 8 milliseconds per transfer, ignoring head seek time for the moment, and recognizing that swapping involves moving old data out as well as new data in, the overall transfer time required for this swap is 2 × (250 + 8) = 516 milliseconds, or over half a second. For efficient processor scheduling, the CPU time slice should be significantly longer than this lost transfer time.
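The arithmetic above can be checked directly:

```python
# Back-of-the-envelope swap cost from the paragraph above:
# a 10 MB process, a 40 MB/s backing store, 8 ms latency per transfer,
# and two transfers (swap the old process out, the new one in).
size_mb, rate_mb_s, latency_ms = 10, 40, 8

transfer_ms = size_mb / rate_mb_s * 1000     # 250 ms per direction
total_ms = 2 * (transfer_ms + latency_ms)    # out + in
print(total_ms)                              # 516.0 ms
```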
• To reduce swapping transfer overhead, it is desired to transfer as little information as possible, which
requires that the system know how much memory a process is using, as opposed to how much
it might use. Programmers can help with this by freeing up dynamic memory that they are no longer
using.
• It is important to swap processes out of memory only when they are idle, or more to the point,
only when there are no pending I/O operations. ( Otherwise, the pending I/O operation could write
into the wrong process's memory space. ) The solution is to either swap only totally idle processes or
do I/O operations only into and out of OS buffers, which are then transferred to or from process's
main memory as a second step.
• Most modern OSes no longer use swapping, because it is too slow and there are faster alternatives
available. ( e.g. Paging. )
• When the CPU scheduler selects a process for execution, the dispatcher loads the relocation and
limit-registers with the correct values.
• Because every address generated by the CPU is checked against these registers, we can protect the OS from the running process.
• The relocation-register scheme provides an effective way to allow the OS size to change dynamically.
• Transient OS code: Code that comes & goes as needed to save memory-space and overhead for
unnecessary swapping.
Memory can be allocated in two ways:
1. Fixed-sized Partitioning:
• The memory is divided into fixed-sized partitions.
• Each partition may contain exactly one process.
• The degree of multiprogramming is bound by the number of partitions.
• When a partition is free, a process is selected from the input queue and loaded into the free partition. When the process terminates, the partition becomes available for another process.
2. Variable-sized Partitioning:
• The OS keeps a table indicating which parts of memory are available and which parts are occupied.
• A hole is a block of available memory. Normally, memory contains a set of holes of various sizes.
• Initially, all memory is available for user processes and is considered one large hole.
• When a process arrives, it is allocated memory from a hole large enough to hold it.
• If we find such a hole, we allocate only as much memory as is needed and keep the remaining memory available to satisfy future requests.
Three strategies used to select a free hole from the set of available holes:
1. First Fit: Allocate the first hole that is big enough. Searching can start either at the beginning of the set of
holes or at the location where the previous first-fit search ended.
2. Best Fit: Allocate the smallest hole that is big enough. We must search the entire list, unless the list is
ordered by size. This strategy produces the smallest leftover hole.
3. Worst Fit: Allocate the largest hole. Again, we must search the entire list, unless it is sorted by size. This
strategy produces the largest leftover hole.
• First-fit and best fit are better than worst fit in terms of decreasing time and storage utilization.
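The three strategies can be sketched over a list of (start, size) holes; the hole list below is illustrative:

```python
# Sketch of the three placement strategies over a free-hole list,
# each hole given as (start_address, size).

def first_fit(holes, size):
    """First hole big enough, scanning from the beginning."""
    return next((h for h in holes if h[1] >= size), None)

def best_fit(holes, size):
    """Smallest hole that is still big enough."""
    fits = [h for h in holes if h[1] >= size]
    return min(fits, key=lambda h: h[1], default=None)

def worst_fit(holes, size):
    """Largest hole available."""
    fits = [h for h in holes if h[1] >= size]
    return max(fits, key=lambda h: h[1], default=None)

holes = [(0, 100), (300, 50), (600, 200), (900, 120)]
print(first_fit(holes, 110))   # (600, 200): first hole big enough
print(best_fit(holes, 110))    # (900, 120): smallest adequate hole
print(worst_fit(holes, 110))   # (600, 200): largest hole
```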
8.3.3 Fragmentation
Two types of memory fragmentation:
1. Internal fragmentation 2. External fragmentation
Internal Fragmentation:
• The general approach is to break the physical memory into fixed-sized blocks and allocate memory in units based on block size.
• The memory allocated to a process may be slightly larger than the requested memory.
• The difference between requested memory and allocated memory is called internal fragmentation, i.e. unused memory that is internal to a partition.
External Fragmentation:
• External fragmentation occurs when there is enough total memory space to satisfy a request but the available spaces are not contiguous (i.e. storage is fragmented into a large number of small holes).
• Both the first-fit and best-fit strategies for memory allocation suffer from external fragmentation.
• Statistical analysis of first fit reveals that, given N allocated blocks, another 0.5 N blocks will be lost to fragmentation. This property is known as the 50-percent rule.
8.4 Paging
• Paging is a memory-management scheme which permits the physical-address space of a process to be
non-contiguous.
• This also solves the considerable problem of fitting memory-chunks of varying sizes onto the backing-
store.
• Traditionally: Support for paging has been handled by hardware.
• Recent designs: The hardware & OS are closely integrated.
• The basic method involves breaking physical memory into fixed-sized blocks called frames and breaking logical memory into blocks of the same size called pages.
• When a process is to be executed, its pages are loaded into any available memory frames from the
backing store.
• The backing store is divided into fixed-sized blocks that are of the same size as the memory frames.
The hardware support for paging is shown in Figure 8.7.
Address generated by CPU is divided into 2 parts (Figure 8.8):
1. Page-number (p) is used as an index to the page-table. The page-table contains the base-
address of each page in physical-memory.
2. Offset (d) is combined with the base-address to define the physical-address. This physical-
address is sent to the memory-unit.
• The page table maps the page number to a frame number, to yield a physical address
which also has two parts: The frame number and the offset within that frame.
• The number of bits in the frame number determines how many frames the system can address, and
the number of bits in the offset determines the size of each frame.
• If the size of the logical address space is 2^m and a page size is 2^n addressing units (bytes or words), then the high-order m − n bits of a logical address designate the page number, and the n low-order bits designate the page offset.
Thus, the logical address is as follows: page number (p), m − n bits; page offset (d), n bits.
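The page-number/offset split and the page-table lookup can be sketched as follows; the bit widths and page-table contents below are illustrative assumptions:

```python
# An m-bit logical address splits into a page number (high m-n bits)
# and an offset (low n bits). Assumed here: m = 16, n = 10 (1 KB pages).
m, n = 16, 10
page_size = 1 << n                       # 1024 addressing units

def split(logical):
    page = logical >> n                  # high-order m - n bits
    offset = logical & (page_size - 1)   # low-order n bits
    return page, offset

def translate(logical, page_table):
    page, offset = split(logical)
    frame = page_table[page]             # page table maps page -> frame
    return frame * page_size + offset    # frame base + offset

page_table = {0: 5, 1: 6, 2: 1, 3: 2}    # assumed mapping
print(split(0x0C35))                     # (3, 53): page 3, offset 53
print(translate(0x0C35, page_table))     # frame 2 -> 2*1024 + 53 = 2101
```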
• When a process requests memory (e.g. when its code is loaded in from disk), free frames are
allocated from a free-frame list, and inserted into that process's page table.
• Processes are blocked from accessing anyone else's memory because all their memory requests are
mapped through their page table. There is no way for them to generate an address that maps into
any other process's memory space.
• The operating system must keep track of each individual process's page table, updating it whenever
the process's pages get moved in and out of memory, and applying the correct page table when
processing system calls for a particular process. This all increases the overhead involved when
swapping processes in and out of the CPU.
Figure 8.10: Free frames (a) before allocation and (b) after allocation.
Working:
• When a logical-address is generated by the CPU, its page-number is presented to the TLB.
• If the page-number is found (TLB hit), its frame-number is immediately available and used to access
memory
• If page-number is not in TLB (TLB miss), a memory-reference to page table must be made. The
obtained frame-number can be used to access memory (Figure 8.11).
• In addition, we add the page-number and frame-number to the TLB, so that they will be found quickly on the next reference.
• If the TLB is already full of entries, the OS must select one for replacement.
• Percentage of times that a particular page-number is found in the TLB is called hit ratio.
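The hit ratio determines the effective memory-access time; the 20 ns TLB and 100 ns memory timings below are assumed for illustration, not from the notes:

```python
# Effective access time (EAT) with a TLB, assuming a 20 ns TLB
# lookup and a 100 ns memory access.
tlb_ns, mem_ns = 20, 100

def eat(hit_ratio):
    hit = tlb_ns + mem_ns            # one memory access on a TLB hit
    miss = tlb_ns + 2 * mem_ns       # page-table access + data access
    return hit_ratio * hit + (1 - hit_ratio) * miss

print(round(eat(0.80), 2))   # 140.0 ns
print(round(eat(0.98), 2))   # 122.0 ns: a higher hit ratio lowers the EAT
```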
8.4.3.Protection
• Memory-protection is achieved by protection-bits for each frame kept in the page-table.
• One protection-bit can define a page to be read-write or read-only.
• Every reference to memory goes through the page-table to find the correct frame number.
• Firstly, the physical-address is computed. At the same time, the protection-bit is checked to verify
that no writes are being made to a read-only page.
• An attempt to write to a read-only page causes a hardware-trap to the OS (or memory protection
violation).
Valid-Invalid Bit: This bit is attached to each entry in the page-table.
• Valid bit: “valid” indicates that the associated page is in the process’ logical address space, and is
thus a legal page
• Invalid bit: “invalid” indicates that the page is not in the process’ logical address space. Illegal addresses are trapped by use of the valid-invalid bit.
• The OS sets this bit for each page to allow or disallow access to the page.
Hashed Page Tables: A common approach for handling large address spaces is to hash the virtual page-number into a hash-table.
• Each entry in the hash-table contains a linked-list of elements that hash to the same location (to handle collisions).
Each element consists of 3 fields:
1. Virtual page-number
2. Value of the mapped page-frame and
3. Pointer to the next element in the linked-list.
The algorithm works as follows:
• The virtual page-number is hashed into the hash-table.
• The virtual page-number is compared with the first element in the linked-list.
• If there is a match, the corresponding page-frame (field 2) is used to form the desired
physical-address.
• If there is no match, subsequent entries in the linked-list are searched for a matching virtual page-
number.
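The lookup just described can be sketched with a bucket array of chains; the bucket count and mappings below are illustrative:

```python
# Sketch of a hashed page table: each bucket holds a chain of
# (virtual_page_number, frame_number) pairs that hash to the same slot.
NUM_BUCKETS = 16

table = [[] for _ in range(NUM_BUCKETS)]

def insert(vpn, frame):
    table[hash(vpn) % NUM_BUCKETS].append((vpn, frame))

def lookup(vpn):
    """Walk the chain for this bucket; return the mapped frame or None."""
    for entry_vpn, frame in table[hash(vpn) % NUM_BUCKETS]:
        if entry_vpn == vpn:
            return frame
    return None

insert(0x12345, 7)
insert(0x12345 + NUM_BUCKETS, 9)   # deliberately collides with the first
print(lookup(0x12345))             # 7
print(lookup(0x99999))             # None: page not mapped
```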
8.6. Segmentation
• A logical address consists of two parts: a segment-number (s) and an offset (d). Each entry of the segment-table has a segment-base and a segment-limit.
• If the offset is not between 0 and the segment-limit, then we trap to the OS (logical-addressing attempt beyond end of segment).
• If the offset is legal, then it is added to the segment-base to produce the physical-memory address.
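Segment-table translation with the limit check can be sketched as follows; the base/limit values below are illustrative assumptions:

```python
# Sketch of segmentation address translation: a logical address is
# (segment number, offset); each segment-table entry is (base, limit).
seg_table = [(1400, 1000), (6300, 400), (4300, 400)]   # assumed values

def translate(segment, offset):
    base, limit = seg_table[segment]
    if not (0 <= offset < limit):
        # offset beyond the end of the segment: trap to the OS
        raise MemoryError("trap: offset beyond end of segment")
    return base + offset

print(translate(2, 53))    # 4300 + 53 = 4353
try:
    translate(1, 500)      # offset 500 >= limit 400 -> trap
except MemoryError as e:
    print(e)
```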
Advantages of Segmentation
• No Internal fragmentation.
• Segment Table consumes less space in comparison to Page table in paging.
Disadvantage of Segmentation
• As processes are loaded and removed from the memory, the free memory space is broken into little
pieces, causing External fragmentation.