Module 4 of 4
PLATFORM TECHNOLOGIES
Brueckner B. Aswigue
This module discusses deadlocks, a class of problems that can arise only in a
system with multiple active asynchronous processes. You will learn the three basic
approaches to handling deadlock: prevention, avoidance, and detection. It can be useful
to pose a deadlock problem in human terms and ask why human systems never deadlock.
The module also covers an algorithm for allocating resources so as to avoid deadlock,
known as the Banker's Algorithm.
This module presents an overview of deadlock and of memory management in
operating systems. An e-book, PowerPoint slides, and video presentations are also
included to give you more detail and to serve as references.
The time allotted for this module is 11 hours. You are expected to finish the
module in 7 weeks.
LEARNING OUTCOMES
PRE-TEST
The following questions cover general areas of this module. You may not
know the answers to all questions, but please attempt to answer them without
asking others or referring to books.
Choose the best answer for each question and write the letter of your choice
after the number.
1. A situation that occurs in an Operating System when a process enters a waiting state
because another waiting process is holding the demanded resource.
a. Starvation
b. Deadlock
c. Request
d. Use
2. A resource can be released only voluntarily by the process holding it, after that process
has completed its task.
a. Mutual exclusion
b. Hold and wait
c. No preemption
d. Circular wait
4. If means -
a. Process is holding an instance of the Resource
b. Process requests an instance of the Resource
c. Process is pointing to the Resource
d. Process moves to the Resource
5. Helps you to dynamically assess the resource-allocation state so that there can never
be a circular-wait situation
a. Avoidance Algorithm
b. Prevention Algorithm
c. Hold and wait
d. Multiple instances
6. Consists of a large array of bytes, each with its own address.
a. Memory
b. CPU
c. Instruction
d. Cache
10. Fast disk large enough to accommodate copies of all memory images for all users
a. Store
b. Front store
c. Backing store
d. Alternative store
LESSON 1: Deadlocks
Objectives:
At the end of the lesson, you should be able to:
1. define accurately what a deadlock is and the conditions under which it occurs;
2. discuss comprehensively the three basic approaches to handling deadlock:
prevention, avoidance, and detection;
3. characterize deadlocks using the system model and the resource-allocation graph; and,
4. apply the Banker's Algorithm to allocate resources safely.
Let’s Engage.
A deadlock is a situation in which two computer programs sharing the same
resource are effectively preventing each other from accessing the resource, resulting in
both programs ceasing to function.
The earliest computer operating systems ran only one program at a time. All of
the resources of the system were available to this one program. Later, operating systems
ran multiple programs at once, interleaving them. Programs were required to specify in
advance what resources they needed so that they could avoid conflicts with other
programs running at the same time. Eventually some operating systems offered dynamic
allocation of resources. Programs could request further allocations of resources after
they had begun running. This led to the problem of deadlock. Here is the simplest
picture: both threads (like two streams of traffic meeting on a road) are blocked; each is
waiting for an event which will never occur. Traffic gridlock is an everyday example of
a deadlock situation. In order for deadlock to occur, four conditions must hold at the
same time:
• Mutual exclusion – each resource is either currently allocated to exactly one
process or it is available.
• Hold and wait – a process holding at least one resource is waiting to acquire
additional resources held by other processes.
• No preemption – a resource can be released only voluntarily by the process
holding it.
• Circular wait – a set of processes exists in which each process is waiting for a
resource held by the next process in the set.
1. System Model
For example, a system may have two printers. These two printers may be defined
to be in the same resource class if no one cares which printer prints which output.
However, if one printer is on the ninth floor and the other is in the basement, then
people on the ninth floor may not see both printers as equivalent, and separate resource
classes may need to be defined for each printer.
A process must request a resource before using it and must release the resource
after using it. A process may request as many resources as it requires to carry out its
designated task. Obviously, the number of resources requested may not exceed the total
number of resources available in the system. In other words, a process cannot request
three printers if the system has only two. Under the normal mode of operation, a process
may utilize a resource in only the following sequence:
1. Request. The process requests the resource. If the request cannot be granted
immediately (for example, if the resource is being used by another process),
then the requesting process must wait until it can acquire the
resource.
2. Use. The process can operate on the resource (for example, if the resource is a
printer, the process can print on the printer).
3. Release. The process releases the resource.
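As a rough illustration of this request–use–release protocol, here is a minimal Python sketch; the ResourceManager class, its method names, and the resource counts are assumptions for the example, not part of the module:

```python
import threading

class ResourceManager:
    """Minimal sketch of the request / use / release protocol.
    Resource names and counts here are illustrative only."""

    def __init__(self, counts):
        self.counts = dict(counts)          # e.g. {"printer": 2}
        self.cond = threading.Condition()

    def request(self, name, n=1):
        """Block until n instances of the resource are available, then take them."""
        with self.cond:
            while self.counts[name] < n:    # cannot be granted immediately -> wait
                self.cond.wait()
            self.counts[name] -= n

    def release(self, name, n=1):
        """Return n instances and wake up any waiting processes."""
        with self.cond:
            self.counts[name] += n
            self.cond.notify_all()

mgr = ResourceManager({"printer": 2})
mgr.request("printer")     # 1. Request
print("printing...")       # 2. Use
mgr.release("printer")     # 3. Release
```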
2. Deadlock Characterization
Deadlock is a situation that occurs in an OS when a process enters a waiting state
because another waiting process is holding the demanded resource. Deadlock is a
common problem in multiprocessing, where several processes share a specific kind of
mutually exclusive resource, such as a software lock.
Example of Deadlock
• A real-world example is traffic that can cross a bridge in only one direction at a time.
• Here, the bridge is considered the resource.
• When a deadlock happens, it can be resolved if one car backs up
(preempt resources and roll back).
• Several cars may have to be backed up if a deadlock situation occurs.
• So starvation is possible.
For example, Process A is allocated Resource B while it is requesting
Resource A. In the same way, Process B is allocated Resource A while it is
requesting Resource B. This creates a circular wait loop.
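The circular wait above can be reproduced in a few lines of Python. This is only a sketch; the thread and lock names are illustrative, and the script will usually hang in exactly the way described:

```python
import threading
import time

resource_a = threading.Lock()   # "Resource A"
resource_b = threading.Lock()   # "Resource B"

def process_a():
    with resource_b:            # Process A is allocated Resource B...
        time.sleep(0.1)
        with resource_a:        # ...and then requests Resource A
            print("Process A finished")

def process_b():
    with resource_a:            # Process B is allocated Resource A...
        time.sleep(0.1)
        with resource_b:        # ...and then requests Resource B
            print("Process B finished")

t1 = threading.Thread(target=process_a, daemon=True)
t2 = threading.Thread(target=process_b, daemon=True)
t1.start(); t2.start()
t1.join(timeout=2); t2.join(timeout=2)
if t1.is_alive() and t2.is_alive():
    print("Neither process can proceed: deadlock (circular wait)")
```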
Resource-Allocation Graph
A set of vertices V and a set of edges E.
• V is partitioned into two types:
o P = {P1, P2, …, Pn}, the set consisting of all the processes in the
system
o R = {R1, R2, …, Rm}, the set consisting of all resource types in the
system
▪ request edge – directed edge Pi → Rj
▪ assignment edge – directed edge Rj → Pi
Figure 4. Example of Resource-allocation graph
P1 → R1 → P2 → R3 → P3 → R2 → P1
P2 → R3 → P3 → R2 → P2
▪ Processes P1, P2, and P3 are deadlocked. Process P2 is waiting for the
resource R3, which is held by process P3. Process P3 is waiting for
either process P1 or process P2 to release resource R2. In addition,
process P1 is waiting for process P2 to release resource R1.
Figure 6. Resource-allocation graph with a cycle but no deadlock
(Operating System. http://www.os-book.com).
Basic facts:
• If the graph contains no cycles ⇒ no deadlock
• If the graph contains a cycle ⇒
o if there is only one instance per resource type, then deadlock
o if there are several instances per resource type, then there is the possibility of deadlock
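The "cycle in the graph" test can be sketched in Python with a depth-first search. The adjacency-list representation and node names below are assumptions for the example; the edges correspond to the cycle listed under Figure 4:

```python
def has_cycle(graph):
    """Detect a cycle in a directed resource-allocation graph.
    graph maps each node (process or resource) to the nodes its edges point to."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {node: WHITE for node in graph}

    def dfs(node):
        color[node] = GRAY
        for nxt in graph.get(node, []):
            if color.get(nxt, WHITE) == GRAY:      # back edge -> cycle found
                return True
            if color.get(nxt, WHITE) == WHITE and dfs(nxt):
                return True
        color[node] = BLACK
        return False

    return any(color[n] == WHITE and dfs(n) for n in graph)

# Request edges (Pi -> Rj) and assignment edges (Rj -> Pi) from Figure 4:
graph = {
    "P1": ["R1"], "R1": ["P2"],
    "P2": ["R3"], "R3": ["P3"],
    "P3": ["R2"], "R2": ["P1", "P2"],
}
print(has_cycle(graph))   # True -> deadlock possible (certain with single instances)
```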
Deadlock Prevention:
It is important to prevent a deadlock before it can occur. The system checks every
transaction before it is executed to make sure it does not lead to a deadlock situation:
if carrying out an operation could lead to a deadlock in the future, the process is never
allowed to execute it.
Deadlock prevention is a set of methods for ensuring that at least one of the four
necessary conditions cannot hold.
Deadlock Detection
A deadlock occurrence can be detected by the resource scheduler, which helps the
OS keep track of all the resources allocated to the different processes. When a deadlock
is detected, it can be resolved using the methods given below.
No Preemption:
A resource can be released only voluntarily by the process holding it, after that
process has finished its task.
• If a process which is holding some resources requests another resource that
cannot be immediately allocated to it, then all the resources it is currently
holding are released.
• The preempted resources are added to the list of resources for which the
process is waiting.
• The process will be restarted only when it can regain its old resources together
with the new ones that it is requesting.
• If the requested resource is available, it is given to the requesting process.
• If it is held by another process that is itself waiting for some other resource, we
release it and give it to the requesting process.
Mutual Exclusion:
Mutex is short for mutual exclusion. It is a special type of binary
semaphore which is used for controlling access to a shared resource. It includes a
priority-inheritance mechanism to avoid extended priority-inversion problems. It
allows current higher-priority tasks to be kept in the blocked state for the shortest
time possible.
Shared resources such as read-only files never lead to deadlocks, but
resources like printers and tape drives need exclusive access by a single process.
Circular Wait:
This approach imposes a total ordering of all resource types and requires that
every process request resources in increasing order of enumeration.
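As a brief sketch of this idea, assume each resource type is given a number and every process acquires its locks in increasing order of that number; the resource names below are illustrative:

```python
import threading

# Assign each resource type a fixed order number (a total ordering of resource types).
ORDER = {"scanner": 1, "printer": 2, "tape_drive": 3}
locks = {name: threading.Lock() for name in ORDER}

def acquire_in_order(names):
    """Acquire the requested resources in increasing order of enumeration.
    Because every process follows the same order, a circular wait cannot form."""
    for name in sorted(names, key=ORDER.get):
        locks[name].acquire()
    return names

def release_all(names):
    for name in names:
        locks[name].release()

held = acquire_in_order({"printer", "scanner"})   # always scanner first, then printer
try:
    pass   # use the resources here
finally:
    release_all(held)
```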
Deadlock Avoidance
It is better to avoid a deadlock than to take action after the deadlock has
occurred. Avoidance needs additional information about how resources are to be used.
The simplest and most useful model requires that each process declare the
maximum number of resources of each type that it may need.
Avoidance Algorithms
The deadlock-avoidance algorithm helps you to dynamically assess the
resource-allocation state so that there can never be a circular-wait situation.
Single instance of each resource type:
• Use a resource-allocation graph.
• A cycle is both necessary and sufficient for deadlock.
Multiple instances of a resource type:
• A cycle is necessary but not sufficient for deadlock.
• Use the Banker's Algorithm.
Deadlock versus starvation:
• The deadlock situation occurs when one (or more) of the processes gets blocked forever.
• Starvation is a situation where all the low-priority processes get blocked while the
high-priority processes execute.
Advantages of Deadlock
Here are the pros/benefits of using the deadlock-handling method:
• This approach works well for processes which perform a single burst of activity
• No preemption is needed
• It is a convenient method when applied to resources whose state can be saved and
restored easily
• It is feasible to enforce via compile-time checks
• It needs no run-time computation, since the problem is solved in system design
Summary:
• Deadlock definition: a situation that occurs in an OS when a process enters a
waiting state because another waiting process is holding the demanded resource.
• Circular waiting happens when one process is waiting for a resource which is
held by a second process, which is in turn waiting for a resource held by a third
process, and so on.
• A deadlock occurrence can be detected by the resource scheduler.
• It is important to prevent a deadlock before it can occur.
• A resource can be released only voluntarily by the process holding it, after
that process has finished its task.
• Mutex is short for mutual exclusion. It is a special type of binary semaphore
which is used for controlling access to a shared resource.
• Hold and wait is a condition in which a process holds one or more resources
while simultaneously waiting for one or more others; prevention must stop
processes from doing this.
• The simplest and most useful deadlock-avoidance model requires that each
process declare the maximum number of resources of each type that it may need.
• The deadlock-avoidance algorithm dynamically assesses the resource-allocation
state so that there can never be a circular-wait situation.
• Deadlock is an infinite wait, whereas starvation is a long but not infinite wait.
Deadlock Avoidance
Requires that the system has some additional a priori information
available
• Simplest and most useful model requires that each process declare the
maximum number of resources of each type that it may need
• The deadlock-avoidance algorithm dynamically examines the resource-
allocation state to ensure that there can never be a circular-wait condition
• Resource-allocation state is defined by the number of available and allocated
resources, and the maximum demands of the processes
Safe State
• When a process requests an available resource, system must decide if
immediate allocation leaves the system in a safe state
• The system is in a safe state if there exists a sequence <P1, P2, …, Pn> of ALL the
processes in the system such that, for each Pi, the resources that Pi can
still request can be satisfied by the currently available resources plus the resources
held by all the Pj, with j < i
• That is:
o If Pi's resource needs are not immediately available, then Pi can wait
until all Pj have finished
o When Pj is finished, Pi can obtain the needed resources, execute, return
its allocated resources, and terminate
o When Pi terminates, Pi+1 can obtain its needed resources, and so on
• If a system is in a safe state ⇒ no deadlocks
• If a system is in an unsafe state ⇒ possibility of deadlock
• Avoidance ensures that a system will never enter an unsafe state.
Safe, Unsafe, Deadlock State
Avoidance Algorithms
• Claim edge Pi → Rj indicates that process Pi may request resource Rj;
represented by a dashed line
• A claim edge converts to a request edge when the process requests the resource
• A request edge converts to an assignment edge when the resource is
allocated to the process
• When a resource is released by a process, the assignment edge reconverts to
a claim edge
• Resources must be claimed a priori in the system
Banker’s Algorithm
Safety Algorithm
1. Let Work and Finish be vectors of length m and n, respectively. Initialize:
Work = Available
Finish[i] = false for i = 0, 1, …, n − 1
2. Find an i such that both:
(a) Finish[i] = false
(b) Needi ≤ Work
If no such i exists, go to step 4
3. Work = Work + Allocationi
Finish[i] = true
Go to step 2
4. If Finish[i] == true for all i, then the system is in a safe state
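Steps 1–4 of the safety algorithm can be written out as a short Python sketch. The data at the end is illustrative only; the function returns a safe sequence of process indices, or None if the state is unsafe:

```python
def is_safe(available, allocation, need):
    """Banker's safety algorithm sketch.
    available:  free instances per resource type (length m)
    allocation: allocation[i][j] = instances of Rj held by Pi
    need:       need[i][j] = Max[i][j] - Allocation[i][j]"""
    n, m = len(allocation), len(available)
    work = list(available)                 # Step 1: Work = Available
    finish = [False] * n                   #         Finish[i] = false
    sequence = []
    while True:
        # Step 2: find i with Finish[i] == false and Need_i <= Work
        for i in range(n):
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                # Step 3: Work = Work + Allocation_i; Finish[i] = true
                for j in range(m):
                    work[j] += allocation[i][j]
                finish[i] = True
                sequence.append(i)
                break
        else:
            break                          # no such i exists -> go to step 4
    # Step 4: safe if and only if Finish[i] == true for all i
    return sequence if all(finish) else None

# Illustrative data: 5 processes, 3 resource types
available  = [3, 3, 2]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
need       = [[7, 4, 3], [1, 2, 2], [6, 0, 0], [0, 1, 1], [4, 3, 1]]
print(is_safe(available, allocation, need))   # -> [1, 3, 0, 2, 4], a safe sequence
```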
Resource-Request Algorithm for Process Pi
Let Requesti be the request vector for process Pi. If Requesti[j] = k, then process Pi
wants k instances of resource type Rj.
1. If Requesti ≤ Needi, go to step 2. Otherwise, raise an error condition, since the
process has exceeded its maximum claim
2. If Requesti ≤ Available, go to step 3. Otherwise Pi must wait, since the
resources are not available
3. Pretend to allocate the requested resources to Pi by modifying the state as
follows:
Available = Available – Requesti;
Allocationi = Allocationi + Requesti;
Needi = Needi – Requesti;
• If safe ⇒ the resources are allocated to Pi
• If unsafe ⇒ Pi must wait, and the old resource-allocation state is restored
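The resource-request algorithm can be sketched as follows; it assumes the is_safe function from the previous sketch, and the in-place update and rollback mirror steps 1–3 above:

```python
def request_resources(pid, request, available, allocation, need):
    """Resource-request algorithm sketch for process `pid` (uses is_safe above).
    Returns True if the request is granted, False if the process must wait."""
    m = len(available)
    # Step 1: the request must not exceed the declared maximum need
    if any(request[j] > need[pid][j] for j in range(m)):
        raise ValueError("process has exceeded its maximum claim")
    # Step 2: if the resources are not available, the process must wait
    if any(request[j] > available[j] for j in range(m)):
        return False
    # Step 3: pretend to allocate, then test the resulting state
    for j in range(m):
        available[j] -= request[j]
        allocation[pid][j] += request[j]
        need[pid][j] -= request[j]
    if is_safe(available, allocation, need) is not None:
        return True                        # safe -> keep the allocation
    # unsafe -> roll back to the old resource-allocation state
    for j in range(m):
        available[j] += request[j]
        allocation[pid][j] -= request[j]
        need[pid][j] += request[j]
    return False
```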
DEADLOCK DETECTION
• Allow system to enter deadlock state
• Detection algorithm
• Recovery scheme
Detection Algorithm
1. Let Work and Finish be vectors of length m and n, respectively. Initialize:
(a) Work = Available
(b) For i = 1, 2, …, n, if Allocationi ≠ 0, then
Finish[i] = false; otherwise, Finish[i] = true
2. Find an index i such that both:
(a) Finish[i] == false
(b) Requesti ≤ Work
If no such i exists, go to step 4
3. Work = Work + Allocationi
Finish[i] = true
Go to step 2
4. If Finish[i] == false for some i, 1 ≤ i ≤ n, then the system is in a deadlock
state. Moreover, if Finish[i] == false, then Pi is deadlocked
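The detection algorithm has the same shape as the safety algorithm, except that it works with Request instead of Need and initially marks processes that hold nothing as finished. A minimal Python sketch, with all data supplied by the caller:

```python
def detect_deadlock(available, allocation, request):
    """Deadlock-detection algorithm sketch.
    Returns the list of deadlocked process indices (empty if there is no deadlock)."""
    n, m = len(allocation), len(available)
    work = list(available)                                   # Step 1(a)
    # Step 1(b): processes holding nothing cannot be part of a deadlock
    finish = [all(allocation[i][j] == 0 for j in range(m)) for i in range(n)]
    progressed = True
    while progressed:                                        # Steps 2-3
        progressed = False
        for i in range(n):
            if not finish[i] and all(request[i][j] <= work[j] for j in range(m)):
                for j in range(m):
                    work[j] += allocation[i][j]
                finish[i] = True
                progressed = True
    # Step 4: any process still unfinished is deadlocked
    return [i for i in range(n) if not finish[i]]
```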
• Example: in a snapshot with five processes P0 through P4 and three resource types,
process P3 has Allocation (2, 1, 1) and Request (1, 0, 0), and process P4 has
Allocation (0, 0, 2) and Request (0, 0, 2).
• Running the detection algorithm, the sequence <P0, P2, P3, P1, P4> will result in
Finish[i] = true for all i.
Detection-Algorithm Usage
• When, and how often, to invoke the detection algorithm depends on how often a
deadlock is likely to occur and on how many processes will be affected when it happens.
LESSON 2: Main Memory
Objectives:
At the end of the lesson, you should be able to:
1. identify comprehensively the description of organizing memory hardware; and,
2. analyze extensively the memory-management techniques, paging and
segmentation.
Let’s Engage.
Main Memory refers to a physical memory that is the internal memory to the
computer. The word main is used to distinguish it from external mass storage devices
such as disk drives. Main memory is also known as RAM. The computer is able to change
only data that is in main memory. Therefore, every program we execute and every file
we access must be copied from a storage device into main memory.
This lesson will discuss several issues that are pertinent to managing memory:
basic hardware, the binding of symbolic memory addresses to actual physical
addresses, and the distinction between logical and physical addresses.
Memory is central to the operation of a modern computer system. Memory
consists of a large array of bytes, each with its own address. The CPU fetches
instructions from memory according to the value of the program counter. These
instructions may cause additional loading from and storing to specific memory
addresses.
Background
• A program must be brought (from disk) into memory and placed within a process
for it to be run
• Main memory and registers are the only storage the CPU can access directly
• The memory unit sees only a stream of addresses plus read requests, or addresses
plus data and write requests
• Register access takes one CPU clock cycle (or less)
• Main memory access can take many cycles, causing a stall
• Cache sits between main memory and CPU registers
• Protection of memory is required to ensure correct operation
Figure 9. A base and a limit register define a logical address space (Operating
System. http://www.os-book.com).
• Figure 10 shows the hardware address protection to prevent a user program from
(accidentally or deliberately) modifying the code or data structures of either the
operating system or other users.
• The base and limit registers can be loaded only by the operating system, which
uses a special privileged instruction. Since privileged instructions can be
executed only in kernel mode, and since only the operating system executes in
kernel mode, only the operating system can load the base and limit registers.
Figure 10. Hardware address protection with base and limit registers
Address Binding
• Programs on disk, ready to be brought into memory to execute form an input
queue
o Without support, must be loaded into address 0000
• Inconvenient to have first user process physical address always at 0000
o How can it not be?
• Further, addresses represented in different ways at different stages of a
program’s life
o Source code addresses are usually symbolic
o Compiled code addresses bind to relocatable addresses
▪ e.g. “14 bytes from the beginning of this module”
o Linker or loader will bind relocatable addresses to absolute addresses
▪ e.g. 74014
o Each binding maps one address space to another
Figure 11. Multistep Processing of a User Program (Operating System.
http://www.os-book.com).
Dynamic Loading
• A routine is not loaded until it is called, giving better memory-space utilization
o Implemented through program design
o OS can help by providing libraries to implement dynamic loading
Dynamic Linking
• Static linking – system libraries and program code combined by the loader
into the binary program image
• Dynamic linking – linking is postponed until execution time
• A small piece of code, the stub, is used to locate the appropriate memory-resident
library routine
• The stub replaces itself with the address of the routine, and executes the routine
• The operating system checks whether the routine is in the process's memory address space
o If not in the address space, add it to the address space
• Dynamic linking is particularly useful for libraries
• System also known as shared libraries
• Consider applicability to patching system libraries
o Versioning may be needed
Swapping
• A process can be swapped temporarily out of memory to a backing store, and then
brought back into memory for continued execution
o Total physical memory space of processes can exceed physical memory
• Backing store – fast disk large enough to accommodate copies of all memory
images for all users; must provide direct access to these memory images
• Roll out, roll in – swapping variant used for priority-based scheduling algorithms;
lower-priority process is swapped out so higher-priority process can be loaded and
executed
• The major part of swap time is transfer time; total transfer time is directly proportional
to the amount of memory swapped (a rough worked example follows this list)
• System maintains a ready queue of ready-to-run processes which have memory
images on disk
• Does the swapped out process need to swap back in to same physical addresses?
• Depends on address binding method
o Plus consider pending I/O to / from process memory space
• Modified versions of swapping are found on many systems (i.e., UNIX, Linux, and
Windows)
o Swapping normally disabled
o Started if more than threshold amount of memory allocated
o Disabled again once memory demand reduced below threshold
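A rough worked example (the numbers are illustrative, not from the module): if a process has a 100 MB memory image and the backing store transfers 50 MB per second, swapping it out takes about 100 / 50 = 2 seconds, and swapping another process in takes roughly another 2 seconds, so one full swap costs on the order of 4 seconds of transfer time. This is why swap time is dominated by transfer time and grows in direct proportion to the amount of memory swapped.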
Figure 8.5 Swapping of two processes using a disk as a backing store
(Operating System. http://www.os-book.com).
CONTIGUOUS ALLOCATION
• Main memory must support both OS and user processes
• Limited resource, must allocate efficiently
• Contiguous allocation is one early method
• Main memory is usually divided into two partitions:
o Resident operating system, usually held in low memory with the interrupt vector
o User processes then held in high memory
o Each process contained in single contiguous section of memory
• Relocation registers used to protect user processes from each other, and from
changing operating-system code and data
o Base register contains value of smallest physical address
o Limit register contains range of logical addresses – each logical address
must be less than the limit register
o MMU maps logical address dynamically
o Can then allow actions such as kernel code being transient and kernel
changing size
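A minimal sketch of the relocation- and limit-register check described above; the register values in the example are illustrative:

```python
def relocate(logical_address, base, limit):
    """Relocation-register sketch: each logical address must be less than the
    limit register; the MMU then maps it dynamically by adding the base register."""
    if logical_address < 0 or logical_address >= limit:
        raise MemoryError("trap: addressing error")
    return base + logical_address

# Illustrative register values: base = 300040, limit = 120900
print(relocate(100, base=300040, limit=120900))   # -> 300140
```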
Multiple-partition allocation
• Multiple-partition allocation
o Degree of multiprogramming limited by number of partitions
o Variable-partition sizes for efficiency (sized to a given process’ needs)
o Hole – block of available memory; holes of various size are scattered
throughout memory
o When a process arrives, it is allocated memory from a hole large
enough to accommodate it (a first-fit sketch follows this list)
o Process exiting frees its partition, adjacent free partitions combined
o Operating system maintains information about:
a) allocated partitions b) free partitions (hole)
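A simple first-fit sketch of hole allocation, assuming free memory is kept as a list of (start, length) blocks; the numbers are illustrative:

```python
def first_fit(holes, size):
    """First-fit allocation sketch: holes is a list of (start, length) free blocks.
    Returns the start address of the allocated block, or None if no hole fits."""
    for idx, (start, length) in enumerate(holes):
        if length >= size:
            if length == size:
                holes.pop(idx)                               # hole consumed exactly
            else:
                holes[idx] = (start + size, length - size)   # shrink the hole
            return start
    return None                                              # request cannot be satisfied

holes = [(100, 50), (200, 300), (600, 120)]   # illustrative free blocks
print(first_fit(holes, 80))   # -> 200 (the first hole large enough)
print(holes)                  # -> [(100, 50), (280, 220), (600, 120)]
```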
Fragmentation
• External Fragmentation – total memory space exists to satisfy a request,
but it is not contiguous
• Internal Fragmentation – allocated memory may be slightly larger than
requested memory; this size difference is memory internal to a partition, but
not being used
• Statistical analysis of first fit reveals that, given N allocated blocks, another
0.5 N blocks will be lost to fragmentation
o That is, one-third of memory may be unusable → the 50-percent rule
• Reduce external fragmentation by compaction
o Shuffle memory contents to place all free memory together in one
large block
o Compaction is possible only if relocation is dynamic, and is done at
execution time
o I/O problem
▪ Latch job in memory while it is involved in I/O
▪ Do I/O only into OS buffers
• Now consider that backing store has same fragmentation problems
Segmentation
• Memory-management scheme that supports user view of memory
• A program is a collection of segments
o A segment is a logical unit such as:
▪ main program
▪ procedure
▪ function
▪ method
▪ object
▪ local variables, global variables
▪ common block
▪ stack
▪ symbol table
▪ arrays
Segmentation Architecture
• A logical address consists of a two-tuple <segment-number, offset>
• Segment table – maps two-dimensional physical addresses; each table
entry has:
o base – contains the starting physical address where the segments
reside in memory
o limit – specifies the length of the segment
o Segment-table base register (STBR) points to the segment table’s
location in memory
• Segment-table length register (STLR) indicates number of segments used
by a program;
o segment number s is legal if s < STLR
• Protection
o With each entry in the segment table associate:
▪ validation bit = 0 ⇒ illegal segment
▪ read/write/execute privileges
• Protection bits associated with segments; code sharing occurs at segment
level
• Since segments vary in length, memory allocation is a dynamic storage-
allocation problem
• A segmentation example is shown in the following diagram
Figure. Segmentation hardware (Operating System. http://www.os-book.com).
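The segmentation hardware in the figure can be sketched as a table lookup plus a limit check; the segment-table values below are illustrative:

```python
# Segment table sketch: each entry holds (base, limit); the values are illustrative.
segment_table = [
    (1400, 1000),   # segment 0
    (6300,  400),   # segment 1
    (4300, 1100),   # segment 2
]

def translate_segment(s, offset):
    """Logical address <segment-number s, offset> -> physical address."""
    if s >= len(segment_table):        # s must be less than STLR
        raise MemoryError("trap: invalid segment number")
    base, limit = segment_table[s]
    if offset >= limit:                # offset must be within the segment's length
        raise MemoryError("trap: offset beyond end of segment")
    return base + offset

print(translate_segment(2, 53))        # -> 4300 + 53 = 4353
```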
Paging
• Physical address space of a process can be noncontiguous; process is allocated
physical memory whenever the latter is available
o Avoids external fragmentation
o Avoids problem of varying sized memory chunks
• Divide physical memory into fixed-sized blocks called frames
o Size is power of 2, between 512 bytes and 16 Mbytes
• Divide logical memory into blocks of same size called pages
• Keep track of all free frames
• To run a program of size N pages, need to find N free frames and load program
• Set up a page table to translate logical to physical addresses
• Backing store likewise split into pages
• Still have Internal fragmentation
• Page-table base register (PTBR) points to the page table
• Page-table length register (PTLR) indicates size of the page table
• In this scheme every data/instruction access requires two memory accesses
o One for the page table and one for the data / instruction
• The two memory access problem can be solved by the use of a special fast-
lookup hardware cache called associative memory or translation look-aside
buffers (TLBs)
• Some TLBs store address-space identifiers (ASIDs) in each TLB entry –
uniquely identifies each process to provide address-space protection for that
process
o Otherwise need to flush at every context switch
• TLBs typically small (64 to 1,024 entries)
• On a TLB miss, value is loaded into the TLB for faster access next time
o Replacement policies must be considered
o Some entries can be wired down for permanent fast access
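Putting the pieces together, a paging translation can be sketched as a page-number/offset split, a TLB lookup, and a page-table lookup on a miss. The page size, page-table contents, and dictionary-based TLB below are simplifying assumptions:

```python
PAGE_SIZE = 4096                       # 2^12 bytes per page (illustrative)

page_table = {0: 5, 1: 9, 2: 1}        # page number -> frame number (illustrative)
tlb = {}                               # small associative cache of recent translations

def translate_page(logical_address):
    """Paging sketch: split the address into page number and offset,
    consult the TLB first, then the page table on a TLB miss."""
    page, offset = divmod(logical_address, PAGE_SIZE)
    if page in tlb:                    # TLB hit: one memory access for the data
        frame = tlb[page]
    else:                              # TLB miss: extra access for the page table
        if page not in page_table:
            raise MemoryError("trap: invalid page reference")
        frame = page_table[page]
        tlb[page] = frame              # load the entry for faster access next time
    return frame * PAGE_SIZE + offset

print(translate_page(8200))            # page 2, offset 8 -> frame 1 -> 4104
```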
Associative Memory
• Associative memory – hardware that searches all entries in parallel; each entry
maps a Page # to a Frame #
Memory Protection
• Memory protection implemented by associating protection bit with each frame
to indicate if read-only or read-write access is allowed
o Can also add more bits to indicate page execute-only, and so on
• Valid-invalid bit attached to each entry in the page table:
o “valid” indicates that the associated page is in the process’ logical
address space, and is thus a legal page
o “invalid” indicates that the page is not in the process’ logical address
space
o Or use page-table length register (PTLR)
• Any violations result in a trap to the kernel
Shared Pages
• Shared code
o One copy of read-only (reentrant) code shared among processes (i.e., text
editors, compilers, window systems)
o Similar to multiple threads sharing the same process space
o Also useful for interprocess communication if sharing of read-write pages
is allowed
• Private code and data
o Each process keeps a separate copy of the code and data
o The pages for the private code and data can appear anywhere in the
logical address space
Hierarchical Page Tables
• To handle large logical address spaces, the page table itself can be paged: the page
number is broken into p1 and p2, where p1 is an index into the outer page table and
p2 is the displacement within the page of the inner page table
• Known as a forward-mapped page table
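A short sketch of how a two-level (forward-mapped) logical address is split into p1, p2, and offset; the 10/10/12-bit split is an illustrative assumption:

```python
# Illustrative 32-bit address split: 10-bit p1, 10-bit p2, 12-bit offset.
OFFSET_BITS, P2_BITS = 12, 10

def split_address(addr):
    """p1 indexes the outer page table, p2 indexes the inner page table,
    and offset is the displacement within the final page."""
    offset = addr & ((1 << OFFSET_BITS) - 1)
    p2 = (addr >> OFFSET_BITS) & ((1 << P2_BITS) - 1)
    p1 = addr >> (OFFSET_BITS + P2_BITS)
    return p1, p2, offset

print(split_address(0x00403004))   # -> (1, 3, 4)
```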
Direction: Read the passage carefully and plan what you will write. Place your answer
on pad paper, whether yellow or white, to be submitted. Each question is worth 10 points.
The essay rubric below, with its corresponding points, will guide you in your essay.
Essay Rubric
Understanding
• Expert (9-10 points): Writing shows strong understanding.
• Accomplished (7-8 points): Writing shows a clear understanding.
• Capable (4-6 points): Writing shows adequate understanding.
• Beginner (1-3 points): Writing shows little understanding.
Quality of Writing
• Expert (9-10 points): Piece was written in an extraordinary style; very informative and well-organized.
• Accomplished (7-8 points): Piece was written in an interesting style; somewhat informative and organized.
• Capable (4-6 points): Piece had little style; gives some new information but poorly organized.
• Beginner (1-3 points): Piece had no style; gives no new information and is very poorly organized.
Grammar, Usage & Mechanics
• Expert (9-10 points): Virtually no spelling, punctuation or grammatical errors.
• Accomplished (7-8 points): Few spelling and punctuation errors, minor grammatical errors.
• Capable (4-6 points): A number of spelling, punctuation or grammatical errors.
• Beginner (1-3 points): So many spelling, punctuation and grammatical errors that it interferes with the meaning.
QUESTIONS:
1. List three examples of deadlocks that are not related to a computer system
environment.
2. Suppose that a system is in an unsafe state. Show that it is possible for
the processes to complete their execution without entering a deadlocked
state.
DIRECTION: SOLVE THE PROBLEM USING BANKER’S ALGORITHM.
3. Consider the following snapshot of a system:
POST ASSESSMENT
Directions: This will assess what you have learned in this module. Select the
letter only. Write your answers on a separate clean sheet of paper for submission.
1. Only one process at a time can use a resource
a. Mutual exclusion
b. Hold and wait
c. No Preemption
d. Circular wait
2. The system checks every transaction before it is executed to make sure it doesn’t
lead the deadlock situations
a. Deadlock prevention
b. Deadlock no prevention
c. Deadlock no action
d. Avoidance
3. It imposes a total ordering of all resource types.
a. Hold and wait
b. Circular wait
c. Deadlock avoidance
d. Avoiding algorithm
4. If means -
a. Process is holding an instance of the Resource
b. Process requests an instance of the Resource
c. Process is pointing to the Resource
d. Process moves to the Resource
5. Mutex is short for
a. Mutual exclusion
b. Deadlock
c. Hold and wait
d. Resource
6. Holds the smallest legal physical memory address
a. Base register
b. Limit register
c. Cache
d. Memory access
7. Programs on disk, ready to be brought into memory to execute, form an input queue
a. Trap
b. Address Binding
c. Address error
d. System monitor
8. Set of all physical addresses generated by a program
a. Logical address space
b. Physical address space
c. Virtual address space
d. All of the above
9. System libraries and program code combined by the loader into the binary program
image.
a. No linking
b. Shared Linking
c. Dynamic Linking
d. Static linking
10. A block of available memory is called
a. Hole
b. Variable
c. Store
d. Transient
REFERENCES
Silberschatz, A. et. al. (2013). Operating System Concepts. 9th Edition. Hoboken, New
Jersey, USA: John Wiley & Sons, Inc.
Stallings, William (2012). Operating System Internals and Design Principles. 7th edition.
Upper Saddle River, New Jersey: Pearson Education, Inc.