Page Replacement Algorithms



• In an operating system that uses paging for memory management, a page replacement algorithm is needed to decide which page should be replaced when a new page comes in.
• Page replacement is done when the requested page is not found in main memory; this event is called a page fault.
Types of Page Replacement Algorithms
• There are various page replacement algorithms. Each algorithm uses a different method to decide which page to replace.
• The target of all algorithms is to reduce the number of page faults.
• FIFO: in this algorithm, a queue of resident pages is maintained. The page which was assigned a frame first will be replaced first. In other words, the oldest page, the one at the head of the queue, is replaced on every page fault.
Numerical on FIFO, LRU and Optimal
• Consider the reference string 4, 7, 6, 1, 7, 6, 1, 2, 7, 2. The number of frames in memory is 3. Find the number of page faults for:
• FIFO Page Replacement Algorithm
• LRU Page Replacement Algorithm
• Optimal Page Replacement Algorithm
FIFO Page Replacement Algorithm

• Number of Page Faults in FIFO = 6
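• As a cross-check of the count above, here is a minimal C sketch (not part of the original slides; the helper name fifo_faults is just illustrative) that simulates FIFO replacement for this reference string with 3 frames and prints 6 faults.

#include <stdio.h>

/* Returns the number of page faults for the given reference string under FIFO. */
static int fifo_faults(const int *refs, int n, int frames)
{
    int mem[16];                     /* resident pages (assumes frames <= 16) */
    int used = 0, next = 0, faults = 0;

    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int j = 0; j < used; j++)
            if (mem[j] == refs[i]) { hit = 1; break; }
        if (hit)
            continue;                /* page already resident: no fault */
        faults++;
        if (used < frames) {
            mem[used++] = refs[i];   /* free frame still available */
        } else {
            mem[next] = refs[i];     /* evict the oldest resident page */
            next = (next + 1) % frames;
        }
    }
    return faults;
}

int main(void)
{
    int refs[] = {4, 7, 6, 1, 7, 6, 1, 2, 7, 2};
    int n = sizeof refs / sizeof refs[0];
    printf("FIFO faults (3 frames): %d\n", fifo_faults(refs, n, 3));  /* prints 6 */
    return 0;
}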


• Consider a main memory with five page frames and the following sequence of page references: 3, 8, 2, 3, 9, 1, 6, 3, 8, 9, 3, 6, 2, 1, 3. Which of the following is true with respect to the page replacement policies First-In-First-Out (FIFO) and Least Recently Used (LRU)?
FIFO
• According to FIFO, the page which comes into memory first goes out first.

• Number of Page Faults = 9


• Number of hits = 6
Optimal Page Replacement algorithm
• Optimal Page Replacement algorithm: this algorithm replaces the page which will not be referred to for the longest time in the future. It is not practically implementable, but it can be used as a benchmark; other algorithms are compared to it in terms of optimality.
Optimal Page Replacement Algorithm

Number of Page Faults in Optimal Page Replacement Algorithm = 5
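• A similar C sketch (again illustrative, with an assumed helper name optimal_faults) simulates the optimal policy by evicting the resident page whose next use lies farthest in the future; for this string and 3 frames it reports 5 faults.

#include <stdio.h>

static int optimal_faults(const int *refs, int n, int frames)
{
    int mem[16];
    int used = 0, faults = 0;

    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int j = 0; j < used; j++)
            if (mem[j] == refs[i]) { hit = 1; break; }
        if (hit)
            continue;
        faults++;
        if (used < frames) {
            mem[used++] = refs[i];
        } else {
            /* Evict the page whose next reference lies farthest ahead
               (or which is never referenced again). */
            int victim = 0, farthest = -1;
            for (int j = 0; j < used; j++) {
                int next_use = n;                     /* "never used again" */
                for (int k = i + 1; k < n; k++)
                    if (refs[k] == mem[j]) { next_use = k; break; }
                if (next_use > farthest) { farthest = next_use; victim = j; }
            }
            mem[victim] = refs[i];
        }
    }
    return faults;
}

int main(void)
{
    int refs[] = {4, 7, 6, 1, 7, 6, 1, 2, 7, 2};
    int n = sizeof refs / sizeof refs[0];
    printf("Optimal faults (3 frames): %d\n", optimal_faults(refs, n, 3));  /* prints 5 */
    return 0;
}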
Least Recently Used (LRU) page replacement algorithm
• Least Recently Used (LRU) page replacement algorithm → this algorithm replaces the page which has not been referred to for the longest time. It is the opposite of the optimal page replacement algorithm: it looks at the past instead of into the future.
LRU Page Replacement Algorithm

• Number of Page Faults in LRU = 6
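• A matching C sketch for LRU (illustrative; the lru_faults helper name is an assumption, not from the slides) records the time of last use of each resident page and evicts the least recently used one; for this string and 3 frames it reports 6 faults.

#include <stdio.h>

static int lru_faults(const int *refs, int n, int frames)
{
    int mem[16], last[16];            /* resident pages and time of last use */
    int used = 0, faults = 0;

    for (int i = 0; i < n; i++) {
        int hit = -1;
        for (int j = 0; j < used; j++)
            if (mem[j] == refs[i]) { hit = j; break; }
        if (hit >= 0) {
            last[hit] = i;            /* hit: refresh recency */
            continue;
        }
        faults++;
        if (used < frames) {
            mem[used] = refs[i];
            last[used] = i;
            used++;
        } else {
            int victim = 0;           /* slot with the oldest "last used" time */
            for (int j = 1; j < used; j++)
                if (last[j] < last[victim]) victim = j;
            mem[victim] = refs[i];
            last[victim] = i;
        }
    }
    return faults;
}

int main(void)
{
    int refs[] = {4, 7, 6, 1, 7, 6, 1, 2, 7, 2};
    int n = sizeof refs / sizeof refs[0];
    printf("LRU faults (3 frames): %d\n", lru_faults(refs, n, 3));  /* prints 6 */
    return 0;
}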


LRU:
• According to LRU, the page which has not been requested for the longest time is replaced by the new one.

• Number of Page Faults = 9


Number of Hits = 6
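• As a cross-check for this five-frame exercise, the following C sketch (illustrative only; the simulate() helper is an assumption) runs both policies with a single routine that switches between FIFO and LRU eviction; both report 9 faults and 6 hits over the 15 references.

#include <stdio.h>

static int simulate(const int *refs, int n, int frames, int use_lru)
{
    int mem[16], last[16];
    int used = 0, next = 0, faults = 0;

    for (int i = 0; i < n; i++) {
        int hit = -1;
        for (int j = 0; j < used; j++)
            if (mem[j] == refs[i]) { hit = j; break; }
        if (hit >= 0) { last[hit] = i; continue; }       /* hit: refresh recency */
        faults++;
        if (used < frames) {
            mem[used] = refs[i]; last[used] = i; used++;
        } else if (use_lru) {
            int v = 0;                                   /* least recently used slot */
            for (int j = 1; j < used; j++)
                if (last[j] < last[v]) v = j;
            mem[v] = refs[i]; last[v] = i;
        } else {
            mem[next] = refs[i]; last[next] = i;         /* oldest resident page */
            next = (next + 1) % frames;
        }
    }
    return faults;
}

int main(void)
{
    int refs[] = {3, 8, 2, 3, 9, 1, 6, 3, 8, 9, 3, 6, 2, 1, 3};
    int n = sizeof refs / sizeof refs[0];
    int f_fifo = simulate(refs, n, 5, 0);
    int f_lru  = simulate(refs, n, 5, 1);
    printf("FIFO: %d faults, %d hits\n", f_fifo, n - f_fifo);   /* 9 faults, 6 hits */
    printf("LRU : %d faults, %d hits\n", f_lru,  n - f_lru);    /* 9 faults, 6 hits */
    return 0;
}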
Page Faults vs Number of Frames
Allocation of frames
• Virtual memory, an important aspect of operating systems, is implemented using demand paging.
• Demand paging necessitates the development of a page-replacement algorithm and a frame-allocation algorithm.
• Frame-allocation algorithms are used when there are multiple processes; they decide how many frames to allocate to each process.
• Frame allocation algorithms:
The three algorithms commonly used to allocate frames to a process are:
1. Equal allocation
2. Proportional allocation (weighted)
3. Priority allocation
1. Equal allocation:
• Each process gets an equal number of frames.
• In a system with x frames and y processes, each process gets x/y frames (rounded down). For instance, if the system has 48 frames and 9 processes, each process will get 5 frames.
• The three frames which are not allocated to any process can be used as a free-frame buffer pool.
• Disadvantage: in systems with processes of varying sizes, it does not make much sense to give each process the same number of frames.
• Allocating a large number of frames to a small process leads to the wastage of a large number of allocated but unused frames.
• Example 2: given number of frames: 6
• Number of processes: 3
• Therefore, each process will get 2 frames

• Not very useful, since not every process requires the same number of frames; some processes may need more frames and others fewer.
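• A tiny C sketch of equal allocation (illustrative, not part of the original slides) reproduces the 48-frame, 9-process example above: integer division gives 5 frames per process and a 3-frame free pool.

#include <stdio.h>

int main(void)
{
    int frames = 48, processes = 9;
    int per_process = frames / processes;      /* 48 / 9 = 5 frames each */
    int free_pool   = frames % processes;      /* 3 frames left in the free pool */
    printf("each process: %d frames, free pool: %d frames\n",
           per_process, free_pool);
    return 0;
}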
2. Proportional allocation
• Frames are allocated to each process according to the process size.
For a process p_i of size s_i, the number of allocated frames is a_i = (s_i / S) * m, where S is the sum of the sizes of all the processes and m is the number of frames in the system.
• For instance, in a system with 62 frames, if there is a process of 10 KB and another process of 127 KB, then the first process will be allocated (10/137) * 62 = 4 frames (rounded down) and the other process will get (127/137) * 62 = 57 frames.
• Advantage: All the processes share the available
frames according to their needs, rather than equally.
• Example 2: processes of size P1: 20 pages, P2: 30 pages, P3: 50 pages
• Available frames: 10
• Total size S = 20 + 30 + 50 = 100 pages
• Allocation: P1 = (20/100)*10 = 2 frames, P2 = (30/100)*10 = 3 frames, P3 = (50/100)*10 = 5 frames
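• The following C sketch (illustrative; the allocate() helper is an assumption) applies a_i = (s_i / S) * m with integer truncation and reproduces both examples: 4 and 57 frames for the 10 KB and 127 KB processes, and 2, 3 and 5 frames for P1, P2 and P3.

#include <stdio.h>

static void allocate(const int *size, int n, int m)
{
    int S = 0;
    for (int i = 0; i < n; i++)
        S += size[i];                              /* S = total size of all processes */
    for (int i = 0; i < n; i++)
        printf("P%d: size %d -> %d frames\n",
               i + 1, size[i], size[i] * m / S);   /* a_i = (s_i / S) * m, truncated */
}

int main(void)
{
    int ex1[] = {10, 127};                         /* sizes in KB; 62 frames -> 4 and 57 */
    int ex2[] = {20, 30, 50};                      /* sizes in pages; 10 frames -> 2, 3, 5 */
    allocate(ex1, 2, 62);
    allocate(ex2, 3, 10);
    return 0;
}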
3. Priority allocation
• A disadvantage of proportional allocation is that it does not distinguish between high-priority and low-priority processes.
• In priority allocation, frames are allocated to processes based on their priority.
• A process with higher priority is allocated more frames.
• A process with lower priority is allocated fewer frames.
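• The slides do not give a formula for priority allocation, so the C sketch below only assumes one plausible scheme: frames are shared in proportion to a numeric priority value, so a higher-priority process receives more frames (the priorities and frame count are made-up illustration values).

#include <stdio.h>

int main(void)
{
    int priority[] = {1, 2, 5};          /* assumed priorities of P1..P3 (higher = more important) */
    int n = 3, m = 16, total = 0;        /* m = frames available */

    for (int i = 0; i < n; i++)
        total += priority[i];
    for (int i = 0; i < n; i++)          /* share frames in proportion to priority */
        printf("P%d (priority %d): %d frames\n",
               i + 1, priority[i], priority[i] * m / total);
    return 0;                            /* prints 2, 4 and 10 frames */
}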
Global Versus Local Allocation
• We can classify page-replacement algorithms
into two broad categories:
global replacement and local replacement.
• Global replacement allows a process to select a
replacement frame from the set of all frames,
even if that frame is currently allocated to some
other process; one process can take a frame
from another.
• Local replacement requires that each process select from only its own set of allocated frames.
Thrashing
• The system spends most of its time shuttling
pages between main memory and secondary
memory due to frequent page faults. This
behavior is known as thrashing.
• A process is thrashing if it is spending more time paging than executing instructions. This leads to:
– low CPU utilization;
– the operating system thinking that it needs to increase the degree of multiprogramming.
Thrashing (Cont.)
Locality Model
• A locality is a set of pages that are actively used
together. The locality model states that as a
process executes, it moves from one locality to
another. A program is generally composed of
several different localities which may overlap.
• For example, when a function is called, it defines a new locality in which memory references are made to the instructions of the function, its local and global variables, and so on. Similarly, when the function exits, the process leaves this locality.
Example:

int d;              /* global variable */

void T(void)
{
    int a, b, c;    /* this locality needs four pages: 1, 2, 3, 4 */
    /* ... */
}

void H(void)
{
    int i, j;       /* this locality needs three pages: 1, 5, 6 */
    /* ... */
}
Techniques to handle Thrashing:
1. Working set model.
2. Page fault frequency.
1. Working Set Model
• This model is based on the concept of locality stated above.
• The basic principle is that if we allocate enough frames to a process to accommodate its current locality, it will fault only when it moves to a new locality. But if the allocated frames are fewer than the size of the current locality, the process is bound to thrash.
• According to this model, based on a parameter Δ (the working-set window), the working set is defined as the set of pages in the most recent Δ page references. Hence, all actively used pages always end up being part of the working set.
• The accuracy of the working set depends on the value of Δ. If Δ is too large, the working set may overlap several localities. On the other hand, for smaller values of Δ, the current locality might not be covered entirely.
• If D is the total demand for frames and WSS_i is the working-set size of process i, then
D = Σ WSS_i
• Now, if m is the number of frames available in memory, there are two possibilities:
– (i) D > m: the total demand exceeds the number of frames, so thrashing will occur, because some processes will not get enough frames.
– (ii) D <= m: there will be no thrashing.
• If there are enough extra frames, then some more processes can be loaded into memory. On the other hand, if the sum of the working-set sizes exceeds the number of available frames, then some of the processes have to be suspended (swapped out of memory).
• This technique prevents thrashing while keeping the degree of multiprogramming as high as possible. Thus, it optimizes CPU utilization.
Working-Set model (Cont.)
• The pages used by a process within a window of time are called its working set (window size Δ).
• The working set model is based on the assumption of locality (e.g. a window of Δ = 10 page references).
Disadvantages:
• Keeping track of the working set is difficult, because the working set is a moving window.
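• As an illustration of the model (not from the slides; the reference strings, Δ and m below are made-up values), the C sketch computes each process's working-set size as the number of distinct pages among its last Δ references, sums them into D, and compares D with the available frames m.

#include <stdio.h>

#define DELTA 4     /* working-set window: last DELTA references (assumed value) */

/* WSS = number of distinct pages among the last DELTA references */
static int wss(const int *refs, int n)
{
    int start = n > DELTA ? n - DELTA : 0;
    int count = 0;
    for (int i = start; i < n; i++) {
        int seen = 0;
        for (int j = start; j < i; j++)
            if (refs[j] == refs[i]) { seen = 1; break; }
        if (!seen) count++;
    }
    return count;
}

int main(void)
{
    int p1[] = {1, 2, 1, 3, 2, 1};          /* process 1's recent references */
    int p2[] = {7, 7, 8, 9, 9, 8, 9};       /* process 2's recent references */
    int m = 8;                              /* frames available in memory */

    int D = wss(p1, 6) + wss(p2, 7);        /* D = sum of working-set sizes */
    printf("D = %d, m = %d -> %s\n", D, m,
           D > m ? "thrashing likely: suspend a process"
                 : "no thrashing: room for more processes");
    return 0;
}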
2. Page-Fault Frequency
• A more direct approach to handling thrashing uses the page-fault frequency concept.
• The problem associated with thrashing is a high page-fault rate, so the idea here is to control the page-fault rate.
• If the page-fault rate is too high, it indicates that the process has too few frames allocated to it. Conversely, a low page-fault rate indicates that the process has too many frames.
• Upper and lower limits can be established on the desired page-fault rate, as shown in the diagram.
• If the page-fault rate falls below the lower limit, frames can be removed from the process. Similarly, if the page-fault rate exceeds the upper limit, more frames can be allocated to the process.
• In other words, the state of the system should be kept within the rectangular region formed in the given diagram.
• Here too, if the page fault rate is high with no
free frames, then some of the processes can be
suspended and frames allocated to them can be
reallocated to other processes. The suspended
processes can then be restarted later.
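• The C sketch below (illustrative only; the bounds and fault rates are assumed values, not measurements) captures the page-fault-frequency decision rule: give a frame to a process whose fault rate is above the upper limit, take one from a process below the lower limit, and suspend a process when the rate is high and no free frames remain.

#include <stdio.h>

#define LOWER 0.02   /* below this rate, the process has frames to spare */
#define UPPER 0.10   /* above this rate, the process needs more frames   */

static const char *pff_decision(double fault_rate, int free_frames)
{
    if (fault_rate > UPPER)
        return free_frames > 0 ? "allocate another frame"
                               : "suspend a process and reclaim its frames";
    if (fault_rate < LOWER)
        return "remove a frame (return it to the free pool)";
    return "rate within bounds: leave allocation unchanged";
}

int main(void)
{
    double rates[] = {0.15, 0.05, 0.01};     /* faults per reference (assumed) */
    for (int i = 0; i < 3; i++)
        printf("P%d: rate %.2f -> %s\n", i + 1, rates[i],
               pff_decision(rates[i], 1));
    return 0;
}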
