OS-UNIT-2
Unit 2
When the CPU scheduler selects a process for execution, the dispatcher loads the relocation
and limit registers with the correct values as part of the context switch. Because every address
generated by a CPU is checked against these registers, we can protect both the operating
system and the other users' programs and data from being modified by this running process.
The relocation-register scheme provides an effective way to allow the operating system's
size to change dynamically.
This flexibility is desirable in many situations. For example, the operating system contains
code and buffer space for device drivers. If a device driver (or other operating-system
service) is not commonly used, we do not want to keep the code and data in memory, as we
might be able to use that space for other purposes. Such code is sometimes called transient
operating-system code; it comes and goes as needed. Thus, using this code changes the size
of the operating system during program execution.
Memory Allocation
One of the simplest methods for allocating memory is to divide memory into several fixed-
sized partitions. Each partition may contain exactly one process. Thus, the degree of
multiprogramming is bound by the number of partitions. In this partition method, when a
partition is free, a process is selected from the input queue and is loaded into the free
partition. When the process terminates, the partition becomes available for another process.
This method was originally used by the IBM OS/360 operating system (called MFT); it is no
longer in use. The method described next is a generalization of the fixed-partition scheme
(called MVT); it is used primarily in batch environments. Many of the ideas presented here
are also applicable to a time-sharing environment in which pure segmentation is used for
memory management.
In the variable-partition scheme, the operating system keeps a table indicating
which parts of memory are available and which are occupied. Initially, all memory is
available for user processes and is considered one large block of available memory.
Eventually, as you will see, memory contains a set of holes of various sizes. As processes
enter the system, they are put into an input queue. The operating system takes into account
the memory requirements of each process and the amount of available memory space in
determining which processes are allocated memory. When a process is allocated space, it is
loaded into memory, and it can then compete for CPU time. When a process terminates, it
releases its memory, which the operating system may then fill with another process from the
input queue.
In general, at any given time, the memory blocks available comprise a set of holes of various sizes scattered throughout
memory. When a process arrives and needs memory, the system searches the set for a hole
that is large enough for this process. If the hole is too large, it is split into two parts. One part
is allocated to the arriving process; the other is returned to the set of holes. When a process
terminates, it releases its block of memory, which is then placed back in the set of holes. If
the new hole is adjacent to other holes, these adjacent holes are merged to form one larger
hole. At this point, the system may need to check whether there are processes waiting for
memory and whether this newly freed and recombined memory could satisfy the demands of
any of these waiting processes.
The first-fit, best-fit, and worst-fit strategies are the ones most commonly used to select a
free hole from the set of available holes.
● First fit. Allocate the first hole that is big enough. Searching can start either at the
beginning of the set of holes or at the location where the previous first-fit search
ended. We can stop searching as soon as we find a free hole that is large enough.
● Best fit. Allocate the smallest hole that is big enough. We must search the entire list,
unless the list is ordered by size. This strategy produces the smallest leftover hole.
● Worst fit. Allocate the largest hole. Again, we must search the entire list, unless it is
sorted by size. This strategy produces the largest leftover hole, which may be more
useful than the smaller leftover hole from a best-fit approach.
Simulations have shown that both first fit and best fit are better than worst fit in terms of
decreasing time and storage utilization. Neither first fit nor best fit is clearly better than the
other in terms of storage utilization, but first fit is generally faster.
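The three strategies above can be sketched in Python. This is a minimal illustration, not an allocator: the representation of a hole as a (start, size) pair and the function names are assumptions made for the example.

```python
# Hypothetical sketch of the three hole-selection strategies.
# `holes` is a list of (start, size) pairs; the layout is an assumption.

def first_fit(holes, request):
    # Allocate the first hole that is big enough; stop as soon as one is found.
    for i, (start, size) in enumerate(holes):
        if size >= request:
            return i
    return None

def best_fit(holes, request):
    # Allocate the smallest hole that is big enough (must search the whole list).
    candidates = [(size, i) for i, (start, size) in enumerate(holes) if size >= request]
    return min(candidates)[1] if candidates else None

def worst_fit(holes, request):
    # Allocate the largest hole (again, must search the whole list).
    candidates = [(size, i) for i, (start, size) in enumerate(holes) if size >= request]
    return max(candidates)[1] if candidates else None

holes = [(0, 100), (200, 500), (800, 200), (1200, 300)]
print(first_fit(holes, 250))  # 1: the first hole of size >= 250
print(best_fit(holes, 250))   # 3: the smallest hole that fits (size 300)
print(worst_fit(holes, 250))  # 1: the largest hole (size 500)
```

Note that best fit and worst fit scan every hole, which is why first fit is generally faster in practice.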
Fragmentation
Both the first-fit and best-fit strategies for memory allocation suffer from external
fragmentation. As processes are loaded and removed from memory, the free memory space
is broken into little pieces. External fragmentation exists when there is enough total memory
space to satisfy a request but the available spaces are not contiguous; storage is fragmented
into a large number of small holes. This fragmentation problem can be severe. In the worst
case, we could have a block of free (or wasted) memory between every two processes. If all
these small pieces of memory were in one big free block instead, we might be able to run
several more processes.
Whether we are using the first-fit or best-fit strategy can affect the amount of fragmentation.
(First fit is better for some systems, whereas best fit is better for others.) Another factor is
which end of a free block is allocated. (Which is the leftover piece-the one on the top or the
one on the bottom?) No matter which algorithm is used, however, external fragmentation will
be a problem.
The general approach to avoiding this problem is to break the physical memory into fixed-
sized blocks and allocate memory in units based on block size. With this approach, the
memory allocated to a process may be slightly larger than the requested memory. The
difference between these two numbers is internal fragmentation: unused memory that is
internal to a partition. Another possible solution to the external-fragmentation problem is to
permit the logical address space of the processes to be noncontiguous, thus allowing a
process to be allocated physical memory wherever such memory is available. Two
complementary techniques achieve this solution: paging and segmentation.
Paging
PAGING is a memory-management scheme that permits the physical address space of a
process to be noncontiguous. Paging avoids external fragmentation and the need for
compaction. It also solves the considerable problem of fitting memory chunks of varying
sizes onto the backing store; most memory management schemes used before the
introduction of paging suffered from this problem. The problem arises because, when some
code fragments or data residing in main memory need to be swapped out, space must be
found on the backing store. The backing store has the same fragmentation problems
discussed in connection with main memory, but access is much slower, so compaction is
impossible. Because of its advantages over earlier methods, paging in its various forms is
used in most operating systems.
The basic method for implementing paging involves breaking physical memory into fixed-
sized blocks called FRAMES and breaking logical memory into blocks of the same size
called PAGES.When a process is to be executed, its pages are loaded into any available
memory frames from their source (a file system or the backing store). The backing store is
divided into fixed-sized blocks that are of the same size as the memory frames.
The hardware support for paging works as follows. Every address generated by the CPU is
divided into two parts: a PAGE NUMBER (p) and a PAGE OFFSET (d). The page number is
used as an index into the page table, which contains the base address of each page in physical
memory. This base address is combined with the page offset to define the physical memory
address that is sent to the memory unit. Thus, p is an index into the page table and d is the
displacement within the page. You may
have noticed that paging itself is a form of dynamic relocation. Every logical address is
bound by the paging hardware to some physical address. Using paging is similar to using a
table of base (or relocation) registers, one for each frame of memory.
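The translation just described can be sketched in Python. The page size, the page-table contents, and the example address here are illustrative assumptions.

```python
# Sketch of logical-to-physical translation under paging.

PAGE_SIZE = 4096  # 4 KB = 2^12, so the offset d occupies the low 12 bits

page_table = {0: 5, 1: 9, 2: 1}  # page number -> frame number (assumed values)

def translate(logical_address):
    p = logical_address // PAGE_SIZE   # page number: the high-order bits
    d = logical_address % PAGE_SIZE    # page offset: the low-order 12 bits
    frame = page_table[p]              # index the page table with p
    return frame * PAGE_SIZE + d       # combine the frame base with the offset

# Logical address 8200 lies in page 2 (8200 // 4096 = 2) at offset 8,
# and page 2 is in frame 1, so the physical address is 1 * 4096 + 8.
print(translate(8200))  # 4104
```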
When a process arrives in the system to be executed, its size, expressed in pages, is
examined. Each page of the process needs one frame. Thus, if the process requires n
pages, at least n frames must be available in memory. If n frames are available, they are
allocated to this arriving process. The first page of the process is loaded into one of the
allocated frames, and the frame number is put in the page table for this process. The next
page is loaded into another frame, its frame number is put into the page table, and so on.
The use of registers for the page table is satisfactory if the page table is reasonably small
(for example, 256 entries). Most contemporary computers, however, allow the page table to
be very large (for example, 1 million entries). For these machines, the use of fast registers to
implement the page table is not feasible. Rather, the page table is kept in main memory, and
a PAGE-TABLE BASE REGISTER (PTBR) points to the page table. Changing page tables requires
changing only this one register, substantially reducing context-switch time.
Consider that the size of main memory (physical memory) is 512 MB = 2^29 bytes
(i.e. a physical address can be represented with 29 bits).
This physical memory is divided into a number of frames, where each frame
size = 4 KB = 2^12 bytes (i.e. the offset within a frame can be represented with 12 bits).
The process is likewise divided into a number of pages, and we know that page
size is equal to frame size = 4 KB = 2^12 bytes.
Now, in the page table, the page number provided by the CPU in the logical address acts
as an index which leads you to the frame number where this page is stored
in main memory. This means each entry of the page table holds a frame
number.
Assuming a 32-bit logical address, a process can have up to 2^32 / 2^12 = 2^20 pages,
so the page table needs 2^20 entries. Since there are 2^29 / 2^12 = 2^17 frames, a frame
number needs 17 bits, which rounds up to 2 bytes per entry. The page table size is therefore
2^20 * 2 = 2^21 bytes = 2 MB.
So can you store a page table of size 2 MB in a frame of the main
memory, where the frame size is 4 KB? It is impossible.
So we need to divide the page table. This division can be
accomplished in several ways: you can perform a two-level division, a three-
level division on the page table, and so on.
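The arithmetic above can be checked with a short Python sketch; the 32-bit logical address and the 2-byte entry size are the assumptions stated in the text.

```python
# Verifying the page-table-size arithmetic from the example above.

KB, MB = 2**10, 2**20

physical_memory = 512 * MB          # 2^29 bytes -> 29-bit physical address
frame_size = 4 * KB                 # 2^12 bytes -> 12-bit offset
logical_address_bits = 32           # assumed logical address width

num_frames = physical_memory // frame_size          # 2^17 frames
num_pages = 2**logical_address_bits // frame_size   # 2^20 pages
entry_size = 2                      # bytes: enough to hold a 17-bit frame number

page_table_size = num_pages * entry_size
print(page_table_size // MB)        # 2 (MB)
print(page_table_size > frame_size) # True: far too big for one 4 KB frame
```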
The hashed page table is a convenient way to structure the page table
when the logical address space is beyond 32 bits. The hash table has several
entries, where each entry has a linked list. Each linked list holds the
elements that hash to the same location. Each element
has three fields: a page number, a frame number, and a pointer to the next
element. We can understand the working of this page table better with
the help of an example.
The CPU generates a logical address for the page it needs. Now, this
logical address needs to be mapped to the physical address. This logical
address has two entries i.e. a page number (P3 in our case) and an offset.
The page number from the logical address is directed to the hash function.
The hash function produces a hash value corresponding to the page
number. This hash value directs to an entry in the hash table. As we have
studied earlier, each entry in the hash table has a linked list. Here the page
number is compared with the page-number field of the first element; if a match is
found, then the frame-number field of that element is read.
In our example, the logical address includes page number P3, which does
not match the first element of the linked list, as that element includes page number P1. So
we move ahead and check the next element; this element has the
page number entry P3, so we then check the frame entry of the
element, which is fr5. To this frame number we append the offset
provided in the logical address to reach the page's physical address.
So, this is how the hashed page table works to map the logical address to
the physical address.
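The chained lookup described above can be sketched in Python. The table size, page numbers, and frame numbers below are illustrative assumptions, not the values from the text's figure.

```python
# Minimal sketch of a hashed page table with chaining.

TABLE_SIZE = 8

# Each bucket is a chain (list) of (page_number, frame_number) elements;
# the "pointer to the next element" is implicit in the Python list.
hash_table = [[] for _ in range(TABLE_SIZE)]

def insert(page, frame):
    hash_table[page % TABLE_SIZE].append((page, frame))

def lookup(page):
    # Walk the chain at the hashed slot, comparing page numbers.
    for p, frame in hash_table[page % TABLE_SIZE]:
        if p == page:
            return frame
    return None  # not mapped: this would be a page fault

insert(1, 2)    # page 1 -> frame 2, hashes to slot 1
insert(3, 5)    # page 3 -> frame 5, hashes to slot 3
insert(11, 7)   # page 11 also hashes to slot 3, chained after page 3
print(lookup(3))   # 5
print(lookup(11))  # 7: found second in the chain at slot 3
```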
Consider that we have six processes in execution. Each of the six processes will have
at least some of its pages in the main memory, which compels
their page tables also to be in the main memory, consuming a lot of space.
This is a drawback of the paging concept.
The inverted page table is the solution to this wastage of memory. The
concept of an inverted page table involves a single page
table which has entries for all the pages (though they may belong to different
processes) in main memory, along with the information of the process to
which they are associated. To get a better understanding consider the
figure below of inverted page table.
The CPU generates the logical address for the page it needs to access.
This time the logical address consists of three entries process id, page
number and the offset. The process id identifies the process, of which the
page has been demanded, page number indicates which page of the
process has been asked for and the offset value indicates the displacement
required.
Though the inverted page table reduces the wastage of memory, it
increases the search time. This is because the entries in an inverted page
table are sorted on the basis of physical address, whereas the lookup is
performed using a logical address. Sometimes the entire table
must be searched to find a match.
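The search just described can be sketched in Python: one entry per physical frame, holding a (process id, page number) pair. The table contents and the frame size are illustrative assumptions.

```python
# Sketch of an inverted page table: the table index IS the frame number.

inverted_table = [
    ("P2", 4),  # frame 0 holds page 4 of process P2
    ("P1", 0),  # frame 1 holds page 0 of process P1
    ("P1", 3),  # frame 2 holds page 3 of process P1
]

def translate(pid, page, offset, frame_size=4096):
    # The whole table may need to be searched for a match,
    # which is why lookups are slower than with a per-process page table.
    for frame, entry in enumerate(inverted_table):
        if entry == (pid, page):
            return frame * frame_size + offset
    return None  # not in memory: page fault

print(translate("P1", 3, 100))  # frame 2 * 4096 + 100 = 8292
print(translate("P2", 9, 0))    # None (page fault)
```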
So these are the three techniques that can be used to structure a page
table that helps the operating system in mapping the logical address of the
page required by CPU to its physical address.
Segmentation
In Operating Systems, Segmentation is a memory management technique in which
the memory is divided into the variable size parts. Each part is known as a segment
which can be allocated to a process.
The details about each segment are stored in a table called a segment table.
Segment table is stored in one (or many) of the segments.
Paging does not take the user's view of the process into account. It may divide the
same function into different pages, and those pages may or may not be loaded at the
same time into memory. This decreases the efficiency of the system.
It is better to have segmentation which divides the process into the segments. Each
segment contains the same type of functions such as the main function can be
included in one segment and the library functions can be included in the other
segment.
Translation of Logical Address into Physical Address by Segment Table
CPU generates a logical address which contains two parts:
1. Segment Number
2. Offset
For Example:
Suppose a 16-bit address is used, with 4 bits for the segment number and 12 bits for
the segment offset. The maximum segment size is then 4096 bytes, and the maximum
number of segments that can be referred to is 16.
When a program is loaded into memory, the segmentation system tries to locate
space that is large enough to hold the first segment of the process, space
information is obtained from the free list maintained by memory manager. Then it
tries to locate space for other segments. Once adequate space is located for all the
segments, it loads them into their respective areas.
The operating system also generates a segment map table for each program.
With the help of segment map tables and hardware assistance, the operating system
can easily translate a logical address into physical address on execution of a
program.
The segment number is used to index the segment table. The limit of the respective
segment is compared with the offset. If the offset is less than the limit, the
address is valid; otherwise, an error is raised because the address is invalid.
In the case of valid addresses, the base address of the segment is added to the
offset to get the physical address of the actual word in the main memory.
The above figure shows how address translation is done in case of segmentation.
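The limit check and base addition can be sketched in Python. The segment table contents below are illustrative assumptions.

```python
# Sketch of segment-table translation with the limit check.

segment_table = {
    0: {"base": 1400, "limit": 1000},  # assumed base/limit values
    1: {"base": 6300, "limit": 400},
}

def translate(segment, offset):
    entry = segment_table[segment]
    if offset >= entry["limit"]:
        # Offset beyond the segment's limit: the address is invalid.
        raise ValueError("invalid address: offset exceeds segment limit")
    return entry["base"] + offset  # base + offset = physical address

print(translate(1, 53))   # 6300 + 53 = 6353
# translate(1, 400) would raise: offset 400 is not less than limit 400
```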
Advantages of Segmentation
1. No internal fragmentation
2. Average Segment Size is larger than the actual page size.
3. Less overhead
4. It is easier to relocate segments than entire address space.
5. The segment table is of lesser size as compared to the page table in paging.
Disadvantages of Segmentation
1. It can suffer from external fragmentation.
2. It is difficult to allocate contiguous memory to variable-sized segments.
3. Costly memory-management algorithms are required.
Virtual memory
A computer can address more memory than the amount
physically installed on the system. This extra memory is actually
called virtual memory and it is a section of a hard disk that's set
up to emulate the computer's RAM.
When a process is to be swapped in, the pager guesses which pages will be used before the
process is swapped out again. Instead of swapping in a whole process, the pager brings
only those pages into memory. Thus, it avoids reading into memory pages that will not be
used anyway, decreasing the swap time and the amount of physical memory needed.
With this scheme, we need some form of hardware support to distinguish between the pages
that are in memory and the pages that are on the disk. The valid-invalid bit scheme
described earlier can be used for this purpose. This time, however, when this bit is set to "valid", the
associated page is both legal and in memory. If the bit is set to "invalid", the page either is
not valid (that is, not in the logical address space of the process) or is valid but is currently
on the disk. The page-table entry for a page that is brought into memory is set as usual, but
the page-table entry for a page that is not currently in memory is either simply marked invalid
or contains the address of the page on disk.
If we guess right and page in all and only those pages that are actually needed, the process
will run exactly as though we had brought in all pages. While the process executes and
accesses pages that are memory resident, execution proceeds normally.
But what happens if the process tries to access a page that was not brought into memory?
Access to a page marked invalid causes a PAGE FAULT.The paging hardware, in
translating the address through the page table, will notice that the invalid bit is set, causing a
trap to the operating system. This trap is the result of the operating system's failure to bring
the desired page into memory. The procedure for handling this page fault is straightforward:
● We check an internal table (usually kept with the process control block) for this
process to determine whether the reference was a valid or an invalid memory
access.
● If the reference was invalid, we terminate the process. If it was valid, but we have not
yet brought in that page, we now page it in.
● We find a free frame (by taking one from the free-frame list, for example).
● We schedule a disk operation to read the desired page into the newly
allocated frame.
● When the disk read is complete, we modify the internal table kept with the process
and the page table to indicate that the page is now in memory.
● We restart the instruction that was interrupted by the trap. The process can now
access the page as though it had always been in memory.
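The steps above can be sketched as a hypothetical page-fault handler. All of the data structures and names here (the process dictionary, the free-frame list, the simulated disk read) are assumptions made for the example; a real handler lives in the kernel and drives actual I/O.

```python
# Hypothetical sketch of the page-fault handling steps listed above.

free_frames = [3, 7]   # the free-frame list (assumed contents)
frame_contents = {}    # simulates physical memory frames

def handle_page_fault(process, page):
    # 1. Check the internal table (kept with the PCB) for validity.
    if page not in process["valid_pages"]:
        raise RuntimeError("invalid reference: terminate process")
    # 2. Find a free frame by taking one from the free-frame list.
    frame = free_frames.pop()
    # 3. "Schedule a disk read" of the page into the frame (simulated here).
    frame_contents[frame] = (process["pid"], page)
    # 4. Update the page table: the page is now in memory.
    process["page_table"][page] = frame
    # 5. The faulting instruction can now be restarted.
    return frame

proc = {"pid": 1, "valid_pages": {0, 1, 2}, "page_table": {}}
print(handle_page_fault(proc, 2))   # 7: the frame taken from the free list
print(proc["page_table"])           # {2: 7}
```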
The hardware to support demand paging is the same as the hardware for paging and
swapping:
Page table. This table has the ability to mark an entry invalid through a
valid-invalid bit or a special value of protection bits.
Secondary memory. This memory holds those pages that are not present
in main memory. The secondary memory is usually a high-speed disk. It is
known as the swap device, and the section of disk used for this purpose is
known as SWAP SPACE.
Consider, for example, an instruction that adds the contents of A and B and stores the
result in C. If we fault when we try to store in C (because C is in a page not currently in
memory), we will have to get the desired page, bring it in, correct the page table, and restart the
instruction. The restart will require fetching the instruction again, decoding it again,
fetching the two operands again, and then adding again. However, there is not much
repeated work (less than one complete instruction), and the repetition is necessary
only when a page fault occurs.
Page Replacement
The process's logical memory is divided into small blocks called pages, while the main
memory is divided into blocks of the same size called frames. Each page that is brought
into main memory is stored in one frame of memory.
Keeping pages and frames the same size is what makes the mapping simple and allows
complete utilization of memory.
By giving the user the impression that there is enough memory available to load the
process, this approach allows programs larger than the available primary memory to
be loaded.
The Operating System loads the many components of several processes in the main
memory as opposed to loading a single large process there.
By doing this, the level of multiprogramming will be enhanced, which will increase
CPU utilization.
Demand Paging
Demand Paging is a condition which occurs in Virtual Memory. We know
that the pages of the process are stored in secondary memory. A page is brought
into the main memory only when required, and we do not know in advance when this
requirement is going to occur. When the main memory is full, the Page Replacement
Algorithms decide which page to evict to make room.
So, the process of bringing pages from secondary memory to main memory upon
demand is known as Demand Paging.
The two important jobs of virtual memory in Operating Systems are:
Frame Allocation
Page Replacement
The CPU generates addresses for pages, and physical addressing provides the actual
address of the frame that holds each page. For each page brought into memory, a frame
must be allocated.
The number of frames allocated cannot be greater than the total number of
frames.
Each process should be given a set minimum number of frames.
When fewer frames are allocated, the page fault rate increases and the
process execution becomes less efficient.
There ought to be sufficient frames to accommodate all the pages that a
single instruction may refer to.
There are three types of Frame Allocation Algorithms in Operating Systems. They
are:
Equal allocation: here, we take the number of frames and the number of
processes at once and divide the number of frames by the number of processes. This
gives the number of frames we must provide for each process.
This means that if we have 36 frames and 6 processes, 6 frames are
allocated to each process.
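The example above is just integer division; a short sketch makes the arithmetic explicit. Keeping any remainder in the free pool is an assumption of this sketch, not something the text specifies.

```python
# Equal allocation, as in the example: frames split evenly among processes.

def equal_allocation(frames, processes):
    per_process = frames // processes   # each process gets the same share
    leftover = frames % processes       # any remainder stays in the free pool (assumed)
    return per_process, leftover

print(equal_allocation(36, 6))  # (6, 0): 6 frames for each process
```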
It is not very logical to assign equal frames to all processes in systems with
processes of different sizes: many allocated but unused frames will be
wasted if many frames are given to a small process.
Priority allocation: according to process priority and memory need, priority frame
allocation distributes frames. Say a process has a high priority and needs more
frames; in that case, additional frames will be given to the process. Higher-priority
processes are executed first, and lower-priority processes are executed
later.
First In First Out (FIFO) is the first basic Page Replacement Algorithm. The algorithm
depends on the number of frames available, and each frame holds one page at a time.
When all the frames are filled, the actual problem starts. Initially, the fixed number of
frames is filled with the first pages referenced; this is accomplished with the help of
Demand Paging.
After the frames are filled, the next page in the waiting queue tries to enter a
frame. If that page is already present in one of the allocated frames, there is no
problem, because the page being searched for is already in memory.
If the page to be searched is found among the frames, it is known as a
Page Hit.
If the page to be searched is not found among the frames, it is known as a
Page Fault.
When a Page Fault occurs, the First In First Out Page
Replacement Algorithm comes into the picture.
The First In First Out (FIFO) Page Replacement Algorithm removes the page that was
allotted a frame longest ago. This means the page which has been in a frame
for the longest time is removed, and the new page which is in the ready queue and is
ready to occupy a frame is admitted by the First In First Out Page Replacement.
Let us understand this First In First Out Page Replacement Algorithm working with
the help of an example.
Example:
Points to Remember
Reference String:
The Ratio of Page Hits to Page Faults = 8 : 12 = 2 : 3 ≈ 0.66
The Page Fault Percentage = 100 - Page Hit Percentage = 100 - 40 = 60%
Explanation
First, fill the frames with the initial pages. Then, after the frames are filled, we need to
create space in the frames for a new page to occupy. With the help of the First In
First Out Page Replacement Algorithm, we remove the page that is
oldest among the pages in the frames. By removing the oldest page, we allow the
new page to occupy the empty space created by the First In First Out Page
Replacement Algorithm.
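FIFO replacement can be sketched in a few lines of Python. The reference string below is an illustrative assumption (it is not the text's unstated reference string), so the hit/fault counts it prints belong to this example only.

```python
# Sketch of FIFO page replacement, counting hits and faults.
from collections import deque

def fifo(reference_string, num_frames):
    frames = deque()   # oldest page at the left end
    hits = faults = 0
    for page in reference_string:
        if page in frames:
            hits += 1                  # page hit: already in a frame
        else:
            faults += 1                # page fault
            if len(frames) == num_frames:
                frames.popleft()       # evict the page loaded longest ago
            frames.append(page)
    return hits, faults

print(fifo([7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2], 3))  # (3, 10)
```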
After the frames are filled, the next page in the waiting queue tries to enter a
frame. If that page is already present in one of the allocated frames, there is no
problem, because the page being searched for is already in memory.
If the page to be searched is found among the frames, it is known as a
Page Hit.
If the page to be searched is not found among the frames, it is known as a
Page Fault.
When a Page Fault occurs, the OPTIMAL Page Replacement
Algorithm comes into the picture.
Replace the page which will not be used for the longest period of time in the future.
This principle means that after all the frames are filled, look ahead at the future pages
in the reference string. Among the pages already in the frames, choose the one whose
next use is farthest away.
Example:
0, 3, 4, 6, 0, 2, 1, 2, 1, 2, 0, 3, 2, 1, 2, 0
Suppose pages 6, 1, 2 are occupying the frames.
Now we need to enter 0 into a frame by removing one page from the frames.
From the subsequence 0, 3, 4, 6, 0, 2, 1 we can see that, of the pages in the frames,
1 is the last to occur. So 0 can be placed in a frame by removing 1 from
the frames.
Let us understand this OPTIMAL Page Replacement Algorithm working with the help
of an example.
Example:
Points to Remember
Reference String:
The Ratio of Page Hits to Page Faults = 8 : 12 = 2 : 3 ≈ 0.66
The Page Fault Percentage = 100 - Page Hit Percentage = 100 - 40 = 60%
Explanation
First, fill the frames with the initial pages. Then, after the frames are filled we need to
create a space in the frames for the new page to occupy.
Here, we first fill the empty frames with the incoming pages. The problem occurs
when there is no space left for incoming pages. We already know that we then
replace the page which will not be used for the longest period of time in the
future.
A question arises: what if a page which is in a frame does not appear again in the
reference string? Consider
0, 2, 4, 6, 0, 2, 1, 2, 1, 2, 0, 3, 2, 1, 2, 0
Here, page number 5 is not present in the reference string, but suppose page 5 is
present in a frame. As page 5 will never be used again, we remove it first when a
replacement is required, and another page can occupy that position.
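The OPTIMAL policy can be sketched as follows: on a fault, evict the resident page whose next use lies farthest in the future, treating pages that never appear again (like page 5 above) as farthest of all. The reference string in the usage line is an illustrative assumption.

```python
# Sketch of OPTIMAL page replacement.

def optimal(reference_string, num_frames):
    frames = []
    hits = faults = 0
    for i, page in enumerate(reference_string):
        if page in frames:
            hits += 1
            continue
        faults += 1
        if len(frames) < num_frames:
            frames.append(page)        # still an empty frame: just load the page
            continue
        future = reference_string[i + 1:]
        # Distance to a page's next use; pages never used again evict first.
        def next_use(p):
            return future.index(p) if p in future else float("inf")
        victim = max(frames, key=next_use)
        frames[frames.index(victim)] = page
    return hits, faults

print(optimal([7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2], 3))  # (6, 7)
```

On this string, OPTIMAL gives 7 faults where FIFO gives 10, illustrating why OPTIMAL is the benchmark (it needs future knowledge, so it cannot be implemented directly).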
Least Recently Used (LRU) is the last basic Page Replacement Algorithm. Like the
others, it depends on the number of frames available, and each frame holds one page
at a time. When all the frames are filled, the actual problem starts. Initially, the fixed
number of frames is filled with the first pages referenced; this is accomplished with
the help of Demand Paging.
After the frames are filled, the next page in the waiting queue tries to enter a
frame. If that page is already present in one of the allocated frames, there is no
problem, because the page being searched for is already in memory.
If the page to be searched is found among the frames, it is known as a
Page Hit.
If the page to be searched is not found among the frames, it is known as a
Page Fault.
When a Page Fault occurs, the Least Recently Used (LRU)
Page Replacement Algorithm comes into the picture.
The Least Recently Used (LRU) Page Replacement Algorithm works on a certain
principle. The principle is:
Replace the page that has not been used for the longest period of time in
the past.
Example:
6, 1, 1, 2, 0, 3, 4, 6, 0
The pages with page numbers 6, 1, 2 are occupying the frames.
Now, we need to look back into the past to check which page can be replaced.
Let us understand this Least Recently Used (LRU) Page Replacement Algorithm
working with the help of an example.
Example:
Points to Remember
Reference String:
Number of Page Hits = 7, so the Page Hit Percentage = 7 / 20 = 35%
The Page Fault Percentage = 100 - Page Hit Percentage = 100 - 35 = 65%
Explanation
First, fill the frames with the initial pages. Then, after the frames are filled we need to
create a space in the frames for the new page to occupy.
Here, we first fill the empty frames with the incoming pages. The problem occurs
when there is no space left for incoming pages. We already know that we then
replace the page which has not been used for the longest period of time in the
past, that is, the page whose most recent use is farthest back in the
past.
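LRU can be sketched by keeping the resident pages ordered from least to most recently used. The reference string in the usage line is the same illustrative assumption used for the earlier sketches in this unit.

```python
# Sketch of LRU page replacement.

def lru(reference_string, num_frames):
    frames = []          # ordered from least to most recently used
    hits = faults = 0
    for page in reference_string:
        if page in frames:
            hits += 1
            frames.remove(page)      # refresh: move to the most-recent end
        else:
            faults += 1
            if len(frames) == num_frames:
                frames.pop(0)        # evict the least recently used page
        frames.append(page)
    return hits, faults

print(lru([7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2], 3))  # (4, 9)
```

Unlike OPTIMAL, LRU needs only past information, which is why it is implementable in practice (typically with hardware support such as counters or reference bits).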
Thrashing
What is Thrashing?
In computer science, thrashing is the poor performance of a virtual memory (or paging)
system that occurs when the same pages are loaded repeatedly due to a lack of main
memory to keep them in memory. Depending on the configuration and algorithm, the
actual throughput of a system can degrade by multiple orders of magnitude.
To know more clearly about thrashing, first, we need to know about page fault and
swapping.
Page fault: We know every program is divided into pages. A page fault
occurs when a program attempts to access data or code in its address space
that is not currently located in the system RAM.
Swapping: Whenever a page fault happens, the operating system will try to
fetch that page from secondary memory and try to swap it with one of the
pages in RAM. This process is called swapping.
Thrashing occurs when page faults and swapping happen very frequently, at a high
rate, so that the operating system has to spend most of its time swapping pages.
This state in the operating system is known as thrashing. Because of thrashing, the
CPU utilization is reduced to little or nothing.
The basic concept involved is that if a process is allocated too few frames, then
there will be too many and too frequent page faults. As a result, no valuable work
would be done by the CPU, and the CPU utilization would fall drastically.
The long-term scheduler would then try to improve the CPU utilization by loading
some more processes into the memory, thereby increasing the degree of
multiprogramming. Unfortunately, this would result in a further decrease in the CPU
utilization, triggering a chained reaction of higher page faults followed by an increase
in the degree of multiprogramming, called thrashing.
Since global page replacement can bring any page, it tries to bring more pages
whenever thrashing is found. But what actually will happen is that no process gets
enough frames, and as a result, the thrashing will increase more and more.
Therefore, the global page replacement algorithm is not suitable when thrashing
happens.
Unlike the global page replacement algorithm, local page replacement will select
pages which only belong to that process. So there is a chance to reduce the
thrashing. But it is proven that there are many disadvantages if we use local page
replacement. Therefore, local page replacement is just an alternative to global page
replacement in a thrashing scenario.
How to Resolve Thrashing
Thrashing has some negative impacts on hard drive health and system performance.
Therefore, it is necessary to take some actions to avoid it. The following methods can
be used to resolve thrashing:
Adjust the swap file size: If the system swap file is not configured correctly,
disk thrashing can also happen.
Increase the amount of RAM: As insufficient memory can cause disk
thrashing, one solution is to add more RAM. With more memory, the
computer can handle tasks easily and does not have to work excessively.
Generally, this is the best long-term solution.
Decrease the number of applications running on the computer: If there are
too many applications running in the background, they will consume a lot
of system resources, and the resulting shortage of resources can lead to
thrashing. Closing some applications will release resources and can help
you avoid thrashing to some extent.
Replace programs: Replace programs that occupy a lot of memory with
equivalents that use less memory.
The Local Page replacement is better than the Global Page replacement, but local
page replacement has many disadvantages, so it is sometimes not helpful.
Therefore below are some other techniques that are used to handle thrashing:
1. Locality Model
A locality is a set of pages that are actively used together. The locality model states
that as a process executes, it moves from one locality to another. Thus, a program is
generally composed of several different localities which may overlap.
For example, when a function is called, it defines a new locality where memory
references are made to the function call instructions, local and global variables, etc.
Similarly, when the function is exited, the process leaves this locality.
2. Working-Set Model
The accuracy of the working set depends on the value of the parameter Δ (the
working-set window). If Δ is too large, working sets may overlap. On the other hand,
for smaller values of Δ, the locality might not be covered entirely.
If D is the total demand for frames and WSSi is the working-set size for process i,
D = Σ WSSi
Now, if 'm' is the number of frames available in the memory, there are two
possibilities:
D>m, i.e., total demand exceeds the number of frames, then thrashing will
occur as some processes would not get enough frames.
D<=m, then there would be no thrashing.
If there are enough extra frames, then some more processes can be loaded into the
memory. On the other hand, if the summation of working set sizes exceeds the
frames' availability, some of the processes have to be suspended (swapped out of
memory).
This technique prevents thrashing along with ensuring the highest degree of
multiprogramming possible. Thus, it optimizes CPU utilization.
3. Page-Fault Frequency
A more direct approach to handling thrashing is one that uses the Page-Fault
Frequency concept.
The problem associated with thrashing is the high page fault rate, and thus, the
concept here is to control the page fault rate.
If the page fault rate is too high, it indicates that the process has too few frames
allocated to it. On the contrary, a low page fault rate indicates that the process has
too many frames.
Upper and lower limits can be established on the desired page fault rate, as shown in
the diagram.
If the page fault rate falls below the lower limit, frames can be removed from the
process. Similarly, if the page-fault rate exceeds the upper limit, more frames can
be allocated to the process.
In other words, the graphical state of the system should be kept limited to the
rectangular region formed in the given diagram.
If the page-fault rate is high and no frames are free, some of the processes can be
suspended, and the frames allocated to them can be reallocated to other processes.
The suspended processes can be restarted later.