Operating Systems
III SEM
S sushma,IT Department
ADITYA ENGINEERING COLLEGE(A)
Unit – IV
Memory-Management Strategies: Introduction, Swapping,
Contiguous memory allocation, Paging, Segmentation.
Virtual Memory Management: Introduction, Demand paging, Copy
on-write, Page replacement, Frame allocation, Thrashing, Memory-
mapped files, Kernel memory allocation.
File Systems: Files, Directories, File system implementation,
management and optimization
Swapping
► Swapping is the technique of temporarily removing inactive programs from the main
memory of a computer system. An inactive program is one that is
neither executing nor performing an I/O operation.
► This may also happen when it is desired to place a higher-priority process in memory: a lower-priority process may be swapped out so that the higher-priority process can be loaded and executed.
► Standard swapping involves moving processes between main memory and a backing
store.
1. Fixed-Partition Contiguous Storage Allocation
► Main memory is divided into a fixed number of partitions: processes with a small address space use small partitions, and processes with a large address space use large partitions. This is known as fixed-partition contiguous storage allocation.
2. Variable-Partition Contiguous Storage Allocation
• This notion is derived from the parking of vehicles along the sides of streets, where whoever manages to enter gets the space. Two vehicles can leave a space between them that is too small for any other vehicle to use. Likewise, whenever a process needs memory, a search is made for the space it needs; if contiguous space is available to accommodate the process, it is loaded into memory.
External Fragmentation
► This phenomenon of processes entering and leaving memory can cause the formation of unusable memory holes (like the unused space between two vehicles). This is known as external fragmentation.
Three strategies can be used to allocate memory to variable partitions:
1. Best-fit
2. Worst-fit
3. First-fit
First-fit: chooses the first partition whose size is greater than or equal to k, the requested size.
Best-fit: chooses the smallest partition whose size is greater than or equal to k. It tends to leave small, unusable partitions (holes).
Worst-fit: chooses the largest partition and allocates it to process p. It can leave bigger unusable partitions.
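The three placement rules above can be sketched as small selection functions over a list of free hole sizes. The function names and the sample sizes are illustrative, not from the slides:

```python
# First-, best-, and worst-fit selection over a list of free hole
# sizes; k is the requested size.

def first_fit(holes, k):
    """Index of the first hole with size >= k, or None."""
    for i, size in enumerate(holes):
        if size >= k:
            return i
    return None

def best_fit(holes, k):
    """Index of the smallest hole with size >= k, or None."""
    fits = [(size, i) for i, size in enumerate(holes) if size >= k]
    return min(fits)[1] if fits else None

def worst_fit(holes, k):
    """Index of the largest hole, if even it can hold k, else None."""
    size, i = max((size, i) for i, size in enumerate(holes))
    return i if size >= k else None

holes = [100, 500, 200, 300, 600]   # free partition sizes in KB
print(first_fit(holes, 212))   # 1 -> 500 KB, the first hole >= 212
print(best_fit(holes, 212))    # 3 -> 300 KB, the smallest hole >= 212
print(worst_fit(holes, 212))   # 4 -> 600 KB, the largest hole
```

Note how the three strategies pick three different holes for the same request, which is exactly the trade-off the definitions describe.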
Non-contiguous Storage Allocation
► To resolve the problem of external fragmentation and to enhance the degree of multiprogramming, it was decided to sacrifice the simplicity of allocating contiguous memory to every process. Instead, a process has a non-contiguous physical address space, so that it can be allocated memory wherever memory is available.
Fragmentation
Internal Fragmentation
External Fragmentation
Problem with Compaction
The efficiency of the system is decreased during compaction, because all the free spaces must be transferred from several places to a single place.
Non-contiguous Storage Allocation
There are two techniques for non-contiguous allocation:
1. Paging
2. Segmentation
Paging
• Physical memory is divided into fixed-size blocks called frames.
• Logical memory is divided into fixed-size blocks called pages.
• The size of a page is the same as that of a frame.
• Each page of a process is stored in one of the frames of memory.
It works as follows: translate logical address 13 = 01101 in binary.
Offset = 01 in binary, or 1 in decimal
Page no. = 011 in binary, or 3 in decimal
Physical address = frame number × page size + offset, i.e. 2 × 4 + 1 = 9 (the real address of the page containing the character “n” in main memory)
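The same translation can be checked with a short sketch; the single-entry page table mirrors the slide's mapping of page 3 to frame 2 (other entries omitted for brevity):

```python
# Paging address translation: split the logical address into page
# number and offset, look up the frame, and rebuild the address.

def translate(logical, page_size, page_table):
    page = logical // page_size      # high bits: page number
    offset = logical % page_size     # low bits: offset within the page
    frame = page_table[page]         # page-table lookup
    return frame * page_size + offset

page_table = {3: 2}                  # page 3 -> frame 2
print(translate(13, 4, page_table))  # 9, matching the worked example
```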
Problem with Paging
► Paging uses a page table per process for translating the logical address space to the physical address space. A basic issue with the page table is that the lookup must be very fast (yet is generally slow), since it is done on every memory reference.
► To reduce the lookup time, caching is used. The specific cache device is called a translation lookaside buffer (TLB).
Translation Lookaside Buffer (TLB)
► The TLB contains a set of entries, each of which holds a page number, the corresponding frame number, and the protection bits.
► With a TLB, the mapping is done by the TLB, and the entry is usually found there. If the entry is not found, we have a TLB miss and fall back to the page table.
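The hit/miss behaviour described above can be sketched as follows. This is a simplified model with no eviction policy or protection bits:

```python
# TLB lookup sketch: consult the TLB first; on a miss, walk the page
# table and cache the translation for future references.

def lookup(page, tlb, page_table):
    if page in tlb:                 # TLB hit: fast path
        return tlb[page], "hit"
    frame = page_table[page]        # TLB miss: slower page-table walk
    tlb[page] = frame               # cache the translation
    return frame, "miss"

tlb = {}
page_table = {0: 5, 1: 7}
print(lookup(1, tlb, page_table))   # (7, 'miss') on first reference
print(lookup(1, tlb, page_table))   # (7, 'hit') on the second
```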
Segmentation
► Segmentation is another technique for non-contiguous storage allocation. It differs from paging in that it supports the user’s view of the program.
► For a programmer, it can be more natural to divide the logical address space of a program into variable-sized segments (corresponding to the main program, subroutines, data, etc.) than into fixed-size pages. Such variable-sized segments, each a collection of logically related information, are the basis of the segmentation technique.
Segmentation example
Segmentation
The CPU generates a logical address that contains two parts:
1. Segment number
2. Offset
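A minimal sketch of segment-table translation, assuming each entry holds a (base, limit) pair; the table values here are illustrative, not from the slides:

```python
# Segmentation translation: bounds-check the offset against the
# segment's limit, then add the segment's base address.

def seg_translate(segment, offset, segment_table):
    base, limit = segment_table[segment]
    if offset >= limit:
        # An out-of-range offset is an addressing error (trap).
        raise MemoryError("segmentation fault: offset out of range")
    return base + offset

segment_table = [(1400, 1000), (6300, 400)]  # (base, limit) per segment
print(seg_translate(1, 53, segment_table))   # 6353 = 6300 + 53
```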
Advantages of Segmentation
1. No internal fragmentation.
2. The average segment size is larger than the actual page size.
3. Less overhead.
4. It is easier to relocate segments than an entire address space.
5. The segment table is smaller than the page table in paging.
Disadvantages
1. It can suffer from external fragmentation.
2. It is difficult to allocate contiguous memory to variable-sized partitions.
3. Costly memory-management algorithms.
Virtual Memory
Virtual memory is a storage scheme that gives the user the illusion of having a very big main memory, by treating a part of secondary memory as main memory.
• In this scheme, the user can load processes bigger than the available main memory, under the illusion that enough memory is available to load the whole process.
Address
A logical address is generated by the CPU while a program is running. Since it does not exist physically, it is also known as a virtual address.
A physical address identifies the physical location of the required data in memory.
Virtual Memory
► Virtual memory is a technique for executing programs whose instructions may not fit entirely in physical memory, by loading instructions as and when the application requires them. It is implemented by using secondary storage to augment main memory: data is transferred from secondary to main storage as necessary, and modified data is written back to secondary storage according to a predetermined algorithm.
Virtual Memory
• Advantages of Virtual Memory
1. The degree of multiprogramming is increased.
2. Users can run large applications with less physical RAM.
3. There is no need to buy more RAM.
Demand Paging
Demand paging is a popular method of virtual-memory management. In demand paging, a page is brought into main memory only when it is referenced; the pages of a process that are least used stay in secondary memory.
Copy On Write
► Copy-on-write (CoW) is a resource-management technique that lets the parent and child process initially share the same pages of memory. Only if either process modifies a shared page is that page copied.
► The main idea behind CoW is that whenever a parent process creates a child process, both initially share the same pages in memory.
► For example, when Process A creates a new process, Process B, both processes initially share the same pages of memory.
► The kernel keeps a list of free pages; these are typically allocated when the stack or heap of a process must expand, or when there are copy-on-write pages to copy.
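The sharing-then-copying behaviour can be simulated in a few lines; `fork` and `cow_write` here are hypothetical stand-ins for the real mechanism, not actual system calls:

```python
# Simulated copy-on-write: "fork" shares page references between the
# parent's and child's page tables; a write copies just that page.

def fork(parent_table):
    # The child's page table points at the very same page objects.
    return list(parent_table)

def cow_write(table, i, value):
    # Copy the page before modifying it, leaving other sharers intact.
    table[i] = dict(table[i])
    table[i]["data"] = value

parent = [{"data": "A"}, {"data": "B"}]
child = fork(parent)
print(child[0] is parent[0])   # True: pages shared after fork
cow_write(child, 0, "C")
print(child[0] is parent[0])   # False: the written page was copied
print(parent[0]["data"])       # A: the parent's copy is untouched
```

Only the modified page is duplicated; the unmodified page at index 1 stays shared, which is the whole point of the technique.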
Page Replacement
► Once main memory fills up, a page must be swapped out to make room for any page to be swapped in. This is known as page replacement.
► The following page-replacement algorithms are used for this purpose:
Page-Replacement Algorithms
FIFO Page Replacement
FIFO replaces the page that has been in memory the longest. It can exhibit Belady’s anomaly: the page-fault rate may increase as the number of allocated frames increases.
LRU Page Replacement
LRU replacement associates with each page the time of that page’s last use. When a page must be replaced, LRU chooses the page that has not been used for the longest period of time.
Optimal Page Replacement
The algorithm with the lowest page-fault rate of all algorithms, which never suffers from Belady’s anomaly, does exist and has been called OPT or MIN. It is simply this: replace the page that will not be used for the longest period of time.
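FIFO and LRU can be compared by counting faults on a reference string with three frames; a minimal simulation (the 20-element reference string is a standard textbook example):

```python
# Count page faults for FIFO and LRU replacement.
from collections import deque

def fifo_faults(refs, n_frames):
    frames, queue, faults = set(), deque(), 0
    for p in refs:
        if p not in frames:
            faults += 1
            if len(frames) == n_frames:
                frames.discard(queue.popleft())  # evict oldest arrival
            frames.add(p)
            queue.append(p)
    return faults

def lru_faults(refs, n_frames):
    frames, faults = [], 0       # ordered by recency, MRU at the end
    for p in refs:
        if p in frames:
            frames.remove(p)     # hit: refresh its recency below
        else:
            faults += 1
            if len(frames) == n_frames:
                frames.pop(0)    # evict the least recently used
        frames.append(p)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(fifo_faults(refs, 3))  # 15
print(lru_faults(refs, 3))   # 12
```

With three frames, LRU produces fewer faults (12) than FIFO (15) on this string; OPT would achieve 9, the theoretical minimum.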
Frame Allocation
• Disadvantage of equal allocation: in systems with processes of varying sizes, it does not make much sense to give each process an equal number of frames. Allocating a large number of frames to a small process eventually wastes many allocated but unused frames.
Example: with proportional allocation, we would split 62 frames between two processes, one of 10 pages and one of 127 pages, by allocating 62 × 10/137 ≈ 4 frames and 62 × 127/137 ≈ 57 frames.
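The arithmetic behind this example is a one-liner with integer division, matching the slide's 4 and 57:

```python
# Proportional allocation: each process gets frames in proportion
# to its virtual-memory size (page count).

def proportional(total_frames, sizes):
    total = sum(sizes)
    return [total_frames * s // total for s in sizes]

print(proportional(62, [10, 127]))  # [4, 57]
```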
Thrashing
• If page faults and swapping happen very frequently, the operating system has to spend more time swapping pages than doing useful work. This state is termed thrashing, and it reduces CPU utilization.
• During thrashing, the CPU spends less time on actual productive work and more time swapping.
• Thrashing severely degrades execution performance in the operating system.
• When CPU utilization is low, the process-scheduling mechanism tries to load many processes into memory at the same time, increasing the degree of multiprogramming. In this situation there are more processes in memory than available frames, so each process is allocated only a limited number of frames.
• Whenever a high-priority process arrives in memory and no frame is free, a process currently occupying a frame is moved to secondary storage, and the freed frame is allocated to the higher-priority process.
• In other words, as soon as memory fills up, processes start spending a lot of time waiting for their required pages to be swapped in, and CPU utilization drops again because most processes are waiting for pages.
• Thus a high degree of multiprogramming and a lack of frames are the two main causes of thrashing.
Preventing Thrashing
• To prevent thrashing, we must provide a process with as many frames as it needs. There are two strategies:
• the working-set strategy
• page-fault frequency (PFF)
Working-Set Strategy
• The working-set model identifies the set of pages that a process actively uses during a specific time window, called its working set. By keeping these pages in memory, the system reduces the likelihood of page faults; adjusting the system’s resources to accommodate the working sets of active processes helps prevent thrashing.
• The model starts by looking at how many frames a process is actually using. This approach defines the locality model of process execution.
• The locality model states that, as a process executes, it moves from locality to locality. A locality is a set of pages that are actively used together. A program is generally composed of several different localities, which may overlap.
Page-Fault Frequency (PFF)
• The working-set model is successful, and knowledge of the working set can be useful for prepaging, but it is a clumsy way to control thrashing. The page-fault frequency (PFF) strategy takes a more direct approach.
• Thrashing has a high page-fault rate, so we want to control the page-fault rate directly. When it is too high, we know that the process needs more frames; conversely, when the page-fault rate is too low, the process may have too many frames. We can therefore establish upper and lower bounds on the desired page-fault rate.
• Optimizing the page-replacement algorithm also helps: efficient algorithms (such as LRU or second-chance) reduce the frequency of page faults by choosing victim pages intelligently, which reduces unnecessary swapping and, consequently, thrashing.
• If the actual page-fault rate exceeds the upper limit, we allocate the process another frame. If the page-fault rate falls below the lower limit, we remove a frame from the process. Thus, we can directly measure and control the page-fault rate to prevent thrashing.
Kernel Memory Allocation
• When a process running in user mode requests additional memory, pages are allocated from the list of free page frames maintained by the kernel. This list is typically populated using a page-replacement algorithm.
• Kernel memory is allocated using two main techniques:
1. Buddy system
2. Slab allocation
• The buddy system allocates memory from a fixed-size segment consisting of physically contiguous pages. Memory is allocated from this segment using a power-of-2 allocator, which satisfies requests in units sized as a power of 2 (4 KB, 8 KB, 16 KB, and so forth); a request is rounded up to the next power of 2.
Example: assume the size of a memory segment is initially 256 KB and the kernel requests 21 KB of memory. The segment is split into two 128 KB buddies, one of those into two 64 KB buddies, and one of those into two 32 KB buddies; a 32 KB block satisfies the request.
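The halving in this example can be sketched directly (sizes in KB; a real buddy allocator also tracks and coalesces the unused buddy halves, which this sketch omits):

```python
# Buddy-system sizing: repeatedly halve the segment while the half
# still fits the request; the last size is the block allocated.

def buddy_splits(segment_kb, request_kb):
    sizes = []
    size = segment_kb
    while size // 2 >= request_kb:
        size //= 2
        sizes.append(size)
    return sizes

print(buddy_splits(256, 21))  # [128, 64, 32] -> a 32 KB block serves 21 KB
```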
FILES
• To keep track of files, file systems normally have directories or folders.
Single-Level Directory Systems
• The simplest form of directory system has one directory containing all the files; it is sometimes called the root directory.
• Typical directory operations: create, delete, opendir, closedir, readdir, rename.
File-system Implementation
• A file system uses both on-disk and in-memory structures. These structures vary depending on the operating system and the file system.
• On disk, the file system may contain information about how to boot an operating system stored there, the
total number of blocks, the number and location of free blocks, the directory structure, and individual files.
• A boot control block (per volume) can contain information needed by the system to boot an operating system
from that volume. If the disk does not contain an operating system, this block can be empty. It is typically the
first block of a volume. In UFS, it is called the boot block. In NTFS, it is the partition boot sector.
• A volume control block (per volume) contains volume (or partition) details, such as the number of blocks in the
partition, the size of the blocks, a free-block count and free-block pointers, and a free-FCB count and FCB
pointers. In UFS, this is called a superblock. In NTFS, it is stored in the
master file table.
• A directory structure (per file system) is used to organize the files. In UFS, this includes file names and
associated inode numbers. In NTFS, it is stored in the master file table.
• A per-file FCB contains many details about the file. It has a unique identifier number to allow association with a directory entry. In NTFS, this information is actually stored within the master file table, which uses a relational database structure, with a row per file.
File-system Implementation
The in-memory information is used for both file-system management and performance improvement via caching.
The data are loaded at mount time, updated during file-system operations, and discarded at dismount. Several
types of structures may be included.
File-system Implementation
Partitions and Mounting
• Physical disks are commonly divided into smaller units called partitions. They can also be combined into
larger units, but that is most commonly done for RAID installations and is left for later chapters.
• Partitions can either be used as raw devices (with no structure imposed upon them), or they can be formatted to hold a file system (i.e. populated with FCBs and initial directory structures as appropriate).
• Raw partitions are generally used for swap space, and may also be used for certain programs such as
databases that choose to manage their own disk storage system. Partitions containing filesystems can
generally only be accessed using the file system structure by ordinary users, but can often be accessed as a
raw device also by root.
• The boot block is accessed as part of a raw partition, by the boot program prior to any operating system being
loaded. Modern boot programs understand multiple OSes and filesystem formats, and can give the user a
choice of which of several available systems to boot.
• The root partition contains the OS kernel and at least the key portions of the OS needed to complete the boot
process.
• At boot time the root partition is mounted, and control is transferred from the boot program to the kernel
found there.
File-system Implementation
Virtual File Systems
File-system Implementation
Directory Implementation
Directories need to be fast to search, insert, and delete, with a minimum of wasted disk space.
Linear List
• A linear list is the simplest and easiest directory structure to set up, but it does have some drawbacks.
• Finding a file (or verifying one does not already exist upon creation) requires a linear search.
• Deletions can be done by moving all entries, flagging an entry as deleted, or by moving the last entry
into the newly vacant position.
• Sorting the list makes searches faster, at the expense of more complex insertions and deletions.
• A linked list makes insertions and deletions into a sorted list easier, with overhead for the links.
Hash Table
A hash table can also be used to speed up searches. Hash tables are generally implemented in addition to a linear or other structure.
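The linear-search vs hash-lookup trade-off can be sketched with a toy directory; the file names below are illustrative:

```python
# Toy directory lookup: linear list (O(n)) vs hash table (Python
# dict, expected O(1) per lookup).

def linear_find(directory, name):
    # Scan every entry until the name matches.
    for i, entry in enumerate(directory):
        if entry == name:
            return i
    return -1

names = ["a.txt", "b.txt", "notes.md"]
index = {n: i for i, n in enumerate(names)}   # hash table over names

print(linear_find(names, "notes.md"))  # 2, after scanning 3 entries
print(index.get("notes.md"))           # 2, via one hash lookup
print(linear_find(names, "missing"))   # -1, not found
```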
File-system Implementation
Allocation Methods
There are three major methods of storing files on disks: contiguous, linked, and indexed.
Contiguous Allocation
Linked Allocation
Indexed Allocation
File-system Implementation
Allocation Methods: 1. Contiguous Allocation
File-system Implementation
Allocation Methods: 3. Indexed Allocation
Indexed allocation combines all of the indexes for accessing each file into a common block (for that file), as opposed to spreading them all over the disk or storing them in a FAT table.
Some disk space is wasted (relative to linked lists or FAT tables) because an entire index block must be allocated for each file, regardless of how many data blocks the file contains. This leads to questions of how big the index block should be, and how it should be implemented.
There are several approaches:
• Linked Scheme - An index block is one disk block, which can be read
and written in a single disk operation. The first index block contains
some header information, the first N block addresses, and if
necessary a pointer to additional linked index blocks.
• Multi-Level Index - The first index block contains a set of pointers to
secondary index blocks, which in turn contain pointers to the actual
data blocks.
• Combined Scheme - This is the scheme used in UNIX inodes, in which the first 12 or so data-block pointers are stored directly in the inode, and then singly, doubly, and triply indirect pointers provide access to the remaining data blocks.