Unit IV


ADITYA ENGINEERING COLLEGE(A)

Operating Systems
III SEM

S sushma,IT Department
Unit – IV
Memory-Management Strategies: Introduction, Swapping,
Contiguous memory allocation, Paging, Segmentation.
Virtual Memory Management: Introduction, Demand paging, Copy
on-write, Page replacement, Frame allocation, Thrashing, Memory-
mapped files, Kernel memory allocation.
File Systems: Files, Directories, File system implementation,
management and optimization


Logical vs. physical address space

► A logical address is the address of an instruction or data as used by a program.
► A physical address is the effective memory address of an instruction or data, i.e., the address obtained after binding of the logical addresses has been done.

Swapping
► Swapping is the technique of temporarily removing inactive programs from the main
memory of a computer system. An inactive program is one that is
neither executing nor performing an I/O operation.
► This may also happen when it is desired to place a higher-priority process in memory: a lower-priority process may be swapped out so that the higher-priority process can be loaded and executed.
► Standard swapping involves moving processes between main memory and a backing
store.


Contiguous Storage Allocation

► Contiguous storage allocation implies that a program’s data and instructions are assured to occupy a single contiguous memory area.
► It is further subdivided into the fixed-partition and variable-partition storage allocation strategies.

1. Fixed-Partition Contiguous Storage Allocation
► Processes with a small address space use small partitions and processes with a large address space use large partitions. This is known as fixed-partition contiguous storage allocation.

2. Variable-Partition Contiguous Storage Allocation
• This notion is derived from the parking of vehicles along a street, where whoever manages to enter gets the space. Two vehicles may leave a gap between them that no other vehicle can use. Likewise, whenever a process needs memory, a search is made for the space it needs; if contiguous space is available to accommodate the process, it is loaded into memory.


External Fragmentation
► This phenomenon of processes entering and leaving memory can cause the formation of unusable memory holes (like the unused space between two vehicles). This is known as external fragmentation.

Three strategies can be used to allocate memory to variable partitions:

1. Best fit
2. Worst fit
3. First fit
First fit: chooses the first partition whose size is greater than or equal to k.

Best fit: chooses the smallest partition whose size is greater than or equal to k. It can leave small, unusable partitions (holes).

Worst fit: chooses the largest partition and allocates it to process p. It can leave bigger unusable partitions.
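As a sketch of how the three strategies differ, the following Python functions (hypothetical helper names, not from the source, operating on a plain list of free-partition sizes) each pick a partition for a request of size k:

```python
def first_fit(parts, k):
    """Return the index of the first partition with size >= k, or None."""
    for i, size in enumerate(parts):
        if size >= k:
            return i
    return None

def best_fit(parts, k):
    """Return the index of the smallest partition with size >= k, or None."""
    fits = [i for i, size in enumerate(parts) if size >= k]
    return min(fits, key=lambda i: parts[i]) if fits else None

def worst_fit(parts, k):
    """Return the index of the largest partition with size >= k, or None."""
    fits = [i for i, size in enumerate(parts) if size >= k]
    return max(fits, key=lambda i: parts[i]) if fits else None

parts = [100, 500, 200, 300, 600]   # free-partition sizes (made-up example)
print(first_fit(parts, 212))  # 1 -- 500 is the first partition that fits
print(best_fit(parts, 212))   # 3 -- 300 leaves the smallest hole
print(worst_fit(parts, 212))  # 4 -- 600 leaves the biggest leftover
```

The same request lands in a different partition under each strategy, which is exactly the trade-off the three definitions describe.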

Non-contiguous Storage Allocation
► To resolve the problem of external fragmentation and to raise the degree of multiprogramming further, it was decided to sacrifice the simplicity of allocating contiguous memory to every process. Giving a process a non-contiguous physical address space means it can be allocated memory wherever memory is available.

Fragmentation

• Users keep loading and unloading processes from the main memory. Usually, processes are stored in blocks in the memory.
• Sometimes the available memory blocks are not sufficient to store a process that requires contiguous memory space. This condition is known as fragmentation.

Internal Fragmentation

Internal fragmentation is the unused space left inside an allocated partition when the process is smaller than the partition it occupies.
External Fragmentation

The solution for external fragmentation is compaction, i.e., shuffling the memory contents. External fragmentation can also be resolved by paging or segmentation mechanisms.
Compaction means moving the processes in memory so that the scattered pieces of unused (free) memory are placed together, so that any other process demanding contiguous memory can use it.

Problem with Compaction
The efficiency of the system is decreased during compaction, because all the free spaces must be transferred from several places to a single place.

Non- contiguous Storage Allocation
There are 2 techniques for non-
contiguous allocation:
1. Paging
2. Segmentation

Paging
• Physical memory is divided into fixed-size blocks called frames.
• Logical memory is divided into fixed-size blocks called pages.
• The size of a page is the same as that of a frame.
• Each page of a process is stored in one of the frames of memory.

Paging

A page table is a data structure used by the virtual memory system to store the mapping between logical addresses and physical addresses.
Paging example


It works as follows, translating logical address 13 = 01101 in binary:
Offset = 01 in binary, or 1 in decimal.
Page no. = 011 in binary, or 3 in decimal.
Physical address = page frame * page size + offset, i.e. 2*4 + 1 = 9 (which is the real address of the page containing the character “n” in main memory).
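The same arithmetic can be checked with a small Python sketch. Only the page 3 → frame 2 mapping comes from the example above; the other page-table entries are hypothetical:

```python
PAGE_SIZE = 4                             # bytes per page/frame, as in the example
page_table = {0: 5, 1: 6, 2: 1, 3: 2}     # page -> frame (pages 0-2 are made up)

def translate(logical):
    """Split a logical address into page number and offset, then map to a frame."""
    page, offset = divmod(logical, PAGE_SIZE)
    return page_table[page] * PAGE_SIZE + offset

print(translate(13))   # 9: page 3 -> frame 2, offset 1, so 2*4 + 1
```

`divmod(13, 4)` yields page 3 and offset 1, matching the binary split of 01101 into 011 and 01.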

Problem with Paging
► Paging uses a page table per process for translating the logical to the physical address space. A basic issue with the page table is that its lookup must be very fast (it is generally slow), since it is done on every memory reference.
► To reduce the lookup time, caching is used. The specific cache device is called a translation lookaside buffer (TLB).

Translation Lookaside Buffer (TLB)
► The TLB contains a set of entries, each of which holds a page number, the corresponding frame number, and the protection bits.
► With a TLB, the mapping is done by the TLB and the entry is usually found there; if the entry is not found, we have a TLB miss and finally go to the page table.
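A minimal sketch of this hit/miss behavior, assuming a hypothetical page table and ignoring eviction and the protection bits a real TLB entry carries:

```python
tlb = {}                                        # page -> frame (small, fast cache)
page_table = {i: 100 + i for i in range(8)}     # hypothetical full mapping

def lookup(page):
    """Return (frame, 'hit'|'miss'), consulting the page table only on a miss."""
    if page in tlb:                  # TLB hit: no page-table walk needed
        return tlb[page], "hit"
    frame = page_table[page]         # TLB miss: fall back to the page table
    tlb[page] = frame                # cache the translation for next time
    return frame, "miss"

print(lookup(3))   # (103, 'miss') -- first access walks the page table
print(lookup(3))   # (103, 'hit')  -- repeat access is served by the TLB
```

Repeated references to the same page hit the cache, which is why the TLB makes the per-reference lookup fast.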

Segmentation
► Segmentation is another technique for non-contiguous storage allocation. It differs from paging in that it supports the user’s view of the program.
► For a programmer it may be more natural to divide the logical address space of a program into variable-sized segments (main program, subroutines, data, etc.) than into fixed-size pages. Such variable-sized segments, each a collection of logically related information, are the basis of the segmentation technique.


Segmentation example

Segmentation
CPU generates a logical address which contains two parts:
1.Segment Number
2.Offset


Segmentation
Advantages of Segmentation
1. No internal fragmentation.
2. The average segment size is larger than the typical page size.
3. Less overhead.
4. It is easier to relocate segments than an entire address space.
5. The segment table is smaller than the page table used in paging.

Disadvantages
1. It can suffer from external fragmentation.
2. It is difficult to allocate contiguous memory to variable-sized partitions.
3. Costly memory-management algorithms.


Virtual Memory
Virtual memory is a storage scheme that gives the user the illusion of having a very large main memory. This is done by treating a part of secondary memory as if it were main memory.
• In this scheme, the user can load processes bigger than the available main memory, under the illusion that enough memory is available to load them.
• Instead of loading one big process into main memory, the operating system loads parts of more than one process into main memory.
• By doing this, the degree of multiprogramming is increased and therefore CPU utilization is also increased.

Address
A logical address is generated by the CPU while a program is running. Because it does not exist physically, it is also known as a virtual address.

A physical address identifies a physical location of the required data in memory. The address loaded into the memory-address register of the memory is commonly referred to as a physical address. A physical address cannot be accessed by the user directly, but the user can calculate the physical address with the help of a logical address.

Virtual Memory
► Virtual memory is a technique for executing programs whose instructions may not fit entirely in system memory; instructions are brought in as and when the application requires them. Virtual memory is implemented by using secondary storage to augment the main memory. Data is transferred from secondary to main storage as and when necessary, and modified data is written back to secondary storage according to a predetermined algorithm.


Virtual Memory
• Advantages of Virtual Memory
1. The degree of multiprogramming is increased.
2. Users can run large applications with less physical RAM.
3. There is no need to buy more RAM.

• Disadvantages of Virtual Memory
1. The system becomes slower, since swapping takes time.
2. Switching between applications takes more time.
3. The user has less hard-disk space available.

Demand Paging
Demand paging is a popular method of virtual memory management. In demand paging, the pages of a process that are least used are kept in secondary memory.
► In a virtual memory system, demand paging is a type of swapping in which pages of a program are not copied from disk to main memory until they are needed for execution.
► In demand paging, when a virtual (logical) address is accessed by the CPU, the corresponding page number is looked up in the page table; if the table shows that the page is currently not in main memory, the page must be brought in.


► A page fault occurs when a page marked invalid (not present in memory) is addressed. A page fault must be followed by swapping in the page just demanded by the CPU from disk to main memory, or a trap to the operating system if the demanded page is not within the logical address space of the process. To determine whether the reference to the requested page is within the logical address space or not, an internal table may be consulted.

Copy On Write
► Copy-on-Write (CoW) is mainly a resource-management technique that allows the parent and child process to share the same pages of memory initially. Only if either process, parent or child, modifies a shared page is the page copied.

Copy On Write
► The main intention behind the CoW technique is that whenever a parent process creates a child process, both the parent and the child initially share the same pages in memory.
► These shared pages are marked as copy-on-write, which means that if the parent or child attempts to modify a shared page, a copy of that page is created and the modification is made only on that process’s copy, without affecting the other process.
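A toy model of this behavior can be sketched in Python (plain objects standing in for pages; real CoW is driven by hardware protection bits, which are not modeled here):

```python
class Page:
    """Stand-in for a physical page of memory."""
    def __init__(self, data):
        self.data = data

class Process:
    def __init__(self, pages):
        self.pages = pages                    # references to (possibly shared) pages

    def fork(self):
        """Child initially shares the very same Page objects with the parent."""
        return Process(list(self.pages))

    def write(self, i, data):
        """Copy-on-write: make a private copy of the page before modifying it."""
        self.pages[i] = Page(data)

parent = Process([Page("A"), Page("B")])
child = parent.fork()
assert parent.pages[0] is child.pages[0]      # initially shared
child.write(0, "A'")
assert parent.pages[0] is not child.pages[0]  # the modified page was copied
print(parent.pages[0].data, child.pages[0].data)   # A A'
```

Only the written page is duplicated; the unmodified page stays shared, which is the whole point of the technique.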

Copy On Write
► Let us take an example where Process A creates a new process, Process B; initially both processes share the same pages of memory.

The figure indicates the parent and child process sharing the same pages.
Copy On Write
► Now, let us assume that Process A wants to modify a page in memory. When the copy-on-write (CoW) technique is used, only those pages that are modified by either process are copied; all the unmodified pages can be shared by the parent and child process.

Figure: after page Z is modified by Process A.
Copy On Write
► Whenever it is determined that a page is going to be duplicated using the copy-on-write technique, it is important to note where the free pages will be allocated from. Many operating systems provide a pool of free pages for such requests.
► These free pages are typically allocated when the stack or heap of a process must expand, or when there are copy-on-write pages to manage.
► These pages are typically allocated using a technique known as zero-fill-on-demand: zero-fill-on-demand pages are zeroed out before being allocated, thus erasing their previous contents.

Page Replacement
► Once the main memory fills up, a page must be swapped out to make room for any pages to be swapped in. This is known as page replacement.
► We have the following page replacement algorithms for this purpose:

Page replacement algorithms

1. The optimal page replacement algorithm
2. The first-in, first-out (FIFO) page replacement algorithm
3. The second-chance page replacement algorithm
4. The least recently used (LRU) page replacement algorithm
5. Simulating LRU in software (the not frequently used algorithm)
6. The working set page replacement algorithm
Page replacement

1. Find the location of the desired page on the disk.
2. Find a free frame:
   a. If there is a free frame, use it.
   b. If there is no free frame, use a page-replacement algorithm to select a victim frame.
   c. Write the victim frame to the disk; change the page and frame tables accordingly.
3. Read the desired page into the newly freed frame; change the page and frame tables.
4. Continue the user process from where the page fault occurred.

FIFO Page Replacement

The simplest page-replacement algorithm is the first-in, first-out (FIFO) algorithm. A FIFO replacement algorithm associates with each page the time when that page was brought into memory. When a page must be replaced, the oldest page is chosen.

Belady’s anomaly: the page-fault rate may increase as the number of allocated frames increases.
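FIFO and Belady’s anomaly can both be demonstrated with a short simulation. The reference string below is the classic one for which 4 frames produce more faults than 3:

```python
from collections import deque

def fifo_faults(refs, nframes):
    """Count page faults for a reference string under FIFO replacement."""
    frames, queue, faults = set(), deque(), 0
    for p in refs:
        if p not in frames:
            faults += 1
            if len(frames) == nframes:
                frames.discard(queue.popleft())   # evict the oldest resident page
            frames.add(p)
            queue.append(p)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))   # 9
print(fifo_faults(refs, 4))   # 10 -- more frames, MORE faults: Belady's anomaly
```

With 3 frames the string costs 9 faults, with 4 frames it costs 10, exactly the counter-intuitive behavior named above.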
LRU Page Replacement

LRU replacement associates with each page the time of that page’s last use. When a page must be replaced, LRU chooses the page that has not been used for the longest period of time.
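LRU can be sketched by keeping the resident pages ordered from least to most recently used (the reference string is a standard textbook example):

```python
def lru_faults(refs, nframes):
    """Count page faults under LRU; `frames` is ordered least- to most-recent."""
    frames, faults = [], 0
    for p in refs:
        if p in frames:
            frames.remove(p)      # hit: refresh this page's recency
        else:
            faults += 1
            if len(frames) == nframes:
                frames.pop(0)     # evict the least recently used page
        frames.append(p)          # p is now the most recently used
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(lru_faults(refs, 3))   # 12
```

Keeping the list ordered by recency makes the victim choice a simple `pop(0)`; real implementations approximate this with hardware reference bits instead.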

Optimal Page Replacement

An algorithm exists that has the lowest page-fault rate of all algorithms and never suffers from Belady’s anomaly; it has been called OPT or MIN. It is simply this: replace the page that will not be used for the longest period of time.
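OPT can only be simulated when the whole reference string is known in advance; a minimal sketch, using the same reference string as the LRU example:

```python
def opt_faults(refs, nframes):
    """Count page faults under optimal (OPT/MIN) replacement."""
    frames, faults = set(), 0
    for i, p in enumerate(refs):
        if p in frames:
            continue
        faults += 1
        if len(frames) == nframes:
            # evict the resident page whose next use lies farthest in the future
            def next_use(q):
                rest = refs[i + 1:]
                return rest.index(q) if q in rest else float("inf")
            frames.discard(max(frames, key=next_use))
        frames.add(p)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(opt_faults(refs, 3))   # 9
```

On this string OPT needs only 9 faults versus 12 for LRU, which is why it serves as the lower-bound yardstick even though no real system can look into the future.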


Frame Allocation:

1. Equal allocation
In a system with x frames and y processes, each process gets an equal number of frames, i.e. x/y. For instance, if the system has 48 frames and 9 processes, each process will get 5 frames. The three frames not allocated to any process can be used as a free-frame buffer pool.

• Disadvantage: in systems with processes of varying sizes, it does not make much sense to give each process equal frames. Allocating a large number of frames to a small process eventually wastes a large number of allocated, unused frames.


Frame Allocation:

• 2. Proportional allocation: frames are allocated to each process in proportion to the process size.
• Advantage: all the processes share the available frames according to their needs, rather than equally.

Example:
With proportional allocation, we would split 62 frames between two processes, one of 10 pages and one of 127 pages, by allocating 4 frames and 57 frames respectively.
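The split can be computed directly with integer division (hypothetical helper name; each process gets total_frames * size / total_size, rounded down):

```python
def proportional(total_frames, sizes):
    """Allocate frames to processes in proportion to their sizes (floored)."""
    total_size = sum(sizes)
    return [total_frames * size // total_size for size in sizes]

print(proportional(62, [10, 127]))   # [4, 57]
```

Note that 4 + 57 = 61: flooring leaves one frame over, which a real allocator would hand out by some tie-breaking rule or keep in a free-frame pool.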


Frame Allocation:


Global vs Local Allocation –
The number of frames allocated to a process can also dynamically change depending on
whether you have used global replacement or local replacement for replacing pages in
case of a page fault.
i.Local replacement: When a process needs a page which is not in the memory, it can bring in the new
page and allocate it a frame from its own set of allocated frames only.
ii.Global replacement: When a process needs a page which is not in the memory, it can bring in the new
page and allocate it a frame from the set of all frames, even if that frame is currently allocated to some
other process; that is, one process can take a frame from another.


Thrashing
• If page faults and swapping happen very frequently, the operating system has to spend much of its time swapping these pages. This state of the operating system is termed thrashing. Because of thrashing, CPU utilization is reduced.

• During thrashing, the CPU spends less time on actual productive work and more time swapping.


Thrashing
• Thrashing affects the performance of execution in the operating system and results in severe performance problems.
• When CPU utilization is low, the process-scheduling mechanism tries to load many processes into memory at the same time, so that the degree of multiprogramming increases. In this situation there are more processes in memory than available frames, so only a limited number of frames can be allocated to each process.
• Whenever a high-priority process arrives in memory and no frame is free, a frame occupied by another process is moved to secondary storage, and the freed frame is allocated to the higher-priority process.
• In other words, as soon as memory fills up, processes start spending a lot of time waiting for the required pages to be swapped in. CPU utilization again becomes low, because most of the processes are waiting for pages.
• Thus a high degree of multiprogramming and a lack of frames are the two main causes of thrashing in the operating system.


Thrashing
• To prevent thrashing, we must provide a process with as many frames as it needs.
There are two strategies:
• working-set strategy
• Page-Fault Frequency


Thrashing
• To prevent thrashing, we must provide a process with as many frames as it needs.
There are two strategies:
• Working-set strategy: the working-set model identifies the set of pages that a process actively uses during a specific time window, called its working set. By keeping these pages in memory, the system reduces the likelihood of page faults. Adjusting the system’s resources to accommodate the working sets of the active processes helps prevent thrashing.
It starts by looking at how many frames a process is actually using. This approach defines the locality model of process execution.
The locality model states that, as a process executes, it moves from locality to locality. A locality is a set of pages that are actively used together. A program is generally composed of several different localities, which may overlap.


Thrashing
• To prevent thrashing, we must provide a process with as many frames as it needs.
There are two strategies:
• Page-fault frequency
The working-set model is successful, and knowledge of the working set can be useful for prepaging, but it seems a clumsy way to control thrashing. A strategy that uses the page-fault frequency (PFF) takes a more direct approach.
The specific problem is how to prevent thrashing. Thrashing has a high page-fault rate, so we want to control the page-fault rate. When it is too high, we know that the process needs more frames. Conversely, if the page-fault rate is too low, the process may have too many frames. We can establish upper and lower bounds on the desired page-fault rate.
Optimize page-replacement algorithms:
• Using efficient page-replacement algorithms (such as least recently used or second chance) can help reduce the frequency of page faults. These algorithms determine which pages to replace when a new page must be brought into memory. By selecting pages intelligently, the system can reduce unnecessary swapping and, consequently, thrashing.


Thrashing
• Page-Fault Frequency

If the actual page-fault rate exceeds the upper limit, we allocate the process another frame. If the page-fault
rate falls below the lower limit, we remove a frame from the process. Thus, we can directly measure and
control the page-fault rate to prevent thrashing.
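One control step of this scheme can be sketched as follows; the numeric bounds are hypothetical values chosen for illustration, not from the source:

```python
UPPER, LOWER = 0.10, 0.02    # hypothetical page-fault-rate bounds

def adjust_frames(frames, fault_rate):
    """One PFF control step: grow above the upper bound, shrink below the lower."""
    if fault_rate > UPPER:
        return frames + 1          # faulting too often: give the process a frame
    if fault_rate < LOWER and frames > 1:
        return frames - 1          # barely faulting: reclaim a frame for others
    return frames                  # within bounds: leave the allocation alone

print(adjust_frames(8, 0.15))  # 9  -- above the upper bound
print(adjust_frames(8, 0.01))  # 7  -- below the lower bound
print(adjust_frames(8, 0.05))  # 8  -- inside the band, unchanged
```

Run periodically per process, this keeps each fault rate inside the band; if no free frame exists when a process needs one, the OS can instead suspend some process and redistribute its frames.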


Memory Mapped Files

• A memory-mapped file contains the contents of a file in virtual memory.


• It allows a part of the virtual address space to be logically associated with the file.
• Memory mapping a file is accomplished by mapping a disk block to a page (or pages) in
memory. Initial access to the file proceeds through ordinary demand paging, resulting in a
page fault.
• Some operating systems provide memory mapping only through a specific system call and
use the standard system calls to perform all other file I/O.
• Multiple processes may be allowed to map the same file concurrently, to allow sharing of
data. Writes by any of the processes modify the data in virtual memory and can be seen
by all others that map the same section of the file.
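Python’s standard mmap module exposes this mechanism. A minimal sketch that maps a scratch file (hypothetical path), reads and writes through the mapping, and flushes the change back:

```python
import mmap
import os
import tempfile

# create a small file to map (hypothetical scratch location)
path = os.path.join(tempfile.mkdtemp(), "demo.bin")
with open(path, "wb") as f:
    f.write(b"hello, mapped world")

with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), 0) as mm:   # length 0 maps the whole file
        print(mm[:5])                      # b'hello' -- file read through memory
        mm[0:5] = b"HELLO"                 # write goes to the mapped pages
        mm.flush()                         # push dirty pages back to the file

with open(path, "rb") as f:
    print(f.read())                        # b'HELLO, mapped world'
```

After the first access faults the page in, reads and writes are plain memory operations; the ordinary `read()` at the end confirms the mapped write reached the file.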


Memory Mapped Files

The virtual memory map of each sharing process points to the same page of physical memory: the page that holds a copy of the disk block.


Memory Mapped Files

The memory-mapped file serves as the region of shared memory between the communicating processes.


Allocating Kernel Memory

• When a process running in user mode requests additional memory, pages are allocated from the list of free page frames maintained by the kernel.
• This list is typically populated using a page-replacement algorithm.

Two strategies for managing free memory:
1. Buddy system
2. Slab allocation


Allocating Kernel Memory

1. Buddy System

• The buddy system allocates memory from a fixed-size segment consisting of physically contiguous pages. Memory is allocated from this segment using a power-of-2 allocator, which satisfies requests in units sized as a power of 2 (4 KB, 8 KB, 16 KB, and so forth).
Example: assume the size of a memory segment is initially 256 KB and the kernel requests 21 KB of memory.

The segment is initially divided into two buddies, AL and AR, each 128 KB in size. One of these buddies is further divided into two 64-KB buddies, BL and BR. However, the next-highest power of 2 above 21 KB is 32 KB, so either BL or BR is again divided into two 32-KB buddies, CL and CR. One of these buddies is used to satisfy the 21-KB request.


Allocating Kernel Memory

1. Buddy System (continued)

An advantage of the buddy system is how quickly adjacent buddies can be combined to form larger segments, using a technique known as coalescing.
A drawback of the buddy system is that rounding up to the next-highest power of 2 is very likely to cause fragmentation.
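The rounding and splitting behavior can be sketched as follows (hypothetical helper names; a real buddy allocator also tracks block addresses so that freed buddies can be coalesced, which is omitted here):

```python
def buddy_alloc(request_kb):
    """Round a request up to the next power of two, as a buddy allocator would."""
    size = 1
    while size < request_kb:
        size *= 2
    return size

def split_path(segment_kb, request_kb):
    """Block sizes produced while splitting a segment down to the allocation size."""
    sizes, size = [], segment_kb
    target = buddy_alloc(request_kb)
    while size > target:
        size //= 2                 # split the current block into two buddies
        sizes.append(size)
    return sizes

print(buddy_alloc(21))        # 32 -- a 21 KB request is served from a 32 KB block
print(split_path(256, 21))    # [128, 64, 32] -- the AL/AR, BL/BR, CL/CR splits
```

The 11 KB difference between the 21 KB request and the 32 KB block is exactly the internal fragmentation the drawback above refers to.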

Allocating Kernel Memory

2. Slab Allocation
• A slab is made up of one or more physically contiguous pages. A cache consists of one or more slabs.
• There is a single cache for each unique kernel data structure.
• Each cache is populated with objects that are instantiations of the kernel data structure the cache represents.
• The relationship among slabs, caches, and objects is shown in the figure.
• For example, the cache representing semaphores stores instances of semaphore objects, the cache representing process descriptors stores instances of process-descriptor objects, and so forth.

FILES
• To keep track of files, file systems normally have directories or folders.
Single-Level Directory Systems
• The simplest form of directory system has one directory containing all the files; it is sometimes called the root directory.

Hierarchical Directory Systems
Directory operations: create, delete, opendir, closedir, readdir, rename, etc.

File-system Implementation
• on-disk and in-memory structures
• These structures vary depending on the operating system and the file system.
• On disk, the file system may contain information about how to boot an operating system stored there, the
total number of blocks, the number and location of free blocks, the directory structure, and individual files.
• A boot control block (per volume) can contain information needed by the system to boot an operating system
from that volume. If the disk does not contain an operating system, this block can be empty. It is typically the
first block of a volume. In UFS, it is called the boot block. In NTFS, it is the partition boot sector.
• A volume control block (per volume) contains volume (or partition) details, such as the number of blocks in the
partition, the size of the blocks, a free-block count and free-block pointers, and a free-FCB count and FCB
pointers. In UFS, this is called a superblock. In NTFS, it is stored in the
master file table.

• A directory structure (per file system) is used to organize the files. In UFS, this includes file names and
associated inode numbers. In NTFS, it is stored in the master file table.
• A per-file FCB contains many details about the file. It has a unique identifier number to allow association with a
directory entry. In NTFS, this information is actually stored within the master file table, which uses a relational
database structure, with a row per file.
File-system Implementation
• on-disk and in-memory structures
• These structures vary depending on the operating system and the file system.
The in-memory information is used for both file-system management and performance improvement via caching.
The data are loaded at mount time, updated during file-system operations, and discarded at dismount. Several
types of structures may be included.

An in-memory mount table contains information about each mounted volume.


The system-wide open-file table contains a copy of the FCB of each open file, as well as other information.
The per-process open-file table contains a pointer to the appropriate entry in the system-wide open-file table, as well as
other information.

File-system Implementation
Partitions and Mounting

• Physical disks are commonly divided into smaller units called partitions. They can also be combined into
larger units, but that is most commonly done for RAID installations and is left for later chapters.
• Partitions can either be used as raw devices (with no structure imposed upon them), or they can be formatted to hold a filesystem (i.e. populated with FCBs and initial directory structures as appropriate).
• Raw partitions are generally used for swap space, and may also be used for certain programs such as
databases that choose to manage their own disk storage system. Partitions containing filesystems can
generally only be accessed using the file system structure by ordinary users, but can often be accessed as a
raw device also by root.
• The boot block is accessed as part of a raw partition, by the boot program prior to any operating system being
loaded. Modern boot programs understand multiple OSes and filesystem formats, and can give the user a
choice of which of several available systems to boot.
• The root partition contains the OS kernel and at least the key portions of the OS needed to complete the boot
process.
• At boot time the root partition is mounted, and control is transferred from the boot program to the kernel
found there.

File-system Implementation
Virtual File Systems

Virtual File Systems (VFS) provide a common interface to multiple different filesystem types. In addition, a VFS provides a unique identifier (vnode) for files across the entire space, including across all filesystems of different types. (UNIX inodes are unique only across a single filesystem, and certainly do not carry across networked file systems.)
The VFS in Linux is based upon four key
object types:
o The inode object, representing an individual file
o The file object, representing an open file.
o The superblock object, representing a filesystem.
o The dentry object, representing a directory entry.
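The value of the VFS is that callers program against one interface while concrete filesystems supply the behavior. A minimal sketch of that idea ( hypothetical class and method names, not the real Linux API ):

```python
# Sketch of the VFS idea: one common interface, multiple concrete
# filesystem types behind it. Names here are illustrative only.
from abc import ABC, abstractmethod

class FileSystem(ABC):
    """Common interface a VFS layer would call through."""
    @abstractmethod
    def read(self, vnode: int) -> bytes: ...
    @abstractmethod
    def write(self, vnode: int, data: bytes) -> None: ...

class Ext2Like(FileSystem):
    """Stands in for a local, disk-based filesystem."""
    def __init__(self):
        self.blocks = {}
    def read(self, vnode):
        return self.blocks.get(vnode, b"")
    def write(self, vnode, data):
        self.blocks[vnode] = data

class NfsLike(FileSystem):
    """Stands in for a networked filesystem holding data remotely."""
    def __init__(self):
        self.remote = {}
    def read(self, vnode):
        return self.remote.get(vnode, b"")
    def write(self, vnode, data):
        self.remote[vnode] = data

# The caller never needs to know which filesystem type is underneath.
def copy_file(src_fs, src_vnode, dst_fs, dst_vnode):
    dst_fs.write(dst_vnode, src_fs.read(src_vnode))

local, remote = Ext2Like(), NfsLike()
local.write(1, b"hello")
copy_file(local, 1, remote, 7)
```

Here `copy_file` works across two different filesystem types precisely because both implement the same interface, which is the property the VFS provides to the kernel.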

File-system Implementation
Directory Implementation

Directories need to be fast to search, insert, and delete, with a minimum of wasted disk space.
Linear List
• A linear list is the simplest and easiest directory structure to set up, but it does have some drawbacks.
• Finding a file ( or verifying one does not already exist upon creation ) requires a linear search.
• Deletions can be done by moving all entries, flagging an entry as deleted, or by moving the last entry
into the newly vacant position.
• Sorting the list makes searches faster, at the expense of more complex insertions and deletions.
• A linked list makes insertions and deletions into a sorted list easier, with overhead for the links.
Hash Table
• A hash table can also be used to speed up searches: it maps a file name directly to its directory entry, avoiding a linear scan.
• Hash tables are generally implemented in addition to a linear list or other structure, which still holds the actual entries.
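The trade-offs above can be seen in a small sketch that keeps the directory as a linear list of entries plus a hash table ( a Python dict here ) mapping names to list positions; deletion uses the "move the last entry into the vacant position" technique mentioned above:

```python
# Directory sketch: linear list of entries plus a hash table index.
class Directory:
    def __init__(self):
        self.entries = []   # linear list of (name, inode) pairs
        self.index = {}     # hash table: name -> position in list

    def add(self, name, inode):
        if name in self.index:            # O(1) existence check
            raise FileExistsError(name)
        self.index[name] = len(self.entries)
        self.entries.append((name, inode))

    def lookup(self, name):
        pos = self.index[name]            # O(1) instead of an O(n) scan
        return self.entries[pos][1]

    def delete(self, name):
        pos = self.index.pop(name)
        last = self.entries.pop()         # move last entry into the hole
        if pos < len(self.entries):
            self.entries[pos] = last
            self.index[last[0]] = pos

d = Directory()
d.add("a.txt", 10)
d.add("b.txt", 20)
d.add("c.txt", 30)
d.delete("a.txt")
```

After the delete, `c.txt` has been moved into the vacated slot, and the hash table has been updated so lookups still succeed in one step.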

File-system Implementation
Allocation Methods

There are three major methods of storing files on disks: contiguous, linked, and indexed.

Contiguous Allocation
Linked Allocation
Indexed Allocation

File-system Implementation
Allocation Methods 1. Contiguous Allocation

• Contiguous allocation requires that all blocks of a file be kept together contiguously.
• Performance is very fast, because reading successive blocks of the same file generally requires no movement of the disk heads, or at most one small step to the next adjacent cylinder.
• Storage allocation involves the same issues as allocating contiguous blocks of memory ( first fit, best fit, fragmentation problems, etc. ) The distinction is that the high time penalty for moving the disk heads from spot to spot may now justify the benefits of keeping files contiguous when possible.
• Problems can arise when files grow, or if the exact size of a file is unknown at creation time.
• A variation is to allocate file space in large contiguous chunks, called extents. When a file outgrows its original extent, an additional one is allocated.
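The first-fit strategy mentioned above can be sketched as a scan over a free-block map for the first run of free blocks large enough to hold the whole file:

```python
# First-fit contiguous allocation sketch over a free-block map.
def first_fit(free_map, n_blocks):
    """free_map: list of booleans, True = block free.
    Returns the start index of the first free run of n_blocks,
    or None if no hole is large enough (external fragmentation)."""
    run_start, run_len = None, 0
    for i, free in enumerate(free_map):
        if free:
            if run_len == 0:
                run_start = i
            run_len += 1
            if run_len == n_blocks:
                return run_start
        else:
            run_len = 0
    return None

# Blocks 0-3 free, 4-5 in use, 6-13 free.
disk = [True] * 4 + [False] * 2 + [True] * 8
assert first_fit(disk, 3) == 0      # fits in the first hole
assert first_fit(disk, 6) == 6      # first hole too small, use second
assert first_fit(disk, 20) is None  # no hole big enough
```

The failing case for a 20-block file illustrates external fragmentation: 12 blocks are free in total, but no single hole can hold the file contiguously.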
File-system Implementation
Allocation Methods 2. Linked Allocation

• Disk files can be stored as linked lists, at the expense of the storage space consumed by each link. ( E.g. a block may hold 508 bytes of data instead of 512, with 4 bytes used for the pointer to the next block. )
• Linked allocation involves no external fragmentation, does not require file sizes to be known in advance, and allows files to grow dynamically at any time.
• Unfortunately, linked allocation is efficient only for sequential-access files, as random access requires starting at the beginning of the list for each new location accessed.
• Allocating clusters of blocks reduces the space wasted by pointers, at the cost of internal fragmentation.
• Another big problem with linked allocation is reliability: if a pointer is lost or damaged, the rest of the file becomes unreachable. Doubly linked lists provide some protection, at the cost of additional overhead and wasted space.
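A small sketch makes the sequential-vs-random access point above concrete: each block carries the number of the next block ( -1 marks end of file ), so reaching the i-th block always requires walking the chain from the start.

```python
# Linked allocation sketch: block number -> (data, next block number).
blocks = {
    9:  (b"AA", 16),
    16: (b"BB", 1),
    1:  (b"CC", 25),
    25: (b"DD", -1),    # -1 marks the end of the file
}

def read_file(start):
    """Sequential access: follow the chain from the first block."""
    data, cur = b"", start
    while cur != -1:
        chunk, cur = blocks[cur]
        data += chunk
    return data

def read_block(start, i):
    """Random access to the i-th block still walks the whole chain."""
    cur = start
    for _ in range(i):
        cur = blocks[cur][1]
    return blocks[cur][0]

assert read_file(9) == b"AABBCCDD"
assert read_block(9, 2) == b"CC"   # required following two pointers
```

Note that `read_block` costs i pointer traversals ( and on a real disk, i seeks ), which is why linked allocation suits sequential access only.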

File-system Implementation
Allocation Methods 3. Indexed Allocation

Indexed allocation combines all of the indexes for accessing each file into a common block ( for that file ), as opposed to spreading them all over the disk or storing them in a FAT table.
Some disk space is wasted ( relative to linked lists or FAT tables ) because an entire index block must be allocated for each file, regardless of how many data blocks the file contains. This leads to questions of how big the index block should be, and how it should be implemented. There are several approaches:
• Linked Scheme - An index block is one disk block, which can be read and written in a single disk operation. The first index block contains some header information, the first N block addresses, and if necessary a pointer to additional linked index blocks.
• Multi-Level Index - The first index block contains a set of pointers to secondary index blocks, which in turn contain pointers to the actual data blocks.
• Combined Scheme - This is the scheme used in UNIX inodes, in which the first 12 or so data block pointers are stored directly in the inode, and then singly, doubly, and triply indirect pointers provide access to additional data blocks.
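The combined scheme can be sketched with a toy inode layout ( the block counts here are illustrative, not any real filesystem's, and only a single indirect level is shown ):

```python
# Combined-scheme sketch: a few direct pointers in the inode, then a
# singly indirect block for larger files. Sizes are illustrative only.
N_DIRECT = 4
PTRS_PER_BLOCK = 8

def block_for(inode, i):
    """Map logical block i of a file to a physical block number."""
    if i < N_DIRECT:
        return inode["direct"][i]            # small files: one lookup
    i -= N_DIRECT
    if i < PTRS_PER_BLOCK:
        return inode["single_indirect"][i]   # one extra block read
    raise ValueError("file too large for this sketch")

inode = {
    "direct": [100, 101, 102, 103],
    "single_indirect": [200, 201, 202, 203, 204, 205, 206, 207],
}
assert block_for(inode, 2) == 102   # served by a direct pointer
assert block_for(inode, 5) == 201   # served via the indirect block
```

The design choice this illustrates: small files ( the common case ) are reached through direct pointers with no extra disk reads, while large files pay one or more additional index-block reads per access.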
