
Memory Management Unit (MMU) and Virtual Memory
Memory Management Unit (MMU)
• A computer system can be uni-programming or multiprogramming, depending upon whether it allows only one or multiple application programs to be executed concurrently.
• In a Uni-programming system - main memory divided into two
parts:
• One for the Operating System (OS) programs
• The other for the Application Program currently being executed
• In a Multiprogramming system – main memory is divided into
two parts:
• One for the OS
• The other for application programs
• The part for application programs is subdivided to accommodate
the multiple programs under execution or the application
processes.
• The task of subdivision of memory to accommodate multiple
programs is carried out dynamically by the OS and is managed
with support from the MMU.
Swapping
▪ There may be many application programs to be executed – OS tries to
accommodate these in the main memory
• However, it may not be possible to accommodate all the programs in the
memory
• Programs that are not accommodated, wait for their turn in a queue
▪ When one program completes, the memory area it occupied is released and
a program in the queue is loaded in that space.
▪ When a program initiates an I/O operation, it must wait for the operation to complete before it becomes ready to execute further.
– it goes into a dormant state – the waiting time can be substantial
• Such a program may be temporarily moved
out of the main memory to the swap space in
the online secondary storage, to
accommodate some other program in the
queue
• This is called swapping
Logical Address/ Physical Address
• A swapped program is moved back, when some memory space
becomes available, to continue processing
• The program may not be moved back to the same area in memory
that it had occupied earlier
• Thus, the addresses for instructions and data of a program may
change every time it is swapped back-and-forth
• This calls for relocation of the program every time after a swap and
that can be pretty costly, in terms of time consumed
• The idea of Logical address is used to solve this problem
• The physical address is the actual location of a data/ instruction
in main memory
• A logical address is the location of a data/ instruction relative to the beginning of the program.
• The processor generates only the logical addresses of the instruction
or the data.
• MMU converts the logical address to physical address by adding
the current starting physical location of the program (base
address) to the logical address.
Logical Address/ Physical Address

[Figure: the processor issues logical address 715; the MMU adds the base address 50000 held in the Relocation Register, producing physical address 50715, which falls inside the application program's area in main memory.]

▪ MMU hardware converts the logical address to a physical address.
• One way to do so is by adding the current starting physical location of the program, the Base Address stored in a Relocation Register, to the logical address.
▪ The Physical Address is not visible to the program or the processor
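A minimal sketch of this relocation, assuming an illustrative limit of 40000 and the base value 50000 from the figure (the names translate and LIMIT are hypothetical, not from any real MMU interface):

```python
LIMIT = 40000   # length of the program's logical address space (assumed)
BASE  = 50000   # relocation register: program's start address in physical memory

def translate(logical_addr: int) -> int:
    """Relocate a logical address, aborting if it exceeds the limit."""
    if logical_addr >= LIMIT:        # limit check protects other processes
        raise MemoryError(f"address {logical_addr} outside address space")
    return BASE + logical_addr       # relocation: add the base address

print(translate(715))  # -> 50715, matching the figure above
```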
Memory Management Unit (MMU)

▪ Goals of memory management


• Convenient abstraction for programming
• Isolation between processes
• Allocate scarce memory resources to competing
processes
• Maximize performance (minimize overhead)
▪ Mechanisms
• Address translation hardware
• Address limit check for address space protection
• Access control
Questions?
Logical Address to Physical Address

▪ Contiguous Memory Space Allocation


• Relocation Register scheme (limit reg + relocation register)
• Fixed Partition
• Variable Partition
▪ Non-contiguous Memory Space Allocation
• Paging
• Segmentation
Contiguous Allocation
• Main memory is usually divided into two parts:
• The Resident Operating System, usually held in the lower address part of memory (LowMEM), together with the interrupt vector (IV) table.
• User processes, held in the higher address part of the memory (HighMEM).
▪ Single partition allocation
• The relocation register scheme is used to protect user processes from each other, and from changing OS code and data.
• The relocation register contains the base address of the physical memory;
• The limit register contains the range of logical addresses – a logical address must be less than the limit register content.
[Figure: memory layout with the Resident Operating System in low memory and User Processes 1..N above it; hardware support for memory protection with Relocation and Limit Registers.]
Question?
Partitioning
• Fixed partitions – Entire
memory area is divided into a
set of partitions such that
each partition can be occupied
by one program
• Two types-
• Equal size partitions – MMU
allocates any of the available
partitions to the process
• Unequal size partitions –
when a program comes in
MMU allocates the smallest
available partition that can
accommodate it
• Disadvantage: memory space wastage
Partitioning
• Variable sized partitioning - When a program is
brought into memory, it is allocated exactly as much
memory as the program requires and no more
• Disadvantage – Over time it leads to a situation in
which there are a lot of fragmented free spaces called
holes in memory that cannot be utilized by the
processes – this is called fragmentation
• One solution is compaction - From time to time, the
OS shifts processes in memory to place all the free
memory together in one block.
• Time-consuming procedure, wasteful of processor
time.
Partitioning

[Figure: effect of variable sized partitioning – as processes are loaded and removed over time, free memory becomes scattered into holes.]
Dynamic Allocation of Memory Partitions
▪ How to satisfy a request of a given size from a list
of free holes.
• First-fit
• allocate the first hole that is big enough
• Best-fit
• Allocate the smallest hole that is big enough; must search
entire list, unless ordered by size.
• Produces the smallest leftover hole.
• Worst-fit
• Allocate the largest hole; must also search entire list.
• Produces the largest leftover hole.
• First-fit and best-fit are better than worst-fit in terms
of speed and storage utilization.
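The three policies differ only in which adequate hole they pick. A small sketch, assuming holes are kept as (start, size) pairs in a plain list (an illustrative representation):

```python
def first_fit(holes, request):
    # Return the first hole big enough, or None.
    return next((h for h in holes if h[1] >= request), None)

def best_fit(holes, request):
    # Smallest adequate hole -> smallest leftover hole.
    fits = [h for h in holes if h[1] >= request]
    return min(fits, key=lambda h: h[1], default=None)

def worst_fit(holes, request):
    # Largest hole -> largest leftover hole.
    fits = [h for h in holes if h[1] >= request]
    return max(fits, key=lambda h: h[1], default=None)

holes = [(0, 100), (200, 500), (900, 250)]
print(first_fit(holes, 240))  # (200, 500) - first hole that fits
print(best_fit(holes, 240))   # (900, 250) - smallest leftover
print(worst_fit(holes, 240))  # (200, 500) - largest leftover
```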
Fragmentation
▪ Free memory space is scattered in a fragmented manner in a large number of holes
• External fragmentation
• Total memory space exists to satisfy a request, but it is not
contiguous.
• Internal fragmentation
• Allocated memory partition is larger than the requested space
• Excess area in the partition remains unused.
• Reduce external fragmentation by compaction
• Shuffle memory contents to place all free memory together in
one large block
• Compaction is done at execution time
• Possible only if relocation is dynamic.
• I/O problem - (1) latch job in memory while it is in I/O
(2) Do I/O only into OS buffers.
Questions?
Non-contiguous Memory Allocation
– The Paging Approach
• Both unequal fixed-size and variable-size partitions are
inefficient in the use of memory
• Paging is the approach where –
• Memory is partitioned into relatively small, equal fixed-size chunks
– called frames;
• Each program is also divided into chunks – called pages;
• The page size is kept the same as the frame size (and it is a power of 2);
• A page of a program can be loaded into any of the frames in memory;
• OS maintains a page table for each program under execution with one entry for each page;
• These Page Table entries hold the frame numbers of the frames in main memory holding the corresponding pages.
Paging – Address Translation
• OS maintains one page table per process with one entry per page of the program.
• A page table entry holds the frame number of the frame into which the page is loaded
• The logical address consists of the page number and the relative address of the location within the page (the offset)
• The physical address consists of a frame number and a relative address (offset)
• It is the job of the MMU to translate the Logical Address provided by the CPU to a Physical Address
Paging – Address Translation

[Figure: the logical/virtual address splits into a Page Number (3) and a Word Offset (100). The page number indexes the Page Table; entry 3 holds frame number 27. A concatenation operation joins frame number 27 with the offset 100 to form the physical address. Note: the length of the logical/virtual address need not be the same as that of the physical address.]

Example:
[Figure: a 26-bit address for a 64MB logical/virtual memory space splits into the 16-bit page number 0000000000000011 (page 3) and the 10-bit word offset 0000000100. Page table entry 3 holds the 12-bit frame number 000000011011; concatenation yields 000000011011 0000000100, a 22-bit address for a 4MB physical memory space.]
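Because the page size is a power of 2, both the split and the concatenation are pure bit operations. A sketch using the 10-bit-offset example above, with a one-entry illustrative page table:

```python
OFFSET_BITS = 10
page_table = {3: 0b000000011011}   # page 3 -> frame 27 (illustrative content)

def translate(logical: int) -> int:
    page   = logical >> OFFSET_BITS               # high bits: page number
    offset = logical & ((1 << OFFSET_BITS) - 1)   # low bits: word offset
    frame  = page_table[page]                     # page-table lookup
    return (frame << OFFSET_BITS) | offset        # concatenation, not addition

logical = (3 << OFFSET_BITS) | 0b0000000100       # page 3, offset 4
assert translate(logical) == (27 << OFFSET_BITS) | 4
print(bin(translate(logical)))
```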
Paging for Multiple Processes
▪ Every process has its own page table
▪ For every process there is-
• A page-table base register (PTBR) points to the page table.
• A Page-table length register (PTLR) that indicates the size (no. of
entries) of the page table.

[Figure: each process's logical address space maps through its own page table into the shared physical address space.]
Paging Example
Ex. A computer system uses a processor that has 4GB address space. The system has
8GB of physical memory. The MMU uses pages of 1KB. What will be the following:
• No. of bits in-
• the Byte Offset field of address,
• the page no. field of logical address,
• the frame no. field of physical address
• No. of entries in the page table, width of each entry and the total size of the
page table?
• Logical Address Space is 4GB => Logical Address Length is 32-bits
• Page length is 1KB => Byte Offset field length is 10-bit
=> Page No. field length is- (32-10) bits = 22-bits
• Physical Address Space is 8GB => Physical Address Length is 33-bits
=> Frame No. field length is- (33-10) bits = 23-bits
• No. of entries in the Page Table = No. of Pages in the Virtual Address Space = 2^22
• Width of a Page Table Entry = Length of Frame No. = 23 bits
• Size of the Page Table = Width of Entry x No. of Entries = 23 x 2^22 bits
  = 23 x 4 x 2^20 bits = 92 Mbits = 11½ MB
Paging Characteristics

▪ No external fragmentation
• All frames (physical memory) can be used by processes
▪ Internal fragmentation is possible
• On average 1/2 page per process (the last page)
▪ Physical memory space used by a process need not be
contiguous
▪ Logical memory space of a process is contiguous
Free Frames

[Figure: the free-frame list before and after allocation – frames taken from the free list are assigned to the new process's pages and entered in its page table.]
Control Bits in Page Table Entries

Frame Number | V | M | AC

▪ The Page Table entries require the following control bits in addition to the Frame Number:
a. Loaded/ Valid (V) Bit:
• To indicate if the page is loaded in the physical memory
b. Dirty/ Modified (M) Bit:
• Required in case of data pages and when Write-back
policy is used;
• To indicate whether the page has been modified
c. Access Control (AC) Bits:
• To specify Read/ Write/ execute access rights of a
process to the page.
Paging Advantages
▪ Physical Memory is easy to allocate
• To allocate a frame, just remove it from the Free List and enter the
frame number in the page table entry.
▪ No possibility for External Fragmentation
• All the frames are equal to the size of the pages – any page can be
loaded to any of the frames.
• Internal Fragmentation may occur for the last page of the process
if it is smaller than the page frame length.
Paging Shortcomings
▪ Access Overheads:
• Two lookups per reference (1 page table access, 1 memory access)
▪ Page Table size:
• It may not be possible to maintain the page tables within the MMU; with one page table per process, even main memory capacity is strained.
• With page tables maintained in main memory, multiple memory accesses are required to perform one read/write operation
• Leads to a significant increase in memory access time.
▪ Limitations due to strict page boundaries:
• Pages are created by chopping the process space into equal sized chunks without any concern for the objects (routines, data structures etc.) in the process.
• An object may spread over multiple pages, or parts of multiple objects may be placed in the same page.
• Selective separation of objects in the process space is not possible.
• Selective individual object level sharing and specification of access rights to the objects is not possible.
Questions?
Reducing Memory Access
Overhead in Paging
Translation Lookaside Buffer (TLB)
• To avoid two physical memory
accesses for every memory
reference, use a special cache for
page table entries in the MMU–
called translation lookaside buffer
(TLB)
• This is an associative buffer
• Holds the recently used Frame#
along with the mapped Page#.
• The TLB entries comprise (Page#,
Frame#) pairs
• For every memory reference the
page# is searched in the TLB.
• If a hit occurs, the corresponding
Frame# is used to construct the
physical address.
• If a miss occurs, the reference goes
to the Page Table in memory to
fetch the Frame#.
Effective Access Time with TLB

• TLB cache lookup time = Tc
• Assume memory access time = Tm
• TLB hit ratio = h
• Effective access time,
  Teff = (Tm + Tc) h + (2Tm + Tc)(1 - h)
       = (2 - h)Tm + Tc

For Tm = 100ns, Tc = 5ns, h = 0.95
  Teff = (2 - 0.95) x 100 + 5 ns = 110ns
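A quick check of the formula (plain arithmetic, no assumptions beyond the slide's values):

```python
def teff(tm_ns: float, tc_ns: float, hit_ratio: float) -> float:
    """Teff = (Tm + Tc)h + (2Tm + Tc)(1 - h) = (2 - h)Tm + Tc."""
    return (2 - hit_ratio) * tm_ns + tc_ns

print(teff(100, 5, 0.95))   # 110.0 ns, as in the slide
```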
Questions?
Alternative Page Table Structures

▪ Hierarchical Paging

▪ Hashed Page Tables

▪ Inverted Page Tables


Hierarchical Page Tables

▪ Organize the page table as a multi-level hierarchical structure
• Lowest level of the structure is the Page Table with entries to
hold the numbers of the frames in physical memory that hold
the corresponding page
• Higher level table entries point to immediate lower-level
page table modules
• Typically one page table module is made to fit into a page
frame.
▪ Objective is to eliminate the requirement of a single large
memory chunk for the page table by enabling distribution of
the page table modules in non-contiguous page frames
Two-Level Page-Table Scheme
▪ The simplest of hierarchical page tables is a two-level page table
▪ The lower level of the structure is the Page Table, and the upper level is called the Outer Page Table.
▪ The page number field of the logical address is divided into 2 sub-fields:
• Index of the outer page table (p1)
• Index within the page table module (p2)
▪ The outer page table has 2^p1 entries containing pointers to the page table modules in memory.
▪ Each module of the Page Table has 2^p2 entries containing pointers to the frames holding the corresponding pages in physical memory.

The logical address fields:
| page number: p1 | p2 | page offset: d |
Address-Translation Scheme
Two Level Paging Example
▪ Consider a system with a 32-bit logical address and 4GB of physical memory, using 4KB pages.
• For 4KB pages the page offset will be 12 bits long
• The remaining 20 bits of the logical address will be the page number
• The page number field is divided into two 10bit fields-
• p1 for the index of the outer page table
• p2 for the offset within the page of a page table module.
• p1 allows 1024 entries in the outer page table for a total of 1024 page table
modules
• p2 allows 1024 entries in each page table module, one for each logical page.
• Provides for the total 1M entries for the 1M pages in the logical address
space.
• A module of the page table may be loaded in any of the physical frames.
The logical address fields:
| p1: 10 bits | p2: 10 bits | page offset d: 12 bits |
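A sketch of the resulting two-level walk for this 10/10/12 split; the nested dictionaries standing in for the outer page table and its modules, and the example mapping, are illustrative:

```python
P2_BITS, D_BITS = 10, 12
outer = {1: {5: 0x2A}}   # outer index 1 -> module; module index 5 -> frame 0x2A

def walk(logical: int) -> int:
    d  = logical & ((1 << D_BITS) - 1)                 # 12-bit page offset
    p2 = (logical >> D_BITS) & ((1 << P2_BITS) - 1)    # index into the module
    p1 = logical >> (D_BITS + P2_BITS)                 # index into outer table
    frame = outer[p1][p2]          # two lookups: outer table, then module
    return (frame << D_BITS) | d

logical = (1 << 22) | (5 << 12) | 0x123   # p1=1, p2=5, d=0x123
print(hex(walk(logical)))                 # 0x2a123
```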


Multilevel Paging

[Figure: field layouts of a 64-bit logical address, and the memory required for a 2-level vs. a 3-level page table.]
▪ In an n-level hierarchical table the page number field of the logical address is divided into n sub-fields
▪ Each level is a separate table in memory
▪ Converting a logical address to a physical one may require several memory accesses.
▪ Caching (the TLB) can help performance remain reasonable.
• Consider a 3-level page table with a memory access time of 100ns, a cache look-up time of 5ns and a TLB hit rate of 95%.
• A memory access with a TLB miss will require 4 memory references.
• Effective access time = 0.95 x 105 + 0.05 x 405 = 120 ns
• This is only a 20% slowdown in memory access
Questions?
Hashed Page Tables
▪ Objective is to Reduce Page Table Size and the Access Overhead
▪ Maintains a single Page Table common to all the processes
▪ Common to have address space with address length > 32 bits
▪ The logical page number is
hashed into a page table
▪ Multiple page numbers
may hash to the same
location in the table.
• A chain will be
maintained for these
page numbers that hash
to the same location
• Logical page numbers are
compared in this chain
searching for a match
• If a match is found, the
corresponding physical
frame is extracted
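A sketch of such a hashed page table with chaining; the bucket count and the (page, frame) chain representation are illustrative:

```python
NUM_BUCKETS = 1024
table = [[] for _ in range(NUM_BUCKETS)]   # bucket -> chain of entries

def insert(page: int, frame: int) -> None:
    table[hash(page) % NUM_BUCKETS].append((page, frame))

def lookup(page: int):
    chain = table[hash(page) % NUM_BUCKETS]
    for p, frame in chain:        # walk the chain comparing page numbers
        if p == page:
            return frame          # match: extract the physical frame
    return None                   # miss: page not resident

insert(0x3F07, 27)
print(lookup(0x3F07))   # 27
```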
Questions?
Inverted Page Table
▪ One entry for each page
frame in physical memory
• Entry consists of logical
address of page in physical
memory with information
about process that owns the
page. (page# + process_id)
• Decreases memory needed to
store page table
• Increases time to search table
when a page reference occurs
• Table sorted by physical
address, lookup by virtual
address
• Hashing may be used to limit
search to one (maybe few)
page table entries.
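A sketch of an inverted page table lookup; the frame count and the (process_id, page#) entries are illustrative:

```python
NUM_FRAMES = 8
ipt = [None] * NUM_FRAMES        # one entry per frame: (pid, page) or None

ipt[5] = (42, 0x13)              # frame 5 holds page 0x13 of process 42

def lookup(pid: int, page: int):
    # Linear search over the frame entries; hashing can limit the search.
    for frame, entry in enumerate(ipt):
        if entry == (pid, page):
            return frame
    return None                  # not resident: page fault

print(lookup(42, 0x13))   # 5
```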
Questions?
Segmentation
• A program consists of procedures and data structures. These we may call the objects, the collection of which constitutes the program.
• In the Segmentation method the program is partitioned into Segments, where each segment consists of one or more of the objects in the program.
• The segments may vary in length depending upon the size of the object(s) in them. There will however be an upper and a lower limit on the length of the segments.
• One or more segments of a program under execution may be loaded into the main memory. The space occupied in memory by a program segment constitutes a memory segment.
• Access rights (e.g. Read Only, Read and Write, Shared etc.) may be specified for each of the segments individually to ensure proper protection of the objects in it in a multi-user environment.
[Figure: a program partitioned into Code Segments 1-4 and Data Segments 1-2.]
Logical view of segmentation

[Figure: the program's code segments C1-C4 and data segments D1-D2 in user space are mapped to non-contiguous memory segments in physical memory.]
Segmentation
• In the segmentation method the logical address space of a
program is not linear, it is segmented;
• The logical address of a location in the program shall
comprise the combination of the segment number and the
offset of the location in the segment;
• The MMU maintains a Segment Table for each program
with one entry in it for each of the segments in the
program
• The segment table entries hold the start addresses of the
memory segment where the respective program segments
are loaded and the lengths and the access rights of these
segments.
• A Physical Memory Address therefore comprises the arithmetic sum of the memory segment start address and the offset;
• Unlike in the Paging method, it is an arithmetic sum, as the length of a segment need not be a power of 2.
Address Translation with Segmentation

[Figure: the logical address pairs the Segment Register (segment number 3) with the Program Counter (offset/displacement 500). Entry 3 of the Segment Descriptor Table holds the segment's start address 13500 and length L = 1000. The MMU checks offset A < L (500 < 1000: yes, otherwise abort) and adds the start address to the offset, producing physical address 14000.]
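A sketch of this limit-checked translation, using the figure's values (segment 3, start 13500, length 1000); the table layout is illustrative:

```python
segment_table = {3: (13500, 1000)}   # segment# -> (start address, length)

def translate(segment: int, offset: int) -> int:
    start, length = segment_table[segment]
    if offset >= length:              # protection: offset must lie in segment
        raise MemoryError("segment overflow - abort")
    return start + offset             # arithmetic sum, not concatenation

print(translate(3, 500))   # 14000
```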
Shared Segments

▪ Code and data can be shared among processes


• Reentrant (non self-modifying) code can be shared.
• Shared Data
• Map them into segments with common segment
frame mappings
• Single copy of read-only code - compilers, editors etc..
▪ Private code and data
• Each process keeps a separate copy of code and data
Shared Segments

[Figure: processes P1 and P2 each have a two-entry segment table of (limit, base) pairs. Both map segment 0, the editor (limit 25286), to base 43602, so a single copy of the editor is shared. Segment 1 is private: P1 maps data 1 (limit 4425) at base 68348, while P2 maps data 2 (limit 8850) at base 90003.]
Segmentation
• Advantages:
• Allows Protection and Access Control mechanisms for the
program segments
• The objects with similar access rights can be grouped in a
segment;
• Objects shared by a group of processes may be placed in the
same segment.

• Disadvantages:
• As segment lengths vary, segmentation leads to memory Fragmentation;
• An entire program segment needs to be loaded into physical memory when a reference to any of its locations is made
• Leads to a low degree of multiprogramming.
Segmentation Vs Paging

Paging Approach:
• Partitions the program space into equal sized pages, irrespective of the content.
• Since the pages are not object specific, no object specific access rights, sharing or protection mechanisms can be implemented.
• External fragmentation does not occur.
• Page size is a power of 2. Therefore physical address generation requires concatenation of 2 bit strings, not arithmetic summing – less costly.

Segmentation Approach:
• Partitions the program space into segments containing one or more objects such as procedures and data structures.
• The segments are object specific. Therefore object specific access rights, sharing and protection mechanisms can be implemented.
• External fragmentation leads to poor utilization of memory.
• Segment size is variable and not a power of 2. Therefore physical address generation requires arithmetic summing – costlier.
Combining Segmentation with Paging
▪ Partition the Program into segments and partition these
segments into equal sized pages;
▪ Partition the Physical Memory into Frames that are equal to
the size of the Pages;
▪ Any page of a segment may be loaded into any of the frames
in the physical memory;
▪ All the pages of a program segment may not be loaded into
the physical memory together;
▪ Only the demanded pages of a segment, as per locality of reference, need be loaded;
▪ Similarly all segments of a program also may not be loaded
into the main memory – can be loaded on demand;
▪ Combines the strengths of both Paging and Segmentation.
Segmentation with Paging

[Figure: the logical address combines the Segment Register (segment number 3) with a Page Number (50) and Word Offset (432). Entry 3 of the Segment Descriptor Table holds the segment length L = 55 pages and the start address 13500 of the segment's Page Table. The MMU checks page number P < L (50 < 55, otherwise abort), uses P to index the segment's Page Table, reads frame number 345, and concatenates it with the offset to form the physical address 345 | 432.]
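A sketch of the combined translation, using the figure's values (segment 3, page 50, offset 432, frame 345) and an assumed 10-bit offset field; the per-segment page-table layout is illustrative:

```python
OFFSET_BITS = 10                       # assumed page size of 1024 words
segment_table = {3: (55, {50: 345})}   # seg# -> (length in pages, page table)

def translate(seg: int, page: int, offset: int) -> int:
    length, page_table = segment_table[seg]
    if page >= length:                 # limit check on the page number
        raise MemoryError("abort: page outside segment")
    frame = page_table[page]
    return (frame << OFFSET_BITS) | offset   # concatenate frame and offset

print(translate(3, 50, 432))   # (345 << 10) | 432 = 353712
```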
Questions?
Demand Paging/ Segmentation
Demand Paging – TLB
• A large amount of physical memory is required for the page
tables
• To read a word from memory –
• Translate logical address to physical address using page table
• Page number + offset -> frame address + offset
• Every memory reference requires two physical memory
accesses-
1. To fetch the appropriate page table entry to obtain the frame
address for computing the physical address
2. To fetch the desired data from the physical address
• To overcome this – use a special cache for page table entries
– called translation lookaside buffer (TLB)
• Similar to memory cache
• Contains page table entries that have been recently used
• Maintain it in the MMU
Demand Paging – TLB with Cache
The virtual memory mechanism is required to interact with the main-memory cache as follows:
1. CPU requests some logical address (page number + offset)
2. MMU consults the TLB to see if the requested page table entry is present
3. If present, the physical address (frame number + offset) is obtained
4. If not, the entry is accessed from the page table in memory
5. Once the physical address is generated, the cache is consulted to see if the block containing the word is present
6. If so, it is returned to the processor
7. If not, the word is retrieved from main memory
Question?
Demand Paging
• In simple paging – when a program is brought into
memory, all its pages are loaded into free frames in the
memory
• In demand paging – each page of a program is brought in
only when it is needed, that is, on demand
• Entire program may not physically be in memory during execution
– only a few pages are loaded as demanded by CPU
• If the program references data on a page not in memory, the MMU detects it and raises a page fault signal which acts as an interrupt – the OS then locates the page in the online storage and loads it into a memory frame
• Pages not referred are not needed to be loaded
• Less I/O
• Less Memory needed
• Higher degree of multiprogramming
Demand Paging – Virtual Memory
• With demand paging, it is not necessary to load an entire process
into main memory, allowing-
• Memory space made available to a program to be less than the size of the program + data
• More programs to be resident in memory
• Program + data size can be larger than the size of the main
memory
• The details of loading the appropriate pages of a program under
execution into the physical memory is left to the OS and the
hardware
• The programmer/user simply perceives a very large memory
space – part of which actually is provided in the online storage
• This illusion of large memory space created is referred to as
virtual memory.
• Virtual memory allows-
• For very effective multiprogramming
• Relieves the programmers of the constraints due to limited main
memory
Example: Virtual Memory
Ex. A computer system uses a processor that has 32GB address
space. The system has 1GB of physical memory. The MMU
uses pages of 1KB. What will be the following:
• No. of bits in-
• the Byte Offset field of address,
• the page no. field of logical address,
• the frame no. field of physical address
• No. of entries in the page table, width of each entry and the total size of the
page table?
• Virtual Address Space is 32GB => Virtual Address Length is 35-bits
• Page length is 1KB => Byte Offset field length is 10-bit
=> Page No. field length is- (35-10) bits = 25-bits
• Physical Address Space is 1GB => Physical Address Length is 30-bits
=> Frame No. field length is- (30-10) bits = 20-bits
• No. of entries in the Page Table = No. of Pages in the Virtual Address Space = 2^25
• Width of Page Table Entry = Length of Frame No. = 20 bits
• Size of the Page Table = Width of Entry x No. of Entries = 20 x 2^25 bits = 5 x 4 x 2^25 bits
  = 5 x 8 x 2^24 bits = 5 x 2^24 Bytes = 5 x 16 x 2^20 Bytes = 80 MB
Use of the Valid Bit in Page Table

▪ The Valid bit associated with a page table entry is set as-
• V = 1 ⇒ the page is in memory,
• V = 0 ⇒ the page is not in memory
▪ Initially, the valid-invalid bit is set to 0 on all entries.
• During address translation, if the MMU finds the Valid bit in the page table entry is 0 – it raises a Page Fault interrupt.
Handling a Page Fault

Step 1: Page fault interrupt occurs – the process is suspended and control is transferred to the Page-Fault Handler in the OS.
Step 2: Check if the virtual memory address is valid
Abort the process if invalid reference.
If (valid reference), and (page not in memory),
Continue.
Step 3: Find a free page frame,
Fetch the disk block(s) corresponding to the page into page frame.
If (disk read has completed),
Update the page table entry with the frame# and
Set Valid Bit to 1.
Step 4: Restart instruction interrupted by illegal address trap. The process
will continue as if page had always been in memory.
Steps in Handling a Page Fault
Absolute Paging

▪ If a process starts executing with no pages of it in memory
• The OS will fault to bring in pages as necessary, until eventually no more faults occur.
• This is called absolute or pure demand paging.
Performance of Demand Paging
▪ Page Fault Ratio: 0 ≤ p ≤ 1
• If p = 0, no page faults
• If p = 1, every reference is a page fault

▪ Effective Memory Access Time
  Teff = (1-p) x Tm + p x Tf
  where
    Tm is the memory access time
    Tf is the page fault service time
       = Page fault overhead
       + Swap out time for the replaced page (if modified)
       + Swap in time for the new page
       + Process resumption overhead
Demand Paging Example
Consider a system with Memory access time = 200 ns and
Average Page-fault service time = 8ms
EAT, Teff = (1 – p) x 200 + p (8ms)
= (1 – p) x 200 + p x 8,000,000 ns
= 200 + p x 7,999,800ns
• If one access out of 1,000 causes a page fault, then p = 0.001
Teff = 200 + 0.001 x 7,999,800 = 8.2 microseconds.

• This is a slowdown by a factor of 40!!

• During page fault handling the user process is put to sleep.
• This large access time therefore does not cause CPU idling.
• It also does not become visible to the process and hence is not considered to affect the process's average memory access time.
• It does, however, become a system overhead.
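A quick check of these numbers (plain arithmetic from the slide):

```python
def eat_ns(p: float, tm_ns: float = 200, tf_ns: float = 8_000_000) -> float:
    """Teff = (1 - p)*Tm + p*Tf."""
    return (1 - p) * tm_ns + p * tf_ns

print(eat_ns(0.001))          # 8199.8 ns, about 8.2 microseconds
print(eat_ns(0.001) / 200)    # roughly a 40x slowdown over Tm = 200 ns
```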
Questions?
Virtual Memory
▪ With demand paging, it is not necessary to load all the
pages of the program into main memory. Allows-
• Memory space made available to a program to be less than the
size of the program + data
• More programs to be resident in memory
• Program + data size can be larger than the size of the main
memory
• The logical address can be longer than the physical address
▪ The details of loading the appropriate pages of a
program under execution into the physical memory is
left to the OS and the hardware
• The programmer/user simply perceives a very large memory
space – which, actually, is provided in the online storage
• This illusion of large memory space created is referred to as
Virtual Memory.
Virtual Memory
▪ Virtual memory allows-
• Higher degree of multiprogramming
• Relieves the programmers from the constraints of
limited main memory space
▪ Virtual Memory can be implemented using
Demand Paging or Demand Segmentation
• Need to allow pages to be swapped in and out.
• When a new page is brought in, it must replace an
existing page, if no free page frame is available – this is
page replacement
• The phenomenon of locality of reference is used in
choosing which page to be replaced
• LRU is a popular page replacement policy
Virtual Memory that is Larger than Physical Memory

[Figure: a large virtual memory mapped onto a smaller physical memory, with the remaining pages held in online (disk) storage.]
Paging/ Segmentation Policies
Policies need to be adopted to minimize overhead due to page faults
and maximize utilization through high degree of multiprogramming
▪ Fetch Policy
• When should a page or segment be brought into primary memory from
secondary (disk) storage?
• Demand Fetch
• Anticipatory Fetch
▪ Placement Policy
• When a page or segment is brought into memory, where is it to be put?
• Paging – trivial
• Segmentation – significant problem
• Paged Segments – trivial
▪ Replacement Policy
• Which page/segment should be replaced when there is not enough room for a
required page/segment?
▪ Allocation Policy
• How many page frames / how much physical memory space should be allocated to a process?
Questions?
Page Replacement
▪ When a new page is to be brought in and if no free page frame
is available, an existing page needs to be swapped out to make
way for loading of the new page in the freed frame.
• There is need for selection of the page in memory to be swapped out.
• A page swapped out may need to be swapped in when a reference
occurs to the page later. This will cause a page fault and the
associated overhead.
• Hence a suitable policy is needed for selecting the page to be replaced, to minimize the number of page faults
▪ A swapped-out page needs to be copied back to the virtual
memory space (online storage image) of the process only if the
page is modified (the Modified/ Dirty bit is set).
Need For Page Replacement
Basic Page Replacement
▪ Find the location of the desired
page on disk
▪ Find a free frame:
• If there is a free frame, use it
• If there is no free frame, use
a page replacement
algorithm to select a victim
frame
▪ Bring the desired page into the
(newly) free frame; update the
page and frame tables
▪ Resume execution of the
process
Page Replacement Policies
▪ The Principle of Optimality
• Replace the page that will not be used again in the farthest time
into the future.
▪ Random Page Replacement
• Choose a page randomly
▪ FIFO - First in First Out
• Replace the page that has been in memory the longest.
▪ LRU - Least Recently Used
• Replace the page that has not been used for the longest time.
▪ LFU - Least Frequently Used
• Replace the page that is used least often.
▪ NUR - Not Used Recently
• An approximation to LRU

➢ Working Set Principle


▪ Keep in memory those pages that the process is actively using
• Active Locality – Phenomenon of Locality of Reference
First-In-First-Out (FIFO) Policy
▪ Reference String: 1,2,3,4,1,2,5,1,2,3,4,5
▪ Assume x frames (x pages can be in memory at a time per process).

x = 3, i.e. 3 frames: 9 page faults
  Frame 1: 1, 4, 5
  Frame 2: 2, 1, 3
  Frame 3: 3, 2, 4

x = 4, i.e. 4 frames: 10 page faults
  Frame 1: 1, 5, 4
  Frame 2: 2, 1, 5
  Frame 3: 3, 2
  Frame 4: 4, 3

▪ Belady's Anomaly – more frames does not necessarily mean fewer page faults
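A small FIFO simulator that reproduces the anomaly for this reference string (the set-plus-queue representation is illustrative):

```python
from collections import deque

def fifo_faults(refs, num_frames):
    frames, queue, faults = set(), deque(), 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == num_frames:        # full: evict the oldest page
                frames.remove(queue.popleft())
            frames.add(page)
            queue.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))   # 9
print(fifo_faults(refs, 4))   # 10 - more frames, more faults!
```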
Optimal Algorithm
▪ Replace the page that will not be used for the longest period of time.
• Requires knowledge of the future reference string
• Hence not feasible in practice.
▪ Generally used to measure how well an algorithm performs.

An Example: Allocated frames: 4
Reference String: 1,2,3,4,1,2,5,1,2,3,4,5

Ref Page#  1 2 3 4 1 2 5 1 2 3 4 5
Replaced   - - - - - - 4 - - - 1 -
Fault#     1 2 3 4 - - 5 - - - 6 -

Page Faults: 6
Least Recently Used (LRU) Policy
▪ Choose the page that has not been used for the longest period.
• Use the recent past as an approximation of the near future.
• May require hardware assistance to implement.

Example: Reference String: 1,2,3,4,1,2,5,1,2,3,4,5 with 4 frames: 8 page faults
  Frame 1: 1, then 5
  Frame 2: 2 throughout
  Frame 3: 3, then 5, then 4
  Frame 4: 4, then 3
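The same simulation for LRU, keeping recency order in an OrderedDict (an illustrative choice of structure):

```python
from collections import OrderedDict

def lru_faults(refs, num_frames):
    frames, faults = OrderedDict(), 0
    for page in refs:
        if page in frames:
            frames.move_to_end(page)            # refresh recency on a hit
        else:
            faults += 1
            if len(frames) == num_frames:
                frames.popitem(last=False)      # evict least recently used
            frames[page] = True
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(lru_faults(refs, 4))   # 8 page faults, as in the example
```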
Implementation of LRU algorithm

▪ Counter Implementation
• Every page entry has a counter; every time page is referenced
through this entry, copy the clock into the counter.
• When a page needs to be replaced, look at the counters to
determine which page to change (page with smallest time value).
▪ Stack Implementation
• Keep a stack of page numbers in a doubly linked form
• When a page is referenced
• move it (the most recently referenced page) to the top
• requires 6 pointers to be changed
• No search required for replacement
Use of a Stack to Record the Most
Recent Page References
LRU Approximation Algorithms
▪ Reference Bit
• With each page, associate a bit, initially = 0.
• When page is referenced, bit is set to 1.
• Replace any one of the pages for which reference bit is 0 (if any
exists).
▪ Additional Reference Bits Algorithm
• Record reference bits at regular intervals.
• Maintain an unsigned number (say, 8 bits) in a table for each page in
memory.
• Periodically, shift the reference bit into high-order bit, i.e. shift other
bits to the right, dropping the lowest bit.
• During page replacement, consider the page with the lowest number
as the LRU page.
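A sketch of this additional-reference-bits (aging) scheme; the 8-bit width, the three-page example and the tick()/victim() names are illustrative:

```python
history = {p: 0 for p in (1, 2, 3)}   # page -> 8-bit aging counter
ref_bit = {p: 0 for p in (1, 2, 3)}   # hardware-set reference bits

def tick():
    """Run once per sampling interval (e.g. from a timer interrupt)."""
    for page in history:
        # Shift right, reference bit enters at the high-order end.
        history[page] = (history[page] >> 1) | (ref_bit[page] << 7)
        ref_bit[page] = 0             # clear for the next interval

def victim():
    # Lowest counter value approximates the least recently used page.
    return min(history, key=history.get)

ref_bit[1] = ref_bit[2] = 1
tick()                  # pages 1 and 2 -> 0b10000000; page 3 stays 0
print(victim())         # 3
```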
LRU Approximation Algorithms

▪ Second Chance
• FIFO (clock) replacement algorithm
• Need a reference bit.
• When a page is selected, inspect the
reference bit.
• If the reference bit = 0, replace
the page.
• If page to be replaced (in clock
order) has reference bit = 1, then
• Set reference bit to 0
• Leave page in memory
• Replace next page (in clock
order) subject to same rules.
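A sketch of the clock scan; the four-frame state and reference bits are illustrative:

```python
frames = [1, 2, 3, 4]                 # pages currently resident
ref    = {1: 1, 2: 0, 3: 1, 4: 0}     # reference bits
hand   = 0                            # clock hand position

def replace(new_page):
    global hand
    while True:
        page = frames[hand]
        if ref[page] == 0:            # no second chance left: evict
            frames[hand] = new_page
            ref[new_page] = 1
            hand = (hand + 1) % len(frames)
            return page
        ref[page] = 0                 # give a second chance, move on
        hand = (hand + 1) % len(frames)

print(replace(5))   # evicts page 2 (page 1's bit is cleared first)
```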
LRU Approximation Algorithms
▪ Enhanced Second Chance
• Need a reference bit and a modified bit as an ordered pair.
(ref bit, modified bit)
• 4 situations are possible:
• (0,0) - Neither recently used nor modified – best page to replace.
• (0,1) - Not recently used, but modified – not quite as good, because
the page will need to be written out before replacement.
• (1,0) - Recently used but clean – likely be used again soon.
• (1,1) - Probably will be used again, will need to write out before
replacement.
• Used in the Macintosh virtual memory management scheme
Counting Algorithms
▪ Keep a counter of the number of references that have been made to each page.
• LFU (least frequently used) algorithm
• Replaces page with smallest count.
• Rationale : frequently used page should have a large
reference count.
• Variation - shift bits right, exponentially decaying count.
• MFU (most frequently used) algorithm
• Replaces page with highest count.
• Based on the argument that the page with the smallest count
was probably just brought in and has yet to be used.
Page Buffering Algorithm
▪ Keep pool of free frames
• Solution 1
• When a page fault occurs, choose victim frame.
• Desired page is read into free frame from pool before victim is
written out.
• Allows the process to restart soon; the victim is later written out and its frame added to the free-frame pool.
• Solution 2
• Maintain a list of modified pages. When paging device is idle,
write modified pages to disk and clear modify bit.
• Solution 3
• Keep frame contents in pool of free frames and remember which
page was in frame. If desired page is in free frame pool, no need
to page in.
Control Bits

[Figure: control-bit layouts for page protection and segmentation protection entries.]

Reference – the page has been accessed
Valid – the page exists
Resident – the page is cached in primary memory
Dirty – the page has been changed since being paged in
Questions?
Thank You
