Lecture On Virtual Memory
Objectives
Background
Code needs to be in memory to execute, but entire program rarely used
Entire program code not needed at same time
Consider ability to execute partially-loaded program
Program no longer constrained by limits of physical memory
Each program takes less memory while running -> more programs run at the same time
Increased CPU utilization and throughput with no increase in response time or turnaround time
Less I/O needed to load or swap programs into memory -> each user program runs faster
Background (Cont.)
Virtual memory – separation of user logical memory from physical memory
Only part of the program needs to be in memory for execution
Logical address space can therefore be much larger than physical address space
Allows address spaces to be shared by several processes
Allows for more efficient process creation
More programs running concurrently
Less I/O needed to load or swap processes
Background (Cont.)
Virtual address space – logical view of how a process is stored in memory
Usually starts at address 0, with contiguous addresses until the end of the space
Meanwhile, physical memory is organized in page frames
MMU must map logical to physical
Virtual memory can be implemented via:
Demand paging
Demand segmentation
Virtual Memory That is Larger Than Physical Memory
Virtual-address Space
Usually design logical address space for stack to start at max logical address and grow “down” while heap grows “up”
Maximizes address space use
Unused address space between the two is a hole
No physical memory needed until heap or stack grows into a given new page
Enables sparse address spaces with holes left for growth, dynamically linked libraries, etc.
System libraries shared via mapping into virtual address space
Shared memory by mapping pages read-write into virtual address space
Shared Library Using Virtual Memory
Demand Paging
Could bring entire process into memory at load time – swapping
Or bring a page into memory only when it is needed – demand paging
Less I/O needed, no unnecessary I/O
Less memory needed
Faster response
More users
Similar to a paging system with swapping
When a page is needed, reference it:
invalid reference -> abort
not-in-memory -> bring to memory
Lazy swapper – never swaps a page into memory unless that page will be needed
Swapper that deals with pages is a pager
Basic Concepts
With swapping, the pager guesses which pages will be used before the process is swapped out again
Instead of loading the whole process, the pager brings only those pages into memory
Paging involves dividing a process's memory into fixed-size pages and storing them in physical memory or secondary storage
When a process accesses a page that is not currently in memory, the hardware raises a page fault; the operating system then fetches the page from secondary storage and places it in physical memory
How to determine that set of pages?
Need new MMU functionality to implement demand paging
If a page is needed and not memory resident:
Need to detect the fault and load the page into memory from storage
Without changing program behavior
Without the programmer needing to change code
Valid-Invalid Bit
With each page table entry a valid–invalid bit is associated
(v -> in-memory – memory resident, i -> not-in-memory)
Initially the valid–invalid bit is set to i on all entries
Example of a page table snapshot:
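Since the snapshot figure is not reproduced here, the following is a minimal C sketch (all names hypothetical, not the textbook's code) of how the valid–invalid bit drives demand paging: a lookup that hits an entry marked i corresponds to a page fault the operating system must service before the access can complete.

```c
#include <stdbool.h>
#include <stdio.h>

#define NUM_PAGES 8

/* Hypothetical page-table entry: frame number plus a valid-invalid bit. */
struct pte {
    int  frame;   /* physical frame number, meaningful only if valid */
    bool valid;   /* true = memory resident (v), false = not in memory (i) */
};

struct pte page_table[NUM_PAGES];  /* all entries start as invalid (zeroed) */

/* Translate a page number; on an invalid entry, simulate servicing a page fault. */
int translate(int page)
{
    if (!page_table[page].valid) {
        printf("page fault on page %d: OS loads it from backing store\n", page);
        /* A real handler would find a free frame, schedule the disk read,
           update the entry, and restart the faulting instruction. */
        page_table[page].frame = page;      /* placeholder frame choice */
        page_table[page].valid = true;
    }
    return page_table[page].frame;
}

int main(void)
{
    translate(3);   /* first touch: faults, then becomes resident */
    translate(3);   /* second touch: hits, no fault */
    return 0;
}
```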
Page Table When Some Pages Are Not in Main Memory
Page Fault
Steps in Handling a Page Fault
Aspects of Demand Paging
Extreme case – start process with no pages in memory
OS sets instruction pointer to first instruction of process; it is non-memory-resident -> page fault
The same happens for every other page of the process on its first access
Pure demand paging
Actually, a given instruction could access multiple pages -> multiple page faults
Consider fetch and decode of an instruction which adds 2 numbers from memory and stores the result back to memory
Pain decreased because of locality of reference
Hardware support needed for demand paging:
Page table with valid / invalid bit
Secondary memory (swap device with swap space)
Instruction restart
Instruction Restart
Consider an instruction that could access several different locations, e.g. a block move
If a page fault occurs partway through, the partially completed work must be undone or remembered so that the instruction can be restarted without changing the outcome
Process creation
Virtual memory allows other benefits during process creation:
Copy-on-Write:
If the process forks (i.e., creates a new child process), the operating system uses a technique called copy-on-write to avoid duplicating the process's entire virtual address space. Instead, the child process initially shares the same physical pages as the parent process, and only makes copies of pages that are modified.
Memory-Mapped Files (later)
Copy-on-Write
Copy-on-Write (COW) allows both parent and child processes to initially share the same pages in memory
If either process modifies a shared page, only then is the page copied
COW allows more efficient process creation as only modified pages are copied
In general, free pages are allocated from a pool of zero-fill-on-demand pages
Pool should always have free frames for fast demand page execution
Don’t want to have to free a frame as well as do other processing on a page fault
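As a hedged illustration from the program's point of view (the page sharing itself happens inside the kernel and is not visible to the code), a child created with fork() on a copy-on-write system reads the parent's data without any explicit copy; only the page that is actually written gets duplicated:

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

static int shared_value = 42;   /* lives in a page shared copy-on-write after fork */

int main(void)
{
    pid_t pid = fork();
    if (pid < 0) {
        perror("fork");
        return EXIT_FAILURE;
    }
    if (pid == 0) {
        /* Child: reading would use the parent's physical page; this write
           causes the kernel to copy just that one page for the child. */
        shared_value = 100;
        printf("child sees %d\n", shared_value);
        _exit(0);
    }
    wait(NULL);
    /* Parent still sees its own, unmodified copy of the page. */
    printf("parent sees %d\n", shared_value);
    return 0;
}
```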
Before Process 1 Modifies Page C
After Process 1 Modifies Page C
What Happens if There is no Free Frame?
Page Replacement
Need For Page Replacement
Basic Page Replacement
1. Find the location of the desired page on disk
2. Find a free frame: if there is one, use it; if not, use a page-replacement algorithm to select a victim frame, and write the victim to disk if it has been modified
3. Bring the desired page into the (newly) free frame; update the page and frame tables
4. Restart the instruction that caused the page fault
Page Replacement
Page and Frame Replacement Algorithms
Graph of Page Faults Versus The Number of Frames
First-In-First-Out (FIFO) Algorithm
Reference string: 7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1
3 frames (3 pages can be in memory at a time per process)
15 page faults
Can vary by reference string: consider 1,2,3,4,1,2,5,1,2,3,4,5 with 3 frames vs. 4 frames
Adding more frames can cause more page faults!
Belady’s Anomaly
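A minimal FIFO simulation (hypothetical helper, not from the textbook) that counts faults for a reference string; running it on 1,2,3,4,1,2,5,1,2,3,4,5 with 3 and then 4 frames reproduces Belady’s Anomaly (9 vs. 10 faults):

```c
#include <stdio.h>

/* Count FIFO page faults for a reference string with a given frame count. */
int fifo_faults(const int *refs, int n, int frames)
{
    int frame[16];                     /* resident pages (assumes frames <= 16) */
    int used = 0, next_victim = 0, faults = 0;

    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int j = 0; j < used; j++)
            if (frame[j] == refs[i]) { hit = 1; break; }
        if (hit)
            continue;
        faults++;
        if (used < frames) {
            frame[used++] = refs[i];           /* free frame available */
        } else {
            frame[next_victim] = refs[i];      /* evict the oldest page */
            next_victim = (next_victim + 1) % frames;
        }
    }
    return faults;
}

int main(void)
{
    int refs[] = {1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5};
    int n = sizeof(refs) / sizeof(refs[0]);
    printf("3 frames: %d faults\n", fifo_faults(refs, n, 3));   /* 9  */
    printf("4 frames: %d faults\n", fifo_faults(refs, n, 4));   /* 10 */
    return 0;
}
```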
FIFO Illustrating Belady’s Anomaly
Optimal Algorithm
Replace the page that will not be used for the longest period of time
Once memory is full, apply optimal page replacement
This requires knowing the future reference string in advance
How do you know this? You can’t read the future
Used as a benchmark for measuring how well other algorithms perform
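A minimal sketch of the optimal (OPT) policy for offline analysis, assuming the entire reference string is known in advance; it is useful only as a yardstick for other algorithms:

```c
#include <stdio.h>

/* Optimal replacement: evict the resident page whose next use is farthest away. */
int opt_faults(const int *refs, int n, int frames)
{
    int frame[16];
    int used = 0, faults = 0;

    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int j = 0; j < used; j++)
            if (frame[j] == refs[i]) { hit = 1; break; }
        if (hit)
            continue;
        faults++;
        if (used < frames) {
            frame[used++] = refs[i];
            continue;
        }
        /* Look into the future: pick the frame whose page is used latest (or never). */
        int victim = 0, farthest = -1;
        for (int j = 0; j < frames; j++) {
            int next = n;                          /* "never used again" */
            for (int k = i + 1; k < n; k++)
                if (refs[k] == frame[j]) { next = k; break; }
            if (next > farthest) { farthest = next; victim = j; }
        }
        frame[victim] = refs[i];
    }
    return faults;
}

int main(void)
{
    int refs[] = {7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1};
    printf("OPT, 3 frames: %d faults\n",
           opt_faults(refs, sizeof(refs)/sizeof(refs[0]), 3));  /* 9 faults */
    return 0;
}
```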
Least Recently Used (LRU) Algorithm
Use past knowledge rather than future
Replace the page that has not been used for the longest period of time
Associate the time of last use with each page
Class Activity
Consider the following reference string:
1, 2, 3, 4, 2, 1, 5, 6, 2, 1, 2, 3, 7, 6, 3, 2, 1, 2, 3, 6
LRU Algorithm (Cont.)
Counter implementation
Every page entry has a counter; every time the page is referenced through this entry, copy the clock into the counter
When a page needs to be replaced, look at the counters to find the smallest value
Search through table needed
Stack implementation
Keep a stack of page numbers in doubly linked form:
Page referenced:
move it to the top
requires 6 pointers to be changed
But each update is more expensive
No search for replacement
LRU and OPT are cases of stack algorithms that don’t have Belady’s Anomaly
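A minimal sketch of the counter scheme described above (names hypothetical): every use stamps the resident page with a logical clock value, and the victim is the page with the smallest stamp. For the reference string used earlier and 3 frames this gives 12 faults:

```c
#include <stdio.h>

/* LRU via the "counter" scheme: stamp each resident page with a logical clock
   on every use, and evict the page with the smallest stamp. */
int lru_faults(const int *refs, int n, int frames)
{
    int page[16], stamp[16];
    int used = 0, clock = 0, faults = 0;

    for (int i = 0; i < n; i++) {
        clock++;
        int hit = -1;
        for (int j = 0; j < used; j++)
            if (page[j] == refs[i]) { hit = j; break; }
        if (hit >= 0) {
            stamp[hit] = clock;                 /* refresh time of last use */
            continue;
        }
        faults++;
        if (used < frames) {
            page[used] = refs[i];
            stamp[used] = clock;
            used++;
        } else {
            int victim = 0;                     /* smallest stamp = least recently used */
            for (int j = 1; j < frames; j++)
                if (stamp[j] < stamp[victim]) victim = j;
            page[victim] = refs[i];
            stamp[victim] = clock;
        }
    }
    return faults;
}

int main(void)
{
    int refs[] = {7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1};
    printf("LRU, 3 frames: %d faults\n",
           lru_faults(refs, sizeof(refs)/sizeof(refs[0]), 3));  /* 12 faults */
    return 0;
}
```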
Use Of A Stack to Record Most Recent Page References
LRU Approximation Algorithms
LRU needs special hardware and is still slow
Reference bit
With each page associate a bit, initially = 0
When page is referenced, bit set to 1
Replace any page with reference bit = 0 (if one exists)
We do not know the order, however
Second-chance algorithm
Generally FIFO, plus a hardware-provided reference bit
Clock replacement
If the page to be replaced has:
reference bit = 0 -> replace it
reference bit = 1, then:
– set reference bit to 0, leave page in memory
– replace next page, subject to same rules
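A minimal sketch of second-chance replacement, with the hardware reference bit modeled as a field the simulator sets on every access (all names hypothetical):

```c
#include <stdio.h>

#define FRAMES 3

/* Each frame holds a page number plus the (simulated) hardware reference bit. */
struct frame { int page; int ref; };

static struct frame frames[FRAMES];
static int hand = 0;          /* the clock hand sweeps the frames circularly */

/* Reference a page; return 1 on a page fault, 0 on a hit. */
int reference(int page)
{
    for (int i = 0; i < FRAMES; i++) {
        if (frames[i].page == page) {
            frames[i].ref = 1;                 /* hardware would set this bit */
            return 0;
        }
    }
    /* Fault: advance the hand, giving a second chance to pages with ref = 1. */
    while (frames[hand].ref == 1) {
        frames[hand].ref = 0;                  /* clear bit, leave page in memory */
        hand = (hand + 1) % FRAMES;
    }
    frames[hand].page = page;                  /* replace the victim */
    frames[hand].ref = 1;
    hand = (hand + 1) % FRAMES;
    return 1;
}

int main(void)
{
    for (int i = 0; i < FRAMES; i++) { frames[i].page = -1; frames[i].ref = 0; }

    int refs[] = {1, 2, 3, 1, 4, 5};
    int faults = 0;
    for (unsigned i = 0; i < sizeof(refs) / sizeof(refs[0]); i++)
        faults += reference(refs[i]);
    printf("faults: %d\n", faults);            /* 5 faults for this string */
    return 0;
}
```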
Second-Chance (clock) Page-Replacement Algorithm
Counting Algorithms
Keep a counter of the number of references that have
been made to each page
Not common
Allocation of Frames
Each process needs a minimum number of frames
Example: IBM 370 – 6 pages to handle SS MOVE instruction:
instruction is 6 bytes, might span 2 pages
2 pages to handle from
2 pages to handle to
Maximum of course is total frames in the system
Two major allocation schemes
fixed allocation
priority allocation
Many variations
Fixed Allocation
Equal allocation – for example, if there are 100 frames (after allocating frames for the OS) and 5 processes, give each process 20 frames
Keep some as a free frame buffer pool
Priority Allocation
Use a proportional allocation scheme using
priorities rather than size
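As a sketch of the underlying proportional formula (process i receives roughly a_i = s_i / S * m of the m free frames, with priority values substituting for the sizes s_i under priority allocation); the 62-frame, 10-page/127-page numbers below are purely illustrative:

```c
#include <stdio.h>

/* Proportional allocation: process i gets a_i = (s_i / S) * m frames,
   where s_i is its size (or priority) and m is the number of free frames. */
void proportional(const int *s, int nproc, int m)
{
    int total = 0;
    for (int i = 0; i < nproc; i++)
        total += s[i];
    for (int i = 0; i < nproc; i++)
        printf("process %d: %d frames\n", i, s[i] * m / total);
}

int main(void)
{
    int sizes[] = {10, 127};      /* two processes of 10 and 127 pages */
    proportional(sizes, 2, 62);   /* 62 free frames -> roughly 4 and 57 */
    return 0;
}
```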
Global vs. Local Allocation
Global replacement – process selects a replacement frame from the set of all frames; one process can take a frame from another
But then process execution time can vary greatly
But greater throughput, so more common
Local replacement – each process selects only from its own set of allocated frames
More consistent per-process performance
But possibly underutilized memory
Non-Uniform Memory Access
So far all memory accessed equally
Many systems are NUMA – speed of access to memory varies
Consider system boards containing CPUs and memory, interconnected over a system bus
Optimal performance comes from allocating memory “close to” the CPU on which the thread is scheduled
And modifying the scheduler to schedule the thread on the same system board when possible
Solaris addresses this by creating lgroups
Structure to track CPU / memory low-latency groups
Used by the scheduler and pager
When possible, schedule all threads of a process and allocate all memory for that process within the lgroup
Thrashing
Consider what occurs if a process does not have “enough” frames; that is, it does not have the minimum number of frames it needs to support pages in the working set.
The process will quickly page-fault. At this point, it must replace some page.
However, since all its pages are in active use, it must replace a page that will be needed again right away.
Consequently, it quickly faults again, and again, and again, replacing pages that it must bring back in immediately.
This high paging activity is called thrashing.
A process is thrashing if it is spending more time paging than executing.
As you might expect, thrashing results in severe performance problems.
Thrashing
If a process does not have “enough” pages, the page-fault rate is very high
Page fault to get page
Replace existing frame
But quickly need replaced frame back
This leads to:
Low CPU utilization
Operating system thinking that it needs to increase the degree of multiprogramming
Another process added to the system
Memory-Mapped Files
Memory-mapped files are a technique used in computer systems to allow programs to access files stored on disk as if they were in main memory.
This is achieved by mapping a region of the file directly into memory, allowing the program to access the file's contents using normal memory operations.
The operating system handles the actual implementation of memory-mapped files, generally by creating a virtual memory address range that corresponds to the file's contents.
When the program accesses this memory, the operating system automatically supplies the required data, either from memory or by reading it from the disk if necessary.
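A minimal POSIX example using mmap(): after the call, the file's bytes are read through ordinary pointer operations and the OS pages the data in on demand. The file name is only a placeholder:

```c
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    int fd = open("data.txt", O_RDONLY);          /* placeholder file name */
    if (fd < 0) { perror("open"); return EXIT_FAILURE; }

    struct stat sb;
    if (fstat(fd, &sb) < 0) { perror("fstat"); return EXIT_FAILURE; }

    /* Map the whole file read-only; pages are loaded lazily on first access. */
    char *data = mmap(NULL, sb.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (data == MAP_FAILED) { perror("mmap"); return EXIT_FAILURE; }
    close(fd);                                     /* mapping stays valid */

    /* Touching the bytes below may page-fault; the OS reads from the file. */
    for (off_t i = 0; i < sb.st_size; i++)
        putchar(data[i]);

    munmap(data, sb.st_size);
    return 0;
}
```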
Memory Mapped Files
Shared Memory via Memory-Mapped I/O
Buddy System
Allocates memory from fixed-size segment consisting of physically contiguous pages
Memory allocated using power-of-2 allocator
Satisfies requests in units sized as power of 2
Request rounded up to next highest power of 2
When smaller allocation needed than is available, current chunk split into two buddies of next-lower power of 2
Continue until appropriate sized chunk available
For example, assume 256KB chunk available, kernel requests 21KB
Split into AL and AR of 128KB each
One further divided into BL and BR of 64KB
One further divided into CL and CR of 32KB each – one used to satisfy request
Advantage – quickly coalesce unused chunks into larger chunk
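A minimal sketch of the rounding-and-splitting logic in the example above (helper name hypothetical): the 21KB request is rounded up to 32KB, and the 256KB chunk is halved until a 32KB buddy is produced:

```c
#include <stdio.h>

/* Round a request up to the next power of two (in KB). */
unsigned round_up_pow2(unsigned kb)
{
    unsigned size = 1;
    while (size < kb)
        size <<= 1;
    return size;
}

int main(void)
{
    unsigned chunk = 256;                     /* 256 KB chunk available */
    unsigned need  = round_up_pow2(21);       /* 21 KB request -> 32 KB  */

    /* Split the chunk into buddies of the next-lower power of 2 until
       the piece is just large enough to satisfy the rounded request.  */
    while (chunk / 2 >= need) {
        chunk /= 2;
        printf("split -> two buddies of %u KB\n", chunk);
    }
    printf("allocate a %u KB buddy for the 21 KB request\n", chunk);
    return 0;
}
```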
Buddy System Allocator
Slab Allocator
Alternate strategy
Slab is one or more physically contiguous pages
Cache consists of one or more slabs
Single cache for each unique kernel data structure
Each cache filled with objects – instantiations of the data structure
When cache created, filled with objects marked as free
When structures stored, objects marked as used
If slab is full of used objects, next object allocated from empty slab
If no empty slabs, new slab allocated
Benefits include no fragmentation, fast memory request satisfaction
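A minimal user-space sketch of the idea, not the kernel's actual slab code: a slab is a contiguous block pre-carved into equal-sized objects, and allocation simply marks an object used, so there is no fragmentation:

```c
#include <stdio.h>

#define OBJ_SIZE      64     /* size of the kernel data structure being cached */
#define OBJS_PER_SLAB 8

/* One slab: a contiguous block pre-divided into equal-sized objects. */
struct slab {
    unsigned char mem[OBJ_SIZE * OBJS_PER_SLAB];
    int used[OBJS_PER_SLAB];             /* 0 = free, 1 = in use */
};

/* Allocate one object from the slab, or NULL if the slab is full. */
void *slab_alloc(struct slab *s)
{
    for (int i = 0; i < OBJS_PER_SLAB; i++) {
        if (!s->used[i]) {
            s->used[i] = 1;
            return &s->mem[i * OBJ_SIZE];
        }
    }
    return NULL;                          /* caller would move on to another slab */
}

/* Return an object to the slab; it is simply marked free again. */
void slab_free(struct slab *s, void *obj)
{
    int i = (int)(((unsigned char *)obj - s->mem) / OBJ_SIZE);
    s->used[i] = 0;
}

int main(void)
{
    struct slab s = {0};
    void *a = slab_alloc(&s);
    void *b = slab_alloc(&s);
    slab_free(&s, a);
    printf("allocated %p and %p, then freed the first\n", a, b);
    return 0;
}
```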
Slab Allocation
Operating System Examples
Windows
Solaris
Windows
Uses demand paging with clustering. Clustering brings in pages surrounding the faulting page
Processes are assigned a working set minimum and a working set maximum
Working set minimum is the minimum number of pages the process is guaranteed to have in memory
A process may be assigned as many pages as needed, up to its working set maximum
When the amount of free memory in the system falls below a threshold, automatic working set trimming is performed to restore the amount of free memory
Working set trimming removes pages from processes that have pages in excess of their working set minimum
Solaris
Maintains a list of free pages to assign to faulting processes
Lotsfree – threshold parameter (amount of free memory) to begin paging
Desfree – threshold parameter to increase paging
Minfree – threshold parameter to begin swapping
Paging is performed by the pageout process
Pageout scans pages using a modified clock algorithm
Scanrate is the rate at which pages are scanned; this ranges from slowscan to fastscan
Pageout is called more frequently depending upon the amount of free memory available
Priority paging gives priority to process code pages
Class Activity
Consider the following reference string:
2, 3, 7, 6, 3, 2, 1, 2, 3, 6
End of Chapter 10