Chapter 9: Virtual Memory
! Background
! Demand Paging
! Copy-on-Write
! Page Replacement
! Allocation of Frames
! Thrashing
! Memory-Mapped Files
! Other Considerations
! Operating-System Examples
! Illustrate how pages are loaded into memory using demand paging.
! Observation
4 Logical address space can be much larger than physical address space
! Demand paging
! Demand segmentation
! Meanwhile, physical memory is organized in page frames
! Faster response
! More users
! Lazy swapper – never swaps a page into memory unless the page will be
needed (a swapper that deals with pages is a pager)
! With swapping, the pager would have to guess which pages will be used
before the process is swapped out again
! Instead, the pager brings only the needed pages into memory
! How to determine that set of pages?
! Need new MMU functionality to implement demand paging
! If the pages needed are already memory resident
! No difference from non-demand paging
! If the pages needed are not memory resident
! Need to detect and load the page into memory from storage
4 Without changing program behavior
4 Without programmer needing to change code
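A minimal sketch of the MMU-side check this requires, with hypothetical names (pte_t, raise_page_fault): each page-table entry carries a valid (memory-resident) bit, and a reference to a non-resident page traps to the operating system as a page fault.

/* Sketch only: the types and the trap helper are hypothetical. */
#include <stdbool.h>
#include <stddef.h>

typedef struct {
    size_t frame;   /* physical frame number, meaningful only when valid */
    bool   valid;   /* true if the page is memory resident */
} pte_t;

void raise_page_fault(size_t vpn);   /* hypothetical trap into the OS */

/* Translate a virtual page number and offset; fault if the page is not resident. */
size_t translate(const pte_t *page_table, size_t vpn, size_t offset, size_t page_size)
{
    if (!page_table[vpn].valid)
        raise_page_fault(vpn);       /* OS loads the page, then the access is retried */
    return page_table[vpn].frame * page_size + offset;
}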
! When a page fault occurs, the operating system must bring the
desired page from secondary storage into main memory
! Most operating systems maintain a free-frame list – a pool of free
frames for satisfying such requests
! When a system starts up, all available memory is placed on the free-
frame list
! Among the steps in servicing a page fault:
4. Check that the page reference was legal and determine the location
of the page on the disk
5. Issue a read from the disk to a free frame:
    1. Wait in a queue for this device until the read request is serviced
8. Save the registers and process state for the other user processes
12. Restore the user registers, process state, and new page table, and
then resume the interrupted instruction
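A sketch of the OS side of this sequence, under the same hedging (process_t, allocate_free_frame, read_page_from_storage, and the other names are hypothetical): validate the reference, take a frame from the free-frame list, read the page in, update the tables, and restart the faulting instruction.

#include <stdbool.h>
#include <stddef.h>

typedef struct { size_t frame; bool valid; } pte_t;
typedef struct { pte_t *page_table; } process_t;

bool   reference_is_legal(process_t *p, size_t vpn);
void   abort_process(process_t *p);
size_t allocate_free_frame(void);                /* pops a frame off the free-frame list */
void   read_page_from_storage(process_t *p, size_t vpn, size_t frame);
void   resume_faulting_instruction(process_t *p);

void handle_page_fault(process_t *p, size_t vpn)
{
    if (!reference_is_legal(p, vpn)) {           /* step 4: check the reference is legal */
        abort_process(p);
        return;
    }
    size_t frame = allocate_free_frame();        /* take a frame from the free-frame list */
    read_page_from_storage(p, vpn, frame);       /* step 5: issue the read; the process waits
                                                    in the device queue until it is serviced */
    p->page_table[vpn].frame = frame;            /* update the page and frame tables */
    p->page_table[vpn].valid = true;
    resume_faulting_instruction(p);              /* step 12: restore state, restart instruction */
}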
! Swap space I/O faster than file system I/O even if on the same device
! Swap allocated in larger chunks, less management needed than file system
! Copy entire process image to swap space at process load time, then page in
and out of swap space
! Used in older BSD Unix
! Demand page in from program binary on disk, but discard rather than paging
out when freeing frame
! Used in Solaris and current BSD
! Still need to write to swap space
4 Pages not associated with a file (like stack and heap) – anonymous memory
4 Pages modified in memory but not yet written back to the file system
! Mobile systems
! Typically don’t support swapping
! Instead, demand page from file system
and reclaim read-only pages (such as code)
! Page replacement – find some page in memory, but not really in use,
page it out
! Algorithm – terminate? swap out? replace the page?
! Basic page replacement:
1. Find the location of the desired page on secondary storage
2. Find a free frame: if there is one, use it; if not, use a page-replacement
algorithm to select a victim frame (writing the victim out if it has been modified)
3. Bring the desired page into the (newly) free frame; update the page
and frame tables
4. Continue the process by restarting the instruction that caused the trap
15 page faults
[Figure: number of page faults versus number of frames (1–7)]
! OPT: replace the page that will not be used for the longest period of time
! LRU: replace the page that has not been used for the longest period of time
! Counter implementation
! Every page entry has a counter; every time page is referenced through
this entry, copy the clock into the counter
! When a page needs to be changed, check counters to find smallest value
4 Search through table needed
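A small sketch of this counter scheme (fixed-size table, illustrative names): the logical clock is copied into the page's counter on every reference, and the victim is found by a linear search for the smallest value.

#include <stddef.h>

#define NPAGES 8

static unsigned long logical_clock = 0;
static unsigned long time_of_use[NPAGES];    /* one counter per resident page */

void on_reference(size_t page)
{
    time_of_use[page] = ++logical_clock;     /* copy the clock into the counter */
}

size_t pick_lru_victim(void)
{
    size_t victim = 0;
    for (size_t p = 1; p < NPAGES; p++)      /* search through the table */
        if (time_of_use[p] < time_of_use[victim])
            victim = p;
    return victim;
}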
! Stack implementation
! Keep a stack of page numbers in doubly linked form:
! Page referenced:
4 move it to the top
4 requires up to 6 pointers to be changed (as in the reference-string example)
! But each update more expensive
! No search for replacement
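A sketch of the stack variant with a doubly linked list (node allocation and page lookup are omitted; names are illustrative): a referenced page is unlinked and relinked at the top, so the bottom node is always the LRU page and no search is needed at replacement time.

#include <stddef.h>

typedef struct node {
    size_t       page;
    struct node *prev, *next;
} node_t;

static node_t *top = NULL, *bottom = NULL;

/* Page referenced: move its (already linked) node to the top of the stack. */
void move_to_top(node_t *n)
{
    if (n == top) return;                 /* already the most recently used page */

    /* unlink n: at most 6 pointer updates in total, as noted above */
    if (n->prev) n->prev->next = n->next;
    if (n->next) n->next->prev = n->prev;
    if (n == bottom) bottom = n->prev;

    n->prev = NULL;                       /* relink n at the top */
    n->next = top;
    if (top) top->prev = n;
    top = n;
    if (!bottom) bottom = n;
}

/* No search for replacement: the LRU victim is the bottom of the stack. */
node_t *lru_victim(void) { return bottom; }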
! LRU and OPT are cases of stack algorithms that don’t have
Belady’s Anomaly
Use Of A Stack to Record Most Recent
Page References
reference string: 4 7 0 7 1 0 1 2 1 2 7 1 2
[Figure: the stack of page numbers (a) before and (b) after page 7 is referenced – before: 2 1 0 7 4 (top to bottom); after: 7 2 1 0 4]
[Figure: second-chance (clock) page-replacement algorithm – circular queue of pages with reference bits and a next-victim pointer]
! Improve algorithm by using reference bit and modify bit (if available)
in concert
! Take ordered pair (reference, modify):
! (0, 0) neither recently used nor modified => best page to replace
! (0, 1) not recently used but modified => not quite as good, must write out
before replacement
! (1, 0) recently used but clean => probably will be used again soon
! (1, 1) recently used and modified => probably will be used again soon
and need to write out before replacement
! When page replacement is called for, use the clock scheme but with the
four classes: replace the first page encountered in the lowest non-empty class
(see the sketch below)
! Might need to search circular queue several times
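A sketch of this enhanced second-chance selection (the frame table and bit maintenance are hypothetical; hardware is assumed to set the reference and modify bits): pages are ranked by their (reference, modify) class, and the first page found in the lowest non-empty class is replaced, which may take several sweeps of the circular queue.

#include <stdbool.h>
#include <stddef.h>

#define NFRAMES 64

typedef struct {
    size_t page;
    bool   referenced;   /* set by hardware on any access */
    bool   modified;     /* set by hardware on a write */
} frame_t;

static frame_t frames[NFRAMES];
static size_t  hand = 0;                  /* clock hand into the circular queue */

size_t pick_victim(void)
{
    /* Look for class (0,0) first, then (0,1), then (1,0), then (1,1). */
    for (int wanted = 0; wanted < 4; wanted++) {
        for (size_t scanned = 0; scanned < NFRAMES; scanned++) {
            frame_t *f = &frames[hand];
            int cls = (f->referenced ? 2 : 0) + (f->modified ? 1 : 0);
            size_t here = hand;
            hand = (hand + 1) % NFRAMES;  /* advance the hand in any case */
            if (cls == wanted)
                return here;              /* lowest non-empty class wins */
        }
    }
    return hand;                          /* unreachable: some class is always non-empty */
}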
! Reference string: 0 1 7 2 3 2 7 1 0 3
How many page faults would occur for the following replacement algorithms,
assuming three empty frames?
Remember all frames are initially empty, so your first unique pages will all cost
one fault each.
• LRU replacement
• FIFO replacement
• Optimal replacement
• Clock (Second-chance)
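For checking answers, a small simulator sketch for the FIFO case (the reference string and frame count are taken from the exercise above; LRU, optimal, and clock would each need their own victim-selection rule):

#include <stdio.h>

#define NFRAMES 3

int main(void)
{
    int refs[] = {0, 1, 7, 2, 3, 2, 7, 1, 0, 3};
    int nrefs = (int)(sizeof refs / sizeof refs[0]);

    int frames[NFRAMES];
    int used = 0, next = 0, faults = 0;

    for (int i = 0; i < nrefs; i++) {
        int hit = 0;
        for (int f = 0; f < used; f++)
            if (frames[f] == refs[i]) { hit = 1; break; }

        if (!hit) {
            faults++;
            if (used < NFRAMES)
                frames[used++] = refs[i];     /* fill an empty frame first */
            else {
                frames[next] = refs[i];       /* FIFO: evict the oldest page */
                next = (next + 1) % NFRAMES;
            }
        }
    }
    printf("FIFO page faults: %d\n", faults);
    return 0;
}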
! e.g., 2 pages may be needed just to handle the to (destination) operand of a
single instruction – one reason each process needs a minimum number of frames
! fixed allocation
! priority allocation
! Local replacement – each process selects from only its own set of
allocated frames
! More consistent per-process performance
4 When possible, schedule all threads of a process and allocate all memory for
that process within the lgroup
! Locality model
[Figure: locality in a memory-reference pattern – page numbers of memory addresses versus execution time]
! The working set is an approximation of the program's locality
! If D > m ⇒ thrashing (D = Σ WSSi = total demand for frames, m = total number of frames)
! One policy: if D > m, then suspend or swap out one of the processes
[Figure (from Feitelson): working-set example with D = 2 and D = 6]
! Example: Δ = 10,000
! Whenever the timer interrupts, copy each reference bit into an in-memory
history bit and then set all reference bits to 0 (see the sketch below)
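A sketch of this approximation (array sizes and the interrupt period are illustrative): two in-memory history bits per page are refreshed from the hardware reference bit on each timer interrupt, and a page is treated as part of the working set if its reference bit or either history bit is set.

#include <stdbool.h>
#include <stddef.h>

#define NPAGES 1024

static bool reference_bit[NPAGES];   /* set by hardware on each access to the page */
static bool history[NPAGES][2];      /* snapshots taken at the last two timer ticks */

/* Called on every timer interrupt (e.g., every 5,000 time units when Δ = 10,000). */
void on_timer_interrupt(void)
{
    for (size_t p = 0; p < NPAGES; p++) {
        history[p][1] = history[p][0];       /* age the older snapshot */
        history[p][0] = reference_bit[p];    /* copy the reference bit ... */
        reference_bit[p] = false;            /* ... and set it to 0 */
    }
}

bool in_working_set(size_t p)
{
    return reference_bit[p] || history[p][0] || history[p][1];
}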
[Figure: page-fault frequency scheme – page-fault rate versus number of frames, with an upper bound (above it, increase the number of frames) and a lower bound (below it, decrease the number of frames)]
Working Sets and Page Fault Rates
[Figure: buddy-system splitting of physically contiguous memory – a 256 KB segment divided into 128 KB buddies AL and AR, then 64 KB buddies BL and BR, then 32 KB buddies CL and CR]
! Pre-paging
! Page size
! TLB reach
! Program structure
! But if pre-paged pages are unused, I/O and memory were wasted
! Assume s pages are pre-paged and a fraction α of them is actually used: is the
cost of the s * α page faults saved greater or less than the cost of pre-paging
the s * (1 - α) unnecessary pages?
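! For instance, with illustrative numbers only: if s = 100 pages are pre-paged and
α = 0.9, pre-paging saves about 90 page faults while bringing in only 10 unneeded
pages, which is likely a win; if α is near zero, almost all of the pre-paging I/O
and memory is wasted, and pre-paging loses.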
! This allows applications that require larger page sizes the opportunity to
use them without an increase in fragmentation
! Program structure
! int data[128][128];
! Each row is stored in one page
! Program 2 (row-by-row access):
for (i = 0; i < 128; i++)
    for (j = 0; j < 128; j++)
        data[i][j] = 0;
128 page faults
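For contrast, a sketch of the column-by-column ordering this "Program 2" is implicitly compared against (assuming, as above, that each 128-int row occupies its own page and the process holds only a few frames):

/* Program 1 (column-by-column): roughly 128 x 128 = 16,384 page faults,
 * because each inner-loop iteration touches a different row, hence a
 * different page. Assumes one row per page and few allocated frames. */
int data[128][128];

void program1(void)
{
    for (int j = 0; j < 128; j++)
        for (int i = 0; i < 128; i++)
            data[i][j] = 0;
}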
! Consider I/O – Pages that are used for copying a file from a device
must be locked from being selected for eviction by a page
replacement algorithm
! TLB reach refers to the amount of memory accessible from the TLB
and is equal to the number of entries in the TLB multiplied by the
page size. One technique for increasing TLB reach is to increase the
size of pages.
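! For example, a TLB with 64 entries and 4 KB pages has a reach of 64 × 4 KB = 256 KB;
with the same 64 entries but 2 MB pages, the reach grows to 64 × 2 MB = 128 MB.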
! Linux, Windows, and Solaris manage virtual memory similarly,
using demand paging and copy-on-write, among other features. Each
system also uses a variation of LRU approximation known as the
clock algorithm.