Mem 4
• Any page not in main memory right now has the “present” bit cleared in its
page table entry.
• When a page fault occurs:
– The operating system brings the page into memory.
– The page table is updated and the “present” bit is set.
– The process continues execution.
Continuing the process is potentially tricky, since the page fault may have
occurred in the middle of an instruction. We don’t want the user process to be
aware that the page fault even happened.
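The fault-handling steps above can be sketched in C. This is a minimal illustration, not any real kernel’s API: the PTE layout and the helpers `find_free_frame` and `read_from_backing_store` are assumed stubs.

```c
#include <stddef.h>

#define PTE_PRESENT 0x1u

/* Minimal page-table entry: a frame number plus a flags word
 * holding the "present" bit.  Layout is illustrative only. */
struct pte {
    size_t frame;
    unsigned flags;
};

/* Stubs standing in for real kernel services. */
static size_t find_free_frame(void) { return 7; /* stub */ }
static void read_from_backing_store(size_t frame, size_t vpn)
{ (void)frame; (void)vpn; /* stub: fetch page contents from disk */ }

/* Handle a fault on virtual page vpn, following the steps above:
 * bring the page into memory, update the page table, set the
 * "present" bit.  The faulting instruction is then restarted so the
 * user process never notices the fault happened. */
void handle_page_fault(struct pte table[], size_t vpn)
{
    size_t frame = find_free_frame();     /* may itself evict a page */
    read_from_backing_store(frame, vpn);  /* OS brings page into memory */
    table[vpn].frame = frame;
    table[vpn].flags |= PTE_PRESENT;      /* page table updated, bit set */
}
```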
Once the hardware has provided basic capabilities for virtual memory, the OS
must make two kinds of scheduling decisions:
• Page fetching: when to bring pages into memory.
• Page replacement: which pages to throw out of memory.
Operating Systems 80
MIN is optimal (can’t be beaten), but the principle of locality states that past
behavior predicts future behavior, thus LRU should do just about as well.
Implementing LRU: need some form of hardware support in order to keep track of
which pages have been used recently.
• Perfect LRU? Keep a register for each page, and store the system clock into
that register on each memory reference. To replace a page, scan through all
of them to find the one with the oldest clock. This is expensive if there are a
lot of memory pages.
• In practice, nobody implements perfect LRU. Instead, we settle for an
approximation that is efficient. Just find an old page, not necessarily the
oldest. LRU is just an approximation anyway so why not approximate a
little more?
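The perfect-LRU bookkeeping described above can be sketched as follows. This is a toy model: the timestamp store that hardware would perform on every reference is simulated by an explicit function, and the counter stands in for the system clock.

```c
#include <stddef.h>
#include <stdint.h>

#define NPAGES 8  /* illustrative number of page frames */

/* One timestamp register per frame, written on every reference. */
static uint64_t last_used[NPAGES];
static uint64_t clock_ticks;  /* stand-in for the system clock */

/* In perfect LRU the hardware would perform this store on every
 * memory reference. */
void on_reference(size_t frame)
{
    last_used[frame] = ++clock_ticks;
}

/* To replace a page, scan all frames for the oldest timestamp.
 * This O(n) scan on every fault is why nobody implements it. */
size_t pick_victim(void)
{
    size_t victim = 0;
    for (size_t i = 1; i < NPAGES; i++)
        if (last_used[i] < last_used[victim])
            victim = i;
    return victim;
}
```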
Clock algorithm: keep a “use” bit for each page frame; the hardware sets the
appropriate bit on every memory reference. The operating system clears the bits
from time to time in order to figure out how often pages are being referenced.
To find a page to throw out, the OS circulates through the physical frames,
clearing use bits as it goes, until it finds a frame whose bit is already zero;
it uses that one. Hence the clock analogy: the hand sweeps around the frames.
Some systems also use a “dirty” bit to prefer evicting clean pages. Throwing
out a dirty page is more expensive, since it must be written to the backing
store; a clean page need not be. When the clock algorithm finds an unused but
dirty page, it schedules a disk write for the page and continues (i.e., the
page gets a “second chance” while it is being written to disk). On the next
rotation of the clock, if the page is still unused, it will now be “clean” and
can be replaced.
The “use” and “dirty” bits are implemented by the TLB: the hardware sets the
“use” bit upon each access to the page and the “dirty” bit upon each write to the
page. When a TLB entry is evicted, the bits are written to the corresponding
page table entry.
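The clock sweep with the dirty-bit refinement can be sketched as follows. The frame layout and the `schedule_writeback` stub are illustrative; a real OS would queue an asynchronous disk write there.

```c
#include <stdbool.h>
#include <stddef.h>

/* Per-frame state; field names are illustrative. */
struct frame {
    bool use;    /* set by hardware on each reference */
    bool dirty;  /* set by hardware on each write */
};

/* Stub: a real OS would queue an asynchronous disk write here. */
static void schedule_writeback(size_t frame) { (void)frame; }

/* Advance the clock hand until a victim is found.  Recently used
 * frames get their use bit cleared and are skipped; dirty frames get
 * a write-back scheduled and a "second chance"; an unused, clean
 * frame is the victim. */
size_t clock_select(struct frame frames[], size_t nframes, size_t *hand)
{
    for (;;) {
        struct frame *f = &frames[*hand];
        size_t victim = *hand;
        *hand = (*hand + 1) % nframes;

        if (f->use) {
            f->use = false;             /* recently used: clear, move on */
        } else if (f->dirty) {
            schedule_writeback(victim); /* clean it in the background */
            f->dirty = false;           /* clean by the next rotation */
        } else {
            return victim;              /* unused and clean: evict */
        }
    }
}
```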
What does it mean if the clock hand is sweeping very slowly? (plenty of memory,
not many page faults, good)
What does it mean if the clock hand is sweeping very fast? (not enough memory,
thrashing, or threshold is too high)
Note: when contention for physical memory is high, there are many page faults
and the clock rotates fast; therefore, the clock algorithm has more fine-grained
information about recent page accesses and makes better replacement decisions.
When contention is low, the clock rotates slowly; it may make less precise
replacement decisions, but its overhead is also low.
• Each time one page is brought in, another page, whose contents will soon be
referenced, is thrown out.
• Compute average memory access time.
• The system will spend all of its time reading and writing pages. It will be
working very hard but not getting much done.
• The progress of the programs will make it look like the access time of main
memory is as slow as backing store, rather than the backing store being as
fast as main memory.
• Thrashing was a severe problem in early demand paging systems.
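To make the average-memory-access-time bullet concrete, here is a worked example. The numbers (a 100 ns memory access, a 10 ms page-fault service time) are illustrative assumptions, not figures from the notes:

```latex
t_{avg} = (1 - p)\,t_{mem} + p\,t_{fault}
        = 0.999 \times 100\,\mathrm{ns} + 0.001 \times 10\,\mathrm{ms}
        \approx 10.1\,\mu\mathrm{s} \qquad (\text{for } p = 0.001)
```

Even at one fault per thousand references, the average access time is roughly 100 times slower than main memory, which is exactly the “memory as slow as backing store” effect described above.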
Thrashing occurs because the system doesn’t know when it has taken on more
work than it can handle. LRU mechanisms order pages in terms of last access,
but don’t give absolute numbers indicating pages that mustn’t be thrown out.
What do humans do when thrashing? If flunking all courses at midterm time,
drop one.
What can be done?
• If a single process is too large for memory, there is nothing the OS can do.
That process will simply thrash. Note course analogy.
• If the problem arises because of the sum of several processes:
– Figure out how much memory each process needs.
– Change scheduling priorities to run processes in groups whose memory
needs can be satisfied. Shed load.
• A process will never be executed unless its working set (the pages it has
referenced during the most recent tau units of its CPU time) is resident in
main memory. Pages outside the working set may be replaced at any time.
Working sets are not enough by themselves to make sure memory doesn’t get
overcommitted. We must also introduce the idea of a balance set:
• If the sum of the working sets of all runnable processes is greater than the
size of memory, then refuse to run some of the processes (for a while).
• Divide runnable processes up into two groups: active and inactive. When a
process is made active its working set is loaded, when it is made inactive its
working set migrates to backing store. The collection of active processes is
called the balance set.
• Some algorithm must be provided for moving processes into and out of the
balance set. What happens if the balance set changes too frequently? (Still
get thrashing)
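One simple admission policy for the balance set can be sketched as follows. This is an assumed illustration, not a specific system’s algorithm: processes are considered in some fixed order (e.g., by priority) and admitted while the sum of their working-set sizes still fits in physical memory.

```c
#include <stdbool.h>
#include <stddef.h>

struct proc {
    size_t ws_size;  /* estimated working-set size, in pages */
    bool active;     /* member of the balance set? */
};

/* Admit processes, in the order given, while the sum of admitted
 * working sets fits in memory.  Processes left inactive have their
 * working sets migrated to backing store and wait their turn. */
void choose_balance_set(struct proc procs[], size_t n, size_t mem_pages)
{
    size_t committed = 0;
    for (size_t i = 0; i < n; i++) {
        if (committed + procs[i].ws_size <= mem_pages) {
            committed += procs[i].ws_size;
            procs[i].active = true;   /* load working set, allow to run */
        } else {
            procs[i].active = false;  /* refuse to run for a while */
        }
    }
}
```

Note that if this decision is re-made too frequently, working sets shuttle back and forth to backing store and the system still thrashes, as the bullet above points out.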
Problem with the working set: must constantly be updating working set
information.
• One of the initial plans was to store some sort of a capacitor with each
memory page. The capacitor would be charged on each reference, then would
discharge slowly if the page wasn’t referenced. Tau would be determined by
the size of the capacitor. This wasn’t actually implemented. One problem is
that we want separate working sets for each process, so the capacitor should
only be allowed to discharge when a particular process executes. What if a
page is shared?
• Actual solution: take advantage of use bits.
– OS maintains idle time value for each page: amount of CPU time
received by process since last access to page.
– Every once in a while, scan all pages of a process. For each use bit on,
clear page’s idle time. For use bit off, add process’ CPU time (since last
scan) to idle time. Turn all use bits off during scan.
– Scans happen on order of every few seconds.
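The use-bit scan described above can be sketched as follows. The per-page record and field names are illustrative; `cpu_since_last` is the CPU time the process received since the previous scan.

```c
#include <stdbool.h>
#include <stddef.h>

/* Per-page working-set bookkeeping for one process. */
struct page {
    bool use;            /* hardware-set reference bit */
    unsigned idle_time;  /* process CPU time since last reference */
};

/* One periodic scan over a process's pages: a referenced page has
 * its idle time cleared; an unreferenced page accumulates the CPU
 * time the process received since the last scan.  All use bits are
 * turned off for the next interval.  Pages whose idle time exceeds
 * tau have left the working set. */
void working_set_scan(struct page pages[], size_t n,
                      unsigned cpu_since_last)
{
    for (size_t i = 0; i < n; i++) {
        if (pages[i].use)
            pages[i].idle_time = 0;            /* still in working set */
        else
            pages[i].idle_time += cpu_since_last;
        pages[i].use = false;                  /* clear for next interval */
    }
}
```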