ICS 431 Ch9 Memory Management
Weeks 9-10
• The run-time mapping from logical to physical addresses is done by the MMU; there are many methods to do such mapping, e.g., contiguous allocation, paging, and segmentation.
• Users' programs go through many steps (compile, link, load) before running.
• With dynamic linking, a stub is included in the image for each library routine.
– The stub is a small piece of code that indicates how to locate the appropriate memory-resident library routine, or how to load the library if the routine is not already present.
– The stub replaces itself with the address of the routine and executes the routine, so that the next time the code segment is reached the library routine executes directly, without incurring the cost of dynamic linking.
• Dynamic linking is particularly useful for libraries.
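The stub mechanism above can be sketched in a few lines; the jump table, routine name, and doubling routine are all hypothetical stand-ins for real linker machinery:

```python
# Hypothetical sketch of lazy binding: a stub that loads the real routine
# on first call, then replaces itself so later calls go direct.

library = {}  # simulated memory-resident library: name -> routine

def load_routine(name):
    """Simulate loading a library routine into memory on demand."""
    routine = lambda x: x * 2   # stand-in for the real library code
    library[name] = routine
    return routine

class Stub:
    """Small placeholder included in the program image for each routine."""
    def __init__(self, table, name):
        self.table, self.name = table, name

    def __call__(self, *args):
        routine = library.get(self.name) or load_routine(self.name)
        self.table[self.name] = routine   # replace stub with the routine's address
        return routine(*args)             # only the first call pays the linking cost

jump_table = {"double": None}
jump_table["double"] = Stub(jump_table, "double")

print(jump_table["double"](21))                    # first call: stub loads and links
assert jump_table["double"] is library["double"]   # later calls bypass the stub
```

After the first call the table entry is the routine itself, so subsequent calls incur no lookup or loading overhead.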
Dr. Tarek Helmy@KFUPM-ICS 13
Dynamic Loading
– Link and load time: code can be linked and then loaded into any portion of memory (relocatable code).
– Run time: code can be moved to any portion of memory during its execution.
[Figure: successive memory snapshots showing the OS, resident processes, and the holes created as processes are swapped in and out]
A hole of 64K is left after loading 3 processes: not enough room for
another process.
The OS selects P2 and swaps it out to bring in P4.
[Figure: example memory map of allocated and free blocks (8K, 6K, 14K) illustrating the placement algorithms below]
Placement Algorithms
Best-fit searches for the smallest partition that is large enough: the fragment left behind is as small as possible. Main memory quickly forms holes too small to hold any process, so compaction must be done more often.
First-fit favors allocation near the beginning of memory: it tends to create less fragmentation than next-fit.
Next-fit often leads to allocation of the largest blocks at the end of memory.
Worst-fit and quick-fit still have fragmentation (useless holes) problems.
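The placement algorithms above can be sketched as follows; the hole list and sizes are illustrative:

```python
# Free memory is modeled as a list of (start, size) holes; each strategy
# picks the hole it would allocate for a request of the given size.

def first_fit(holes, size):
    # Scan from the beginning; take the first hole that is large enough.
    return next((h for h in holes if h[1] >= size), None)

def best_fit(holes, size):
    # Smallest hole that fits: the fragment left behind is as small as possible.
    fits = [h for h in holes if h[1] >= size]
    return min(fits, key=lambda h: h[1], default=None)

def worst_fit(holes, size):
    # Largest hole that fits: the leftover fragment stays usable longer.
    fits = [h for h in holes if h[1] >= size]
    return max(fits, key=lambda h: h[1], default=None)

def next_fit(holes, size, last=0):
    # Like first-fit, but resume scanning where the previous search stopped.
    n = len(holes)
    for i in range(n):
        h = holes[(last + i) % n]
        if h[1] >= size:
            return h
    return None

holes = [(0, 8), (20, 6), (40, 14)]   # sizes in KB, as in the figure
print(first_fit(holes, 6))            # (0, 8)
print(best_fit(holes, 6))             # (20, 6)
print(worst_fit(holes, 6))            # (40, 14)
```

Note how, for the same 6K request, the three strategies choose different holes and therefore leave different fragments behind.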
• It is an overhead to run compaction just to recover a hole of 2 bytes.
[Figure: compaction example with holes of 300K, 660K, and 260K]
Worked example: physical memory = 32 bytes = 2^5, logical address space = 2^4, page size = 2^2 bytes.
– Page table size = 2^4 / 2^2 = 2^2 = 4 entries; each page table entry needs 5 − 2 = 3 bits for the frame number.
– Logical address 13 = 1101 in binary: page number = 11 (page 3), offset = 01.
– Page 3 maps to frame 010 (frame 2), so the physical address is 010 01 in binary = 2 × 4 + 1 = 9.
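The translation in this exercise can be checked with a short sketch (the page-3-to-frame-2 mapping is taken from the example):

```python
# Verify the worked example: 16-byte logical space, 4-byte pages,
# 32-byte physical memory, so 4 pages and 8 frames.
PAGE_SIZE = 4          # 2^2 bytes
page_table = {3: 2}    # mapping from the example: page 3 -> frame 2

def translate(logical):
    page, offset = divmod(logical, PAGE_SIZE)   # split address into page and offset
    frame = page_table[page]                    # look up the frame number
    return frame * PAGE_SIZE + offset           # recombine into a physical address

print(translate(13))   # page 3, offset 1 -> frame 2 -> 2*4 + 1 = 9
```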
Exercise
• Each process has its own page table, and each page table consumes a considerable amount of memory.
• A lot of memory will be used just to keep track of the main memory, i.e.:
• Consider a process with a 32-bit logical address space (4 GB). If the page size is 4 KB (2^12), then its page table may consist of up to 1 million entries: 2^32 / 2^12 = 2^20 ≈ 1M. Assuming each entry needs 4 bytes (to hold the frame number), each process may need up to 4 MB of main memory for its page table alone.
• A smaller page size leads to more pages, and more pages lead to a larger page table.
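The arithmetic above can be reproduced directly:

```python
# Page-table size for the example: 32-bit logical addresses, 4 KB pages,
# 4-byte page-table entries.
address_bits = 32
page_size = 4 * 1024            # 2^12 bytes
entry_size = 4                  # bytes per page-table entry

entries = 2 ** address_bits // page_size   # number of pages = number of entries
table_bytes = entries * entry_size         # total page-table size per process

print(entries)                        # 1048576 entries (2^20, about 1 million)
print(table_bytes // (1024 * 1024))   # 4 (MB per process, for the table alone)
```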
[Figure: the TLB performs a parallel search on the page number p across all of its entries]
• Example 1:
– Associative lookup = 20, memory access = 100, hit ratio (α) = 0.80
– EAT = (100 + 20) × 0.8 + (200 + 20) × 0.2 = 1.2 × 100 + 20 = 140
– 40% slowdown in memory access time
• Example 2:
– Associative lookup = 20, memory access = 100, hit ratio (α) = 0.98
– EAT = (100 + 20) × 0.98 + (200 + 20) × 0.02 = 1.02 × 100 + 20 = 122
– 22% slowdown in memory access time
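The EAT formula used in both examples can be written as a small function:

```python
# Effective access time with a TLB: one memory access on a hit,
# two on a miss (page table + data), plus the associative lookup either way.
def eat(lookup, mem, hit_ratio):
    return (mem + lookup) * hit_ratio + (2 * mem + lookup) * (1 - hit_ratio)

print(eat(20, 100, 0.80))   # ~140: a 40% slowdown over a bare access
print(eat(20, 100, 0.98))   # ~122: a 22% slowdown
```

Raising the hit ratio from 0.80 to 0.98 cuts the slowdown nearly in half, which is why TLB reach matters so much.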
• Such bits can enhance protection and also minimize the EAT by avoiding the second memory access when the page is not in main memory.
– "Invalid" indicates that the page is not in main memory, so the second access to main memory should be avoided.
• Each process has its own page table, stored in memory and used to map logical addresses into physical ones.
• A huge amount of memory is used just to map logical addresses into physical addresses.
• The page table also needs a large, free, and contiguous memory space to be stored in.
• How to solve this problem?
• Hierarchical Paging
• Hashed Page Tables
• Inverted Page Tables
• To resolve the problem of large page tables, we want to allocate the page table in noncontiguous blocks.
• This means breaking the page table's address space up into multiple page tables.
• A simple technique is the two-level paging algorithm, in which the page table itself is also paged.
[Figure: two-level paging. The virtual address is split into a master page-table index, a secondary page-table index, and an offset; the master page table points to secondary page tables, whose entries give the frame number, which is combined with the offset (F# + D) to form the physical address.]
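A minimal sketch of two-level translation, assuming a 12-bit offset, an 8-bit secondary index, and hypothetical table contents:

```python
# Two-level paging: split the virtual address into master index,
# secondary index, and offset, then walk the two tables.
OFFSET_BITS = 12                      # 4 KB pages
SECONDARY_BITS = 8                    # assumed split for this sketch

# Assumed tables: master table -> secondary tables -> frame numbers.
secondary_0 = {5: 7}                  # page 5 of this region lives in frame 7
master = {0: secondary_0}

def translate(vaddr):
    offset = vaddr & ((1 << OFFSET_BITS) - 1)
    secondary_idx = (vaddr >> OFFSET_BITS) & ((1 << SECONDARY_BITS) - 1)
    master_idx = vaddr >> (OFFSET_BITS + SECONDARY_BITS)
    frame = master[master_idx][secondary_idx]    # two lookups instead of one
    return (frame << OFFSET_BITS) | offset

vaddr = (0 << 20) | (5 << 12) | 0x123
print(hex(translate(vaddr)))          # frame 7, offset 0x123 -> 0x7123
```

Only the master table and the secondary tables actually in use need to be resident, which is the point of paging the page table.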
Hashed Page Tables
• A hash table with <pid, p> as the hash key
• The IPT does not contain complete information about the logical address space of a process, and this information is necessary in case of a page fault.
• For this information to be available, an external page table (one per process) must be kept in memory.
• Referring to these page tables may seem to negate the benefit of the IPT. However:
– These page tables are referenced only when a page fault occurs.
– With demand paging, only the page table of the active process needs to be kept in memory, by swapping it in.
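A minimal sketch of an inverted page table searched through a hash on <pid, p>; the frame count and mappings are hypothetical:

```python
# Inverted page table: one entry per physical frame, so its size depends on
# physical memory, not on the number or size of processes.
NUM_FRAMES = 8

ipt = [None] * NUM_FRAMES   # ipt[frame] = (pid, page) held by that frame
hash_table = {}             # (pid, page) -> frame, to avoid a linear search

def map_page(pid, page, frame):
    ipt[frame] = (pid, page)
    hash_table[(pid, page)] = frame

def lookup(pid, page):
    """Return the frame holding (pid, page), or None on a page fault."""
    return hash_table.get((pid, page))

map_page(pid=1, page=3, frame=5)
print(lookup(1, 3))   # 5
print(lookup(1, 4))   # None -> page fault; consult the process's
                      # external page table to resolve it
```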
[Figure: segments 1-4 of the logical address space mapped into noncontiguous areas of physical memory]
• The hardware must map a two-dimensional address (segment # and offset) into a one-dimensional physical address.
[Figure: segmentation hardware. The CPU issues a logical address (s, d); the segment-table entry for s gives a limit and a base. If d < limit, the physical address is base + d; otherwise the reference traps as an addressing error.]
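The limit check performed by the segmentation hardware can be sketched as follows (the segment-table contents are hypothetical):

```python
# Segment table: segment number -> (limit, base).
segment_table = {1: (1000, 4000), 2: (400, 6000)}   # illustrative entries

def translate(s, d):
    limit, base = segment_table[s]
    if d >= limit:
        # The hardware would trap to the OS with an addressing error.
        raise MemoryError("addressing error: offset beyond segment limit")
    return base + d

print(translate(1, 250))    # 4000 + 250 = 4250
```

An offset at or past the limit (e.g., offset 400 in segment 2) is rejected, which is how segmentation enforces protection.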
Combined Segmentation and Paging
• To combine their advantages, some OSs page the segments and use one page table per segment. Why?
• Each process has:
– One segment table
– One page table per segment
Logical address: segment number | page number | offset
[Figure: the segment-table entry selects a per-segment page table, which maps the page number to a frame in physical memory]
Simple Combined Segmentation and Paging
• Problem: how to run large processes (larger than the available physical memory) and how to increase the number of processes simultaneously loaded into memory.
Thank you
Any Questions?