Chapter08 PDF
Olav Beckmann
Huxley 449
http://www.doc.ic.ac.uk/~ob3
Department of Computing, Imperial College London. Operating Systems Concepts (MSc Computing / JMC2) 2005/06 Chapter 8 2
Single Contiguous Store Allocation

[Figure: store from address 0 to max: Operating System code at the bottom, then a single user program region (code & data) holding global data, heap and stack - the store organisation free for a C program. Examples: CP/M, MSDOS, RT11.]

Advantages:
- Simple - no special H/W.
- No need for address relocation.

Disadvantages:
- No protection - single user.
- Programs must fit into store (or use overlays).
- Waste of unused store.
Overlays: Illustrated by Example

Procedures A, B and C do not call each other, so only Main & A, Main & B, or Main & C need be resident in memory at any one time. The code for A, B and C can be kept on disk and copied into the same region of main memory when needed: procedures A, B & C can be overlaid.

[Figure: a Main Program that calls A(), B() and C(); store from 0 to max holds Main plus one shared region with space for A or B or C.]

Disadvantages:
- Programmer is responsible for organising the overlay structure of the program.
- OS only provides the facility to load files into the overlay region.
Fixed Partition Store Allocation

Allocate fixed-size, strongly separated partitions for different tasks:
- Permits multi-programming.
- Needs a relocating loader.
- Usually some H/W protection between partitions.

[Figure: store from 0 to max: Operating System, then Partition 1 holding Program A plus unused space, then Partition 2 holding Program B plus unused space.]

Potential Disadvantages:
- Potentially sub-optimal use of the overall available space.
- Constraints that would not be present if all resources were equally available.

Real Example: IBM pSeries.
Compaction
The fragmentation problem could be solved by moving running
programs so as to close free gaps and create larger ones (compaction).
Virtual Addressing
Permits dynamic relocation
Program uses virtual addresses which the H/W maps to real store locations.
Base & Limit Registers: One Way of Doing Virtual Addressing

map(A) = A + B, where A is the program address and B is the contents of the base register.
Protection error if A < 0 or A + B > L, where L is the contents of the limit register.

[Figure: store from 0 to max; the base register B and limit register L bound the program's region; program address A is added to B to form the real address.]

Dynamic Relocation: copy the user program & data to a new part of store and change the base & limit registers.

Advantages:
- Dynamic partitions + relocation.
- Relocation permits compaction.
- Simple hardware.
- Programs can request and release store dynamically.

Disadvantages:
- Fragmentation requires compaction, which consumes CPU cycles.
- Virtual address space <= real address space.
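The mapping and protection check above can be sketched as follows (function and variable names are illustrative, not from the slides):

```python
def map_address(a, base, limit):
    """Translate program address a via base/limit registers.

    Raises a protection error if a < 0 or a + base > limit,
    following the rule on the slide.
    """
    if a < 0 or a + base > limit:
        raise MemoryError("protection error")
    return a + base

# Program relocated to real address 1000; its region ends at 5000.
print(map_address(42, 1000, 5000))   # real address 1042
```

Dynamic relocation then amounts to copying the program and updating `base` and `limit`; the program's own addresses never change.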
Placement of Tasks in Memory (non-paged )
Allocation Tactics
- If segment size < free chunk size: locate the segment at one end of the chunk to minimise fragmentation.
- If segment size > every free chunk size: move segments to create a larger free chunk (i.e. compact store).

Deallocation Tactics
- Coalesce free store: join adjacent chunks to form one contiguous block of storage with size = sum of the individual chunks.
Placement Algorithms (non-paged): Best Fit

[Figure: free list of chunks between 0 and max; a request is placed in the smallest free chunk that is large enough.]

Appears to waste the least space, but leaves many unusably small holes.
Placement Algorithms (non-paged): Worst Fit

[Figure: free list of chunks between 0 and max; a request is placed in the largest free chunk.]

Allocating from a large chunk leaves a chunk big enough to use. Allocation from a chunk of nearly the right size is likely to leave an unusably small chunk.
Placement Algorithms (non-paged): First Fit

[Figure: free list of chunks between 0 and max; a request of size S is placed in the first chunk that can hold it.]

Allocate Function: find the first free chunk of size Ci such that S <= Ci.
Compaction is easier if free chunks are held in address order.
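A minimal first-fit sketch over a free list of (start, length) chunks held in address order (the list representation is an assumption for illustration, not from the slides):

```python
def first_fit(free_list, size):
    """Scan the free list in address order and take the first chunk
    of size >= the request; returns the allocated start address.

    Allocates at one end of the chunk (the allocation tactic above),
    shrinking it in place; an exact fit removes the chunk entirely.
    """
    for i, (start, length) in enumerate(free_list):
        if length >= size:
            if length == size:
                del free_list[i]                    # exact fit
            else:
                free_list[i] = (start + size, length - size)
            return start
    raise MemoryError("no free chunk large enough - compaction needed")

free = [(0, 100), (300, 50), (600, 400)]
print(first_fit(free, 200))   # 600: the first chunk that can hold 200
```

Best fit differs only in the scan: choose the smallest qualifying chunk instead of the first.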
Placement Algorithms (non-paged): (4) Buddy System

Allocation unit is a power of 2, i.e. 1k, 2k, 4k, ...
Separate free list for each size, in address order.
Initially: one large block (size = a power of 2).

[Figure: free lists 0..4 for block sizes 1, 2, 4, 8, 16; allocating 3k from an initially free 16k block splits it repeatedly, leaving free blocks of 8k and 4k chained on their lists.]

Allocation:
- Try to remove a chunk from list i, where 2^i is the smallest unit >= S.
- If empty, split a block from list i+1: allocate half & chain the remainder on list i.

Deallocation:
- Return the chunk to the appropriate list i.
- If it is adjacent to another block on the same list (its buddy), coalesce and add to list i+1.

Advantages: fast allocation, fewer unusable chunks, reduces fragmentation.
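The split-and-coalesce rules above can be sketched as a small allocator, assuming sizes in 1k units and a single initially free 16k block (free lists indexed 0-4 as in the figure); the class and its representation are illustrative, not from the slides:

```python
class BuddyAllocator:
    """Buddy-system sketch: manages one block of size 2**max_order units."""

    def __init__(self, max_order):
        self.max_order = max_order
        # free_lists[i] holds start addresses of free blocks of size 2**i
        self.free_lists = {i: [] for i in range(max_order + 1)}
        self.free_lists[max_order].append(0)

    def allocate(self, size):
        order = (size - 1).bit_length()     # smallest i with 2**i >= size
        j = order
        while j <= self.max_order and not self.free_lists[j]:
            j += 1                          # find a list with a free block
        if j > self.max_order:
            raise MemoryError("out of memory")
        addr = self.free_lists[j].pop(0)
        while j > order:                    # split, chaining unused halves
            j -= 1
            self.free_lists[j].append(addr + 2 ** j)
        return addr

    def free(self, addr, size):
        order = (size - 1).bit_length()
        while order < self.max_order:       # coalesce with buddy if free
            buddy = addr ^ (2 ** order)
            if buddy not in self.free_lists[order]:
                break
            self.free_lists[order].remove(buddy)
            addr = min(addr, buddy)
            order += 1
        self.free_lists[order].append(addr)

alloc = BuddyAllocator(4)                   # one 16k block
print(alloc.allocate(3))                    # 0: 3k rounds up to a 4k block
```

After the 3k allocation, an 8k and a 4k block remain free, matching the figure; freeing the block coalesces everything back into the 16k block.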
Swapping - Roll-in / Roll-out

A program occupies store even when it is waiting for I/O or some other event. Swap (roll-out) user programs to backing store (disk) when they are waiting - only if waiting for slow I/O, i.e. wait time >> swap time. Swap back into store (roll-in) when the event occurs. (May swap back into a different partition.)

[Figure: roll-out moves a program from Main Store to Disk; roll-in moves it back.]

Advantages:
- Allows a higher degree of multi-programming.
- Better utilisation of store.
- Less CPU time wasted on compaction (swap is DMA).

Disadvantages:
- Virtual address space <= real address space.
- Whole program must be in store when it is executing.
Memory Management: Summary so far

- Dynamic partition store allocation: each job is allocated a contiguous region of memory.
- Advantage over fixed partition: avoids wasted space.
- Disadvantage: fragmentation.
- Several problems remain: jobs larger than available memory cannot run, and dynamic partition store management with compaction means that we need virtual addressing.
- Placement algorithms (first fit, best fit etc.) are about where to allocate a job, given a number of memory regions that are large enough.
- Paging deals with all these issues in a different way!
Unresolved Problems in Dynamic Partition Memory Management:
(1) The entire program must be resident in store when executing.
(2) Maximum program size <= available physical store.
(3) Programs must occupy contiguous real store.
Solution: Paging.
Paging

Virtual address space is divided into pages of equal size.
Main memory is divided into page frames of the same size.

Paging Mechanism: a virtual address is split into P (page number) and O (offset). P indexes the current process's page table (located via a pointer to the current process page table), giving B, the page frame address; the real address is B + O. Pages not currently in main memory are inactive (held on backing store).

[Figure: virtual address (P, O) -> page table entry B -> physical memory location B + O within page frame B.]
Paging - Page Transfer
What happens when we access a page which is currently not in main
memory (i.e. the page table entry is empty)?
Basic Paging Scheme - Example

Address 12291 (decimal):
- bits 31-12: 00000000000000000011 (virtual page number = 3)
- bits 11-0: 000000000011 (offset = 3)

The virtual page number is the index into the page table. The page table entry at index 3 holds a present bit (1) and the page frame number 00000000000000000101, so the access goes to byte 3 in page frame 5 of main memory. The page table entry here (frame 5) is an example only - it could be any value.

Each entry also carries Pr (protection) and M (modified) bits. Note the P (present) bit in the page table, indicating that the requested page is in memory.

[Figure: 32-bit virtual address split at bit 12; page table lookup; main memory showing page frame 5.]
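The translation in this example can be sketched as follows (4k pages; the page table is modelled as a dictionary, which is an illustrative representation, not the slides'):

```python
PAGE_SHIFT = 12                     # 4k pages, as in the example
PAGE_SIZE = 1 << PAGE_SHIFT

def translate(vaddr, page_table):
    """Translate a virtual address via a page table mapping
    virtual page number -> (present, frame).

    A missing or not-present entry is a page fault, modelled
    here as an exception.
    """
    vpn = vaddr >> PAGE_SHIFT           # virtual page number
    offset = vaddr & (PAGE_SIZE - 1)
    present, frame = page_table.get(vpn, (False, None))
    if not present:
        raise LookupError(f"page fault on page {vpn}")
    return (frame << PAGE_SHIFT) | offset

# The example above: address 12291 = page 3, offset 3; page 3 -> frame 5.
table = {3: (True, 5)}
print(translate(12291, table))      # 5*4096 + 3 = 20483
```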
What happens if there are no free page frames?
Must swap out some other pages to disk.
Which pages ?
Decided by the Page Replacement Algorithm based on information in the
Page Table. e.g.
(a) How many times page has been referenced (use count).
(b) Time page was last referenced.
(c) Whether page has been written to (dirty/clean bit).
Problem so far -
We have doubled memory access time since each memory access involves a
page table access.
This could be solved by putting the page table in fast store such as registers
but the size of the page table is proportional to the virtual address space.
Too big !
Paging with TLB: Basic Idea
The page table is not small.
Basic paging scheme requires two memory reads for every
virtual memory access.
Idea: speed up the page table access by holding a small subset
of page table entries on-chip, in a special very fast memory.
This special fast memory is called a TLB: Translation Lookaside
Buffer (Note: in the Tanenbaum book, it is called an Associative
Store for reasons which I will explain later.)
(The term TLB comes from the idea that while performing a
virtual address translation, we look aside into this buffer first to
check whether the result of the lookup is held there.)
Typical TLB size: 64 entries. (How many bits to index?)
This idea will pay off if we can make sure that a high proportion
of page table accesses can be serviced by this small on-chip
buffer.
Chances should be good if the code we are executing mainly
accesses a small number of different pages.
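The direct-mapped lookup described above can be sketched as follows (64 lines, index = virtual page number modulo TLB size; function names are illustrative):

```python
TLB_SIZE = 64                       # typical size, as noted above

def tlb_translate(vpn, tlb, page_table):
    """Direct-mapped TLB lookup: each page table entry maps to
    exactly one TLB line.

    tlb is a list of (vpn, frame) pairs or None; on a miss we read
    the page table (the extra memory access) and refill the line.
    Returns (frame, hit?).
    """
    line = vpn % TLB_SIZE
    entry = tlb[line]
    if entry is not None and entry[0] == vpn:
        return entry[1], True       # TLB hit: no page table read
    frame = page_table[vpn]         # TLB miss: read the page table
    tlb[line] = (vpn, frame)        # refill this line
    return frame, False

tlb = [None] * TLB_SIZE
frame, hit = tlb_translate(3, tlb, {3: 5})   # first access: miss, refill
```

A repeated access to the same page now hits in the TLB, which is why locality of reference makes the scheme pay off.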
Paging with TLB

[Figure: address translation with a TLB in front of the page table; 20-bit page number, 12-bit offset.]

Pentium 4: the TLB for 4k pages (data) has 64 entries. The TLB index in this diagram is the virtual page number modulo the TLB size. Here, each page table entry is mapped to exactly one TLB entry.
Paging with TLB: Example

Address 12291 (decimal):
- bits 31-18: 00000000000000
- bits 17-12: 000011 (low 6 bits of the 20-bit virtual page number)
- bits 11-0: 000000000011 (offset = 3)

Assume the TLB has 64 lines: the page table index modulo 64 is the TLB index, here 000011 (line 3). The correct page table entry (present bit 1, page frame 00000000000000 000101 = 5) is held in the TLB, so the access goes to byte 3 within page frame 5 of main memory without reading the page table. (Pr, M and P are the protection, modified and present bits, as before.)
More about TLBs
In the basic TLB scheme covered so far, each page table entry is mapped to
exactly one TLB line: (virtual page number modulo TLB size).
This can result in pathologically poor performance if we access virtual
addresses in a stride that is a multiple of page size and TLB size.
Better schemes have been developed to deal with this:
The idea is that every page table entry can be mapped to more than one
TLB entry.
If, for example, every page table entry can be mapped to two different TLB
entries, this is known as a 2-way set-associative TLB.
The extreme case (not uncommon) is that each page table entry can be
mapped to any one of the TLB entries. This is known as a fully
associative TLB, or an associative store (in fact, Tanenbaum uses this term
for TLBs).
Which TLB line is actually used for storing a page table entry can depend
on, for example, most recent use etc.
Important: Because each process has its own page table, and the TLB holds a
snapshot of some entries of this page table, we must flush the TLB on a
context switch. This makes context switches very expensive in paged
operating systems.
Paging Summary

1. Look in the TLB. Match? If yes, form the real address B+W and fetch.
2. No match: index the page table. Page present? If yes, update the associative store (TLB) and form the real address B+W.
3. Page not present = page fault: fetch the page from backing store, then retry.
ADVANTAGES OF PAGED SYSTEMS:
Fully utilise available main store.
Can execute programs where
Virtual Address Space > Main Memory Size.
No fragmentation or compaction.
DISADVANTAGES:
Complex hardware and software.
Large overhead - not always worth it?
Still only contiguous virtual store i.e. not divided into
logical units to reflect logical divisions in program and
data.
Segmentation

Program address space is divided into segments to reflect the logical program structure / functional division: i.e. procedures, program modules, stack, heap, collections of data, etc.

[Figure: the address space seen by the program (segments for the main program, OS calls, a procedure A and the stack) is mapped by a segment table of (length, base) pairs into non-contiguous regions of real memory: segment 0 has length 1500, base 1000; segment 1 has length 700, base 7100; segment 2 has length 500, base 8500; segment 3 has length 500, base 4000.]

PROTECTION BITS:
May be used to indicate Code (Execute), Read Only data, Read/Write data, Shared Segment etc.

SHARED SEGMENTS:
Enter the segment descriptor in the segment table of each process which wants to share the segment (perhaps with different protection bits).
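The segment-table translation, including the offset/length protection check, can be sketched as follows (illustrative names; the table values are taken from the figure):

```python
def seg_translate(seg, offset, seg_table):
    """Translate (segment, offset) via a segment table of
    seg -> (length, base) entries.

    Overflow of the segment offset (offset >= length) is a
    protection violation, modelled here as an exception.
    """
    length, base = seg_table[seg]
    if not 0 <= offset < length:
        raise MemoryError("segment protection violation")
    return base + offset

# Segment table from the figure: seg -> (length, base).
table = {0: (1500, 1000), 1: (700, 7100), 2: (500, 8500), 3: (500, 4000)}
print(seg_translate(3, 10, table))   # 4000 + 10 = 4010
```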
Comparison of Paging & Segmentation:

Paging:
- Physical division of memory to implement one-level or virtual store. Transparent to the programmer.
- Fixed size.
- Division of the program address into page no., word no. is done by H/W; overflow of the word no. increments the page no.

Segmentation:
- Logical division of the program address space. Two-dimensional store visible in assembly code.
- Variable size up to some limit.
- Division into segment number, offset is logical; consequently overflow of the segment offset is a protection violation.

In practice, it is often not possible to fit all segments for a program into main memory at once (virtual memory > real memory), so: Swap Segments or Use Paging.
Paged Segmented Store

Segments are divided into fixed-sized pages. Whole segments need not be brought into main memory - pages are brought in as accessed, so no overlaying.

[Figure: segmented virtual memory mapped page-by-page into real memory.]
Memory Management Policies
Allocation / Placement
How to allocate memory: where should information be loaded into main memory?
Trivial for paged systems:
any available block (page frame) can be allocated
Replacement
What should be swapped out of main memory to create free space?
Fetch
When should information be loaded into main memory? e.g.
on demand
in advance
Replacement Algorithms paged
If page frame has been modified (written to) it must be written back to
disk, otherwise it can just be discarded as it already resides on disk.
Therefore choose clean pages first.
Replacement Algorithms paged
1) First-in First-out
Replace page which has been resident the longest.
Must use time stamps or maintain loading sequence.
Can replace heavily used pages e.g. editor
FIFO anomaly (Belady's anomaly): more page frames available in store may increase the no. of faults. e.g.
Try the sequence of page refs: A B C D A B E A B C D E with 3 & 4
frames available, initially none resident.
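The anomaly can be checked with a small FIFO simulation (illustrative code, not from the slides):

```python
from collections import deque

def fifo_faults(refs, nframes):
    """Count page faults under FIFO replacement, starting with
    no pages resident."""
    frames = deque()                # longest-resident page at the left
    faults = 0
    for page in refs:
        if page in frames:
            continue                # hit: no fault, no replacement
        faults += 1
        if len(frames) == nframes:
            frames.popleft()        # evict the longest-resident page
        frames.append(page)
    return faults

refs = list("ABCDABEABCDE")
print(fifo_faults(refs, 3), fifo_faults(refs, 4))   # 9 10
```

With 3 frames the sequence causes 9 faults; adding a fourth frame *increases* this to 10.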
Replacement Algorithms paged
3) Least Frequently Used (LFU)
Replace the page which has been least frequently used during the immediately
preceding time interval.
Keep a per-page use count.
Disadvantage: a page most recently brought in will have a low count, so it gets swapped out.
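The victim choice, and the weakness just noted, in a minimal sketch (illustrative):

```python
def lfu_evict(use_counts):
    """Pick the LFU victim: the resident page with the lowest use
    count (ties broken here by page id, an arbitrary choice)."""
    return min(use_counts, key=lambda p: (use_counts[p], p))

# Page "D" was only just brought in, so its count is lowest -
# LFU picks it as the victim, which is the disadvantage above.
counts = {"A": 12, "B": 7, "C": 30, "D": 1}
print(lfu_evict(counts))   # D
```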
Fetch Policies - when to swap in
DEMAND
- fetch when needed is always required and is provided by the page
fault mechanism
ANTICIPATORY
- fetch in advance. This relies on being able to predict future
program behaviour.
We can use:
- knowledge of the nature of construction of programs.
- inference from past behaviour.
Demand Paging

[Figure: a page fault during execution causes the missing page to be fetched from disk on demand.]
Principle of Locality of Reference

[Figure: memory addresses referenced by a program plotted against time; references cluster in small regions that change only slowly over time.]
Thrashing

(1) As the degree of multi-programming increases, so does the percentage CPU utilisation. This is because the scheduler has a greater probability of finding a ready process.

[Figure: processor utilisation (%) plotted against degree of multi-programming (no. of concurrent processes), showing curves (1) and (2).]
Working Set

[Figure: page fault rate (0 to 1.0) against % of the process's pages in main store; random references fall off slowly, localised references fall off sharply - beyond the working set, extra resident pages barely reduce the fault rate.]

To avoid thrashing:
(1) Run a process only if its working set is in memory.
(2) Never remove a page which is part of a working set.

[Figure: no. of main memory pages allocated to this process over time, stepping through working sets WS1, WS2, WS3, WS4; overshoot is caused by residual pages from the previous WS still being in store.]

Notes:
- The working set applies to a particular process.
- The particular set of pages constituting the WS changes with time.
- The size of the WS also changes with time.
Determining the Working Set

Use reference bits which are set by H/W when a page is referenced. e.g. assume a window of 2t ms:
- Generate an interrupt every t ms.
- Copy and clear the reference bits for every page on interrupt.
- Maintain a history of 2 bits for each page.
- On a page fault, check the reference bit and the 2 history bits. If any are set, the page was accessed within the past 2t-3t ms.

Accuracy can be increased by increasing the history size and/or the interrupt frequency - at the cost of increased overheads.

Note that all the page replacement algorithms discussed earlier implicitly tend to maintain process working sets.
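The interrupt-time bookkeeping can be sketched as follows (illustrative names; 2 history bits per page, as above):

```python
def tick(ref_bits, history):
    """On each timer interrupt: shift each page's reference bit into
    its 2-bit history, then clear the reference bit."""
    for page in ref_bits:
        history[page] = ((history[page] << 1) | ref_bits[page]) & 0b11
        ref_bits[page] = 0

def in_working_set(page, ref_bits, history):
    """On a page fault: a page counts as part of the working set if
    its reference bit or either history bit is set, i.e. it was
    accessed within the past 2t-3t ms."""
    return ref_bits[page] == 1 or history[page] != 0

ref_bits = {"A": 1, "B": 0}      # A was referenced since the last tick
history = {"A": 0, "B": 0}
tick(ref_bits, history)          # A's access moves into its history bits
```

After the tick, page A remains in the working set via its history bits, while B, never referenced, does not.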
Page Size Considerations

Small Pages:
- Less information brought in that is unreferenced.
- From locality of reference, smaller pages lead to tighter working sets.
- Internal fragmentation: on average, half of the last page of a segment is unused; smaller pages lead to less wastage.

Large Pages:
- Reduce the number of pages, hence page table size.
- Reduce I/O overheads due to disc seek time if transferring larger pages.
- Fewer page faults to bring in a whole segment, so reduced overheads of handling page faults.
Policies
Replacement (what to swap out)
Fetch (when to load)
Placement (where to load)
Memory Management means we must add to the process context:
(1) copy of contents of base & limit registers
or (2) pointer to page table
or (3) pointer to segment table