
Operating Systems Concepts:

Chapter 8: Memory Management

Olav Beckmann
Huxley 449
http://www.doc.ic.ac.uk/~ob3

Acknowledgements: There are lots. See end of Chapter 1.

Home Page for the course:

http://www.doc.ic.ac.uk/~ob3/Teaching/OperatingSystemsConcepts/

This is only up-to-date after I have issued printed versions of the notes, tutorials, solutions etc.
Department of Computing, Imperial College London. Operating Systems Concepts (MSc Computing / JMC2) 2005/06 Chapter 8 1
Chapter 8: Memory Management
Ch 8a: Fixed Partitions, Dynamic Partitions, Placement, Swapping
System Architecture

The layered system architecture:
- Application Programs
- System Utility Programs
- Filing System
- OS Input/Output, Memory Management
- System Nucleus
- Hardware

Issues covered in Chapter 8:
- Organisation of main store
- Allocation of memory to processes
- Dynamic relocation
- Sharing
- Protection

Single Contiguous Store Allocation

Store layout (addresses 0 to max): the Operating System occupies low store, the user program (code & data) follows, and the remainder is free. Examples: CP/M, MSDOS, RT11.

Store organisation for a C program (addresses 0 to max): code, global data, heap (growing towards high addresses), free space, stack (growing down from max).

- Single user / non multi-programming.
- Program address = real store address.

Advantages:
- Simple - no special H/W.
- No need for address relocation.

Disadvantages:
- No protection - single user.
- Programs must fit into store (or use overlays).
- Waste of unused store.
Overlays - Illustrated by example

Procedures A, B, C do not call each other, so only Main & A, Main & B, or Main & C need be resident in memory at any one time. The code for A, B and C can be kept on disk and copied into the same part of main memory when needed: procedures A, B & C can be overlayed.

Memory layout (0 to max): the Main program, followed by a single region holding A or B or C.

Disadvantages:
- The programmer is responsible for organising the overlay structure of the program.
- The OS only provides the facility to load files into the overlay region.

Fixed Partition Store Allocation

Store layout (0 to max): the Operating System, then Partition 1 (Program A plus unused space), then Partition 2 (Program B plus unused space).

Allocate fixed-size, strongly separated partitions for different tasks:
- Permits multi-programming.
- Need a relocating loader.
- Usually some H/W protection between partitions.

Potential Disadvantages:
- Potentially sub-optimal use of overall available space.
- Constraints that would not be present if all resources were equally available.

Real Example: IBM pSeries
- IBM pSeries offers logical partitioning: a multiprocessor server can be partitioned for multiple services.
- Strong isolation between partitions is a good thing here.
- Designed to run full OSs in partitions.
- Isolation enforced by hardware + special firmware.
Multiprogramming with Fixed Partitions

Multiprogramming improves CPU utilisation: assume a job normally spends a fraction p of its time waiting for I/O. Then, for one job, CPU utilisation is 1 - p. For n jobs in a multiprogramming system, approximate CPU utilisation is 1 - p^n. (Figure: Tanenbaum.)

- Fixed partitions can be of different sizes.
- Different possible queuing strategies for jobs (e.g. a separate queue per partition, or a single shared queue).
- OS/MFT (Multiprogramming with a Fixed number of Tasks) was used by OS/360 on IBM mainframes for many years.
Dynamic Partition Store Allocation

Store is organised as before, but partitions are created dynamically at the start of a job, i.e. the OS attempts to allocate a contiguous chunk of store of size = job size.
E.g. OS/MVT (Multiprogramming with a Variable number of Tasks, IBM 1967->).

Example: total store = 500K; J1 requires 100K, J2 requires 50K, J3 requires 240K, J4 requires 120K.

When J1 & J2 start:
0-50: OS; 50-150: J1 (100K); 150-200: J2 (50K); 200-500: Free (300K).

When J1 ends and J3 starts:
0-50: OS; 50-150: Free (100K); 150-200: J2 (50K); 200-440: J3 (240K); 440-500: Free (60K).

Store Fragmentation: J4 cannot start because, even though there is 160K of free store, there is no contiguous 120K chunk.

Compaction

The fragmentation problem could be solved by moving running programs to close free gaps and create larger ones: dynamic program relocation.

Problem:
- When a program is loaded into store by the operating system, program addresses = real store locations.
- Programs use absolute addresses to access operands from real store locations.
- To move a program, the operating system would have to modify these addresses.
- But the relocation information produced by the assembler/linker/loader is not maintained in main store.

In general: programs are NOT dynamically relocatable if program address = real store address.
Virtual Addressing
- Permits dynamic relocation.
- The program uses virtual addresses which the H/W maps to real store locations.

R = Amap(A), where:
R = Real Address or physical store location.
A = Virtual Address or program address.
Amap = address mapping function.

Modification of NARC to handle Virtual Addressing:

void fetch() {
    int W;
    W = Mem[Amap(PC)];
    Op = Opcode(W);
    D = Operand(W);
}

void execute() {
    switch (Op) { // only the modified cases are shown here
        case 2: Acc = Mem[Amap(D)]; break;        // loadm
        case 3: Mem[Amap(D)] = Acc; break;        // storem
        case 5: Acc = Acc + Mem[Amap(D)]; break;  // addm
        case 7: Acc = Acc - Mem[Amap(D)]; break;  // subm
    }
}

Base & Limit Register - one way of doing Virtual Addressing

The base register B holds the real address at which the program starts; the limit register L marks its end. A program address A is mapped by:

Amap(A) = A + B, with a protection error if A < 0 or A + B > L.

Dynamic Relocation: copy the user program & data to a new part of store and change the base & limit registers.

Advantages:
- Dynamic partitions + relocation.
- Relocation permits compaction.
- Simple hardware.
- Programs can request and release store dynamically.

Disadvantages:
- Fragmentation requires compaction, which consumes CPU cycles.
- Virtual address space <= real address space.
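The mapping and protection check can be sketched as follows (a minimal sketch; the function name and the use of -1 to signal a protection error are mine, not the hardware's):

```c
/* Base & limit mapping: real address = A + B, with a protection
   error if A < 0 or A + B > L (as on the slide above). */
long amap(long a, long base, long limit) {
    if (a < 0 || a + base > limit)
        return -1;                /* protection error */
    return a + base;
}
```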

Placement of Tasks in Memory (non-paged)

The Operating System maintains a list of free memory chunks, usually called the free list. Main memory is a mixture of allocated segments and free store.

Allocation Tactics:
- If segment size < free chunk size, locate the segment at one end of the chunk to minimise fragmentation.
- If segment size > free chunk size, move segments to create a larger free chunk (i.e. compact store).

Deallocation Tactics:
- Coalesce free store: join adjacent chunks to form one contiguous block of storage with size = sum of the individual chunks.

Placement Algorithms (non-paged)

1) BEST FIT
(Example memory: free chunks of size 100, 50 and 150 between address 0 and max.)
The free chunks C1, C2, C3, ..., Cn are listed in order of increasing size.
Allocate Function: find the smallest Ci such that the segment S <= Ci, by searching the list.
Appears to waste least space, but leaves many unusably small holes.

Placement Algorithms (non-paged)

2) WORST FIT
The free chunks C1, C2, C3, ..., Cn are listed in order of decreasing size.
Allocate Function: place the segment in C1 and link the remaining space back into the free list, coalescing if possible.
Allocating from a large chunk leaves a chunk big enough to use; allocating from a chunk of nearly the right size is likely to leave an unusably small chunk.

Placement Algorithms (non-paged)

3) FIRST FIT
The free chunks are listed in order of increasing base address.
Allocate Function: find the first free chunk Ci such that S <= Ci.
Compaction is easier if free chunks are held in address order.
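First fit can be sketched over a free list held as (base, size) pairs in increasing address order (a sketch under assumed names - Chunk and first_fit are not from the notes):

```c
/* A free chunk: base address and size. The list is kept in order
   of increasing base address, as first fit requires. */
typedef struct { long base; long size; } Chunk;

/* Return the base of the allocated region, or -1 if no chunk is
   large enough (compaction would then be needed). The segment is
   carved from one end of the chunk to minimise fragmentation. */
long first_fit(Chunk *free_list, int n, long request) {
    for (int i = 0; i < n; i++) {
        if (free_list[i].size >= request) {
            long base = free_list[i].base;
            free_list[i].base += request;
            free_list[i].size -= request;
            return base;
        }
    }
    return -1;
}
```

For instance, with free chunks of 100, 50 and 150, a 120-word request skips the first two chunks and is carved from the 150-word chunk.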

Placement Algorithms (non-paged)

4) BUDDY
- Allocation unit is a power of 2, i.e. 1K, 2K, 4K, ...
- Separate free list for each size, in address order.
- Initially: one large block (size = power of 2).
- Example: allocating 3K from a single free 16K block: split 16K into two 8K buddies, split one 8K into two 4K buddies, and allocate a 4K block (the smallest power of 2 >= 3K). One 4K block and one 8K block remain free.

Allocation:
- Try to remove a chunk from list i, where 2^i is the smallest unit >= S.
- If empty, split a block from list i+1: allocate half & chain the remainder on list i.

Deallocation:
- Return the chunk to the appropriate list i.
- If adjacent to another block on the same list, coalesce and add to list i+1.

Advantages: fast allocation, fewer unusable chunks, reduces fragmentation.
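A property worth noting (my own illustration, not from the notes): because blocks are always split in halves at power-of-2 boundaries, the buddy of a block can be found by flipping one bit of its offset:

```c
/* The buddy of a block at the given offset from the start of the
   arena is found by XOR-ing the offset with the block size
   (both are powers of two). */
unsigned long buddy_of(unsigned long offset, unsigned long size) {
    return offset ^ size;
}
```

This is why coalescing on deallocation is cheap: the candidate partner's address is computed directly rather than searched for.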
Swapping - Roll-in / Roll-out

A program occupies store even when it is waiting for I/O or some other event:
- Swap (roll out) user programs to backing store (disk) when they are waiting - but only if waiting for slow I/O, i.e. wait time >> swap time.
- Swap back into store (roll in) when the event occurs. (The program may be swapped back into a different partition.)

Advantages:
- Allows a higher degree of multi-programming.
- Better utilisation of store.
- Less CPU time wasted on compaction (the swap is DMA).

Disadvantages:
- Virtual address space <= real address space.
- The whole program must be in store when it is executing.

Memory Management: Summary so far
- Dynamic partition store allocation: each job is allocated a contiguous region of memory.
- Advantage over fixed partitions: avoids wasted space.
- Disadvantage: fragmentation.
- Several problems remain: jobs larger than available memory cannot run, and dynamic partition store management with compaction means that we need virtual addressing.
- Placement algorithms (first fit, best fit etc.) are about where to allocate a job, given a number of memory regions that are large enough.
- Paging deals with all these issues in a different way!

Unresolved Problems in Dynamic Partition Memory Management:
(1) The entire program must be resident in store when executing.
(2) Maximum program size <= available physical store.
(3) Programs must occupy contiguous real store.

Solution:
- Map a contiguous virtual address space onto discontiguous chunks of real store.
  This solves (3) and reduces the requirement for compaction.
- Make secondary or backing store (disks) appear as a logical extension of primary (main) memory; chunks of a program which are not currently needed can be kept on disk.
  This solves (1) & (2).

Paging

The virtual address space is divided into pages of equal size. Main memory is divided into page frames of the same size.

- Running or ready process: some pages in main memory (the active pages); inactive pages can be kept on the swapping disc.
- Waiting process: all pages can be on disk.
- Paging is transparent to the programmer.
- Each process has its own virtual address space.

Paging Mechanism:
(1) Address Mapping
(2) Page Transfer
Paging - Address Mapping

A virtual address is split into P (page number) and O (offset). A pointer identifies the current Process Page Table; entry P of that table holds the page frame address B, and the physical address is B + O.

Example: word-addressed machine, 8-bit words, page size = 256:

Amap(P, O) = PPT[P] * 256 + O

Note: The Process Page Table (PPT) itself can be paged.
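The example mapping can be written directly (a sketch; the array name ppt and the omission of presence/status bits are my simplifications):

```c
#define PAGE_SIZE 256   /* words per page on the example machine */

/* Amap(P, O) = PPT[P] * 256 + O: ppt[p] holds the page frame
   number for virtual page p. */
long amap_paged(const long *ppt, long p, long o) {
    return ppt[p] * PAGE_SIZE + o;
}
```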
Paging: A Detailed Example

Consider a (very) simple computer system that has:


32 bytes of main (physical) memory
128 bytes of virtual memory
8 byte pages
256 bytes of disk storage

How big are virtual memory addresses?

How big are physical memory addresses?

How big is a process page table entry?
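Worked answers, derived from the sizes given above (a sketch; bits_needed is my own helper): 128 bytes of virtual memory needs 7-bit virtual addresses, 32 bytes of physical memory needs 5-bit physical addresses, and 8-byte pages give a 3-bit offset - so the virtual page number is 4 bits, the frame number is 2 bits, and a page table entry needs those 2 frame bits plus status bits.

```c
/* Number of address bits needed for a memory of n locations
   (n a power of two): the smallest b with 2^b >= n. */
int bits_needed(unsigned long n) {
    int b = 0;
    while ((1UL << b) < n)
        b++;
    return b;
}
```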

Paging - Page Transfer

What happens when we access a page which is currently not in main memory (i.e. the page table entry is empty)? This is a Page Fault:
- Suspend the running process.
- Get the page from disk.
- Update the page table.
- Resume the process (re-execute the instruction).

? Can one instruction cause more than one page fault?

The location of a page on disk can be recorded in a separate table, or in the page table itself using a presence bit:
- Presence bit set (1): the entry holds B, the main-memory page frame location, together with status bits (e.g. dirty, protection).
- Presence bit clear (0): the entry holds D, the disk page location.

Note: we can run another ready process while the page fault is being serviced.


Some Useful Facts

Kilo, Mega, Giga:
1K = 2^10 = 1024
1M = 2^20 = 1024 * 1024 = 1,048,576
1G = 2^30 = 1024 * 1024 * 1024 = 1,073,741,824

Storage layout of arrays in memory:
int x[5];
From address 0: x[0] x[1] x[2] x[3] x[4] in consecutive locations.

Storage layout of structures:
typedef struct { int y; int z[2]; } mydata;
From address 0: y z[0] z[1] in consecutive locations.


Basic Paging Scheme: A Closer Look
- 32-bit virtual and physical addresses.
- Assume 4K pages: 12-bit page offset, 20-bit page number.
- Each page table entry consists of a 20-bit physical page number plus status bits.
- There are 2^20 page table entries.
- Page table size is at least (20 + status) bits * 2^20 = approx 3MB.

Basic Paging Scheme - Example

Address 12291 (decimal) = 00000000000000000011 000000000011 in binary: bits 31-12 are the 20-bit virtual page number (3), bits 11-0 the 12-bit offset (3).

- The virtual page number is the index into the page table.
- Page table entry 3 has its P (present) bit set, indicating that the requested page is in memory, and holds physical page number 00000000000000000101 = 5 (example only - it could be any value), alongside the Pr (protection) and M (modified) bits.
- The physical address is therefore byte 3 in page frame 5.
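The translation in this example can be sketched as follows (the names translate and pt are mine, and the page table is reduced to bare frame numbers - no present or status bits): address 12291 maps to byte 3 in frame 5, i.e. physical address 5 * 4096 + 3 = 20483.

```c
#define OFFSET_BITS 12                          /* 4K pages */
#define OFFSET_MASK ((1UL << OFFSET_BITS) - 1)

/* Translate a 32-bit virtual address: pt[] maps the 20-bit virtual
   page number to a physical page frame number. */
unsigned long translate(const unsigned long *pt, unsigned long vaddr) {
    unsigned long vpn = vaddr >> OFFSET_BITS;   /* bits 31-12 */
    unsigned long off = vaddr & OFFSET_MASK;    /* bits 11-0  */
    return (pt[vpn] << OFFSET_BITS) | off;
}
```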
What happens if there are no free page frames?
We must swap some other pages out to disk. Which pages? That is decided by the Page Replacement Algorithm, based on information in the Page Table, e.g.:
(a) How many times the page has been referenced (use count).
(b) The time the page was last referenced.
(c) Whether the page has been written to (dirty/clean bit).

Problem so far: we have doubled memory access time, since each memory access involves a page table access. This could be solved by putting the page table in fast store such as registers, but the size of the page table is proportional to the virtual address space. Too big!

Paging with TLB: Basic Idea
The page table is not small.
Basic paging scheme requires two memory reads for every
virtual memory access.
Idea: speed up the page table access by holding a small subset
of page table entries on-chip, in a special very fast memory.
This special fast memory is called a TLB: Translation Lookaside
Buffer (Note: in the Tanenbaum book, it is called an Associative
Store for reasons which I will explain later.)
(The term TLB comes from the idea that while performing a
virtual address translation, we look aside into this buffer first to
check whether the result of the lookup is held there.)
Typical TLB size: 64 entries. (How many bits to index?)
This idea will pay off if we can make sure that a high proportion
of page table accesses can be serviced by this small on-chip
buffer.
Chances should be good if the code we are executing mainly
accesses a small number of different pages.
Paging with TLB
- Pentium 4: the TLB for 4K pages (data) has 64 entries.
- The TLB index in this diagram is the virtual page number modulo the TLB size.
- Here, each page table entry is mapped to exactly one TLB entry.
Paging with TLB: Example

Address 12291 (decimal) = 00000000000000 000011 000000000011 in binary: bits 31-18 are the tag, bits 17-12 the TLB index, bits 11-0 the offset. Assume the TLB has 64 lines.

- Index 000011 (binary) gives line number 3: the page table index modulo 64 is the TLB index.
- The tag (00000000000000) stored in TLB entry 3 matches the tag of the address, indicating that the correct page table entry is held in the TLB.
- The TLB entry therefore gives the correct page frame number, 00000000000000000101 = 5 - the same value held in page table entry 3, whose present bit is set.
- The physical address is byte 3 within page frame 5.
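The direct-mapped lookup can be sketched as follows (TlbEntry and tlb_lookup are my own names; real TLB entries carry further status bits):

```c
#define TLB_SIZE 64

/* One TLB line: the tag identifies which of the page table entries
   that map to this line is currently cached. */
typedef struct { unsigned long tag; unsigned long frame; int valid; } TlbEntry;

/* Direct-mapped lookup: line = VPN mod TLB size, tag = VPN div TLB
   size. Returns 1 on a hit and fills *frame; 0 means we must walk
   the page table (and then update the TLB). */
int tlb_lookup(const TlbEntry *tlb, unsigned long vpn, unsigned long *frame) {
    const TlbEntry *e = &tlb[vpn % TLB_SIZE];
    if (e->valid && e->tag == vpn / TLB_SIZE) {
        *frame = e->frame;
        return 1;
    }
    return 0;
}
```

Note that virtual pages 3 and 67 both map to line 3; the tag is what distinguishes them.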

More about TLBs
- In the basic TLB scheme covered so far, each page table entry is mapped to exactly one TLB line: (virtual page number modulo TLB size).
- This can result in pathologically poor performance if we access virtual addresses in a stride that is a multiple of page size and TLB size.
- Better schemes have been developed to deal with this. The idea is that every page table entry can be mapped to more than one TLB entry.
- If, for example, every page table entry can be mapped to two different TLB entries, this is known as a 2-way set-associative TLB.
- The extreme case (not uncommon) is that each page table entry can be mapped to any one of the TLB entries. This is known as a fully associative TLB, or an associative store (in fact, Tanenbaum uses this term for TLBs).
- Which TLB line is actually used for storing a page table entry can depend on, for example, most recent use.
- Important: because each process has its own page table, and the TLB holds a snapshot of some entries of this page table, we must flush the TLB on a context switch. This makes context switches very expensive in paged operating systems.

Paging Summary (the address-translation flowchart):
1. Given (process no., page no.), look in the TLB.
2. Match? Yes: form the real address B+W.
3. No match: index the page table.
4. Page present? Yes: update the associative store (TLB) and form the real address B+W.
5. Page not present = page fault: fetch the page, then retry.

ADVANTAGES OF PAGED SYSTEMS:
Fully utilise available main store.
Can execute programs where
Virtual Address Space > Main Memory Size.
No fragmentation or compaction.

DISADVANTAGES:
Complex hardware and software.
Large overhead - not always worth it?
Still only contiguous virtual store i.e. not divided into
logical units to reflect logical divisions in program and
data.

Segmentation

The program address space is divided into segments to reflect the logical program structure / functional division: i.e. procedures, program modules, stack, heap, collections of data, etc.

Example: the address space seen by the program consists of segments 0-3 holding the main program, OS calls, a procedure (Proc A) and the stack. The segment table holds (Length, Base) pairs:

Seg 0: length 1500, base 1000
Seg 1: length 700, base 7100
Seg 2: length 500, base 8500
Seg 3: length 500, base 4000

Real memory layout: Seg 0 at 1000-2500, Seg 3 at 4000-4500, Seg 1 at 7100-7800, Seg 2 at 8500-9000.

The program sees a two-dimensional virtual address space (Segment, Offset).


Address Mapping - Segmentation

A program address is (S, O), where S = segment number and O = offset. Entry S of the process segment table is a segment descriptor holding L = length and B = base:

Amap(S, O) = PST[S].B + O, with a protection error if O >= L.

Segments are relocatable.

Protection:
PROTECTION BITS: may be used to indicate Code (Execute), Read Only data, Read/Write data, Shared Segment etc.
SHARED SEGMENTS: enter the segment descriptor in the segment table of each process which wants to share the segment (perhaps with different protection bits).
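The segmented mapping with its limit check can be sketched as follows (SegDesc, amap_seg and the use of -1 for a protection error are my own names; the segment table values below are taken from the example on the previous slide):

```c
/* A segment descriptor: L = length, B = base. */
typedef struct { long length; long base; } SegDesc;

/* Amap(S, O) = PST[S].B + O, with a protection error (signalled
   here by -1) if the offset falls outside the segment. */
long amap_seg(const SegDesc *pst, long s, long o) {
    if (o < 0 || o >= pst[s].length)
        return -1;
    return pst[s].base + o;
}
```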
Comparison of Paging & Segmentation:

Paging:
- Physical division of memory to implement one-level or virtual store. Transparent to the programmer.
- Fixed size.
- Division of a program address into page no., word no. is done by H/W; overflow of the word no. increments the page no.

Segmentation:
- Logical division of the program address space. The two-dimensional store is visible in assembly code.
- Variable size, up to some limit.
- Division into segment number, offset is logical; consequently overflow of the segment offset is a protection violation.

In practice, it is often not possible to fit all segments of a program into main memory at once (virtual memory > real memory), so: swap segments, or use paging.
Paged Segmented Store

Segments are divided into fixed-sized pages. Whole segments need not be brought into main memory: pages are brought in as they are accessed, so no overlaying.

- Segmentation performs: limit and protection checks.
- Paging performs: automatic swapping - invisible to the programmer.
- Segmented page table: a smaller page table, since unused parts of it can be collapsed.

Memory Management Policies

Allocation / Placement
How to allocate memory: where should information be loaded into main memory?
Trivial for paged systems: any available block (page frame) can be allocated.

Replacement
What should be swapped out of main memory to create free space?

Fetch
When should information be loaded into main memory? e.g.
on demand
in advance

Replacement Algorithms - paged

Decides which page to remove from main store.
- If the page frame has been modified (written to) it must be written back to disk; otherwise it can just be discarded, as it already resides on disk. Therefore choose clean pages first.
- Objective: replace the page which is not going to be referenced for the longest time in the future.
- Difficult! The best we can do is to infer future behaviour from past behaviour.

Replacement Algorithms - paged

1) First-in First-out (FIFO)
- Replace the page which has been resident the longest.
- Must use time stamps or maintain the loading sequence.
- Can replace heavily used pages, e.g. an editor.
- FIFO anomaly: more page frames available in store may increase the number of faults. E.g. try the sequence of page refs A B C D A B E A B C D E with 3 and 4 frames available, initially none resident.
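The anomaly can be verified with a small simulation (a sketch, not part of the notes; fifo_faults is my own name). For the reference string above, FIFO incurs 9 faults with 3 frames but 10 faults with 4 frames:

```c
/* Count FIFO page faults for a reference string (one char per page)
   with nframes page frames, initially all empty. */
int fifo_faults(const char *refs, int nframes) {
    char frames[16];
    int used = 0, next = 0, faults = 0;
    for (const char *p = refs; *p; p++) {
        int hit = 0;
        for (int i = 0; i < used; i++)
            if (frames[i] == *p) { hit = 1; break; }
        if (!hit) {
            faults++;
            if (used < nframes)
                frames[used++] = *p;          /* a frame is still free */
            else {
                frames[next] = *p;            /* evict the oldest page */
                next = (next + 1) % nframes;
            }
        }
    }
    return faults;
}
```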

2) Least Recently Used (LRU)
- Replace the page which has least recently been used.
- Usually the best page to replace.
- Must time-stamp every page reference or record the sequence of access to pages, so high overhead.
- The page chosen could be the next to be used, e.g. within a large loop.

Replacement Algorithms - paged

3) Least Frequently Used (LFU)
- Replace the page which has been least frequently used during the immediately preceding time interval.
- Keep a per-page use count.
- A page most recently brought in will have a low count, so it gets swapped out.

4) Not Used Recently (NUR)
- Approximates to LRU, but with no time stamp.
- Reference bit: set on reference, cleared periodically.
- Dirty bit: set on a write to the page, cleared on page-in.
- Priority order for page replacement:
  1) unreferenced, unmodified
  2) unreferenced, modified
  3) referenced, unmodified
  4) referenced, modified

Non-paged systems are similar, but need to consider segment size.

Fetch Policies - when to swap in

DEMAND
- fetch when needed is always required and is provided by the page
fault mechanism

ANTICIPATORY
- fetch in advance. This relies on being able to predict future
program behaviour.
We can use:
- knowledge of the nature of construction of programs.
- inference from past behaviour.

Demand Paging

(Timeline figure: a process running with only one page frame of primary storage allocated generates a page fault, waits while the fault is serviced, runs briefly, and faults again; the timeline marks the average page wait time between successive faults.)

Principle of Locality of Reference

If a program accesses a location n at some point in time, it is likely to reference the same location n, and locations in the immediate vicinity of n, in the near future.

Temporal Locality: storage locations referenced recently are likely to be referenced again. E.g. loops, stacks, variables (counts, pointers).

Spatial Locality: clustered references - once a location is referenced, it is likely that a nearby location will be referenced. E.g. arrays, sequential code.

(Figure: storage addresses plotted against time show clustered bands of references.)

Thrashing

(Figure: processor utilisation (%) against degree of multi-programming (no. of concurrent processes), showing a rise (1) followed by a collapse (2).)

(1) As the degree of multi-programming increases, so does the percentage CPU utilisation. This is because the scheduler has a greater probability of finding a ready process.

(2) After a threshold, the degree of multi-programming makes it impossible to keep sufficient pages in store for a process to avoid generating a large number of page faults. This causes:
- an increase in disk paging traffic;
- the disk channel becomes a bottleneck;
- most processes are blocked waiting for page transfer.

Thrashing = the processor spending more time paging than executing programs.

Working Set

(Figure: page fault rate (0 to 1.0) against % of the process's pages in main store. With random references the fault rate falls steadily; with localised references it drops sharply once the working set is resident.)

- Only a subset of the program's code and data (the working set) is referenced over a given time interval.
- To avoid thrashing, each process requires a minimum set of pages in store.
- Minimum set = Working Set.
Working Set Model

The working set is the set of pages a program references within a given time interval (window). From the principle of locality of reference we expect this set to change slowly.

WS(t, w) = set of pages referenced in the interval {t-w, t}, where w = window.

To avoid thrashing:
(1) Run a process only if its working set is in memory.
(2) Never remove a page which is part of a working set.

(Figure: the number of main memory pages allocated to a process over time tracks successive working sets WS1, WS2, WS3, WS4; overshoot is caused by residual pages from the previous WS still being in store.)

Notes:
- The working set applies to a particular process.
- The particular set of pages constituting the WS changes with time.
- The size of the WS also changes with time.
Determining the Working Set

Use reference bits which are set by H/W when a page is referenced. E.g. assume a window of 2t:
- Generate an interrupt every t.
- On each interrupt, copy and clear the reference bits for every page.
- Maintain a history of 2 bits for each page.
- On a page fault, check the reference bit and the 2 history bits. If any are set, the page was accessed within the past 2t-3t.
- A page with at least 1 bit set is considered to be in the WS.

Accuracy can be increased by increasing the history size and/or the interrupt frequency - at the cost of increased overheads.

Note that all the page replacement algorithms discussed earlier implicitly tend to maintain process working sets.

Page Size Considerations

Small Pages:
- Less information is brought in that is unreferenced.
- From locality of reference, smaller pages lead to tighter working sets.
- Internal fragmentation: on average, half of the last page of a segment is unused. Smaller pages lead to less wastage.

Large Pages:
- Reduce the number of pages, hence the page table size.
- Reduce I/O overheads due to disc seek time if larger pages are transferred.
- Fewer page faults to bring in a whole segment, so reduced overheads of handling page faults.

Computer      Page Size (words)   Word Size (bits)
Pentium       1024                32
Multics       1024                36
IBM 370       1024 or 512         32
DEC PDP 20    512                 36
VAX           128                 32
Memory Management - Summary

Mechanisms (and their main drawback or characteristic):
- Fixed partition: wasted space.
- Dynamic partition: fragmentation.
- Relocatable (base & limit): compaction overhead.
- Swapping: the entire process is swapped.
- Paging: linear virtual address space.
- Segmentation: two-dimensional address space.

Policies:
- Replacement (what to swap out)
- Fetch (when to load)
- Placement (where to load)

Memory management means we must add to the process context:
(1) a copy of the contents of the base & limit registers,
or (2) a pointer to the page table,
or (3) a pointer to the segment table.
