Topics: Cache Innovations (Sections 2.4, B.4, B.5), Virtual Memory Intro
Types of Cache Misses
• Compulsory, capacity, and conflict misses (the "three C's")
• Knobs that influence these miss types:
• Increasing cache capacity
• Increasing number of sets
• Increasing block size
• Increasing associativity
Reducing Miss Rate
More Cache Basics
• L1 caches are typically split into separate instruction and data caches; L2 and L3 are unified
Techniques to Reduce Cache Misses
• Victim caches
Victim Caches
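The slide's diagram is not reproduced here; in essence, a victim cache is a small fully-associative buffer that holds blocks recently evicted from L1 and is probed on an L1 miss before going to L2. A minimal Python sketch (the size and FIFO eviction policy are assumptions, not the slide's exact design):

```python
from collections import OrderedDict

class VictimCache:
    """Small fully-associative victim buffer (illustrative sketch)."""
    def __init__(self, entries=4):
        self.entries = entries
        self.blocks = OrderedDict()  # addr -> data, in insertion (FIFO) order

    def insert(self, addr, data):
        """Called with a block just evicted from L1."""
        if len(self.blocks) >= self.entries:
            self.blocks.popitem(last=False)  # drop the oldest victim
        self.blocks[addr] = data

    def probe(self, addr):
        """On an L1 miss, check here before going to L2.
        Returns the block (and removes it) on a hit, else None."""
        return self.blocks.pop(addr, None)
```

On a victim-cache hit, the block would typically be swapped back into L1, avoiding the longer L2 access.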
Replacement Policies
• NRU (not recently used): every block in a set has a bit; the bit is set to zero when the block is touched; if all bits become zero, they are all set to one; on a miss, a block whose bit is 1 is evicted
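The NRU policy above can be sketched for a single set as follows (the tag-array layout is illustrative; the bit convention follows the slide: 0 = recently touched, 1 = eviction candidate):

```python
class NRUSet:
    """Not-Recently-Used replacement for one cache set."""
    def __init__(self, ways):
        self.ways = ways
        self.tags = [None] * ways
        self.bits = [1] * ways  # all blocks start as eviction candidates

    def touch(self, way):
        self.bits[way] = 0                # clear the bit on access
        if all(b == 0 for b in self.bits):
            self.bits = [1] * self.ways   # when all are zero, set all to one

    def access(self, tag):
        """Returns True on a hit; on a miss, evicts a block with bit == 1."""
        if tag in self.tags:
            self.touch(self.tags.index(tag))
            return True
        victim = self.bits.index(1)       # any block with its bit set to 1
        self.tags[victim] = tag
        self.touch(victim)
        return False
```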
Prefetching
• Stream buffer: a small queue of sequentially prefetched lines sitting next to the L1
• When you read the top of the queue, bring in the next line
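A minimal sketch of such a sequential stream buffer (the depth and fill policy are assumptions): consuming the head of the queue triggers a prefetch of the next sequential line into the tail.

```python
from collections import deque

class StreamBuffer:
    """Sequential-prefetch stream buffer sketch (illustrative)."""
    def __init__(self, depth=4):
        self.depth = depth
        self.queue = deque()
        self.fetched = []  # trace of lines brought in from memory

    def fill(self, line):
        """Start a new stream of `depth` consecutive lines."""
        self.queue.clear()
        for i in range(self.depth):
            self.queue.append(line + i)
            self.fetched.append(line + i)

    def lookup(self, line):
        """On a hit at the head, pop it and prefetch the next line."""
        if self.queue and self.queue[0] == line:
            self.queue.popleft()
            nxt = self.queue[-1] + 1 if self.queue else line + 1
            self.queue.append(nxt)
            self.fetched.append(nxt)
            return True
        return False
```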
Stride-Based Prefetching
• A reference prediction table, tagged by load PC, stores each load's previous address and stride: | PC tag | prev_addr | stride | state |
• Each entry has a four-state predictor: init, transient (trans), steady, no-pred
• Correct predictions move an entry toward steady; incorrect predictions update the stride and move it toward transient and no-pred
(state-transition diagram omitted)
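A sketch of the stride prefetcher's table and state machine; the exact transitions vary across designs, so this follows one common formulation and issues a prefetch only in the steady state:

```python
# Transition tables for the four-state stride predictor.
ON_CORRECT = {'init': 'steady', 'steady': 'steady',
              'trans': 'steady', 'no-pred': 'trans'}
ON_WRONG = {'init': 'trans', 'steady': 'init',
            'trans': 'no-pred', 'no-pred': 'no-pred'}

class StridePrefetcher:
    def __init__(self):
        self.table = {}  # PC -> [prev_addr, stride, state]

    def access(self, pc, addr):
        """Record a load at `pc` touching `addr`; return a prefetch
        address when the entry is in the steady state, else None."""
        if pc not in self.table:
            self.table[pc] = [addr, 0, 'init']
            return None
        prev, stride, state = self.table[pc]
        if addr == prev + stride:       # prediction was correct
            state = ON_CORRECT[state]
        else:                           # incorrect: update the stride
            state = ON_WRONG[state]
            stride = addr - prev
        self.table[pc] = [addr, stride, state]
        return addr + stride if state == 'steady' else None
```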
Shared Vs. Private Caches in Multi-Core
• Each core (P1–P4) has a private L1; the L2 is either private per core or a single cache shared by all cores
UCA and NUCA (Uniform vs. Non-Uniform Cache Access)
Large NUCA
• Mapping
• Migration
• Search
• Replication
Shared NUCA Cache
Address Translation
• Virtual address = virtual page number | page offset (13 bits)
• The virtual page number is translated to a physical page number; the page offset passes through unchanged
• Physical address = physical page number | page offset (13 bits)
• The physical address is used to access physical memory
• A 13-bit page offset implies 8 KB pages
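The translation step can be sketched as below; the page-table contents are hypothetical, and the 13-bit offset matches the figure:

```python
PAGE_OFFSET_BITS = 13           # 13-bit offset => 8 KB pages
PAGE_SIZE = 1 << PAGE_OFFSET_BITS

# Hypothetical page table mapping virtual page numbers to physical ones.
page_table = {0: 5, 1: 2, 7: 0}

def translate(vaddr):
    """Split a virtual address into VPN + offset, look up the PPN,
    and splice the unchanged offset back on."""
    vpn = vaddr >> PAGE_OFFSET_BITS
    offset = vaddr & (PAGE_SIZE - 1)
    ppn = page_table[vpn]       # a missing entry would be a page fault (not modeled)
    return (ppn << PAGE_OFFSET_BITS) | offset
```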