T1Ch4
Purushotham BV Asst. Prof. Department of CSE DSCE, Bangalore 78 94489 19064 utham74@gmail.com
Chapter 4
Cache Memory
Objectives
Characteristics
Location
Capacity
Unit of transfer
Access method
Performance
Physical type
Physical characteristics
Organisation
Location
CPU or processor (registers)
Internal (mostly main memory)
External (secondary memory)
Capacity
Word size
Number of words
Unit of Transfer
Internal
Usually governed by data bus width
External
Usually a block, which is much larger than a word
Addressable unit
Smallest location which can be uniquely addressed
A word, internally
The relation between the length in bits A of an address and the number N of addressable units is 2^A = N
Access Methods
Sequential
Start at the beginning and read through in order
Access time depends on location of data and previous location
e.g. tape
Direct
Individual blocks have unique addresses
Access is by jumping to the vicinity plus a sequential search
Access time depends on location and previous location
e.g. disk
Random
Individual addresses identify locations exactly
Access time is independent of location or previous access
e.g. RAM
Associative
Data is located by a comparison with the contents of a portion of the store
Access time is independent of location or previous access
e.g. cache
Performance
Access time (latency)
In random-access memory, the time taken to perform a read/write operation
In non-random-access memory, the time taken to position the read-write mechanism at the desired location
Memory cycle time
Applies primarily to random-access memory; consists of access time plus any additional time required before the next access can begin
Performance (contd.)
Transfer rate
The rate at which data can be transferred into or out of memory
For random-access memory it is 1 / (cycle time)
For non-random-access memory: T_N = T_A + N / R, where
T_N - average time to read or write N bits
T_A - average access time
N - number of bits
R - transfer rate, in bits per second (bps)
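The non-random-access formula above can be checked with a short sketch. The values used are illustrative assumptions, not figures from the slides:

```python
# Sketch: average time to read N bits from non-random-access memory,
# T_N = T_A + N / R. Example numbers below are assumptions for illustration.

def read_time_ns(t_access_ns, n_bits, rate_bps):
    """Return T_N in nanoseconds: access time plus transfer time."""
    return t_access_ns + n_bits / rate_bps * 1e9

# e.g. a 5 ms positioning delay (5e6 ns), a 4096-bit block, 1 Gbit/s rate
t = read_time_ns(5e6, 4096, 1e9)
print(round(t))  # access time dominates for small transfers
```

Note how the access time term dominates for small transfers, which is why block transfers amortise the positioning cost.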
Memory Hierarchy
Registers
In CPU
Internal or main memory
May include one or more levels of cache
RAM
External memory
Backing store
Performance
Access time
Time between presenting the address and getting the valid data
Cycle time
Time may be required for the memory to recover before the next access
Cycle time = access time + recovery time
Transfer Rate
Rate at which data can be moved
Physical Types
Semiconductor
Magnetic
Optical
Others
Physical Characteristics
Organisation
Physical arrangement of bits into words
Not always obvious
e.g. interleaved
Memory Hierarchy - Design Trade-offs
How much? How fast? How expensive?
Faster access time, greater cost per bit
Greater capacity, smaller cost per bit
Greater capacity, slower access time
Hierarchy List
Registers
L1 Cache
L2 Cache
Main memory
Disk cache
Disk
Optical
Tape
It is possible to build a computer which uses only static RAM (see later)
This would be very fast
This would need no cache
Locality of Reference
During the course of execution of a program, memory references tend to cluster, e.g. loops
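The clustering can be made concrete with a toy address trace. The addresses and the loop are hypothetical, chosen only to show that the same instruction addresses recur every iteration:

```python
# Sketch of locality of reference: a loop touches a small set of
# addresses over and over. Addresses here are illustrative assumptions.
from collections import Counter

def trace_addresses(n):
    """Hypothetical trace for a loop summing an n-element array."""
    trace = []
    for i in range(n):
        trace.append(("code", 0x100))      # same loop instructions each pass
        trace.append(("data", 0x200 + i))  # sequential array elements
    return trace

hits = Counter(addr for kind, addr in trace_addresses(8) if kind == "code")
print(hits[0x100])  # the loop-body address is referenced every iteration
```

Both kinds of clustering appear: temporal (the code address repeats) and spatial (the data addresses are adjacent).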
Cache
Small amount of fast memory
Sits between normal main memory and CPU
May be located on CPU chip or module
CPU requests contents of memory location
Check cache for this data
If present, get from cache (fast)
If not present, read required block from main memory to cache
Then deliver from cache to CPU
Cache includes tags to identify which block of main memory is in each cache slot
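The read sequence above can be sketched as a minimal simulation; class and variable names are illustrative, not a real cache implementation:

```python
# Minimal sketch of a cache read: check cache, on a miss fetch the whole
# containing block from main memory, then deliver the word from cache.

class SimpleCache:
    def __init__(self):
        self.slots = {}  # tag (block number) -> block data

    def read(self, address, main_memory, block_size=4):
        tag = address // block_size          # which block the address lies in
        if tag not in self.slots:            # miss: bring the block into cache
            start = tag * block_size
            self.slots[tag] = main_memory[start:start + block_size]
        return self.slots[tag][address % block_size]  # deliver from cache

memory = list(range(16))
cache = SimpleCache()
print(cache.read(5, memory))   # miss: fills block 1 (addresses 4..7)
print(cache.read(6, memory))   # hit: same block, no memory access
```

Fetching the whole block on a miss is what makes the later hit on address 6 possible: locality of reference is exploited at block granularity.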
Cache Design
Size
Mapping Function
Replacement Algorithm
Write Policy
Block Size
Number of Caches
Cost
More cache is expensive
Speed
More cache is faster (up to a point)
Checking cache for data takes time
Mapping Function
Example: a 16-MByte main memory requires a 24-bit address (2^24 = 16M)
Direct Mapping
Address is in two parts
Least significant w bits identify a unique word
Most significant s bits specify one memory block
The MSBs are split into a cache line field r and a tag of s−r bits (most significant)
i = j modulo m, where i = cache line number, j = main memory block number, m = number of lines in the cache
24-bit address
2-bit word identifier (4-byte block)
22-bit block identifier
No two blocks mapping to the same line have the same Tag field
Check contents of cache by finding the line and checking the Tag
Address format: Tag s−r (8 bits) | Line or slot r (14 bits) | Word w (2 bits)
Cache line - Main memory blocks held
0 - 0, m, 2m, 3m, ..., 2^s − m
1 - 1, m+1, 2m+1, ..., 2^s − m + 1
...
m−1 - m−1, 2m−1, 3m−1, ..., 2^s − 1
Address length = (s + w) bits
Number of addressable units = 2^(s+w) words or bytes
Block size = line size = 2^w words or bytes
Number of blocks in main memory = 2^(s+w) / 2^w = 2^s
Number of lines in cache = m = 2^r
Size of tag = (s − r) bits
Advantages
Simple and relatively inexpensive to implement
Easy to determine where a main memory reference can be found in cache
Fixed location for a given block
Disadvantage
Each main memory block is mapped to one specific cache line
Through locality of reference, a program may repeatedly access two or more blocks that map to the same line
These blocks will be constantly swapped in and out of cache, causing misses to be very high and the hit ratio to be low
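The 8/14/2 field split from the 24-bit example above can be sketched as bit manipulation; the two sample addresses are assumptions chosen to collide on the same line:

```python
# Sketch of the direct-mapped address split: tag = s−r = 8 bits,
# line = r = 14 bits, word = w = 2 bits.

def split_direct(address, r=14, w=2):
    word = address & ((1 << w) - 1)          # low w bits
    line = (address >> w) & ((1 << r) - 1)   # next r bits
    tag = address >> (w + r)                 # remaining high bits
    return tag, line, word

# Two addresses whose block numbers differ by m = 2**14 map to the
# same cache line but carry different tags, so the cache can tell them apart.
print(split_direct(0x000004))   # tag 0, line 1, word 0
print(split_direct(0x010004))   # tag 1, line 1, word 0
```

If a program alternated between these two addresses, each reference would evict the other's block: exactly the pathological swapping described above.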
Associative Mapping
A main memory block can load into any line of cache
Memory address is interpreted as tag and word
Tag uniquely identifies a block of memory
Every line's tag is examined for a match
Cache searching gets expensive
22-bit tag stored with each 32-bit block of data
Compare tag field with each tag entry in cache to check for hit
Least significant 2 bits of address identify which byte is required from the 32-bit (4-byte) data block
Address format: Tag (22 bits) | Word (2 bits)
e.g. Address FFFFFC → Tag 3FFFFF (upper 22 bits), Data 24682468
Address length = (s + w) bits
Number of addressable units = 2^(s+w) words or bytes
Block size = line size = 2^w words or bytes
Number of blocks in main memory = 2^(s+w) / 2^w = 2^s
Number of lines in cache = undetermined
Size of tag = s bits
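The tag comparison can be sketched as a linear search (hardware performs all the comparisons in parallel); the stored line and its data value follow the FFFFFC example above:

```python
# Sketch of fully associative lookup with a 22-bit tag and 2-bit word
# field: the address tag is compared against every stored tag.

def associative_lookup(address, cache_lines):
    """cache_lines: list of (tag, block) pairs; each block holds 4 words."""
    tag, word = address >> 2, address & 0x3
    for stored_tag, block in cache_lines:   # hardware does this in parallel
        if stored_tag == tag:
            return block[word]              # hit
    return None                             # miss

# One cached block, tag taken from address FFFFFC (upper 22 bits = 3FFFFF)
lines = [(0xFFFFFC >> 2, [0x2468, 0x2468, 0x2468, 0x2468])]
print(hex(associative_lookup(0xFFFFFC, lines)))
```

The need to compare against every line is why associative search "gets expensive": hardware cost grows with the number of lines.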
Set Associative Mapping
Cache is divided into a number of sets
Each set contains a number of lines
A given block maps to any line in a given set
e.g. block B can be in any line of set i
2-way associative mapping: a given block can be in one of 2 lines in only one set
Example: 13-bit set number
Set number = main memory block number modulo 2^13
Addresses 000000, 008000, 010000, 018000 map to the same set
Use set field to determine which cache set to look in
Compare tag field to see if we have a hit
Address length = (s + w) bits
Number of addressable units = 2^(s+w) words or bytes
Block size = line size = 2^w words or bytes
Number of blocks in main memory = 2^(s+w) / 2^w = 2^s
Number of lines in set = k
Number of sets = v = 2^d
Number of lines in cache = k·v = k × 2^d
Size of tag = (s − d) bits
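The set-associative split can be sketched in the same style as the direct-mapped one, using the 13-bit set number from the example above; the sample addresses are assumptions chosen to share a set:

```python
# Sketch of the set-associative address split: d = 13 set bits,
# w = 2 word bits, tag = s − d bits.

def split_set_assoc(address, d=13, w=2):
    word = address & ((1 << w) - 1)
    set_no = (address >> w) & ((1 << d) - 1)
    tag = address >> (w + d)
    return tag, set_no, word

# Addresses differing by 2**(d+w) = 0x8000 share a set but differ in tag,
# so they compete for the k lines of that one set.
print(split_set_assoc(0x000000))   # tag 0, set 0, word 0
print(split_set_assoc(0x008000))   # tag 1, set 0, word 0
```

With k = 2, two such blocks can coexist in the set; a third forces a replacement decision, which is where the replacement algorithms below come in.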
Replacement Algorithms
Direct mapping: no choice
Each block only maps to one line
Replace that line
Associative and set associative: hardware implemented algorithm (for speed)
Least recently used (LRU)
e.g. in 2-way set associative: which of the 2 blocks is LRU?
First in first out (FIFO)
Replace the block that has been in cache longest
Least frequently used (LFU)
Replace the block which has had fewest hits
Random
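LRU for one set of a 2-way set-associative cache can be sketched with an ordered map; the class name and tags are illustrative:

```python
# Sketch of LRU replacement within one k-way set: the line that has gone
# unreferenced the longest is the victim.
from collections import OrderedDict

class LRUSet:
    def __init__(self, k=2):
        self.k = k
        self.lines = OrderedDict()   # tag -> block, least recently used first

    def access(self, tag, block):
        if tag in self.lines:
            self.lines.move_to_end(tag)      # hit: mark most recently used
        else:
            if len(self.lines) >= self.k:    # set full: evict the LRU line
                self.lines.popitem(last=False)
            self.lines[tag] = block

s = LRUSet(k=2)
s.access(1, "A"); s.access(2, "B"); s.access(1, "A")  # tag 2 is now LRU
s.access(3, "C")                                       # evicts tag 2
print(list(s.lines))
```

Real hardware implements this with a single use bit per pair (for k = 2), not a list; the ordered map just makes the policy visible.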
Write Policy
Must not overwrite a cache block unless main memory is up to date
Multiple CPUs may have individual caches
I/O may address main memory directly
Write through
All writes go to main memory as well as cache
Multiple CPUs can monitor main memory traffic to keep local (to CPU) cache up to date
Lots of traffic, may create a bottleneck
Slows down writes
Write back
Updates are initially made in cache only
The update (dirty) bit for the cache slot is set when an update occurs
If a block is to be replaced, write it to main memory only if the update bit is set
Other caches can get out of sync
I/O must access main memory through the cache
About 15% of memory references are writes; in HPC applications it may range between 33% and 50%
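The dirty-bit mechanism above can be sketched for a single cache line; names are illustrative:

```python
# Sketch of write-back: a store updates the cache only and sets the
# update (dirty) bit; main memory is written once, at eviction, and
# only if the bit is set.

class WriteBackLine:
    def __init__(self):
        self.tag, self.data, self.dirty = None, None, False

    def write(self, tag, data):
        self.tag, self.data, self.dirty = tag, data, True  # cache only

    def evict(self, main_memory):
        if self.dirty:                        # flush only if block was updated
            main_memory[self.tag] = self.data
        self.tag, self.data, self.dirty = None, None, False

mem = {}
line = WriteBackLine()
line.write(7, "new value")   # no memory traffic yet
line.evict(mem)              # dirty bit set, so the block is flushed now
print(mem)
```

Compare with write-through, where every `write` would touch `mem` immediately: write-back trades memory traffic for the coherence problems listed above.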
Summary