Caching: Application Server Cache
What happens when you expand this to many nodes? If the request
layer is expanded to multiple nodes, it’s still quite possible to have each
node host its own cache. However, if your load balancer randomly
distributes requests across the nodes, the same request can go to
different nodes, increasing cache misses.
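A minimal sketch of this effect, contrasting random routing with hash-based routing (the `Node`, `route_random`, and `route_hashed` names are illustrative, not from the text):

```python
import hashlib
import random

class Node:
    """An application server with its own in-process cache."""
    def __init__(self):
        self.cache = {}
        self.misses = 0

    def get(self, key):
        if key not in self.cache:
            self.misses += 1                     # simulate a fetch from the backing store
            self.cache[key] = f"value-for-{key}"
        return self.cache[key]

def route_random(nodes, key):
    # What a naive load balancer does: any node may serve any request.
    return random.choice(nodes)

def route_hashed(nodes, key):
    # Same key always maps to the same node, so its cache entry is reused.
    idx = int(hashlib.md5(key.encode()).hexdigest(), 16) % len(nodes)
    return nodes[idx]

def simulate(route, requests, n_nodes=4):
    nodes = [Node() for _ in range(n_nodes)]
    for key in requests:
        route(nodes, key).get(key)
    return sum(n.misses for n in nodes)

requests = [f"user:{i % 10}" for i in range(1000)]  # 10 hot keys, 1000 requests
print(simulate(route_random, requests))   # misses climb toward 10 keys x 4 nodes
print(simulate(route_hashed, requests))   # exactly one miss per key
```

With random routing each hot key eventually gets cached on every node, multiplying misses (and duplicated cache memory); hashing the key to a node keeps one cached copy per key.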
Content Delivery Network (CDN)
CDNs are a kind of cache that comes into play for sites serving large
amounts of static media. In a typical CDN setup, a request will first ask
the CDN for a piece of static media; the CDN will serve that content if it
has it locally available. If it isn’t available, the CDN will query the back-
end servers for the file, cache it locally, and serve it to the requesting
user.
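The request flow above can be sketched as a tiny edge-cache class (the `CDNEdge` name and the `origin` callable are illustrative assumptions):

```python
class CDNEdge:
    """Edge cache: serve content locally if present, else fetch from origin."""
    def __init__(self, origin):
        self.origin = origin        # stand-in for the back-end servers: path -> content
        self.cache = {}
        self.origin_fetches = 0

    def get(self, path):
        if path not in self.cache:
            self.origin_fetches += 1
            self.cache[path] = self.origin(path)   # query the back-end for the file
        return self.cache[path]                    # serve it (now) locally

origin = lambda path: f"<contents of {path}>"
edge = CDNEdge(origin)
edge.get("/img/logo.png")    # first request: miss, fetched from the back-end
edge.get("/img/logo.png")    # second request: served from the edge cache
print(edge.origin_fetches)   # 1
```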
If the system we are building isn’t yet large enough to have its own CDN,
we can ease a future transition by serving the static media off a
separate subdomain (e.g., static.yourservice.com) using a lightweight
HTTP server like Nginx, and cut over the DNS from our servers to a
CDN later.
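A hypothetical Nginx server block for this setup might look like the following (the subdomain matches the example above; the document root and cache lifetimes are assumptions):

```nginx
# Serve static media off its own subdomain so DNS can later point to a CDN.
server {
    listen 80;
    server_name static.yourservice.com;

    root /var/www/static;

    location / {
        # Long-lived caching headers make the eventual CDN cut-over painless.
        expires 30d;
        add_header Cache-Control "public";
        try_files $uri =404;
    }
}
```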
Cache Invalidation
Write-through cache: Under this scheme, data is written into the cache
and the corresponding database at the same time. The cached data
allows for fast retrieval and, since the same data gets written to the
permanent storage, we have complete consistency between the cache
and the storage.
Although write-through minimizes the risk of data loss, every write
operation must be done twice before returning success to the client, so
this scheme has the disadvantage of higher latency for write
operations.
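A minimal sketch of the write-through scheme, with a dict standing in for the database (class and method names are illustrative):

```python
class WriteThroughCache:
    """Writes go to the cache and the database in the same operation."""
    def __init__(self, db):
        self.db = db            # stand-in for the real database
        self.cache = {}

    def write(self, key, value):
        # Both writes must complete before we report success to the client --
        # this is exactly where the extra write latency comes from.
        self.cache[key] = value
        self.db[key] = value

    def read(self, key):
        if key in self.cache:
            return self.cache[key]       # fast path: served from cache
        value = self.db[key]             # cache miss: fall back to the database
        self.cache[key] = value
        return value

db = {}
c = WriteThroughCache(db)
c.write("user:1", "alice")
print(db["user:1"])          # "alice" -- cache and database stay consistent
```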
Cache Eviction Policies

Following are some of the most common cache eviction policies:

1. First In First Out (FIFO): The cache evicts the first block accessed first,
without regard to how often or how many times it was accessed
before.
2. Last In First Out (LIFO): The cache evicts the block accessed most
recently first, without regard to how often or how many times it was
accessed before.
3. Least Recently Used (LRU): Discards the least recently used items first.
4. Most Recently Used (MRU): In contrast to LRU, discards the most
recently used items first.
5. Least Frequently Used (LFU): Counts how often an item is needed. Those
that are used least often are discarded first.
6. Random Replacement (RR): Randomly selects a candidate item and
discards it to make space when necessary.
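LRU, the most widely used of these policies, can be sketched with Python's `OrderedDict` (the `LRUCache` class is illustrative, not from the text):

```python
from collections import OrderedDict

class LRUCache:
    """Evicts the least recently used entry once capacity is exceeded."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()   # insertion order doubles as recency order

    def get(self, key):
        if key not in self.items:
            return None
        self.items.move_to_end(key)          # mark as most recently used
        return self.items[key]

    def put(self, key, value):
        if key in self.items:
            self.items.move_to_end(key)
        self.items[key] = value
        if len(self.items) > self.capacity:
            self.items.popitem(last=False)   # discard the least recently used

cache = LRUCache(2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")            # "a" is now the most recently used
cache.put("c", 3)         # capacity exceeded: evicts "b"
print(cache.get("b"))     # None
print(cache.get("a"))     # 1
```

Swapping the eviction line for `popitem(last=True)` would give MRU instead; LFU and the others need an access counter rather than recency order.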