018 - System Design - Netflix - EXTERNAL
Let's design a video sharing service like Youtube, where users will be able to upload/view/search
videos.
1. Why Youtube?
Youtube is one of the most popular video sharing websites in the world. Users of the
service can upload, view, share, rate, and report videos as well as add comments on videos.
2. Requirements and Goals of the System
Functional Requirements:
1. Users should be able to upload videos.
2. Users should be able to share and view videos.
3. Users should be able to search videos based on their titles.
4. Users should be able to add and view comments on videos.
5. Users should be able to rate (like/dislike) and report videos.
Non-Functional Requirements:
1. The system should be highly reliable; any video uploaded should not be lost.
2. The system should be highly available. Consistency can take a hit (in the interest of
availability); if a user doesn’t see a new video for a while, that is fine.
3. Users should have a real-time experience while watching videos and should not feel any
lag.
Not in scope: Video recommendation, most popular videos, channels, subscriptions,
watch later, favorites, etc.
3. Capacity Estimation and Constraints
Let’s assume our upload:view ratio is 1:200, i.e., for every video uploaded we have 200 videos
viewed, giving us around 230 videos uploaded per second (and, equivalently, roughly 46K
videos viewed per second).
Storage Estimates: Let’s assume that every minute 500 hours worth of videos are uploaded
to Youtube. If, on average, one minute of video needs 50MB of storage (videos need to be
stored in multiple formats), the total storage needed for videos uploaded in a minute would be:
500 hours × 60 min × 50MB = 1500 GB/min (≈ 25 GB/sec)
These numbers are estimated ignoring video compression and replication, which would
change our estimates.
Bandwidth Estimates: With 500 hours of video uploads per minute, and assuming each video
upload takes a bandwidth of 10MB/min, we would be getting 300GB of uploads every
minute (≈ 5 GB/sec).
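A quick back-of-the-envelope sketch of these storage and bandwidth numbers, using only the figures assumed above (a minimal Python illustration, not part of the design itself):

# Back-of-the-envelope capacity estimation using the assumptions above.
HOURS_UPLOADED_PER_MIN = 500            # hours of video uploaded every minute
STORAGE_MB_PER_VIDEO_MIN = 50           # storage per minute of video (all formats)
UPLOAD_MB_PER_VIDEO_MIN = 10            # ingress bandwidth per minute of video

video_minutes_per_min = HOURS_UPLOADED_PER_MIN * 60          # 30,000 video-minutes/min

storage_gb_per_min = video_minutes_per_min * STORAGE_MB_PER_VIDEO_MIN / 1000
ingress_gb_per_min = video_minutes_per_min * UPLOAD_MB_PER_VIDEO_MIN / 1000

print(f"Storage: {storage_gb_per_min:.0f} GB/min (~{storage_gb_per_min / 60:.0f} GB/sec)")
print(f"Ingress: {ingress_gb_per_min:.0f} GB/min (~{ingress_gb_per_min / 60:.0f} GB/sec)")
# Storage: 1500 GB/min (~25 GB/sec)
# Ingress: 300 GB/min (~5 GB/sec)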
4. System APIs
We can have SOAP or REST APIs to expose the functionality of our service. The following
could be the definitions of the APIs for uploading and searching videos:
uploadVideo(api_dev_key, video_title, video_description, tags[], category_id, default_language, recording_details, video_contents)
Returns: (string)
A successful upload will return HTTP 202 (request accepted), and once the video encoding
is completed, the user is notified through email with a link to access the video. We can also
expose a queryable API to let users know the current status of their uploaded video.
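A minimal sketch of how the upload endpoint and the queryable status API might behave (plain Python; the function names, the in-memory status table, and the status values are assumptions made for illustration):

import uuid

# In-memory stand-in for the table that tracks encoding status (assumption for the sketch).
upload_status = {}

def upload_video(api_dev_key, video_title, video_contents):
    """Accept the upload, kick off asynchronous encoding, and return HTTP 202 immediately."""
    video_id = str(uuid.uuid4())
    upload_status[video_id] = "PROCESSING"    # the encoding pipeline flips this when done
    # ... persist video_contents, enqueue encoding tasks, email the user once encoding completes ...
    return 202, {"video_id": video_id}

def get_upload_status(api_dev_key, video_id):
    """Queryable status API so users can check the current state of their uploaded video."""
    return 200, {"video_id": video_id, "status": upload_status.get(video_id, "UNKNOWN")}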
searchVideo(api_dev_key, search_query, user_location, maximum_videos_to_return, page_token)
Parameters:
api_dev_key (string): The API developer key of a registered account of our service.
search_query (string): A string containing the search terms.
user_location (string): Optional location of the user performing the search.
maximum_videos_to_return (number): Maximum number of results returned in one
request.
page_token (string): This token will specify a page in the result set that should be returned.
Returns: (JSON)
A JSON containing information about the list of video resources matching the search
query. Each video resource will have a video title, a thumbnail, a video creation date and
how many views it has.
streamVideo(api_dev_key, video_id, offset, codec, resolution)
Returns: (STREAM)
A media stream (a video chunk) from the given offset.
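A small sketch of serving a video chunk from a given byte offset (file-on-disk storage and the 1MB chunk size are assumptions for illustration):

CHUNK_SIZE = 1024 * 1024  # serve 1MB chunks (assumed size)

def stream_video(video_path, offset):
    """Return the next chunk of the media stream starting at the given byte offset."""
    with open(video_path, "rb") as f:
        f.seek(offset)                # jump to where the client left off
        return f.read(CHUNK_SIZE)     # one chunk; the client asks again with offset + len(chunk)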
6. Database Schema
Video metadata storage - MySQL
Video metadata can be stored in a SQL database. The following information should be stored
with each video:
VideoID
Title
Description
Size
Thumbnail
Uploader/User
Total number of likes
Total number of dislikes
Total number of views
Each comment on a video can be stored in a separate table with the following attributes:
CommentID
VideoID
UserID
Comment
TimeOfCreation
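A sketch of the two metadata tables listed above (shown with SQLite purely so the example is runnable; the production store would be MySQL, and the column types are assumptions):

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE videos (
    video_id       TEXT PRIMARY KEY,
    title          TEXT,
    description    TEXT,
    size_bytes     INTEGER,
    thumbnail      TEXT,          -- pointer to thumbnail storage
    uploader_id    TEXT,          -- Uploader/User
    total_likes    INTEGER DEFAULT 0,
    total_dislikes INTEGER DEFAULT 0,
    total_views    INTEGER DEFAULT 0
);

CREATE TABLE comments (
    comment_id       TEXT PRIMARY KEY,
    video_id         TEXT REFERENCES videos(video_id),
    user_id          TEXT,
    comment          TEXT,
    time_of_creation TIMESTAMP
);
""")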
7. Detailed Component Design
How should we efficiently manage read traffic? We should segregate our read traffic from
write traffic. Since we will be keeping multiple copies of each video, we can distribute our read
traffic across different servers. For metadata, we can have a master-slave configuration, where
writes go to the master first and are then replayed on all the slaves. Such a configuration can
cause some staleness in the data: when a new video is added, its metadata is inserted in the
master first, and until it gets replayed on the slaves, the slaves will not be able to see it and will
therefore return stale results to the user. This staleness should be acceptable in our system, as
it would be very short-lived and the user would be able to see the new video after a few
milliseconds.
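A minimal sketch of routing writes to the master and spreading reads across the slave replicas (the connection objects and the round-robin choice are assumptions for illustration):

import itertools

class MetadataRouter:
    """Send writes to the master; round-robin reads across slave replicas."""

    def __init__(self, master, slaves):
        self.master = master
        self._slaves = itertools.cycle(slaves)    # simple round-robin over replicas

    def write(self, query, params=()):
        return self.master.execute(query, params)    # all writes go to the master

    def read(self, query, params=()):
        replica = next(self._slaves)                 # reads may be slightly stale
        return replica.execute(query, params)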
Where would thumbnails be stored? There will be a lot more thumbnails than videos. If we
assume that every video has five thumbnails, we need a very efficient storage system that can
serve huge read traffic. There are two considerations to weigh before deciding which storage
system to use for thumbnails:
1. Thumbnails are small files.
2. Read traffic for thumbnails will be huge compared to videos, since a single page can show
many thumbnails at once.
Let’s evaluate storing all the thumbnails on disk. Given the huge number of files, reading them
would require a lot of seeks to different locations on the disk. This is quite inefficient and will
result in higher latencies.
Bigtable can be a reasonable choice here, as it combines multiple files into one block to store
on disk and is very efficient at reading small amounts of data; both properties match the two
most significant requirements of our service. Keeping hot thumbnails in a cache will also help
improve latencies, and given that thumbnail files are small, we can easily cache a large number
of them in memory.
Video Uploads: Since videos can be huge, if the connection drops while uploading, we should
support resuming the upload from the same point.
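One simple way to support resumable uploads is to track how many bytes have been received per upload, so the client can ask where to resume after a dropped connection (a minimal sketch; the in-memory offset table and function names are assumptions):

# Bytes received so far for each in-progress upload (a real system would persist this).
received_bytes = {}

def append_chunk(upload_id, offset, chunk):
    """Accept a chunk only if it starts exactly where the previous one ended."""
    current = received_bytes.get(upload_id, 0)
    if offset != current:
        return current              # tell the client the offset to resume from
    # ... append chunk to the stored object for upload_id ...
    received_bytes[upload_id] = current + len(chunk)
    return received_bytes[upload_id]

def resume_offset(upload_id):
    """Client calls this after a dropped connection to learn where to resume."""
    return received_bytes.get(upload_id, 0)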
Video Encoding: Newly uploaded videos are stored on the server, and a new task is added to
the processing queue to encode the video into multiple formats. Once all the encoding is
completed, the uploader is notified and the video is made available for viewing/sharing.
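A sketch of handing newly uploaded videos to the encoding pipeline through a queue (the target format list and the worker loop are assumptions made for illustration):

import queue

ENCODING_FORMATS = ["240p", "480p", "720p", "1080p"]   # assumed target formats
encoding_queue = queue.Queue()

def enqueue_encoding_tasks(video_id):
    """Add one encoding task per target format once the raw upload is stored."""
    for fmt in ENCODING_FORMATS:
        encoding_queue.put((video_id, fmt))

def encoding_worker():
    """Workers pull tasks, encode, and notify the uploader once a video is fully encoded."""
    while True:
        video_id, fmt = encoding_queue.get()
        # ... run the encoder for (video_id, fmt) and store the output ...
        encoding_queue.task_done()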
8. Metadata Sharding
Since we have a huge number of new videos every day and our read load is extremely high
too, we need to distribute our data onto multiple machines so that we can perform
read/write operations efficiently. We have many options to shard our data. Let’s go
through different strategies of sharding this data one by one:
Sharding based on UserID: We can try storing all the data for a particular user on one
server. While storing, we can pass the UserID to our hash function which will map the user
to a database server where we will store all the metadata for that user’s videos. While
querying for the videos of a user, we can ask our hash function to find the server holding that
user’s data and then read it from there. To search videos by title, we will have to query all
servers, and each server will return a set of videos. A centralized server will then aggregate
and rank these results before returning them to the user.
This approach has a problem, though: if a user becomes popular or accumulates a lot of
videos, the server holding that user’s data can end up carrying an uneven share of the load. To
recover from these situations we either have to repartition/redistribute our data or use
consistent hashing to balance the load between servers.
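A minimal consistent-hashing sketch for mapping keys (UserIDs here) onto database servers, with virtual nodes to smooth out the load (the hash choice and the number of virtual nodes are assumptions):

import bisect
import hashlib

class ConsistentHashRing:
    """Map keys to servers on a hash ring so that adding or removing a server
    only moves a small fraction of the keys."""

    def __init__(self, servers, vnodes=100):
        self._ring = []                        # sorted list of (hash, server)
        for server in servers:
            for i in range(vnodes):            # virtual nodes spread load more evenly
                self._ring.append((self._hash(f"{server}#{i}"), server))
        self._ring.sort()

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def server_for(self, key):
        h = self._hash(key)
        idx = bisect.bisect(self._ring, (h, "")) % len(self._ring)
        return self._ring[idx][1]

# Example: route a user's metadata to a shard.
ring = ConsistentHashRing(["db1", "db2", "db3"])
shard = ring.server_for("user_42")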
Sharding based on VideoID: Our hash function will map each VideoID to a random server
where we will store that video’s metadata. To find the videos of a user we will query all servers,
and each server will return a set of videos. A centralized server will aggregate and rank
these results before returning them to the user. This approach solves our problem of
popular users but shifts it to popular videos.
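A sketch of the scatter-gather step for VideoID-based sharding: fan the query out to every shard, then aggregate and rank on a central server (the shard objects, their find_videos_by_user method, and the ranking key are assumptions):

def videos_of_user(user_id, shards):
    """With VideoID-based sharding a user's videos can live on any shard,
    so we query every shard and merge the results centrally."""
    results = []
    for shard in shards:
        results.extend(shard.find_videos_by_user(user_id))   # scatter
    # Gather: rank the aggregated results, e.g. newest first (assumed ranking).
    return sorted(results, key=lambda v: v["creation_date"], reverse=True)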
We can further improve performance by introducing a cache in front of the database servers
to store hot videos.
9. Video Deduplication
With a huge number of users uploading a massive amount of video data, our service will
have to deal with widespread video duplication. Duplicate videos often differ in aspect
ratios or encodings, can contain overlays or additional borders, or can be excerpts from a
longer, original video. The proliferation of duplicate videos can have an impact on many
levels:
1. Data Storage: We could be wasting storage space by keeping multiple copies of the
same video.
2. Caching: Duplicate videos would result in degraded cache efficiency by taking up
space that could be used for unique content.
3. Network usage: Increasing the amount of data that must be sent over the network to
in-network caching systems.
4. Energy consumption: Higher storage, inefficient cache, and network usage will result
in energy wastage.
For the end user, these inefficiencies will be realized in the form of duplicate search results,
longer video startup times, and interrupted streaming.
For our service, deduplication makes the most sense early, while a user is uploading a video, as
compared to post-processing videos later to find duplicates. Inline deduplication will save us a
lot of the resources that would otherwise be used to encode, transfer, and store the duplicate
copy of the video. As soon as a user starts uploading a video, our service can run video
matching algorithms (e.g., Block Matching, Phase Correlation, etc.) to find duplications. If we
already have a copy of the video being uploaded, we can either stop the upload and use the
existing copy or use the newly uploaded video if it is of higher quality. If the newly uploaded
video is a subpart of an existing video or vice versa, we can intelligently divide the video into
smaller chunks so that we only upload the parts that are missing.
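As a much simpler stand-in for the perceptual matching mentioned above (Block Matching, Phase Correlation), here is a sketch of exact-duplicate detection via chunk hashing; it only catches byte-identical chunks, and the chunk size and function names are assumptions:

import hashlib

CHUNK_BYTES = 4 * 1024 * 1024   # assumed 4MB chunks

def chunk_fingerprints(data: bytes):
    """Hash fixed-size chunks of an upload; matching hashes flag candidate duplicates."""
    return [
        hashlib.sha256(data[i:i + CHUNK_BYTES]).hexdigest()
        for i in range(0, len(data), CHUNK_BYTES)
    ]

def missing_chunks(new_fps, known_fps):
    """Indices of chunks the client still needs to upload (everything not seen before)."""
    known = set(known_fps)
    return [i for i, fp in enumerate(new_fps) if fp not in known]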
10. Load Balancing
We can use consistent hashing among our cache servers to distribute load between them.
Because a static hash-based scheme can leave some servers much busier than others (popular
videos create hot spots), a busy server can redirect a client to a less busy server in the same
cache location using HTTP redirections.
However, the use of redirections also has its drawbacks. First, since our service tries to load
balance locally, it leads to multiple redirections if the host that receives the redirection
can’t serve the video. Also, each redirection requires the client to make an additional HTTP
request, which leads to higher delays before the video starts playing back. Moreover, inter-
tier (or cross-data-center) redirections send a client to a distant cache location, because the
higher-tier caches are only present at a small number of locations.
11. Cache
To serve globally distributed users, our service needs a massive-scale video delivery
system. Our service should push its content closer to users using a large number of
geographically distributed video cache servers. We need a strategy that maximizes user
performance and also evenly distributes the load across the cache servers.
We can introduce a cache for the metadata servers to cache hot database rows. We can use
Memcache to cache the data, and application servers can quickly check whether the cache has
the desired rows before hitting the database. Least Recently Used (LRU) can be a reasonable
cache eviction policy for our system. Under this policy, we discard the least recently viewed
row first.
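A compact LRU-cache sketch for the hot metadata rows, built on OrderedDict (the capacity value is an assumption):

from collections import OrderedDict

class LRUCache:
    """Evict the least recently used row when the cache is full."""

    def __init__(self, capacity=10_000):
        self.capacity = capacity
        self._rows = OrderedDict()

    def get(self, key):
        if key not in self._rows:
            return None
        self._rows.move_to_end(key)          # mark as most recently used
        return self._rows[key]

    def put(self, key, row):
        self._rows[key] = row
        self._rows.move_to_end(key)
        if len(self._rows) > self.capacity:
            self._rows.popitem(last=False)   # discard the least recently used row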
How can we build a more intelligent cache? If we go with the 80-20 rule, i.e., 20% of the daily
read volume for videos generates 80% of the traffic, meaning certain videos are so popular
that the majority of people view them, it follows that we can try caching 20% of the daily read
volume of videos and metadata.
12. Content Delivery Network (CDN)
Our service can move popular videos to CDNs:
1. CDNs replicate content in multiple places. There’s a better chance of videos being
closer to the user, and with fewer hops, videos will stream from a friendlier network.
2. CDN machines make heavy use of caching and can mostly serve videos out of memory.
Less popular videos (1-20 views per day) that are not cached by CDNs can be served by our
servers in various data centers.