Lecture 11 — February 14
Lecturer: David Tse Scribe: Junjie J, Jiada L, Yizheng L, Vivek B
11.1 Outline
• Achieving capacity using random codes
• Linear codes
11.2 Recap
In the last couple of lectures we showed that one cannot communicate reliably at a rate
above the capacity. Therefore, we are left with the question of whether we can design codes
whose rate is arbitrarily close to the capacity.
This yields heuristically the same result as we proved rigorously in the last lecture, namely that we cannot communicate reliably above capacity. But the fact that this "sphere-packing" picture gives the correct upper bound suggests that it is a useful way of thinking about how to achieve this upper bound: we need to pack the {0, 1}^n-sphere with 2^{nI(X;Y)} noise spheres with negligible overlap.
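As a concrete illustration (not part of the original notes), the sphere-packing count can be worked out numerically for a binary symmetric channel, where I(X;Y) = 1 - H(p) under a uniform input; the values of p and n below are hypothetical:

```python
from math import log2

def binary_entropy(p):
    """H(p) = -p*log2(p) - (1-p)*log2(1-p), with H(0) = H(1) = 0."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

p = 0.11   # hypothetical crossover probability
n = 1000   # hypothetical block length

# For a BSC(p) with uniform input, I(X;Y) = 1 - H(p).
capacity = 1 - binary_entropy(p)

# Sphere-packing count: each noise sphere contains about 2^{n H(p)}
# sequences, so the {0,1}^n-sphere fits about
# 2^{n(1 - H(p))} = 2^{n I(X;Y)} noise spheres with negligible overlap.
log2_num_spheres = n * capacity
print(f"capacity = {capacity:.3f} bits/use")
print(f"log2(number of noise spheres) = {log2_num_spheres:.1f}")
```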
EE/Stats 376A Lecture 11 — February 14 Winter 2017
Figure 11.1: Sphere-packing picture: codewords x^n(1), x^n(2), x^n(3) inside the {0, 1}^n-sphere of all typical Y^n sequences.
11.3.2 Decoder
For the sake of exposition, we shall fix our message m for the entire discussion below. Suppose we send the random codeword X^n(m) through the channel and receive an output sequence Y^n(m). The output Y^n(m) will fall into one of three cases:
• Case 1: there is a unique m̃ = m for which (X^n(m̃), Y^n(m)) is jointly typical. In this case, the decoder returns m.
Figure 11.2: Coding schema: the received sequence Y^n(m) together with the codewords x^n(m) and x^n(m̃).
• Case 2: there is another m̃ ≠ m for which (X^n(m̃), Y^n(m)) is jointly typical. Here, the decoder declares an error.
• Case 3: there exists no m̃ for which (X^n(m̃), Y^n(m)) is jointly typical. Here too, the decoder declares an error.
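The three cases above amount to a brute-force joint-typicality decoder. A minimal Python sketch (not from the lecture) is below; it replaces the entropy-based typicality definition with a cruder stand-in that checks whether the empirical pair frequencies are within eps of p(x, y), and the codebook and toy distribution are hypothetical:

```python
from collections import Counter

def empirical_joint_type(xs, ys):
    """Empirical distribution of the aligned (x, y) pairs."""
    n = len(xs)
    counts = Counter(zip(xs, ys))
    return {pair: c / n for pair, c in counts.items()}

def jointly_typical(xs, ys, p_xy, eps=0.1):
    """Crude typicality test: every pair frequency is within eps of
    p(x, y). (The lecture defines typicality via empirical entropies;
    this type-based check is a simplified stand-in.)"""
    emp = empirical_joint_type(xs, ys)
    support = set(p_xy) | set(emp)
    return all(abs(emp.get(s, 0.0) - p_xy.get(s, 0.0)) <= eps
               for s in support)

def decode(codebook, y, p_xy, eps=0.1):
    """Implements the three cases: return m if exactly one codeword is
    jointly typical with y (Case 1); otherwise declare an error (None),
    covering Cases 2 and 3."""
    candidates = [m for m, x in codebook.items()
                  if jointly_typical(x, y, p_xy, eps)]
    return candidates[0] if len(candidates) == 1 else None

# Toy example resembling a BSC(0.1) with uniform input.
p_xy = {(0, 0): 0.45, (0, 1): 0.05, (1, 0): 0.05, (1, 1): 0.45}
codebook = {1: [0] * 10 + [1] * 10, 2: [1] * 10 + [0] * 10}
y = [0] * 9 + [1] + [1] * 9 + [0]   # codeword 1 with two bits flipped
print(decode(codebook, y, p_xy))    # → 1 (unique jointly typical codeword)
```

Note that the decoder tests every codeword, which is exactly the exponential cost discussed later in these notes.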
2. Since X^n(m) and X^n(m̃) are picked independently, and Y^n(m) depends only on the input X^n(m) (and the channel), Y^n(m) and X^n(m̃) are independent.
3. From the construction of the codewords, X^n(m̃) is also uniform over the set of typical input sequences.
Using the above three points, for m̃ ≠ m, the probability that (X^n(m̃), Y^n(m)) is a jointly typical pair is approximately 2^{nH(X,Y)} / (2^{nH(X)} · 2^{nH(Y)}) = 2^{-nI(X;Y)}.
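Since the pairwise probability of joint typicality for a wrong message is about 2^{-nI(X;Y)}, a union bound over the 2^{nR} - 1 wrong messages shows the error probability vanishes whenever R < I(X;Y). A quick numeric sketch (the values of n, I(X;Y), and R are hypothetical):

```python
# Union bound on the error contributed by wrong messages:
#   P(some m̃ ≠ m is jointly typical) ≤ (2^{nR} - 1) · 2^{-n I(X;Y)}
# which vanishes whenever R < I(X;Y).
n = 1000      # hypothetical block length
I_xy = 0.5    # hypothetical mutual information, bits per channel use
R = 0.4       # hypothetical code rate, chosen below I(X;Y)

log2_union_bound = n * R - n * I_xy   # log2 of 2^{nR} * 2^{-n I(X;Y)}
print(f"log2 P(error from wrong messages) <= {log2_union_bound:.0f}")
# The bound here is 2^{-100}: astronomically small.
```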
Figure 11.3: Typical x^n sequences versus typical y^n sequences: only a subset of the intersections (black dots) are jointly typical.
• Decoding: The time complexity of the decoder is also exponential in n, since one needs to test every codeword for joint typicality with the output sequence.
k ≤ n(1 − p) =⇒ R = k/n ≤ 1 − p = Capacity.
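For concreteness, the rate bound above with hypothetical numbers (p here is the channel parameter, e.g. an erasure probability):

```python
# Hypothetical numbers for the rate bound k <= n(1 - p):
n = 1000   # block length
p = 0.2    # channel parameter (e.g., erasure probability)

k_max = round(n * (1 - p))   # largest number of information bits
R_max = k_max / n            # corresponding rate, at most 1 - p
print(k_max, R_max)          # → 800 0.8
```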
For general binary symmetric channels, one can prove that random linear codes achieve capacity, using an argument similar to the proof that general random codes achieve capacity. However, there is no known decoding algorithm faster than an exponential brute-force search.
In the upcoming lectures, we will design polar codes (a class of linear codes), which achieve capacity with efficient encoding and decoding, with space/time complexities of order O(n log n). Stay tuned!