DC Slide module-3


Channel-coding Theorem

 Shannon’s second theorem: the channel-coding theorem.
 The inevitable presence of noise in a
channel causes discrepancies (errors)
between the output and input data
sequences of a digital communication
system.
Channel-coding Theorem
 The design goal of channel coding is to increase the resistance
of a digital communication system to channel noise.
 Specifically, channel coding consists of mapping the incoming
data sequence into a channel input sequence and inverse
mapping the channel output sequence into an output data
sequence in such a way that the overall effect of channel noise
on the system is minimized.
Channel-coding Theorem
 The first mapping operation is performed
in the transmitter by a channel encoder,
whereas the inverse mapping operation is
performed in the receiver by a channel
decoder, as shown in the block diagram.
Channel-coding Theorem
 The channel encoder and channel decoder
in Figure are both under the designer’s
control and should be designed to
optimize the overall reliability of the
communication system.
 We may thus view channel coding as the
dual of source coding, in that the former
introduces controlled redundancy to
improve reliability whereas the latter
reduces redundancy to improve efficiency.
Channel-coding Theorem
 For the purpose of our present discussion, it
suffices to confine our attention to block
codes.
 In this class of codes, the message sequence
is subdivided into sequential blocks each k
bits long, and each k-bit block is mapped into
an n-bit block, where n > k.
 The number of redundant bits added by the
encoder to each transmitted block is n – k
bits.
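 To make the block mapping concrete, here is a minimal Python sketch (an added illustration, not part of the original slides) using a toy (n, k) = (3, 1) repetition code; the specific code is an assumption chosen only to show the k-bit to n-bit mapping and the n – k redundant bits.

```python
# Toy block encoder: each k-bit message block is mapped to an n-bit
# codeword (n > k), adding n - k redundant bits per block.
# The (n, k) = (3, 1) repetition code here is purely illustrative.

K, N = 1, 3  # k message bits in, n coded bits out

def encode_block(bits):
    """Map a k-bit block to an n-bit block by repeating it n times."""
    assert len(bits) == K
    return bits * N  # e.g. [1] -> [1, 1, 1]

message = [1, 0, 1, 1]
codeword = []
for i in range(0, len(message), K):
    codeword += encode_block(message[i:i + K])

print(codeword)                            # [1,1,1, 0,0,0, 1,1,1, 1,1,1]
print("redundant bits per block:", N - K)  # n - k = 2
print("code rate k/n:", K / N)             # 1/3
```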
Channel-coding Theorem
 The ratio r = k/n is called the code rate, where, of course, r is less than unity.
 For a prescribed k, the code rate r (and,
therefore, the system’s coding efficiency)
approaches zero as the block length n
approaches infinity.
 The accurate reconstruction of the original
source sequence at the destination requires
that the average probability of symbol error
be arbitrarily low.
Channel-coding Theorem
 Does a channel-coding scheme exist such
that the probability that a message bit will
be in error is less than any positive
number ε (i.e., as small as we want it), and
yet the channel-coding scheme is efficient
in that the code rate need not be too
small?
Channel-coding Theorem
 The decoder delivers decoded symbols to the destination from the same source alphabet S and at the same source rate of one symbol every Ts seconds.
 The discrete memoryless channel has a channel
capacity equal to C bits per use of the channel.
 We assume that the channel is capable of being
used once every Tc seconds.
 Hence, the channel capacity per unit time is C/Tc
bits per second, which represents the maximum
rate of information transfer over the channel.
Channel-coding Theorem
 Shannon’s second theorem, the channel-coding theorem, is stated in two parts as follows:
 1. Let a discrete memoryless source with
an alphabet 𝒮 have entropy H(S) for
random variable S and produce symbols
once every Ts seconds. Let a discrete
memoryless channel have capacity C and
be used once every Tc seconds. Then, if

H(S)/Ts ≤ C/Tc
Channel-coding Theorem
 there exists a coding scheme for which the
source output can be transmitted over the
channel and be reconstructed with an
arbitrarily small probability of error.
 The parameter C/Tc is called the critical rate; when the condition H(S)/Ts ≤ C/Tc is satisfied with the equality sign, the system is said to be signaling at the critical rate.
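 A small numeric check of the theorem’s condition may help; the figures below are assumed values, not taken from the slides.

```python
# Hypothetical numbers, chosen only to illustrate the test H(S)/Ts <= C/Tc.
H_S = 0.8    # source entropy, bits per source symbol (assumed)
Ts = 1e-3    # source emits one symbol every Ts seconds (assumed)
C = 0.5      # channel capacity, bits per channel use (assumed)
Tc = 0.5e-3  # channel is used once every Tc seconds (assumed)

source_rate = H_S / Ts  # information rate of the source: 800 bits/s
critical_rate = C / Tc  # maximum reliable rate: 1000 bits/s

print(f"H(S)/Ts = {source_rate:.0f} bits/s, C/Tc = {critical_rate:.0f} bits/s")
if source_rate <= critical_rate:
    print("Condition holds: arbitrarily reliable transmission is possible.")
else:
    print("Condition fails: reliable reconstruction is not possible.")
```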
Channel-coding Theorem
 Conversely, if

H(S)/Ts > C/Tc

it is not possible to transmit information over the channel and reconstruct it with an arbitrarily small probability of error.
 The channel-coding theorem is the single
most important result of information theory.
 The theorem specifies the channel capacity C
as a fundamental limit on the rate at which
the transmission of reliable error-free
messages can take place over a discrete
memoryless channel.
Channel-coding Theorem
 However, it is important to note two limitations of the
theorem:
 1. The channel-coding theorem does not show us how
to construct a good code. Rather, the theorem should
be viewed as an existence proof in the sense that it
tells us that if the condition

H(S)/Ts ≤ C/Tc

is satisfied, then good codes do exist.


 2. The theorem does not give a precise result for the probability of symbol error after decoding the channel output. Rather, it tells us that the probability of symbol error tends to zero as the block length n of the code increases, again provided that the condition H(S)/Ts ≤ C/Tc is satisfied.
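 The second limitation can be seen empirically. The sketch below (an added illustration, not from the slides) simulates an n-fold repetition code with majority-vote decoding over a binary symmetric channel: the decoded error probability falls as the block length n grows, while the code rate 1/n shrinks toward zero, mirroring the trade-off noted earlier.

```python
import random

random.seed(0)
p = 0.1            # assumed channel crossover (bit-flip) probability
trials = 100_000   # Monte Carlo trials per block length

for n in (1, 3, 5, 7, 9):
    errors = 0
    for _ in range(trials):
        # Transmit one bit as n copies; count how many the channel flips.
        flips = sum(random.random() < p for _ in range(n))
        if flips > n // 2:  # majority vote decodes to the wrong bit
            errors += 1
    print(f"n = {n}: rate = {1/n:.3f}, P(error) ~ {errors / trials:.5f}")
```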
Information Capacity Law
 The information capacity of the channel is
defined as the maximum of the mutual
information between the channel input Xk
and the channel output Yk over all
distributions of the input Xk that satisfy the
power constraint

E[Xk^2] = P

 Let I(Xk;Yk) denote the mutual information between Xk and Yk. We may then define the information capacity of the channel as

C = max{ I(Xk;Yk) : E[Xk^2] = P }  bits per channel use
Information Capacity Law
 In words, maximization of the mutual information I(Xk;Yk) is
done with respect to all probability distributions of the
channel input Xk, satisfying the power constraint E[Xk^2] = P.

 The information capacity of the channel can be expressed in the following equivalent form:

C = B log2(1 + P/(N0B))  bits per second

 where N0B is the total noise power at the channel output, B is the channel bandwidth, and N0 is the power spectral density of the channel noise.


 The information capacity law of the above equation is one of the most remarkable results of Shannon’s information theory.
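 As a quick numerical illustration of the law (the bandwidth, power, and noise-density values below are assumptions for the example, not from the slides):

```python
from math import log2

def capacity(B, P, N0):
    """Information capacity C = B * log2(1 + P / (N0 * B)) in bits/s."""
    return B * log2(1 + P / (N0 * B))

B = 3000.0  # channel bandwidth, Hz (assumed)
N0 = 1e-9   # noise power spectral density, W/Hz (assumed)
P = 3e-5    # average transmitted power, W (assumed)

print(f"SNR P/(N0*B) = {P / (N0 * B):.1f}")    # 10.0
print(f"C = {capacity(B, P, N0):.0f} bits/s")  # ~10378 bits/s
```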
Information Capacity Law
 In a single formula, it highlights most vividly the interplay
among three key system parameters: channel bandwidth,
average transmitted power, and power spectral density of
channel noise.
 Note, however, that the dependence of information capacity C
on channel bandwidth B is linear, whereas its dependence on
signal-to-noise ratio P/(N0B) is logarithmic.

 Accordingly, we may make another insightful statement:


 “It is easier to increase the information capacity of a
continuous communication channel by expanding its
bandwidth than by increasing the transmitted power for a
prescribed noise variance.”
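 This statement can be checked numerically with the same capacity formula; in the sketch below (assumed baseline values), doubling the bandwidth raises C by more than doubling the transmitted power does.

```python
from math import log2

def capacity(B, P, N0):
    """C = B * log2(1 + P / (N0 * B)) in bits/s."""
    return B * log2(1 + P / (N0 * B))

# Assumed baseline: B = 3 kHz, SNR = 10 (same illustrative values as above).
B, P, N0 = 3000.0, 3e-5, 1e-9

print(f"baseline C         = {capacity(B, P, N0):.0f} bits/s")    # ~10378
print(f"double bandwidth B = {capacity(2*B, P, N0):.0f} bits/s")  # ~15510
print(f"double power P     = {capacity(B, 2*P, N0):.0f} bits/s")  # ~13177
```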
