Digital Communication - Information Theory
If we consider an event, there are three conditions of occurrence.
If the event has not occurred, there is a condition of uncertainty.
If the event has just occurred, there is a condition of surprise.
If the event has occurred, a time back, there is a condition of having some information.
These three conditions arise at different times. The difference between these conditions helps us gain knowledge of the probabilities of the occurrence of events.
Entropy
When we observe the possible outcomes of an event and how surprising or uncertain each one would be, we are trying to form an idea of the average information content delivered by the source of the event.
Entropy can be defined as a measure of the average information content per source symbol.
Claude Shannon, the “father of Information Theory”, provided a formula for it as −
$$H = -\sum_{i} p_i \log_b p_i$$
where $p_i$ is the probability of occurrence of character number $i$ in a given stream of characters and $b$ is the base of the logarithm used. Hence, this is also called Shannon's Entropy.
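As an illustration of the formula, here is a minimal Python sketch (not part of the original tutorial; the function name shannon_entropy and the example distributions are ours) that evaluates $H$ for a discrete distribution with an arbitrary logarithm base $b$:

```python
import math

def shannon_entropy(probabilities, base=2):
    """H = -sum(p_i * log_b(p_i)) over symbols with non-zero probability.

    `probabilities` should sum to 1; terms with p_i = 0 contribute nothing,
    following the usual convention 0 * log(0) = 0.
    """
    return -sum(p * math.log(p, base) for p in probabilities if p > 0)

# A fair coin carries 1 bit of information per toss ...
print(shannon_entropy([0.5, 0.5]))   # 1.0
# ... while a heavily biased coin carries much less.
print(shannon_entropy([0.9, 0.1]))   # ~0.469
```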
Mutual Information
Let us consider a channel whose input is $X$, with alphabet $\{x_0, x_1, \ldots, x_{J-1}\}$, and whose output is $Y$, with alphabet $\{y_0, y_1, \ldots, y_{K-1}\}$. Let $H(x)$ denote the entropy of the channel input.
This is the prior uncertainty, assumed before the input is applied. The amount of uncertainty remaining about the channel input after observing the channel output is measured by the Conditional Entropy. Given that a particular output $Y = y_k$ has been observed, it is −
$$H(x \mid y_k) = \sum_{j=0}^{J-1} p(x_j \mid y_k)\,\log_2\!\left[\frac{1}{p(x_j \mid y_k)}\right]$$

This quantity varies with the particular output symbol $y_k$ that is observed. Taking its mean over the output alphabet, with probabilities $p(y_k)$, gives the conditional entropy of the input given the output −

$$H(X \mid Y) = \sum_{k=0}^{K-1} H(X \mid y = y_k)\, p(y_k)$$

$$= \sum_{k=0}^{K-1} \sum_{j=0}^{J-1} p(x_j \mid y_k)\, p(y_k)\, \log_2\!\left[\frac{1}{p(x_j \mid y_k)}\right]$$

$$= \sum_{k=0}^{K-1} \sum_{j=0}^{J-1} p(x_j, y_k)\, \log_2\!\left[\frac{1}{p(x_j \mid y_k)}\right]$$
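The conditional entropy can be evaluated numerically straight from a table of joint probabilities, using $p(x_j \mid y_k) = p(x_j, y_k)/p(y_k)$. The Python sketch below is an illustrative addition; the 2×2 joint distribution is made up purely for the example:

```python
import math

# Hypothetical joint distribution p(x_j, y_k): rows index x_j, columns index y_k.
joint = [
    [0.30, 0.10],
    [0.10, 0.50],
]

def conditional_entropy(joint):
    """H(X | Y) = sum_{j,k} p(x_j, y_k) * log2( 1 / p(x_j | y_k) )."""
    # Marginal p(y_k), obtained by summing each column over j.
    p_y = [sum(row[k] for row in joint) for k in range(len(joint[0]))]
    h = 0.0
    for row in joint:
        for k, p_xy in enumerate(row):
            if p_xy > 0:
                p_x_given_y = p_xy / p_y[k]
                h += p_xy * math.log2(1.0 / p_x_given_y)
    return h

# Uncertainty about the input X that remains after the output Y is observed.
print(conditional_entropy(joint))
```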
Considering both uncertainty conditions, before and after the input is applied, we come to know that the difference, i.e. $H(x) - H(x \mid y)$, must represent the uncertainty about the channel input that is resolved by observing the channel output. This quantity is called the Mutual Information of the channel.
Denoting the Mutual Information as $I(x; y)$, we can write the whole thing in an equation, as follows −
$$I(x; y) = H(x) - H(x \mid y)$$

Mutual information has the following properties −

Mutual information of a channel is symmetric, i.e. $I(x; y) = I(y; x)$.

Mutual information is non-negative, i.e. $I(x; y) \geq 0$.

Mutual information can also be expressed in terms of the entropy of the channel output as $I(x; y) = H(y) - H(y \mid x)$, where $H(y \mid x)$ is the conditional entropy of the output given the input.
Mutual information of a channel is related to the joint entropy of the channel input and the channel output −

$$I(x; y) = H(x) + H(y) - H(x, y)$$

where the joint entropy $H(x, y)$ is defined by
$$H(x, y) = \sum_{j=0}^{J-1} \sum_{k=0}^{K-1} p(x_j, y_k)\,\log_2\!\left(\frac{1}{p(x_j, y_k)}\right)$$
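The identities above are easy to check numerically. The following Python sketch (again an illustrative addition, reusing a made-up joint distribution) computes $H(x)$, $H(y)$ and $H(x, y)$, derives the conditional entropies from them, and verifies that the expressions for $I(x; y)$ agree and are non-negative:

```python
import math

# Hypothetical joint distribution p(x_j, y_k): rows are x_j, columns are y_k.
joint = [
    [0.30, 0.10],
    [0.10, 0.50],
]

def entropy(probs):
    """Shannon entropy (base 2) of a list of probabilities."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

p_x = [sum(row) for row in joint]                                   # marginal of X
p_y = [sum(row[k] for row in joint) for k in range(len(joint[0]))]  # marginal of Y

h_x = entropy(p_x)
h_y = entropy(p_y)
h_xy = entropy([p for row in joint for p in row])                   # joint entropy H(x, y)

# Conditional entropies via the chain rule H(x, y) = H(y) + H(x|y) = H(x) + H(y|x).
h_x_given_y = h_xy - h_y
h_y_given_x = h_xy - h_x

i_xy = h_x - h_x_given_y
print(i_xy)                                      # mutual information in bits
print(abs(i_xy - (h_y - h_y_given_x)) < 1e-12)   # I(x;y) = H(y) - H(y|x)
print(abs(i_xy - (h_x + h_y - h_xy)) < 1e-12)    # I(x;y) = H(x) + H(y) - H(x,y)
print(i_xy >= 0)                                 # non-negativity
```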
Channel Capacity
We have so far discussed mutual information. The channel capacity of a discrete memoryless channel is the maximum average mutual information that can be conveyed in a signaling interval, and it corresponds to the maximum rate at which data can be transmitted reliably over the channel. Formally,

$$C = \max_{p(x_j)} I(X; Y)$$

where the maximum is taken over all possible input probability distributions, and $C$ is measured in bits per channel use.
The channel is fed by a discrete memoryless source. Such a source is discrete as it is considered not for a continuous time interval, but at discrete time intervals. The source is memoryless as it is fresh at each instant of time, without considering the previous values.
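Returning to channel capacity, as a concrete example (not from the original text), the sketch below estimates the capacity of a binary symmetric channel with crossover probability p by maximizing the average mutual information over input distributions with a coarse grid search; for this channel the maximum is attained with equally likely inputs and equals $1 - H(p)$:

```python
import math

def h2(p):
    """Binary entropy function, in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_mutual_information(q, p):
    """I(X; Y) for a binary symmetric channel with crossover probability p,
    when the input symbol 1 is sent with probability q: I = H(Y) - H(Y|X) = H(Y) - H(p)."""
    p_y1 = q * (1 - p) + (1 - q) * p     # probability that the output is 1
    return h2(p_y1) - h2(p)

def bsc_capacity(p, steps=1000):
    """Approximate C = max over input distributions of I(X; Y) by a grid search on q."""
    return max(bsc_mutual_information(i / steps, p) for i in range(steps + 1))

p = 0.1
print(bsc_capacity(p))   # ~0.531 bits per channel use
print(1 - h2(p))         # closed form for comparison
```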