KONGUNADU COLLEGE OF ENGINEERING AND TECHNOLOGY
DEPARTMENT OF ELECTRONICS AND COMMUNICATION ENGINEERING
EC 8501 – DIGITAL COMMUNICATION
UNIT – I
COURSE HANDOUTS
INFORMATION THEORY
1.1 INTRODUCTION:
Information is the heart of a communication system, whether it is analog or digital. Information theory is a mathematical approach to the study of the coding of information, along with its quantification, storage, and communication.
If we consider an event, there are three conditions of occurrence: the event has not yet occurred (uncertainty), it has just occurred (surprise), or it occurred some time back (we possess some information about it). These conditions arise at different times, and the difference between them helps us gain knowledge of the probabilities of occurrence of events.
ENTROPY
When we observe the possibilities of occurrence of an event and ask how surprising or uncertain it would be, we are trying to form an idea of the average information content coming from the source of that event.
Entropy can be defined as a measure of the average information content per source symbol. Claude Shannon, the "father of information theory", gave the formula

H = -Σ_i p_i log_b p_i

where p_i is the probability of occurrence of character number i from a given stream of characters and b is the base of the logarithm used. Hence this is also called Shannon's entropy.
The amount of uncertainty remaining about the channel input after observing the channel output is called the conditional entropy. It is denoted by H(X|Y).
Mutual Information
Let us consider a channel whose input is X and output is Y.
Considering the uncertainty about the input before and after observing the output, the difference H(X) - H(X|Y) must represent the uncertainty about the channel input that is resolved by observing the channel output.
This is called the mutual information of the channel.
Denoting the mutual information as I(X;Y), we can write this as an equation:

I(X;Y) = H(X) - H(X|Y)
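As a numerical illustration of these quantities, the following Python sketch computes H(X), H(X|Y) and I(X;Y) for a small joint distribution; the joint probabilities used are assumed example values, not values taken from this handout.

```python
import math

# Assumed joint distribution p(x, y) for a 2x2 example (illustrative values only).
p_xy = {(0, 0): 0.4, (0, 1): 0.1,
        (1, 0): 0.1, (1, 1): 0.4}

def entropy(probs):
    """Entropy in bits of a list of probabilities."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Marginal distributions p(x) and p(y)
p_x = {x: sum(p for (xi, _), p in p_xy.items() if xi == x) for x in (0, 1)}
p_y = {y: sum(p for (_, yi), p in p_xy.items() if yi == y) for y in (0, 1)}

# Conditional entropy H(X|Y) = -sum over x,y of p(x,y) log2 p(x|y)
H_x_given_y = -sum(p * math.log2(p / p_y[y]) for (x, y), p in p_xy.items() if p > 0)

H_x = entropy(p_x.values())
I_xy = H_x - H_x_given_y        # mutual information I(X;Y) = H(X) - H(X|Y)
print(f"H(X) = {H_x:.3f} bits, H(X|Y) = {H_x_given_y:.3f} bits, I(X;Y) = {I_xy:.3f} bits")
```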
A source from which data is emitted at successive intervals, independently of the previous values, is termed a discrete memoryless source (DMS).
The source is discrete because it is considered not over a continuous time interval but at discrete time instants. It is memoryless because it is fresh at each instant of time, without regard to the previous values.
The code produced by a discrete memoryless source has to be represented efficiently, which is an important problem in communications. For this purpose, code words are used to represent the source symbols.
For example, in telegraphy we use the Morse code, in which the letters are denoted by marks and spaces. The letter E, which is used most often, is denoted by a single mark ".", whereas the rarely used letter Q is assigned the longer sequence "--.-".
Here Sk is the output of the discrete memoryless source and bk is the output of the source encoder, which is represented by 0s and 1s.
Let us assume that the source has an alphabet with K different symbols and that the k-th symbol Sk occurs with probability Pk, where k = 0, 1, ..., K-1.
Let the binary code word assigned to symbol Sk by the encoder have length lk, measured in bits.
Hence, we define the average code-word length of the source encoder as

L = Σ_{k=0}^{K-1} Pk lk
Let us refer to the source coding theorem: "Given a discrete memoryless source of entropy H(S), the average code-word length L for any source encoding is bounded as L ≥ H(S)."
In simpler words, the code word (example: the Morse code for the word QUEUE is --.- ..- . ..- .) is always at least as long as the source word (QUEUE in the example); that is, the number of symbols in the code word is greater than or equal to the number of letters in the source word.
Hence, with Lmin = H(S), the efficiency of the source encoder in terms of the entropy H(S) may be written as

η = H(S) / L
This source coding theorem is called the noiseless coding theorem, as it establishes error-free encoding. It is also called Shannon's first theorem.
The noise present in a channel creates unwanted errors between the input and the output sequences of a digital communication system. For reliable communication the error probability should be very low, typically ≤ 10^-6.
Channel coding therefore involves mapping the incoming data sequence into a channel input sequence, and inverse mapping the channel output sequence back into an output data sequence.
The final target is that the overall effect of the channel noise should be minimized.
The mapping is done by the transmitter, with the help of an encoder, whereas the inverse mapping is done by the decoder in the receiver.
1.3 Discrete Memoryless Channels
The focus in this section is on information transmission through a discrete memoryless
channel rather than on information generation through a DMS. A discrete channel is a
statistical model with an input X and an output Y. During each signaling interval (symbol
period), the channel accepts an input symbol from X, and in response, it generates an output
symbol from Y, generally a noisy version of X. The channel is discrete when the alphabets
of X and Y are both finite. X and Y in all practical channels are random variables. In a discrete
memoryless channel (DMC), the current output symbol depends only on the current input
symbol and not on any of the previous input symbols.
These transition probabilities describe the binary symmetric channel (BSC) with crossover probability p:
Pr( Y = 0 | X = 0 ) = 1 - p
Pr( Y = 0 | X = 1 ) = p
Pr( Y = 1 | X = 0 ) = p
Pr( Y = 1 | X = 1 ) = 1 - p
It is assumed that 0 ≤ p ≤ 1/2. If p > 1/2, then the receiver can swap the output
(interpret 1 when it sees 0, and vice versa) and obtain an equivalent channel
with crossover probability 1 − p ≤ 1/2.
This channel is often used by theorists because it is one of the simplest noisy
channels to analyse. Many problems in communication theory can be reduced to
a BSC.
Conversely, being able to transmit effectively over the BSC can give rise to
solutions for more complicated channels.
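The swapping argument can be seen in a short simulation; the sequence length, seed and crossover probability below are assumed example values.

```python
import random

def bsc(bits, p, seed=0):
    """Pass a bit sequence through a binary symmetric channel with crossover probability p."""
    rng = random.Random(seed)
    return [b ^ (rng.random() < p) for b in bits]

src = random.Random(1)
tx = [src.randint(0, 1) for _ in range(100_000)]   # transmitted bits
p = 0.9                                            # a 'bad' channel with p > 1/2
rx = bsc(tx, p)

raw = sum(t != r for t, r in zip(tx, rx)) / len(tx)
swapped = sum(t != 1 - r for t, r in zip(tx, rx)) / len(tx)
print(f"error rate as received    ~ {raw:.3f}")      # close to p = 0.9
print(f"error rate after swapping ~ {swapped:.3f}")  # close to 1 - p = 0.1
```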
MEASURE OF INFORMATION
The basic goal of a communication system is to transmit and subsequently receive the desired information as efficiently as possible. It is therefore necessary to understand the concept of information and the ways it can be measured. The definition of information is based on the idea of lack of information: the less information one has, the greater is the information to be gained. Thus information is a measure of uncertainty, i.e. the lower the possibility of occurrence of a certain message, the higher is its information content.
For example, consider the weather forecast of a particular desert city stated in three ways:
i) It would be hot and sunny.
ii) There would be scattered rain.
iii) There would be a cyclonic storm.
The amount of information received about the city is totally different for the three messages. The first message contains very little information, because the weather of a desert city in summer is expected to be hot and sunny most of the time. The second message, forecasting scattered rain, contains some more information, because it is not an event that occurs often. The forecast of a cyclonic storm contains even more information compared to the second message, because the third forecast is the rarest event in that city. Hence, on a conceptual basis, the amount of information received from the knowledge of occurrence of an event may be related to the probability of occurrence of that event.
If the uncertainty in choosing a particular message is high, the message has high information. Thus information can also be defined as the choice of one message out of a finite set of messages.
A practical source in a communication system is a device which produces messages, and it can be either analog or discrete. In this chapter only discrete sources are discussed, since an analog source can be transformed into a discrete source. A discrete information source has only a finite set of symbols, called the source alphabet, and the elements of the set are called symbols or letters.
If any message contains redundancy, repetition, or periodicity, the efficiency of the system is reduced. It can be improved only if the redundancy is removed.
The mathematical measure of information should be a function of the probability of the outcome, and it should satisfy the following:
i) The information should be proportional to the uncertainty of the outcome.
The successive symbols are statistically independent, i.e. each message emitted from the source is independent of the previous messages.
Consider a source emitting m possible symbols x1, x2, x3, ..., xm whose probabilities of occurrence are p(x1), p(x2), ..., p(xm).

The total probability is

P(x1) + P(x2) + ... + P(xm) = Σ_{i=1}^{m} p(x_i) = 1        (2)

Over a long interval in which L messages are generated, the number of occurrences of message x_i is p(x_i)L.

The amount of information in one message x1 is log(1/p(x1)); thus the total amount of information in all the x1 messages is

p(x1) L log(1/p(x1))        (3)

The amount of information in all L messages is

I = p(x1) L log(1/p(x1)) + p(x2) L log(1/p(x2)) + ... + p(xm) L log(1/p(xm))        (4)

The average information per message, or entropy, is given by

H = I/L = Σ_{k=1}^{m} p(x_k) log(1/p(x_k)) = -Σ_{k=1}^{m} p(x_k) log p(x_k)        (5)
If a source emits a sequence of n symbols, the total information to be transmitted is nH bits. Let us assume the source produces r symbols/sec; the time taken to deliver the n symbols is then

Tb = n / r

The average rate at which the information must be transferred is called the information rate and is given by

R = nH / Tb = nH / (n/r) = rH bits/sec        (6)
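Both H and R = rH can be evaluated directly; the symbol probabilities and the symbol rate in the sketch below are assumed example values.

```python
import math

probs = [0.4, 0.3, 0.2, 0.1]   # assumed symbol probabilities (must sum to 1)
r = 1000                       # assumed symbol rate, symbols/second

H = -sum(p * math.log2(p) for p in probs if p > 0)   # entropy, bits/symbol
R = r * H                                            # information rate, bits/second

print(f"H = {H:.3f} bits/symbol")    # about 1.846 bits/symbol for these probabilities
print(f"R = {R:.1f} bits/second")    # about 1846.4 bits/second
```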
Let us consider two sources of equal entropy H, generating r1 and r2 messages per second respectively. The first source will transmit information at a rate R1 = r1H and the second source at a rate R2 = r2H.
If r2 > r1 then R2 > R1, i.e. more information is transmitted by the second source in a given period, placing greater demands on the communication channel. Thus a source is not characterized by its entropy alone but also by its rate of information.
From equation (5) it can be concluded that the entropy depends upon the symbol probabilities p_i and also on the alphabet size M. Moreover, H satisfies

0 ≤ H ≤ log2 M

H = 0 corresponds to no uncertainty or freedom of choice; this occurs when the source always emits only one symbol.
H = log2 M corresponds to maximum freedom of choice or uncertainty, which occurs when P_i = 1/M for all i, i.e. when all the symbols are equiprobable.
In order to study the variation of H between these two extremes, consider a binary source that emits two symbols, one with probability p and the other with probability q.
1.7. ENTROPY IN BINARY CASE
The symbol '0' is transmitted with probability p and the symbol '1' with probability q,
therefore p + q = 1, or q = 1 - p.

We know the entropy

H(X) = -Σ_{i=1}^{2} P(x_i) log P(x_i) = -[p log p + q log q] = -[p log p + (1-p) log(1-p)]

To maximize H(X), differentiate the above equation with respect to p and equate it to zero:

dH(X)/dp = -[log p - log(1-p)] = 0

so that p = 1 - p, i.e. p = 1/2.

At p = 0 and p = 1, H(X) = 0. The maximum value of H(X) occurs at p = 1/2; substituting p = 1/2,

H(X) = -(1/2) log2(1/2) - (1/2) log2(1/2) = log2 2 = 1 bit/symbol.
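The variation of H(X) with p can also be checked numerically; the short sketch below evaluates the binary entropy at a few values of p and shows the maximum of 1 bit/symbol at p = 1/2.

```python
import math

def H_binary(p):
    """Entropy of a binary source with symbol probabilities p and 1 - p."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

for p in (0.0, 0.1, 0.3, 0.5, 0.7, 0.9, 1.0):
    print(f"p = {p:.1f}  H(X) = {H_binary(p):.3f} bits/symbol")
# The values rise from 0 to a maximum of 1.000 at p = 0.5 and fall back to 0.
```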
PROPERTIES OF ENTROPY
Consider a joint event (X, Y) whose outcomes are the pairs x_i y_j, i = 1, ..., m and j = 1, ..., n, with joint probabilities p(x_i y_j).

The marginal entropies are

H(X) = -Σ_{i=1}^{m} p(x_i) log p(x_i),   where p(x_i) = Σ_{j=1}^{n} p(x_i y_j)

H(Y) = -Σ_{j=1}^{n} p(y_j) log p(y_j),   where p(y_j) = Σ_{i=1}^{m} p(x_i y_j)

Similarly, the joint entropy is

H(XY) = -Σ_{i=1}^{m} Σ_{j=1}^{n} p(x_i y_j) log p(x_i y_j)

where H(X) and H(Y) are the marginal entropies of X and Y, and H(XY) is the joint entropy of X and Y.

The conditional probability p(X/Y) is given by p(x_i/y_j) = p(x_i y_j) / p(y_j).

We know that y_j may occur in conjunction with x1, x2, ..., xm. Thus

H(X/Y) = -Σ_{j=1}^{n} p(y_j) Σ_{i=1}^{m} p(x_i/y_j) log p(x_i/y_j)
       = -Σ_{i=1}^{m} Σ_{j=1}^{n} p(x_i y_j) log p(x_i/y_j)

Similarly,

H(Y/X) = -Σ_{i=1}^{m} Σ_{j=1}^{n} p(x_i y_j) log p(y_j/x_i)

Writing p(x_i y_j) = p(x_i/y_j) p(y_j) in the joint entropy,

H(XY) = -Σ_{i=1}^{m} Σ_{j=1}^{n} p(x_i y_j) [log p(x_i/y_j) + log p(y_j)]
      = H(X/Y) - Σ_{j=1}^{n} [Σ_{i=1}^{m} p(x_i y_j)] log p(y_j)
      = H(X/Y) - Σ_{j=1}^{n} p(y_j) log p(y_j)
      = H(X/Y) + H(Y)

If X and Y are independent, p(x_i y_j) = p(x_i) p(y_j), so that

H(XY) = -Σ_{i=1}^{m} Σ_{j=1}^{n} p(x_i y_j) [log p(x_i) + log p(y_j)]
      = -Σ_{i=1}^{m} p(x_i) log p(x_i) - Σ_{j=1}^{n} p(y_j) log p(y_j)
      = H(X) + H(Y)
Note that H(x) for a continuous source is not an absolute measure of information, but only a relative one, subject to a coordinate system which can be changed.
If the continuous signal x(t) is limited to a finite range of values, then H(x) is maximum when p(x) is uniformly distributed over that range, i.e. when p(x) = 1/(2α) over an interval of width 2α.
Compare this with the discrete case, where the entropy is maximum when the probabilities of all the symbols are equal.
SOURCE CODING THEOREM
The conversion of the discrete message sequence into a sequence of binary symbols is known as source coding. The main problem of the coding technique is the development of an efficient source encoder. The objective of source coding is to minimize the average bit rate required for representation of the source by reducing the redundancy of the source. The basic requirements of the source encoder are:
i) The code words produced by the source encoder should be in binary form.
ii) The source code should be uniquely decodable, so that the original source sequence can be reconstructed perfectly from the encoded binary sequence.
Let us consider a discrete memoryless source encoder as shown below. The source output x1, x2, ..., xn is converted by the source encoder into binary values of 0s and 1s.
Code length and code efficiency
Let X be a discrete memoryless source with finite entropy H(X) and an alphabet X = {x1, x2, x3, ..., xm} with corresponding probabilities of occurrence P(x1), P(x2), ..., P(xm).
The binary code word assigned to symbol x_i by the encoder has length n_i, measured in bits. The length of a code word is the number of binary digits in the code word. The average code-word length L per source symbol is given by

L = Σ_{i=1}^{m} P(x_i) n_i

where L is the average number of bits per symbol used in the source coding process. The code efficiency is defined as

η = Lmin / L

where Lmin is the minimum possible value of L. When η approaches unity, the code is said to be efficient.
The code redundancy is defined as γ = 1 - η.
The source coding theorem states that for a source X with entropy H(X), the average code length L per symbol is bounded as

L ≥ H(X)

and Lmin = H(X), so that the code efficiency can be written as η = H(X) / L.
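As an illustration of these definitions, the sketch below evaluates L, η and γ for an assumed source and an assumed set of code-word lengths (all values are illustrative only).

```python
import math

# Assumed source symbols with probabilities p_i and assigned code-word lengths n_i (bits)
probs   = [0.5, 0.25, 0.125, 0.125]
lengths = [1, 2, 3, 4]

H = -sum(p * math.log2(p) for p in probs)          # source entropy H(X), bits/symbol
L = sum(p * n for p, n in zip(probs, lengths))     # average code-word length, bits/symbol
eta = H / L                                        # efficiency, with Lmin = H(X) for binary coding
gamma = 1 - eta                                    # code redundancy

print(f"H = {H:.3f}, L = {L:.3f}, efficiency = {eta:.3f}, redundancy = {gamma:.3f}")
```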
Example for Source Encoding Theorem
Case (a)
Let us consider a discrete binary source that has two outputs P and Q with probabilities 0.6 and 0.4 respectively. The source output is connected to a binary channel that can transmit 2 symbols per second. Assume the source rate is r = 2.5 symbols/second.
Since the total probability is 1, the channel capacity is 1 bit/symbol, so the channel can carry at most 2 bits/second.
The source entropy is H(x) = -p(x1) log2 p(x1) - p(x2) log2 p(x2)
H(x) = -0.6 log2 0.6 - 0.4 log2 0.4 = 0.442 + 0.529 = 0.971 bits/symbol.
(We know log2 x = log10 x / log10 2 = 3.3219 log10 x.)
The source information rate = rH(x) = 2.5 × 0.971 ≈ 2.43 bits/second.
In this case the source information rate is greater than the channel capacity rate of 2 bits/second, so transmission is not possible.
Case (b)
If the source probabilities are p1 = 0.2 and p2 = 0.8, with the same source rate of 2.5 symbols/second, then
H(x) = -0.2 log2 0.2 - 0.8 log2 0.8 = 0.464 + 0.258 = 0.722 bits/symbol.
The source information rate = rH(x) = 2.5 × 0.722 ≈ 1.80 bits/second.
In this case the source information rate is less than the channel capacity rate, so transmission is possible.
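The two cases can be checked numerically; the sketch below simply re-evaluates the entropies and information rates quoted above.

```python
import math

def H(probs):
    """Source entropy in bits/symbol."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

r = 2.5               # source rate, symbols/second (as assumed above)
capacity_rate = 2.0   # channel: 2 symbols/second x 1 bit/symbol

Ha = H([0.6, 0.4])    # case (a)
Hb = H([0.2, 0.8])    # case (b)
print(f"case (a): H = {Ha:.3f} bits/symbol, rH = {r * Ha:.3f} bits/s "
      f"({'not possible' if r * Ha > capacity_rate else 'possible'})")
print(f"case (b): H = {Hb:.3f} bits/symbol, rH = {r * Hb:.3f} bits/s "
      f"({'not possible' if r * Hb > capacity_rate else 'possible'})")
```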
MUTUAL INFORMATION AND CHANNEL CAPACITY
The mutual information of a channel is the average information gained by the receiver about the transmitted symbol when the channel output is observed. The mutual information can be represented by
I(X, Y) = H(X) - H(X/Y)
where H(X) = entropy of the source
H(X/Y) = entropy of the source when the received symbol is known.
i.e. I(X, Y) = -Σ_{i=1}^{m} p(x_i) log p(x_i) + Σ_{i=1}^{m} Σ_{j=1}^{n} p(x_i y_j) log p(x_i/y_j)

We know p(x_i) = Σ_{j=1}^{n} p(x_i y_j), hence

I(X, Y) = Σ_{i=1}^{m} Σ_{j=1}^{n} p(x_i y_j) log p(x_i/y_j) - Σ_{i=1}^{m} Σ_{j=1}^{n} p(x_i y_j) log p(x_i)

I(X, Y) = Σ_{i=1}^{m} Σ_{j=1}^{n} p(x_i y_j) log2 [ p(x_i/y_j) / p(x_i) ]

Similarly, I(Y, X) = H(Y) - H(Y/X), so that

I(Y, X) = Σ_{i=1}^{m} Σ_{j=1}^{n} p(x_i y_j) log2 [ p(y_j/x_i) / p(y_j) ]
If the signal is band limited to ω Hz and the samples are taken at 2ω samples per second, then the rate of information transmission is 2ω times the information conveyed per sample.
For maximizing R, H(Y) should be maximized. Since the average power of the received signal is signal plus noise, i.e. S + N, H(Y) is maximum if y(t) is also a Gaussian random process, because the noise is Gaussian. This gives

C = ω log2(1 + S/N) bits/second

where ω is the channel bandwidth. We know the noise power in bandwidth ω, for a noise power spectral density η, is

N = ηω

therefore C = ω log2(1 + S/(ηω)) bits/second.
The trade-off between S/N and Bandwidth
We know the channel capacity in terms of bandwidth is

C = ω log2(1 + S/(ηω))

For a noiseless channel S/N → ∞, and the channel capacity is infinite. Practically, however, the channel capacity does not become infinite as the bandwidth approaches infinity, because the noise power N = ηω also increases with the bandwidth. Therefore the channel capacity approaches an upper limit with increasing bandwidth, which can be proved as follows:

C = ω log2(1 + S/N) = ω log2(1 + S/(ηω))
  = (S/η) (ηω/S) log2(1 + S/(ηω))
  = (S/η) log2 (1 + x)^(1/x),   where x = S/(ηω)

We know lim_{x→0} (1 + x)^(1/x) = e, and when ω → ∞, x → 0. Hence

C∞ = lim_{ω→∞} C = (S/η) log2 e = 1.44 S/η = Rmax

This equation represents an upper limit on the channel capacity with increasing bandwidth, where the channel capacity remains finite. Hence, the channel capacity of a channel of infinite bandwidth with Gaussian noise is finite.
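The approach to this limit can be seen numerically; S and η in the sketch below are assumed example values.

```python
import math

S = 1.0        # signal power (assumed, arbitrary units)
eta = 1e-3     # noise power spectral density (assumed)

def capacity(bw):
    """Shannon capacity C = B log2(1 + S/(eta*B)) in bits/second."""
    return bw * math.log2(1 + S / (eta * bw))

for bw in (1e2, 1e3, 1e4, 1e5, 1e6):
    print(f"B = {bw:10.0f} Hz  ->  C = {capacity(bw):8.1f} bits/s")

print(f"upper limit 1.44*S/eta = {1.44 * S / eta:.1f} bits/s")
```

As the bandwidth grows, the printed capacities level off near 1.44 S/η instead of growing without bound.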
The input and output alphabets need not be of the same size. Due to the coding process, the number of output symbols may be greater than the number of input symbols, or it may be less.
A discrete memoryless channel may be arranged in matrix form (the channel matrix), whose entries are the transition probabilities p(y_j/x_i), with

Σ_{j=0}^{n-1} p(y_j/x_i) = 1   for all i

The joint probability distribution of the random variables X and Y is given by p(x_i, y_j) = p(y_j/x_i) p(x_i).
The marginal probability distribution of the output random variable Y is obtained by averaging out the input, i.e.

p(y_j) = Σ_{i=0}^{m-1} p(y_j/x_i) p(x_i)   for j = 0, 1, ..., n-1
For a noiseless channel, each transmitted symbol is received correctly, so the conditional probability matrix is an identity matrix:

P(X/Y) = | 1 0 0 ...... 0 |
         | 0 1 0 ...... 0 |
         | 0 0 1 ...... 0 |
         | 0 0 0 ...... 1 |        (2)

We know

H(XY) = -Σ_{i=1}^{m} Σ_{j=1}^{n} p(x_i y_j) log p(x_i y_j) = -Σ_{i=1}^{n} p(x_i y_i) log p(x_i y_i)

since only the terms with j = i are non-zero.
When the channel has noise it becomes difficult to reconstruct the transmitted signal faithfully. Assume the channel noise is a stationary process, in the sense that successive symbols are perturbed independently.
In other words, for such a channel there is no correlation between the input and output symbols; the joint probability matrix then takes the following form.
P(X, Y) = | p1 p1 ........ p1 |
          | p2 p2 ........ p2 |
          | ...               |
          | pm pm ........ pm |

i.e. p(x_i y_j) = p_i for every j. Since all the entries of the joint matrix must sum to one, with n columns we have

Σ_{i=1}^{m} p_i = 1/n,   so that   p(y_j) = Σ_{i=1}^{m} p_i = 1/n   for j = 1 to n,   and   p(x_i) = n p_i

If p(x_i y_j) = p_i, then

p(y_j/x_i) = p(x_i y_j) / p(x_i) = p_i / (n p_i) = 1/n = p(y_j)

and also p(x_i y_j) = p(x_i) p(y_j). Thus

H(Y/X) = -Σ_{i=1}^{m} Σ_{j=1}^{n} p(x_i y_j) log p(y_j/x_i)
       = -Σ_{i=1}^{m} Σ_{j=1}^{n} p(x_i) p(y_j) log p(y_j)
       = -Σ_{j=1}^{n} p(y_j) log p(y_j)   since Σ_{i=1}^{m} p(x_i) = 1
       = H(Y)

Similarly H(X/Y) = H(X).
If the channel is represented as shown in Figure 4, the joint probability matrix has identical rows:

P(X, Y) = | p1 p2 ........ pn |
          | p1 p2 ........ pn |
          | ...               |
          | p1 p2 ........ pn |

i.e. p(x_i y_j) = p_j for every i. Then

Σ_{j=1}^{n} p_j = 1/m,   p(x_i) = Σ_{j=1}^{n} p_j = 1/m   for i = 1, 2, ..., m,   and   p(y_j) = m p_j   for j = 1, 2, ..., n        (3)

(Figure 4: Noisy channel)

Equation (3) shows that x_i and y_j are independent for all i and j, i.e. no information is transmitted through the channel: I(X, Y) = 0.
From the above theory it is concluded that if each row of the joint probability matrix consists of the same element, or each column consists of the same element, then no information is transmitted; such a channel is referred to here as a noisy channel.
A channel is said to be symmetric if the rows and the columns of the channel matrix P(Y/X) are identical except for a permutation of their elements. For example,

P(Y/X) = | 0.5 0.5 |
         | 0.4 0.6 |
         | 0.3 0.7 |

is not a symmetric channel, since its rows are not permutations of one another. In contrast,

P(Y/X) = | p   1-p |  =  | p q |      where q = 1 - p
         | 1-p  p  |     | q p |

is symmetric, since each row and each column contains the same two elements p and q.
For a symmetric channel, H(Y/X = x_i) = -Σ_{j=1}^{n} p(y_j/x_i) log p(y_j/x_i) = A is the same constant for every input symbol x_i, therefore

I(X, Y) = H(Y) - H(Y/X) = H(Y) - Σ_{i} p(x_i) A = H(Y) - A
Cascaded Channel
If two binary symmetric channels are cascaded as shown in Figure 5, the message can reach Z0 from X0 in two ways: correctly through both channels, with probability (1-p)(1-p), or with an error in both, with probability p·p, so that P(Z0/X0) = (1-p)^2 + p^2 = 1 - 2pq.
Hence the overall error probability is q1 = pq + qp = 2pq, and

P(Z/X) = | 1 - 2pq   2pq     |  =  | p1 q1 |
         | 2pq       1 - 2pq |     | q1 p1 |

Since 2pq ≥ p for p ≤ 1/2, the cascade is noisier than a single channel; the channel capacity of two cascaded BSCs is therefore less than that of a single BSC.
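A small numerical illustration of this conclusion, using the standard BSC capacity formula C = 1 - H_b(p) and an assumed crossover probability p:

```python
import math

def Hb(p):
    """Binary entropy function."""
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity(p):
    """Capacity of a BSC with crossover probability p (standard result C = 1 - Hb(p))."""
    return 1 - Hb(p)

p = 0.1                  # assumed crossover probability of each BSC
q = 1 - p
p_cascade = 2 * p * q    # crossover probability of two cascaded BSCs: q1 = pq + qp = 2pq

print(f"single BSC:   p = {p:.3f},  C = {bsc_capacity(p):.3f} bits/use")
print(f"cascaded BSC: p = {p_cascade:.3f},  C = {bsc_capacity(p_cascade):.3f} bits/use")
# The cascade has the larger crossover probability (2pq >= p for p <= 1/2),
# so its capacity is lower than that of a single BSC.
```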
CODING
Coding is a procedure for mapping a given set of messages (m1, m2, ..., mn) into a new set of encoded messages (C1, C2, ..., Cn) in such a way that the transformation is one-to-one for each message. This is known as source coding. It is also possible to devise codes for special purposes, such as secrecy or minimum probability of error, without regard to the efficiency of transmission. This is known as channel coding.
Advantages of coding
g) Irreducible: when no encoded word can be obtained from another by the addition of more letters, the code is said to be irreducible. When a code is irreducible it is also uniquely decipherable, but the reverse is not true.
For example, the sequence
C1 C3 C2 C2 C1 C1 C2 C3 C3 C2
is uniquely decoded. Thus it can be said that when a code is irreducible, it is also uniquely decipherable.
Coding Efficiency
Let M be the number of symbols in the encoding alphabet. Let there be N messages {x1, x2, ..., xN} with probabilities (P1, P2, ..., PN), and let n_i be the number of symbols in the code word for the i-th message. The average length of a code word is given by

L = Σ_{i=1}^{N} n_i P_i   letters/message

and the coding efficiency is

η = Lmin / L

Here log2 M is the maximum average information that can be associated with each letter, in bits/letter.
SHANNON-FANO CODING
This method of coding is directed towards constructing reasonably efficient separable binary codes.
The sequence Ck of binary digits of length nk assigned to each message xk should satisfy the following condition:
i) No code word Ck can be obtained from another by adding more binary digits to the shorter sequence (the prefix condition).
The message set is first partitioned into the two most nearly equiprobable subsets X1 and X2.
Assign '0' to each message contained in X1 and '1' to each message contained in X2, or vice versa.
The same procedure is repeated for the subsets of X1 and X2, i.e. X1 is partitioned into X11 and X12, and X2 is partitioned into X21 and X22. The code words so far are
X11 = 00, X12 = 01, X21 = 10, X22 = 11
This procedure is continued until each subset contains only one message. Note that each digit '0' or '1' in each partitioning of the probability space appears with more or less equal probability, independently of the previous or subsequent partitionings. Hence p(0) and p(1) are also approximately equiprobable. A sketch of this partitioning procedure is given below.
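A minimal sketch of this partitioning procedure; the symbols and probabilities are assumed example values, and ties in the split are broken arbitrarily.

```python
def shannon_fano(symbols):
    """symbols: list of (symbol, probability) pairs. Returns a dict symbol -> binary code."""
    codes = {}

    def split(items, prefix):
        if len(items) == 1:
            codes[items[0][0]] = prefix or "0"
            return
        total = sum(p for _, p in items)
        running, k, best_diff = 0.0, 1, float("inf")
        # find the split giving the two most nearly equiprobable subsets
        for i in range(1, len(items)):
            running += items[i - 1][1]
            diff = abs(2 * running - total)
            if diff < best_diff:
                best_diff, k = diff, i
        split(items[:k], prefix + "0")   # assign '0' to the first subset ...
        split(items[k:], prefix + "1")   # ... and '1' to the second, then recurse

    split(sorted(symbols, key=lambda sp: sp[1], reverse=True), "")
    return codes

print(shannon_fano([("A", 0.4), ("B", 0.2), ("C", 0.2), ("D", 0.1), ("E", 0.1)]))
```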
HUFFMAN CODING
The Huffman coding method leads to the lowest possible value of the average code-word length L for a given M, resulting in maximum efficiency η. Hence it is also known as the minimum-redundancy code or the optimum code. The procedure for obtaining a Huffman code is as follows (a sketch of the procedure appears after this list):
Combine the last two (least probable) messages into one message by adding their probabilities (for binary coding; for µ-ary coding, combine the last µ messages into one).
Assign '0' and '1' to these last two messages as the first digits in their code sequences.
Go back and assign '0' and '1' to the second digit for the two messages that were combined in the previous step.
It should also be noted that the Huffman code is uniquely decodable, so that a sequence of coded messages can be decoded without any ambiguity.
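A minimal sketch of this combining procedure, with assumed example probabilities; the codes are built by prepending '0' and '1' as the two least probable groups are merged.

```python
import heapq

def huffman(symbols):
    """symbols: list of (symbol, probability) pairs. Returns a dict symbol -> binary code."""
    heap = [(p, i, {s: ""}) for i, (s, p) in enumerate(symbols)]  # (prob, tie-breaker, partial codes)
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        p1, _, c1 = heapq.heappop(heap)                       # the two least probable messages
        p2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}          # prepend '0' to one group ...
        merged.update({s: "1" + c for s, c in c2.items()})    # ... and '1' to the other
        heapq.heappush(heap, (p1 + p2, count, merged))        # combine into one message
        count += 1
    return heap[0][2]

source = [("A", 0.4), ("B", 0.2), ("C", 0.2), ("D", 0.1), ("E", 0.1)]
codes = huffman(source)
avg_len = sum(p * len(codes[s]) for s, p in source)
print(codes, f"average length = {avg_len:.2f} bits/symbol")   # 2.20 bits/symbol here
```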
KONGUNADU COLLEGE OF ENGINEERING AND TECHNOLOGY
DEPARTMENT OF ELECTRONICS AND COMMUNICATION ENGINEERING
EC 8501 – DIGITAL COMMUNICATION
Design a modulation system that transmits one bit per sample and uses a uniform step size for the quantizer.
DM Transmitter block diagram
LPC Analyzer: analyzes the speech signal by estimating the formants, removing their effects from the speech signal, and estimating the intensity and frequency of the remaining buzz.
The analyzer determines the LP coefficients for the synthesis filter.
Voiced/unvoiced decision: determines whether the given speech segment is voiced or unvoiced.
Pitch analysis: estimates the fundamental frequency. To estimate a fundamental frequency as low as 40 Hz, at least 50 ms of the speech signal must be analyzed, since autocorrelation methods need at least two pitch periods to detect the pitch.
Speech Synthesis using LPC
Explain the principle of the DPCM system along with the transmitter and receiver.
DPCM Transmitter
Identify the type of modulation scheme which uses the principle of the continuously variable slope technique.
ADM Transmitter block diagram
• The step-size control logic produces the step size for each incoming bit.
• The accumulator builds up the staircase waveform.
• The LPF smoothens out the staircase waveform and reconstructs the original signal.
Step waveforms for ADM
5) Summarize the features of the ADPCM system along with the encoder and decoder diagrams.
ADPCM Transmitter block diagram
Block Diagram explanation
An 8-bit PCM value is input and converted to a 14-bit linear format.
The predicted value is subtracted from this linear value to generate a difference signal.
Adaptive quantization is performed on this difference, producing the 4-bit ADPCM
value to be transmitted.
The adaptive predictor computes a weighted average of the last six
dequantized difference values and the last two predicted values.
The coefficients of the filter are updated based on their previous values, the current
difference value, and other derived values.
From the above equation, the second term (the ISI term) will be zero if the received pulse p(t) is controlled such that

p(nTb) = 1 for n = 0 and p(nTb) = 0 for n ≠ 0

If p(t) satisfies the above condition, then the received signal is free from ISI. This is the time-domain condition for perfect reception in the absence of noise, where p(nTb) represents the samples of p(t) taken at the decision instants.
For n = 0, the product a_k a_{k-n} = a_k^2 has two possible values, 0 and 1, each occurring with probability 1/2, so E[a_k a_{k-n}] = 1/2. For n ≥ 1, the product a_k a_{k-n} has four equally likely values 0, 0, 0 and 1, hence E[a_k a_{k-n}] = 1/4.
Write short notes on how eye pattern is used to study the performance of a digital
transmission system
Frequency response
Tb is a delay element with frequency response e^(-j2πfTb).
The delay-line filter therefore has frequency response 1 + e^(-j2πfTb).
The delay-line filter is connected in cascade with the ideal Nyquist channel; hence the overall frequency response is given by the product H(f) = H_Nyquist(f) [1 + e^(-j2πfTb)].
2) Differential Encoder
• A precoder is used in the duobinary encoder to avoid error propagation.
Output of the precoder: d_k = b_k ⊕ d_{k-1} (modulo-2 addition), so d_k changes state when b_k = 1 and remains unchanged when b_k = 0.
The sequence {d_k} is applied to a level shifter; the output sequence {a_k} of the level shifter is bipolar:
d_k = 1 gives a_k = +1
d_k = 0 gives a_k = -1
The sequence {a_k} is then applied to the duobinary encoder, and the output of the summer is c_k = a_k + a_{k-1}, with
c_k = 0 if b_k = 1
c_k = ±2 if b_k = 0
(these relations are verified in the short sketch below)
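A short check of these relations, with an assumed input bit sequence and assumed initial reference values for d and a:

```python
# Verify: d_k = b_k XOR d_(k-1), a_k = +1/-1, c_k = a_k + a_(k-1), and c_k = 0 exactly when b_k = 1.
b = [0, 1, 1, 0, 1, 0, 0, 1]        # assumed input bit sequence
d_prev, a_prev = 0, -1              # assumed initial reference bit and level

for bk in b:
    dk = bk ^ d_prev                # precoder (modulo-2 addition)
    ak = 1 if dk == 1 else -1       # level shifter
    ck = ak + a_prev                # duobinary encoder (summer) output
    recovered = 1 if ck == 0 else 0 # decision rule: c_k = 0 -> 1, c_k = +/-2 -> 0
    assert recovered == bk          # each bit is recovered from c_k alone
    d_prev, a_prev = dk, ak

print("duobinary precoding check passed: b_k recovered from c_k without error propagation")
```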
Decoder
It is one of the algorithms to change the tap weights of the adaptive filter recursively.
The tap weights are adapted by this algorithm as follows.
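The update recursion itself is not reproduced above; assuming the standard LMS rule w(n+1) = w(n) + μ e(n) x(n), a minimal sketch of one adaptation step (with illustrative tap inputs, desired value and step size) is:

```python
def lms_update(weights, x, desired, mu=0.01):
    """One LMS adaptation step for an adaptive filter with tap-input vector x (assumed rule)."""
    y = sum(w * xi for w, xi in zip(weights, x))                   # filter (equalizer) output
    e = desired - y                                                # error signal
    new_weights = [w + mu * e * xi for w, xi in zip(weights, x)]   # recursive tap-weight update
    return new_weights, e

weights = [0.0, 0.0, 0.0]        # initial tap weights
x = [1.0, -0.5, 0.2]             # assumed tap inputs (received samples)
desired = 1.0                    # assumed training (desired) symbol
weights, err = lms_update(weights, x, desired)
print(weights, err)
```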
KONGUNADU COLLEGE OF ENGINEERING AND TECHNOLOGY
DEPARTMENT OF ELECTRONICS AND COMMUNICATION ENGINEERING
EC 8501 – DIGITAL COMMUNICATION
UNIT – IV - Digital modulation schemes
Draw the block diagram of QPSK transmitter and receiver with signal space diagram,
bandwidth and also calculate bit error probability.
QPSK transmitter
QPSK Receiver
Explain BPSK expressions, signal space, spectral characteristics, waveform generation and reception.
BPSK Transmitter
The baseband signal b(t) is applied as the modulating signal to a balanced modulator.
The NRZ level encoder converts the binary sequence into a bipolar NRZ signal.
Output of the BPSK modulator: s(t) = b(t) √Ps cos(2πf0t)
BPSK Receiver
Phase shift: depending on the channel, the received signal undergoes a phase shift, s(t) = b(t) √Ps cos(2πf0t + θ).
Square-law device: used for carrier separation; squaring produces a component 1/2 + 1/2 cos 2(2πf0t + θ).
BPF: passes the component centered at 2f0; the output of the filter is cos 2(2πf0t + θ).
Frequency divider: the filtered signal is divided in frequency by a factor of 2, giving the recovered carrier cos(2πf0t + θ).
Synchronous demodulator: multiplies the input signal and the recovered carrier, giving b(t) (√Ps/2)[1 + cos 2(2πf0t + θ)].
Integrator: integrates the signal over one bit period; the output of the integrator at the sampling instant is s0(kTb) ∝ b(kTb), from which the transmitted bit is recovered.
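A minimal simulation sketch of this transmit/detect chain, assuming perfect carrier recovery (θ = 0) and illustrative values for Ps, f0 and Tb:

```python
import math

Ps, f0, Tb, samples = 1.0, 10.0, 1.0, 1000    # power, carrier frequency (Hz), bit period (s), samples/bit
dt = Tb / samples
bits = [1, 0, 1, 1, 0]

detected = []
for b in bits:
    level = 1 if b == 1 else -1               # NRZ level encoding
    acc = 0.0
    for n in range(samples):
        t = n * dt
        s = level * math.sqrt(Ps) * math.cos(2 * math.pi * f0 * t)   # s(t) = b(t) sqrt(Ps) cos(2*pi*f0*t)
        acc += s * math.cos(2 * math.pi * f0 * t) * dt               # synchronous demodulation + integration
    detected.append(1 if acc > 0 else 0)      # decision at the end of the bit period

print(bits, detected)   # the detected sequence matches the transmitted bits
```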
With the help of block diagram explain BFSK transmitter and receiver
BFSK transmitter
BFSK receiver
Discuss the 16-QAM scheme with block schematics to generate and receive it, and the relevant waveforms at various stages.
In this method the amplitude of the carrier signal is varied, and hence the phase of the resultant carrier also changes indirectly.
For a 4-bit symbol, 16 possible symbols are generated, separated by a distance d = 2a.
The minimum distance of 16-QAM is greater than that of 16-PSK and less than that of QPSK.
QAM Transmitter
The input bit stream is applied to a serial-to-parallel converter.
Bits bk, bk+1 are applied to the upper converter, and bits bk+2, bk+3 are applied to the lower converter.
The modulators modulate the carriers according to the message bits.
The adder combines the odd and even sequences.
The adder output is given by s(t) = Ae(t) √Ps cos(2πf0t) + Ao(t) √Ps sin(2πf0t)
QAM Receiver
The carrier recovery circuit obtains the quadrature carriers from the received QASK signal.
The in-phase and quadrature carriers are multiplied with the QASK signal.
The integrators integrate the multiplied signals over one symbol period.
KONGUNADU COLLEGE OF ENGINEERING AND TECHNOLOGY
DEPARTMENT OF ELECTRONICS AND COMMUNICATION ENGINEERING
EC 8501 – DIGITAL COMMUNICATION
Explain the Viterbi algorithm to decode a convolutionally coded message with a suitable example.
Viterbi decoding:
Design the encoder for the (7,4) cyclic code generated by G(p) = p^3 + p + 1 and verify its operation for any message vector.
3. The parity check matrix of a particular (7,4) linear block code is given by

H = | 1 1 1 0 1 0 0 |
    | 1 1 0 1 0 1 0 |
    | 1 0 1 1 0 0 1 |
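A short sketch that builds the systematic generator matrix from this parity check matrix, encodes an example message, and checks the syndrome; the message vector is an assumed example.

```python
# H is in the systematic form [P^T | I3], so the generator matrix is G = [I4 | P].
H = [[1, 1, 1, 0, 1, 0, 0],
     [1, 1, 0, 1, 0, 1, 0],
     [1, 0, 1, 1, 0, 0, 1]]

P = [[H[r][c] for r in range(3)] for c in range(4)]                     # P (4x3)
G = [[1 if i == j else 0 for j in range(4)] + P[i] for i in range(4)]   # G = [I4 | P]

def mod2_matvec(M, v):
    """Matrix-vector product over GF(2)."""
    return [sum(m * x for m, x in zip(row, v)) % 2 for row in M]

msg = [1, 0, 1, 1]                                                       # assumed 4-bit message
codeword = [sum(msg[i] * G[i][j] for i in range(4)) % 2 for j in range(7)]
print("codeword :", codeword)
print("syndrome :", mod2_matvec(H, codeword))        # all zeros -> valid codeword

codeword[2] ^= 1                                      # introduce a single bit error
print("syndrome with one error:", mod2_matvec(H, codeword))   # equals the 3rd column of H
```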