
This sheet is for 1 Mark questions

Sr. No.  Question

1 On which factors does the channel capacity depend in a communication system?

2 In sphere packing, where is the received code vector with added noise located?
3 According to Shannon's second theorem, it is not feasible to transmit information over the channel with ______ error probability, even by using any coding technique.
4 In the channel coding theorem, channel capacity decides the _________ permissible rate at which error-free transmission is possible.

5 In a digital communication system, the smaller the code rate, the _________ are the redundant bits.
6 Which among the following represents the code in which codewords consist of message bits and parity bits separately?

7 In a linear code, the minimum Hamming distance between any two code words is ______ the minimum weight of any non-zero code word.

8 Basically, a Galois field consists of a ______ number of elements.
9 The minimum distance of a linear block code (dmin) is equal to the minimum number of rows or columns of H^T whose _____ is equal to the zero vector.
10 In a digital communication system, if both power and bandwidth are limited, then which mechanism/choice is preferred?
11 Channel capacity is exactly equal to –

12 The capacity of a channel is:


13 Which parameter is called the Shannon limit?
14 Average effective information is obtained by

15 For a (6,4) block code where n = 6, k = 4 and dmin = 3, how many errors can be corrected by this code?
16 Assuming that the channel is noiseless, if TV channels are 8 kHz wide with 3 bits/sample and a signalling rate of 16 x 10^6 samples/second, then what would be the value of the data rate?
17 If a noiseless channel bandlimited to 5 kHz is sampled every 1 ms, what will be the value of the sampling frequency?
18 If the channel is bandlimited to 6 kHz and the signal-to-noise ratio is 16, what would be the capacity of the channel?

19 For a Gaussian channel of 1 MHz bandwidth with a signal power to noise spectral density ratio of about 10^4 Hz, what would be the maximum information rate?
20 For a baseband system with transmission rate rs symbols/sec, what would be the required bandwidth?

21 Consider a channel with additive white Gaussian noise whose bandwidth is 4 kHz and whose noise power spectral density is N0/2 = 10^-12 W/Hz. The required signal power at the receiver is 0.1 mW. Compute the channel capacity.
22 Which code is a perfect code?

23 The capacity of a band-limited additive white Gaussian noise (AWGN) channel is given by C = W log2(1 + P/(σ^2 W)) bits per second (bps), where W is the channel bandwidth, P is the average received power and σ^2 is the one-sided power spectral density of the AWGN. For a fixed P/σ^2 = 1000, the channel capacity (in kbps) with infinite bandwidth (W → ∞) is approximately
24 In a digital communication system, transmissions of successive bits through a noisy channel are assumed to be independent events with error probability p. The probability of at most one error in the transmission of an 8-bit sequence is

25 A communication channel with AWGN operating at a signal-to-noise ratio SNR >> 1 and bandwidth B has capacity C1. If the SNR is doubled keeping B constant, the resulting capacity C2 is given by

26 A memoryless source emits n symbols each with a probability p. The entropy of the source as a function of n
27 The capacity of a Binary Symmetric Channel (BSC) with cross-over probability 0.5 is ________

28 During transmission over a certain binary communication channel, bit errors occur independently with probability p. The probability of at most one bit in error in a block of n bits is given by

29 Consider a binary code that consists of only four valid codewords, as given below: 00000, 01011, 10101, 11110. Let the minimum Hamming distance of the code be p and the maximum number of erroneous bits that can be corrected by the code be q. The values of p and q are:
30 An error correcting code has the following code words: 00000000, 00001111, 01010101, 10101010, 11110000. What is the maximum number of bit errors that can be corrected?

31 For a systematic (7,4) LBC, the parity matrix is given by [110 : 011 : 111 : 101]. Find the code vector for the message [1 1 0 0].

32 For a systematic LBC, the parity bits are
c1 = M1 + M2 + M3
c2 = M2 + M3 + M4
c3 = M1 + M2 + M4
Find the parity check matrix.

33 Find the error correcting capability of the code generated.

34 Find the (5,4) even parity code for the message (1, 0, 1, 0).


35 Find the coding technique for a (5,1) LBC. Find the valid codeword for [1], and find the error correcting and detecting capability.
36 Find the capacity of the following channel.

37 Find the binary symmetric channel capacity if p = 0.8.

38 Calculate H(X), H(Y) and H(XY) for P(Y|X) = [0.9, 0.1, 0 : 0, 0.8, 0.2 : 0, 0.3, 0.7], where P(X1) = 0.3, P(X2) = 0.25 and P(X3) = 0.45.
39 For an AWGN channel with 4.0 kHz bandwidth, the noise spectral density N0/2 is 1.0 picowatt/Hz and the signal power at the receiver is 0.1 mW. Determine the maximum capacity as well as the actual capacity for the above AWGN channel.

40 What is the capacity of the following channel?
Image a
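Questions 18, 19, 21 and 39 all reduce to the Shannon-Hartley formula C = B log2(1 + S/N). A minimal Python sketch (the function name shannon_capacity is illustrative, not from the sheet) that plugs in the figures quoted in those questions, taking the noise power as N0*B where only a spectral density is given:

    import math

    def shannon_capacity(bandwidth_hz, snr_linear):
        """Capacity in bits/s of a band-limited AWGN channel: C = B*log2(1 + S/N)."""
        return bandwidth_hz * math.log2(1 + snr_linear)

    # Q18: B = 6 kHz, S/N = 16
    print(shannon_capacity(6e3, 16))          # ~24.5e3 bits/s

    # Q21 / Q39: B = 4 kHz, N0/2 = 1e-12 W/Hz, received power P = 0.1 mW
    B, N0, P = 4e3, 2e-12, 0.1e-3
    snr = P / (N0 * B)                        # noise power = N0 * B = 8e-9 W
    print(shannon_capacity(B, snr))           # ~54.4e3 bits/s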
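Questions 19 and 23 lean on the wide-band limit of the same formula: as W grows, C = W log2(1 + R/W) approaches R/ln 2, roughly 1.44 R, where R = P/sigma^2 in Hz. A short numerical check, using the figures from those questions:

    import math

    R = 1000.0                                   # Q23: P/sigma^2 = 1000 Hz
    for W in (1e3, 1e6, 1e9):
        print(W, W * math.log2(1 + R / W))       # approaches R/ln(2) ~ 1443 bits/s
    print("limit:", R / math.log(2))

    # Q19: W = 1 MHz, S/N0 ~ 1e4 Hz, already close to the 1.44e4 bits/s limit
    print(1e6 * math.log2(1 + 1e4 / 1e6))        # ~1.44e4 bits/s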
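Questions 24 and 28 ask for the probability of at most one bit error among independent bit transmissions, which is the first two terms of a binomial expansion: (1-p)^n + n*p*(1-p)^(n-1). A small sketch (the function name is mine):

    def prob_at_most_one_error(n, p):
        """P(0 or 1 errors) in n independent bits, each in error with probability p."""
        return (1 - p) ** n + n * p * (1 - p) ** (n - 1)

    print(prob_at_most_one_error(8, 0.01))    # Q24 with an example value of p
    print(prob_at_most_one_error(16, 0.01))   # Q28 is the general-n version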
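Questions 27 and 37 are both instances of the binary symmetric channel capacity C = 1 - H(p), where H is the binary entropy function. A minimal sketch:

    import math

    def binary_entropy(p):
        if p in (0.0, 1.0):
            return 0.0
        return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

    def bsc_capacity(p):
        """Capacity (bits per channel use) of a BSC with crossover probability p."""
        return 1 - binary_entropy(p)

    print(bsc_capacity(0.5))   # Q27: 0.0
    print(bsc_capacity(0.8))   # Q37: ~0.278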
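Questions 15, 29 and 30 use the relation t = floor((dmin - 1)/2) between a code's minimum Hamming distance and the number of errors it can correct. The sketch below finds dmin by brute force over the codewords listed in those questions (helper names are mine):

    from itertools import combinations

    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))

    def dmin_and_t(codewords):
        """Minimum pairwise Hamming distance and the number of correctable errors."""
        d = min(hamming(a, b) for a, b in combinations(codewords, 2))
        return d, (d - 1) // 2

    print(dmin_and_t(["00000", "01011", "10101", "11110"]))      # Q29
    print(dmin_and_t(["00000000", "00001111", "01010101",
                      "10101010", "11110000"]))                  # Q30
    # Q15: dmin = 3 is given directly, so t = (3 - 1) // 2 = 1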
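Questions 31 to 33 deal with a systematic linear block code. Assuming the usual conventions (codeword = [message | parity], generator G = [I_k | P], parity-check H = [P^T | I_(n-k)]), the sketch below builds H from the parity equations of Q32 and encodes one message; a different row/column convention for P would permute the result, so treat it as an illustration rather than the sheet's intended answer:

    def encode(message, P):
        """Systematic encoding over GF(2): codeword = [message | message.P]."""
        parity = [sum(m * p for m, p in zip(message, col)) % 2 for col in zip(*P)]
        return message + parity

    # Q32: c1 = M1+M2+M3, c2 = M2+M3+M4, c3 = M1+M2+M4.
    # Row i of P marks which parity bits message bit M(i+1) contributes to.
    P = [[1, 0, 1],
         [1, 1, 1],
         [1, 1, 0],
         [0, 1, 1]]

    # Parity-check matrix H = [P^T | I_3]
    r = len(P[0])
    H = [list(col) + [int(i == j) for j in range(r)]
         for i, col in enumerate(zip(*P))]
    for row in H:
        print(row)

    print(encode([1, 1, 0, 0], P))   # one (7,4) codeword under these conventions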
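Question 38 derives H(X), H(Y) and H(XY) from P(X) and P(Y|X): the joint distribution is P(X)P(Y|X), P(Y) is its column sum, and each entropy is -sum p log2 p. A sketch with the numbers from the question:

    import math

    def entropy(probs):
        """Entropy in bits, skipping zero-probability entries."""
        return -sum(p * math.log2(p) for p in probs if p > 0)

    P_x = [0.3, 0.25, 0.45]
    P_y_given_x = [[0.9, 0.1, 0.0],
                   [0.0, 0.8, 0.2],
                   [0.0, 0.3, 0.7]]

    P_xy = [[px * pyx for pyx in row] for px, row in zip(P_x, P_y_given_x)]
    P_y = [sum(col) for col in zip(*P_xy)]

    print("H(X)  =", entropy(P_x))                               # ~1.54 bits
    print("H(Y)  =", entropy(P_y))                               # ~1.57 bits
    print("H(XY) =", entropy([p for row in P_xy for p in row]))  # ~2.26 bits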

Option a

Bandwidth

Inside the sphere

small

Maximum

less

Block Codes

Less than

Finite

sum

Power efficient modulation

bandwidth of demand

 Number of digits used in coding


PB/N0
Subtracting equivocation from entropy

0
16 Mbps

250 samples/sec

15.15 kbps

12000 bits/sec

 rs / 2 Hz

200kb/s.

(7,4)

1.44

7(1-p)+p/8

c2≈2c1

increases
0

p^n

p = 3 and q = 1

1000101

[1 1 1 0 1 0 0; 0 1 1 1 0 1 0; 1 1 0 1 0 0 1]

[1 0 0 0 1 1 0; 0 1 0 0 0 1 1; 0 0 1 0 1 1 1; 0 0 0 1 1 0 1]
tc≤1

[1 0 1 0 0]

[1 1 1 1 1], tc≤1,td≤4
C=1+(1-2pq)log(1-2pq)+2pqlog(2pq)

C=0.125 bits/channel

H(X)=1.233 bits/msg, H(Y)=1.33 bits/msg, H(XY)=1.75 bits/msg

C=72Mbps

c1=0, c2=0, c3=0.322
Options b, c, d

Signal to Noise Ratio | Both a and b | None of the above

On the boundary (circumference) of sphere | Outside the sphere | All of the above

stable large unpredictable

Minimum | Constant | None of the above

more equal unpredictable

Systematic Codes | Code Rate | Hamming Distance

Greater than | Equal to | None of the above

Infinite | Both a and b | None of the above

difference | product | division

Bandwidth efficient modulation | Error control coding | Trellis coded modulation

Amount of information per second | Noise rate in the demand | None of the above
Volume of information it can take | Maximum rate of information transmission | Bandwidth required for information
EB/N0 | EBN0 | None of the mentioned
Subtracting equivocation with entropy | Ratio of number of error bits by total number of bits | None of the mentioned

 1 2 3
 24 Mbps 48 Mbps  64 Mbps

500 samples/sec 800 samples/sec  1000 samples/sec

 24.74 kbps 30.12 kbps 52.18 kbps

14400 bits/sec  28000 bits/sec  32500 bits/sec

 rs / 4 Hz  rs / 8 Hz  rs / 16 Hz

100 kb/s | 54.44 kb/s | 700 kb/s

(6,3) | (12,8) | a and c

1.08 0.72 0.36

(1-p)^8+8p(1-p)^7 | (1-p)^8+p(1-p)^7 | (1-p)^8+(1-p)^7

c2≈2c1+2B c2≈2c1+B c2≈2c1+0.3B

decreases as log n | increases as n | increases as n log n


1 2 3

1-p^n | np(1-p)^(n-1)+(1-p)^n | 1-(1-p)^n

p = 3 and q = 2 | p = 4 and q = 1 | p = 4 and q = 2

1 2 3

1000100 1010010 1010001

[1 1 0 0 1 0 0; 0 1 1 1 0 1 0; 1 1 0 1 0 0 1] | [1 1 1 0 1 0 0; 0 1 1 1 0 1 0; 1 0 0 1 0 0 1] | [1 1 1 0 1 0 0; 0 1 1 1 0 1 0; 0 0 1 1 0 0 1]

tc=1 tc≤2 tc=2

[1 0 1 0 1] | [1 0 1 1 1] | None of these

[1 1 1 1 1], tc≤2, td≤4 | [0 0 0 0 0], tc≤1, td≤4 | [1 1 1 1 1], tc≤2, td≤4


C=1+(1-2pq)log(1-2pq)+4pqlog(2pq) | C=1+(1-2pq)log(1-2pq)+3pqlog(2pq) | C=1+(1-2pq)log(1-2pq)+pqlog(2pq)

C=0.278 bits/channel | C=0.31 bits/channel | C=0.3221 bits/channel

H(X)=1.233 bits/msg, H(Y)=1.571 bits/msg, H(XY)=1.785 bits/msg | H(X)=1.5395 bits/msg, H(Y)=1.33 bits/msg, H(XY)=2.75 bits/msg | H(X)=1.5395 bits/msg, H(Y)=1.5715 bits/msg, H(XY)=2.2573 bits/msg

C=64Mbps C=74Mbps C=70Mbps

c1=1, c2=0, c3=0.322 | c1=1, c2=1, c3=0.322 | c1=0, c2=1, c3=0.322
Correct Answer

c
b
a

a
c

a
a

a

d
a
