EE450 - Wireless Communications
Professor Ha Nguyen
December 2014
1/291
INTRODUCTION
2/291
[Figure: Elements of a communication system - a source (user) produces a message signal; the transmitter converts it to the transmitted signal; the channel adds distortion and noise to give the received signal; the receiver produces the detected signal delivered to the sink (user).]
3/291
[Figure: Sampling of an analog signal x(t) with sampling period Ts.]
4/291
[Figure: An analog message signal and the corresponding AM signal.]
5/291
[Figure: Sampled and quantized versions of the message signal.]
6/291
Sampling does not introduce information loss if it satisfies the Nyquist sampling
theorem.
Quantization always introduces information loss, but the loss can be made
arbitrarily small by increasing the number of quantization levels (i.e., using more
bits).
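The two statements above can be checked numerically. The sketch below is a hedged illustration (the sine frequency, sampling rate and bit depths are arbitrary choices, not from the slides): it quantizes a sine sampled well above the Nyquist rate and verifies that the quantization error is bounded by half a quantization step, so it shrinks as more bits are used.

```python
import numpy as np

# Illustrative signal sampled far above the Nyquist rate 2*f0.
fs, f0 = 8000.0, 100.0
t = np.arange(0, 1.0, 1.0 / fs)
x = np.sin(2 * np.pi * f0 * t)

def quantize(x, bits):
    """Uniform mid-rise quantizer over [-1, 1] with 2**bits levels."""
    levels = 2 ** bits
    step = 2.0 / levels
    return step * (np.floor(x / step) + 0.5)

# The peak error is bounded by step/2 = 2**(-bits): more bits, less loss.
for bits in (2, 4, 8):
    err = np.max(np.abs(x - quantize(x, bits)))
    print(bits, err <= 2.0 ** -bits)
```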
EE450 Wireless Communications, Duy Tan University, Da Nang, Vietnam
7/291
8/291
[Figure: (a) Basic communication system - Source (User), Transmitter, Channel, Receiver, Sink (User). (b) Digital communication system - the transmitter comprises a source encoder, channel encoder and modulator; the receiver comprises a demodulator, channel decoder and source decoder.]
9/291
Advantages:
Digital signals are much easier to regenerate.
Digital circuits are less subject to distortion and interference.
Digital circuits are more reliable and can be produced at lower cost than analog circuits.
Digital hardware is more flexible to implement than analog hardware.
Digital signals benefit from digital signal processing (DSP) techniques.
Disadvantages:
Heavy signal processing.
Synchronization is crucial.
Larger transmission bandwidth.
Non-graceful degradation.
10/291
REVIEW OF PROBABILITY,
RANDOM VARIABLES, RANDOM
PROCESSES
11/291
Random experiment: its outcome, for some reason, cannot be predicted with certainty.
Examples: throwing a die, flipping a coin and drawing a card from a deck.
Sample space: the set of all possible outcomes, denoted by Ω. Outcomes are denoted by ω and each lies in Ω, i.e., ω ∈ Ω.
A sample space can be discrete or continuous.
Events are subsets of the sample space for which measures of their occurrences, called probabilities, can be defined or determined.
12/291
Various events can be defined: the outcome is even number of dots, the outcome
is smaller than 4 dots, the outcome is more than 3 dots, etc.
13/291
The events E1, E2, E3, ... are mutually exclusive if Ei ∩ Ej = ∅ for all i ≠ j, where ∅ is the null set.
14/291
15/291
Conditional Probability
We observe or are told that event E1 has occurred but are actually interested in event E2: knowledge that E1 has occurred generally changes the probability of E2 occurring.
If it was P(E2) before, it now becomes P(E2|E1), the probability of E2 occurring given that event E1 has occurred.
This conditional probability is given by
P(E2|E1) = P(E2 ∩ E1)/P(E1) if P(E1) ≠ 0, and P(E2|E1) = 0 otherwise.
If P(E2|E1) = P(E2), or P(E2 ∩ E1) = P(E1)P(E2), then E1 and E2 are said to be statistically independent.
Bayes' rule:
P(E2|E1) = P(E1|E2)P(E2)/P(E1).
16/291
If the events E1, E2, ..., En partition the sample space, i.e.,
(i) ∪(i=1..n) Ei = Ω,  (1a)
(ii) Ei ∩ Ej = ∅ for i ≠ j,  (1b)
then for any event A, Bayes' rule gives
P(Ei|A) = P(A|Ei)P(Ei)/P(A) = P(A|Ei)P(Ei) / Σ(j=1..n) P(A|Ej)P(Ej).
17/291
Example 1: In the example of throwing a fair die, based on the three axioms of
probability, it is easy to show that the probability of each face occurring is 1/6.
Example 2: In the example of throwing a fair die, one can define the following
events:
E1 = the outcome is even
E2 = the outcome is smaller than 4
Compute P (E1 ), P (E2 ), P (E2 |E1 ), P (E1 |E2 ). Are the events E1 and E2
statistically independent?
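Example 2 can be checked by direct enumeration of the sample space; the short sketch below (hedged: a Python illustration, not part of the original slides) computes the four probabilities with exact fractions and tests independence.

```python
from fractions import Fraction

# Fair die: all six outcomes equally likely.
omega = {1, 2, 3, 4, 5, 6}
E1 = {s for s in omega if s % 2 == 0}   # the outcome is even
E2 = {s for s in omega if s < 4}        # the outcome is smaller than 4

P = lambda E: Fraction(len(E), len(omega))
P_E1, P_E2 = P(E1), P(E2)
P_E2_given_E1 = Fraction(len(E1 & E2), len(E1))   # P(E2|E1)
P_E1_given_E2 = Fraction(len(E1 & E2), len(E2))   # P(E1|E2)

print(P_E1, P_E2, P_E2_given_E1, P_E1_given_E2)   # 1/2 1/2 1/3 1/3
print(P(E1 & E2) == P_E1 * P_E2)                  # False: not independent
```

Since P(E1 ∩ E2) = 1/6 while P(E1)P(E2) = 1/4, the events are statistically dependent.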
18/291
Random Variables
[Figure: A random variable as a mapping from outcomes ω1, ω2, ω3, ω4 in the sample space to real numbers x(ω1), ..., x(ω4).]
A random variable is a mapping from the sample space Ω to the set of real numbers.
We shall denote random variables by boldface, i.e., x, y, etc., while individual or specific values of the mapping x are denoted by x(ω).
19/291
[Figure: An example random variable for this random experiment.]
There could be many other random variables defined to describe the outcome of this random experiment!
20/291
21/291
[Figure: Cumulative distribution functions Fx(x), panels (a), (b) and (c); each rises from 0 to 1.0.]
23/291
The probability density function (pdf) is
fx(x) = dFx(x)/dx.
It follows that:
P(x1 < x ≤ x2) = P(x ≤ x2) − P(x ≤ x1) = Fx(x2) − Fx(x1) = ∫(x1..x2) fx(x)dx.
For discrete random variables, it is more common to define the probability mass function (pmf): pi = P(x = xi).
Note that, for all i, one has pi ≥ 0 and Σi pi = 1.
24/291
[Figure: pdf fx(x) and cdf Fx(x) of a Bernoulli random variable, with point masses (1 − p) at 0 and p at 1.]
A discrete random variable that takes two values 1 and 0 with probabilities p and 1 − p.
Good model for a binary data source whose output is 1 or 0.
Can also be used to model the channel errors.
25/291
[Figure: Example probability mass function.]
26/291
[Figure: pdf fx(x) = 1/(b − a) on [a, b] and the corresponding cdf Fx(x) of a uniform random variable.]
A continuous random variable that takes values between a and b with equal probabilities over intervals of equal length.
The phase of a received sinusoidal carrier is usually modeled as a uniform random variable between 0 and 2π. Quantization error is also typically modeled as uniform.
27/291
[Figure: pdf fx(x) = (1/√(2πσ²)) exp(−(x − μ)²/(2σ²)) and cdf Fx(x) of a Gaussian random variable.]
28/291
[Figure: "Uniform or Gaussian?" - outcome versus trial number for 100 trials drawn from each distribution.]
29/291
[Figure: Outcomes of 100 trials of a Gaussian random variable (outcome versus trial number) together with the corresponding pdf fx(x).]
31/291
[Figure: Outcomes of 500 and 1000 trials and the corresponding histograms (counts per bin).]
32/291
Q-function
Q(x) = (1/√(2π)) ∫(x..∞) exp(−λ²/2) dλ.
[Figure: The pdf (1/√(2π)) e^(−λ²/2) with the shaded tail area Q(x), and a log-scale plot of Q(x) versus x.]
33/291
34/291
For n = 2, E{x²} is the mean-squared value.
When n = 2 the central moment is called the variance, commonly denoted as σx²:
σx² = var(x) = E{(x − mx)²} = ∫(x − mx)² fx(x)dx.
35/291
36/291
Example
The noise voltage in an electric circuit can be modeled as a Gaussian random variable with zero mean and variance σ².
(a) Show that the probability that the noise voltage exceeds some level A can be expressed as Q(A/σ).
(b) Given that the noise voltage is negative, express in terms of the Q-function the probability that the noise voltage exceeds A, where A < 0.
(c) Using the MATLAB function Q.m available from the course's website, evaluate the expressions found in (a) and (b) for σ² = 10⁻⁸ and A = −10⁻⁴.
37/291
Solution
P(x > A) = ∫(A..∞) (1/√(2πσ²)) e^(−x²/(2σ²)) dx  (zero mean, μ = 0)
= ∫(A/σ..∞) (1/√(2π)) e^(−λ²/2) dλ  (substituting λ = x/σ, dλ = dx/σ)
= Q(A/σ).
38/291
Given the noise voltage is negative, the probability that the noise voltage exceeds some level A (A < 0) is:
P(x > A | x < 0) = P(x > A, x < 0)/P(x < 0) = P(A < x < 0)/P(x < 0)
= [Q(A/σ) − Q(0)]/Q(0) = [Q(A/σ) − 1/2]/(1/2) = 2Q(A/σ) − 1.
[Figure: The pdf fx(x) and the conditional pdf fx(x | x < 0), which is twice as tall, with the shaded area P(x > A | x < 0).]
39/291
Example
The MATLAB function randn generates samples of a Gaussian random variable with zero mean and unit variance. Generate a vector of L = 10⁶ samples of a zero-mean Gaussian random variable x with variance σ² = 10⁻⁸. This can be done as follows: x=sigma*randn(1,L).
(a) Based on the sample vector x, verify the mean and variance of random variable x.
(b) Based on the sample vector x, compute the probability that x > A, where A = −10⁻⁴. Compare the result with that found in Question 2-(c).
(c) Using the MATLAB command hist, plot the histogram of sample vector x with 100 bins. Next obtain and plot the pdf from the histogram. Then also plot on the same figure the theoretical Gaussian pdf (note the value of the variance for the pdf). Do the histogram pdf and theoretical pdf fit well?
40/291
Solution
L=10^6; % Length of the noise vector
sigma=sqrt(10^(-8)); % Standard deviation of the noise
A=-10^(-4);
x=sigma*randn(1,L);
% (a) Verify mean and variance:
mean_x=sum(x)/L; % this is the same as mean(x)
variance_x=sum((x-mean_x).^2)/L; % this is the same as var(x)
% mean_x = -3.7696e-008
% variance_x = 1.0028e-008
% (b) Compute P(x>A)
P=length(find(x>A))/L;
% P = 0.8410
% (c) Histogram and Gaussian pdf fit
N_bins=100;
[y,x_center]=hist(x,N_bins); % bins the elements of x into N_bins equally spaced containers
                             % and returns the number of elements in each container in y,
                             % while the bin centers are stored in x_center
dx=(max(x)-min(x))/N_bins; % width of each bin
hist_pdf=(y/L)/dx; % approximates the pdf as a constant over each bin
pl(1)=plot(x_center,hist_pdf,'Color','blue','LineStyle','--','LineWidth',1.0);
hold on;
x0=[-5:0.001:5]*sigma; % range of random variable x
true_pdf=1/(sqrt(2*pi)*sigma)*exp(-x0.^2/(2*sigma^2));
pl(2)=plot(x0,true_pdf,'Color','r','LineStyle','-','LineWidth',1.0);
xlabel('{\itx}','FontName','Times New Roman','FontSize',16);
ylabel('{\itf}_{\bf x}({\itx})','FontName','Times New Roman','FontSize',16);
legend(pl,'pdf from histogram','true pdf');
set(gca,'FontSize',16,'XGrid','on','YGrid','on','GridLineStyle',':','MinorGridLineStyle','none','FontName','Times New Roman');
41/291
[Figure: The pdf estimated from the histogram and the true Gaussian pdf; the two curves fit closely.]
42/291
43/291
The joint pdf is
fx,y(x, y) = ∂²Fx,y(x, y)/(∂x ∂y).
When the joint pdf is integrated over one of the variables, one obtains the pdf of the other variable, called the marginal pdf:
∫(−∞..∞) fx,y(x, y)dx = fy(y),  ∫(−∞..∞) fx,y(x, y)dy = fx(x).
Note that ∫∫ fx,y(x, y) dx dy = 1.
The conditional pdf of the random variable y, given that the value of the random variable x is equal to x, is defined as
fy(y|x) = fx,y(x, y)/fx(x) if fx(x) ≠ 0, and 0 otherwise.
Two random variables x and y are statistically independent if and only if
fy(y|x) = fy(y), or equivalently fx,y(x, y) = fx(x)fy(y).
45/291
E{xy} (correlation),
cov{x, y} = E{(x − mx)(y − my)} = E{xy} − mx my (covariance).
46/291
[Figure: Scatter plots of pairs of random variables with different correlation coefficients; panel (d) shows ρx,y ≈ 0.97.]
49/291
50/291
Example 1
With x taking the values 0, 1 and −1 with probabilities 1/2, 1/4 and 1/4, the joint pmf of y and z is:
P(y = −1, z = 0) = 0,  P(y = −1, z = 1) = P(x = −1) = 1/4,
P(y = 0, z = 0) = P(x = 0) = 1/2,  P(y = 0, z = 1) = 0,
P(y = 1, z = 0) = 0,  P(y = 1, z = 1) = P(x = 1) = 1/4.
51/291
fx,y(x, y) = [1/(2πσxσy√(1 − ρ²x,y))] exp{ −[1/(2(1 − ρ²x,y))] [ (x − mx)²/σx² − 2ρx,y(x − mx)(y − my)/(σxσy) + (y − my)²/σy² ] }.
When ρx,y = 0, fx,y(x, y) = fx(x)fy(y): the random variables x and y are statistically independent.
For jointly Gaussian random variables, uncorrelatedness means statistical independence!
A weighted sum of two jointly Gaussian random variables is also Gaussian.
52/291
[Figure: Bivariate Gaussian pdf fx,y(x, y) for ρx,y = 0, 0.30, 0.70 and 0.95, with cross-sections showing the pdf concentrating along a line as ρx,y grows.]
56/291
Define
x = [x1, x2, ..., xn], a vector of random variables,
m = [m1, m2, ..., mn], the vector of the means, and
the n × n covariance matrix C with Ci,j = cov(xi, xj) = E{(xi − mi)(xj − mj)}.
The random variables {xi}, i = 1, ..., n, are jointly Gaussian if:
fx1,x2,...,xn(x1, x2, ..., xn) = [1/√((2π)ⁿ det(C))] exp( −(1/2)(x − m)C⁻¹(x − m)ᵀ ).
57/291
Random Processes
[Figure: An ensemble of sample functions x1(t, ω1), x2(t, ω2), ..., xM(t, ωM) of a random process; at a fixed time tk, x(tk, ω) is a random variable, i.e., a real number for each outcome ω.]
58/291
59/291
60/291
61/291
Based on whether its statistics change with time: the process is non-stationary or stationary.
Different levels of stationarity:
Strictly stationary: the joint pdf of any order is independent of a shift in time.
Nth-order stationarity: the joint pdf of order N does not depend on the time shift, but only on the time spacings:
fx(t1),x(t2),...,x(tN)(x1, x2, ..., xN; t1, t2, ..., tN) = fx(t1+t),x(t2+t),...,x(tN+t)(x1, x2, ..., xN; t1 + t, t2 + t, ..., tN + t).
For second-order statistics only the spacing τ = t2 − t1 matters.
62/291
Consider N random variables x(t1), x(t2), ..., x(tN). The joint moments of these random variables are
E{x^k1(t1) ··· x^kN(tN)} = ∫···∫ x1^k1 ··· xN^kN fx(t1),...,x(tN)(x1, ..., xN; t1, ..., tN) dx1 ··· dxN.
We shall only consider the first- and second-order moments, i.e., E{x(t)}, E{x²(t)} and E{x(t1)x(t2)}. They are the mean value, mean-squared value and (auto)correlation.
63/291
The average is across the ensemble, and if the pdf varies with time then the mean value is a (deterministic) function of time.
If the process is stationary then the mean is independent of t, a constant:
mx = E{x(t)} = ∫(−∞..∞) x fx(x)dx.
64/291
The mean-squared value is defined as
MSVx(t) = E{x²(t)} = ∫ x² fx(x)dx (stationary).
The variance is
σx² = E{[x(t) − mx]²} = MSVx − mx² (stationary).
The correlation is E{x(t1)x(t2)}.
66/291
67/291
(b) For a specific time, t, over what values of amplitude does the random variable
x(t) range?
(c) Find the mean and mean-squared value of x(t). Is the process x(t) wide-sense
stationary (WSS)?
(d) Determine the first-order pdf of x(t).
68/291
[Figure: Sample functions x(t) = e^(−a|t|) for a = 0.4447, 0.61543 and 0.79194.]
69/291
(b) Since a ranges between 0 and 1, for a specific time t the random variable x(t) ranges from e^(−|t|) (when a = 1) to 1 (when a = 0).
(c) x(t) is considered as a function of a with t a parameter. The mean and mean-squared values of x(t) are:
E{x(t)} = ∫(0..1) e^(−a|t|) da = (1 − e^(−|t|))/|t|,
E{x²(t)} = ∫(0..1) e^(−2a|t|) da = (1 − e^(−2|t|))/(2|t|).
Since both the mean and mean-squared value are functions of time t, the process x(t) is not wide-sense stationary.
70/291
(d) For e^(−|t|) < x ≤ 1, the equation x = g(a) = e^(−a|t|) has only one solution,
a = −(1/|t|) ln x. Furthermore,
|dg(a)/da| = |t| e^(−a|t|) = |t| x at a = −(1/|t|) ln x.
Therefore
fx(t)(x; t) = fa(a = −(1/|t|) ln x) / |dg(a)/da| = 1/(|t| x).
To conclude:
fx(t)(x; t) = 1/(|t| x) for e^(−|t|) < x ≤ 1, and 0 otherwise.
71/291
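The derived first-order pdf can be sanity-checked by Monte Carlo. The sketch below (a hedged Python illustration; the fixed time t = 2 and sample count are arbitrary choices) draws a uniform, forms x(t) = e^(−a|t|), and compares the empirical cdf against the cdf implied by f(x) = 1/(|t|x), namely F(u) = 1 + ln(u)/|t| on (e^(−|t|), 1].

```python
import numpy as np

rng = np.random.default_rng(0)
t = 2.0
a = rng.uniform(0.0, 1.0, 200_000)   # a ~ Uniform(0, 1)
x = np.exp(-a * abs(t))              # x(t) = exp(-a|t|)

for u in (0.2, 0.5, 0.9):            # all inside (e^{-2}, 1]
    emp = np.mean(x <= u)            # empirical cdf at u
    theory = 1.0 + np.log(u) / abs(t)  # integral of 1/(|t| x)
    assert abs(emp - theory) < 0.01
print("empirical cdf matches 1 + ln(u)/|t|")
```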
[Figure: A WSS input x(t), with mean mx, autocorrelation Rx(τ) and PSD Sx(f), applied to a linear, time-invariant (LTI) system with impulse response h(t) and frequency response H(f); the output y(t) has my, Ry(τ), Sy(f) and cross-correlation Rx,y(τ).]
my = E{y(t)} = E{∫ h(λ)x(t − λ)dλ} = mx ∫ h(λ)dλ = mx H(0),
Ry(τ) = h(τ) ∗ h(−τ) ∗ Rx(τ),
Sy(f) = |H(f)|² Sx(f).
72/291
A natural noise source is thermal noise, whose amplitude statistics are well modeled to be Gaussian with zero mean.
The autocorrelation and PSD are well modeled as:
Rw(τ) = (kG/t0) e^(−|τ|/t0) (watts),
Sw(f) = 2kG/(1 + (2πf t0)²) (watts/Hz).
73/291
[Figure: (a) PSD S(f) in watts/Hz of white noise (flat) versus thermal noise, over f from −15 to 15 GHz. (b) Autocorrelation Rw(τ): the impulse (N0/2)δ(τ) for white noise versus the narrow exponential for thermal noise (τ in picoseconds).]
74/291
The noise PSD is approximately flat over the frequency range of 0 to 10 GHz; let the spectrum be flat from 0 to ∞:
Sw(f) = N0/2 (watts/Hz),
Rw(τ) = (N0/2)δ(τ) (watts).
75/291
Example
Suppose that a (WSS) white noise process, x(t), of zero-mean and power spectral
density N0 /2 is applied to the input of the filter.
(a) Find and sketch the power spectral density and autocorrelation function of the
random process y(t) at the output of the filter.
(b) What are the mean and variance of the output process y(t)?
[Figure: An RL filter with input x(t) and output y(t).]
76/291
(a) H(f) = R/(R + j2πf L) = 1/(1 + j2πf L/R),
Sy(f) = (N0/2) · 1/(1 + (2πf L/R)²),  Ry(τ) = (N0R/(4L)) e^(−(R/L)|τ|).
[Figure: Sy(f) in watts/Hz, equal to N0/2 at f = 0 and falling off with f, and Ry(τ) in watts, a double-sided exponential with peak N0R/(4L).]
(b) The mean is my = mx H(0) = 0 and the variance is σy² = Ry(0) = N0R/(4L).
77/291
Low-pass filter
[Figure: White noise x(t) through an ideal low-pass filter H(f) of bandwidth W and unit gain, producing y(t).]
(a) Sy(f) = Sx(f)|H(f)|² = N0/2 for |f| ≤ W, and 0 otherwise.
78/291
[Figure: Input PSD Sx(f) = N0/2 (flat) and output PSD Sy(f) = N0/2 restricted to |f| ≤ W.]
(b) The autocorrelation can be found as the inverse Fourier transform of the PSD:
Ry(τ) = F⁻¹{Sy(f)} = ∫ Sy(f)e^(j2πfτ) df = ∫(−W..W) (N0/2) e^(j2πfτ) df
= (N0/(4πτ)) ∫(−2πWτ..2πWτ) e^(jx) dx  (substituting x = 2πfτ)
= (N0/(2πτ)) sin(2πWτ) = N0 W sinc(2Wτ).
79/291
[Figure: Normalized autocorrelation Ry(τ)/(N0W), a sinc with first zero crossing at τ = 1/(2W).]
(d) Since the input x(t) is a Gaussian process, the output y(t) is also a Gaussian process and the samples y(kTs) are Gaussian random variables. For Gaussian random variables, statistical independence is equivalent to uncorrelatedness. Thus one needs to find the smallest value for τ (or Ts) so that the autocorrelation is zero. From the graph of Ry(τ), the answer is:
τmin = (Ts)min = 1/(2W).
80/291
Random Sequences
A random sequence can be obtained by sampling a continuous-time random process. How to characterize a random sequence?
Let x[1], x[2], ... be a sequence of indexed random variables. The various definitions for continuous-time random processes apply, with the time variable t replaced by the sequence index n.
Mean function: mx[n] = E{x[n]}
Variance function: σx²[n] = E{(x[n] − mx[n])²}
Correlation function: Rx(n, k) = E{x[n]x[k]}
Covariance function: Cx(n, k) = E{(x[n] − mx[n])(x[k] − mx[k])}
For a WSS sequence the PSD is
Sx(e^jΩ) = Σ(m=−∞..∞) Rx[m] e^(−jΩm).
An important case is the white random sequence, where Rx[k] = σ²δ[k] and Sx(e^jΩ) = σ²: the sequence is completely uncorrelated and all the average power in the sequence is equally shared by all frequencies in the sequence!
81/291
[Figure: A WSS sequence x[n], with mx, Rx[k], Sx(e^jΩ), applied to an LTI system h[n], H(e^jΩ); the output y[n] has my, Ry[k], Sy(e^jΩ) and cross-correlation Rx,y[k].]
my = E{y[n]} = E{Σk h[k]x[n − k]} = mx Σk h[k] = mx H(e^jΩ)|Ω=0,
Sy(e^jΩ) = |H(e^jΩ)|² Sx(e^jΩ),  Ry[k] = h[k] ∗ h[−k] ∗ Rx[k].
82/291
Example: h[n] = aⁿ for n ≥ 0 (|a| < 1), so
H(e^jΩ) = Σ(n=0..∞) aⁿ e^(−jΩn) = 1/(1 − a e^(−jΩ)).
Since the input PSD is Sx(e^jΩ) = σ², it follows that
Sy(e^jΩ) = |H(e^jΩ)|² Sx(e^jΩ) = σ²/|1 − a e^(−jΩ)|² = σ²/(1 + a² − 2a cos Ω),
Ry[k] = IDFT{Sy(e^jΩ)} = σ² a^|k|/(1 − a²).
[Figure: Flat input PSD Sx(e^jΩ) and the peaked output PSD Sy(e^jΩ), with the geometrically decaying Ry[k].]
83/291
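The result Ry[k] = σ²a^|k|/(1 − a²) can be verified by filtering a white sequence through the recursion y[n] = a·y[n−1] + x[n], which realizes h[n] = aⁿu[n]. The sketch below is a hedged Python illustration (a = 0.5 and the sequence length are arbitrary choices).

```python
import numpy as np

rng = np.random.default_rng(1)
a, sigma, N = 0.5, 1.0, 400_000
x = rng.normal(0.0, sigma, N)        # white input, Rx[k] = sigma^2 delta[k]

y = np.empty(N)
acc = 0.0
for n in range(N):                   # y[n] = a*y[n-1] + x[n], i.e. h[n] = a^n
    acc = a * acc + x[n]
    y[n] = acc

def Ry(k):                           # time-average autocorrelation estimate
    return float(np.mean(y[:N - k] * y[k:]))

for k in (0, 1, 3):
    theory = sigma**2 * a**k / (1.0 - a**2)
    assert abs(Ry(k) - theory) < 0.02
print("Ry[k] matches sigma^2 a^|k| / (1 - a^2)")
```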
[Figures: Two front ends for sampling white noise w(t) at rate Fs = 1/Ts - an ideal low-pass filter H(f) of bandwidth Fs/2 followed by sampling at t = nTs, and an integrate-and-dump filter h(t) with H(f) = Ts sinc(f/Fs). Both yield a white sequence w[n] with Sw(e^jΩ) = (N0/2)Fs, and the total noise power is (1/2π)∫ Sw(e^jΩ)dΩ = N0Fs/2.]
85/291
The sampled noise can be written as the impulse train w(t) = Σ(n=−∞..∞) w[n]δ(t − nTs).
To find the PSD of w(t), truncate the random process to a time interval of −T = −NTs to T = NTs to obtain wT(t) = Σ(n=−N..N) w[n]δ(t − nTs).
Take the Fourier transform of the truncated process:
WT(f) = Σ(n=−N..N) w[n] e^(−j2πf nTs),
so that E{|WT(f)|²}/(2T) tends to (1/Ts) Σ(k=−∞..∞) Rw[k] e^(−j2πkf Ts).
Let Ω = 2πf Ts and recognize that Sw(e^jΩ) = Σ(k=−∞..∞) Rw[k]e^(−jΩk) is exactly the PSD of the random sequence w[n]. Then the PSD of w(t) is
Sw(f) = (1/Ts) Sw(e^jΩ)|Ω=2πf Ts.
86/291
87/291
[Figure: Binary communication system - bits {bk} drive the modulator (transmitter) with the mapping bk = 0 → s1(t), bk = 1 → s2(t); the channel adds noise w(t); the demodulator (receiver) observes r(t) and outputs {b̂k}.]
Noise w(t) is stationary Gaussian, zero-mean white noise with two-sided power spectral density of N0/2 (watts/Hz):
E{w(t)} = 0,  E{w(t)w(t + τ)} = (N0/2)δ(τ),  w(t) ∼ N(0, N0/2).
88/291
[Figure: The same system, with transmission organized in bit intervals (k − 1)Tb ≤ t ≤ kTb.]
89/291
Wish to represent two arbitrary signals s1(t) and s2(t) as linear combinations of two orthonormal basis functions φ1(t) and φ2(t).
φ1(t) and φ2(t) are orthonormal if:
∫(0..Tb) φ1²(t)dt = ∫(0..Tb) φ2²(t)dt = 1,  ∫(0..Tb) φ1(t)φ2(t)dt = 0.
Then
s1(t) = s11φ1(t) + s12φ2(t),  s2(t) = s21φ1(t) + s22φ2(t),
where
sij = ∫(0..Tb) si(t)φj(t)dt,  i, j ∈ {1, 2}.
90/291
[Figure: Signal space - s1(t) at coordinates (s11, s12) and s2(t) at (s21, s22) in the plane spanned by φ1(t) and φ2(t).]
How to choose orthonormal functions φ1(t) and φ2(t) to represent s1(t) and s2(t) exactly?
91/291
Gram-Schmidt Procedure
Let φ1(t) ≜ s1(t)/√E1. Note that s11 = √E1 and s12 = 0.
Project s2(t)/√E2 onto φ1(t) to obtain the correlation coefficient:
ρ = ∫(0..Tb) (s2(t)/√E2) φ1(t)dt = (1/√(E1E2)) ∫(0..Tb) s1(t)s2(t)dt.
Subtract the projection: φ2'(t) = s2(t)/√E2 − ρφ1(t), and normalize:
φ2(t) = φ2'(t)/√(∫(0..Tb) [φ2'(t)]²dt) = (1/√(1 − ρ²)) [ s2(t)/√E2 − ρ s1(t)/√E1 ].
92/291
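The Gram-Schmidt step above can be sketched numerically by sampling the signals on [0, Tb) so inner products become sums times dt. This is a hedged Python illustration; the two pulses and the correlation ρ = 0.5 are invented for demonstration, not taken from the slides.

```python
import numpy as np

Tb, M = 1.0, 1000
t = np.arange(M) * (Tb / M)
dt = Tb / M

s1 = np.ones(M)                            # illustrative rectangular pulse
b = np.where(t < Tb / 2, 1.0, -1.0)        # illustrative split-phase pulse
s2 = 0.5 * s1 + (np.sqrt(3) / 2) * b       # correlated with s1 (rho = 0.5)

E1 = np.sum(s1**2) * dt                    # signal energies
E2 = np.sum(s2**2) * dt
phi1 = s1 / np.sqrt(E1)                    # first basis function

rho = np.sum((s2 / np.sqrt(E2)) * phi1) * dt          # correlation coefficient
phi2 = (s2 / np.sqrt(E2) - rho * phi1) / np.sqrt(1.0 - rho**2)

# phi1 and phi2 should come out orthonormal.
print(np.allclose(np.sum(phi1 * phi1) * dt, 1.0),
      np.allclose(np.sum(phi2 * phi2) * dt, 1.0),
      np.allclose(np.sum(phi1 * phi2) * dt, 0.0))
```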
[Figure: Geometry of the Gram-Schmidt step - s2(t) decomposed into its projection s21φ1(t) and the orthogonal component s22φ2(t), with ρ1 = cos(θ) and distance d21 between s1(t) and s2(t).]
The coordinates and distance are:
s11 = √E1, s12 = 0,
s21 = ∫(0..Tb) s2(t)φ1(t)dt = ρ√E2,  s22 = √(1 − ρ²) √E2,
d21 = √(∫(0..Tb) [s2(t) − s1(t)]²dt) = √(E1 − 2ρ√(E1E2) + E2).
93/291
In general, for signals s1(t), s2(t), ..., sN(t):
φ1(t) = s1(t)/√(∫ s1²(t)dt),
φi'(t) = si(t)/√Ei − Σ(j=1..i−1) ρij φj(t),  φi(t) = φi'(t)/√(∫ [φi'(t)]²dt),  i = 2, 3, ..., N,
where ρij = ∫ (si(t)/√Ei) φj(t)dt, j = 1, 2, ..., i − 1.
94/291
Example 1
[Figure: (a) Signal set s1(t), s2(t). (b) Orthonormal function φ1(t). (c) Signal space representation.]
95/291
Example 2
[Figure: (a) Signal set s1(t), s2(t) of amplitude V on [0, Tb]. (b) Orthonormal functions φ1(t), φ2(t) of amplitude 1/√Tb. (c) Signal space representation.]
96/291
Example 3
[Figure: Signal set s1(t), s2(t) of amplitude V on [0, Tb], and the signal space representation - s1(t) at (√E, 0), and s2(t) moving as the parameter α increases from 0 to Tb.]
The correlation coefficient is
ρ = (1/(V²Tb)) ∫(0..Tb) s2(t)s1(t)dt = (1/(V²Tb)) [V²α − V²(Tb − α)].
97/291
Example 4
[Figure: (a) Signal set s1(t), s2(t), with s2(t) reaching amplitude 3V. (b) Orthonormal functions φ1(t), φ2(t), with levels built from 1/√Tb and √3/√Tb over half-intervals.]
98/291
[Figure: Signal space for Example 4 - s1(t) at (√E, 0) and s2(t) at (√(3E)/2, √E/2).]
ρ = (1/E) ∫(0..Tb) s2(t)s1(t)dt = √3/2,
φ2(t) = [s2(t)/√E − (√3/2) s1(t)/√E] / √(1 − 3/4) = (2/√E)[s2(t) − (√3/2)s1(t)],
s21 = (√3/2)√E,  s22 = (1/2)√E,
d21 = √(∫(0..Tb) [s2(t) − s1(t)]²dt) = √((2 − √3)E).
99/291
Example 5
s1(t) = √(2E/Tb) cos(2πfc t),  s2(t) = √(2E/Tb) cos(2πfc t + θ),  0 ≤ t ≤ Tb,
where fc = k/(2Tb), k an integer.
[Figure: Signal space - s1(t) at (√E, 0); as θ varies from 0 to 2π, s2(t) traces a circle of radius √E (locus of s2(t)); θ = 0 gives ρ = 1, θ = π/2 and θ = 3π/2 give ρ = 0.]
100/291
[Figure: Two waveforms x1(t), x2(t) and four orthonormal basis functions φ1(t), ..., φ4(t) used to represent them, plotted over 0 ≤ t ≤ 1.]
103/291
Represent the noise as w(t) = Σi wi φi(t), where wi = ∫(0..Tb) w(t)φi(t)dt.
The coefficients are uncorrelated: E{wi wj} = N0/2 if i = j, and 0 if i ≠ j.
If w(t) is not only zero-mean and white, but also Gaussian, then {w1, w2, ...} are Gaussian and statistically independent!!!
The above properties do not depend on how the set {φi(t), i = 1, 2, ...} is chosen.
Shall choose as the first two functions the functions φ1(t) and φ2(t) used to represent the two signals s1(t) and s2(t) exactly. The remaining functions, i.e., φ3(t), φ4(t), ..., are simply chosen to complete the set.
105/291
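The uncorrelatedness of the projection coefficients can be illustrated by simulation. In this hedged Python sketch (parameters invented for demonstration), continuous white noise of PSD N0/2 is approximated by dense samples of variance (N0/2)/dt; projecting onto two orthonormal functions should give coefficients of variance N0/2 each, with near-zero cross-correlation.

```python
import numpy as np

rng = np.random.default_rng(2)
N0, Tb, M, trials = 2.0, 1.0, 200, 20_000
dt = Tb / M
t = np.arange(M) * dt

phi1 = np.ones(M) / np.sqrt(Tb)                        # unit energy
phi2 = np.sqrt(2.0 / Tb) * np.cos(2 * np.pi * t / Tb)  # unit energy, orthogonal

# Dense-sample approximation of white noise with PSD N0/2.
w = rng.normal(0.0, np.sqrt((N0 / 2) / dt), (trials, M))
w1 = (w @ phi1) * dt                                   # w1 = integral of w(t) phi1(t) dt
w2 = (w @ phi2) * dt

print(abs(np.var(w1) - N0 / 2) < 0.05)   # variance near N0/2 = 1
print(abs(np.var(w2) - N0 / 2) < 0.05)
print(abs(np.mean(w1 * w2)) < 0.05)      # near zero: uncorrelated
```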
Optimum Receiver
Without any loss of generality, concentrate on the first bit interval. The received signal is
r(t) = si(t) + w(t), 0 ≤ t ≤ Tb,
where si(t) = s1(t) if a 0 is transmitted and si(t) = s2(t) if a 1 is transmitted; expanding,
r(t) = [si1φ1(t) + si2φ2(t)] + Σj wj φj(t).
106/291
Project r(t) onto each basis function, rj = ∫(0..Tb) r(t)φj(t)dt, to obtain
r1 = si1 + w1,  r2 = si2 + w2,  r3 = w3,  r4 = w4, ...
Note that rj, for j = 3, 4, 5, ..., does not depend on which signal (s1(t) or s2(t)) was transmitted.
The decision can now be based on the observations r1, r2, r3, r4, ....
The criterion is to minimize the bit error probability.
Consider only the first n terms (n can be very, very large): r⃗ = {r1, r2, ..., rn}.
Need to partition the n-dimensional observation space into decision regions.
107/291
Observation space
108/291
With decision regions R1 (decide 0D) and R2 (decide 1D),
P[error] = P[0D, 1T] + P[1D, 0T] = P2 ∫(R1) f(r⃗|1T)dr⃗ + P1 ∫(R2) f(r⃗|0T)dr⃗
= P2 + ∫(R2) [P1 f(r⃗|0T) − P2 f(r⃗|1T)] dr⃗.
The error probability is minimized by assigning r⃗ to R2 only where the bracketed term is negative:
P1 f(r⃗|0T) ≷ P2 f(r⃗|1T): decide 0 (0D) if greater, decide 1 (1D) if smaller.
109/291
[Figure: Correlation receiver - r(t) = si(t) + w(t) (AWGN with PSD N0/2) is correlated with φ1(t), ..., φn(t) over [0, Tb] and sampled at t = Tb to give r1 = si1 + w1, r2 = si2 + w2, r3 = w3, ..., rn = wn. The statistic g(r⃗) = P1 f(r⃗|0T) − P2 f(r⃗|1T) is compared with 0: g(r⃗) > 0 → 0D, g(r⃗) < 0 → 1D, and g(r⃗) = 0 is the decision boundary.]
110/291
Equivalently,
f(r⃗|1T)/f(r⃗|0T) ≷ P1/P2  (decide 1D if greater, 0D if smaller).  (2)
The expression f(r⃗|1T)/f(r⃗|0T) is called the likelihood ratio.
The decision rule in (2) was derived without specifying any statistical properties of the noise process w(t).
Simplified decision rule when the noise w(t) is zero-mean, white and Gaussian:
(r1 − s11)² + (r2 − s12)² ≷ (r1 − s21)² + (r2 − s22)² + N0 ln(P1/P2)  (decide 1D if greater, 0D if smaller)
- a minimum-distance receiver!
111/291
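The simplified AWGN rule above is easy to state in code. This hedged Python sketch (the two signal points and priors are invented for illustration) decides 1 when the squared distance to s1 exceeds the squared distance to s2 plus N0·ln(P1/P2); with equal priors it reduces to plain minimum distance.

```python
import numpy as np

def decide(r, s1, s2, N0, P1, P2):
    """Return 1 (decide 1D) or 0 (decide 0D) per the AWGN rule."""
    d1_sq = np.sum((r - s1) ** 2)     # squared distance to s1
    d2_sq = np.sum((r - s2) ** 2)     # squared distance to s2
    return 1 if d1_sq > d2_sq + N0 * np.log(P1 / P2) else 0

s1 = np.array([1.0, 0.0])             # illustrative signal points
s2 = np.array([0.0, 1.0])

print(decide(np.array([0.9, 0.1]), s1, s2, 1.0, 0.5, 0.5))  # closer to s1 -> 0
print(decide(np.array([0.1, 0.9]), s1, s2, 1.0, 0.5, 0.5))  # closer to s2 -> 1
```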
Minimum-Distance Receiver
[Figure: Signal space with s1(t) at (s11, s12), s2(t) at (s21, s22) and the received point (r1, r2) at distances d1 and d2 from them; the boundary splits the plane into "choose s1(t)" and "choose s2(t)" regions.]
112/291
[Figure: Two equivalent implementations. (a) Correlate r(t) = si(t) + w(t) with φ1(t) and φ2(t) over [0, Tb], sample at t = Tb to get r1 and r2, compute (r1 − si1)² + (r2 − si2)² − N0 ln(Pi) for i = 1, 2, and choose the smallest. (b) Form the dot product of r⃗ with each s⃗i and choose the largest of r⃗·s⃗i + (N0/2) ln(Pi) − Ei/2.]
113/291
[Figure: Matched-filter receiver - r(t) passes through h1(t) = φ1(Tb − t) and h2(t) = φ2(Tb − t); the outputs are sampled at t = Tb to give r1 and r2, and a decision is made.]
114/291
Example 5.6
[Figure: Signal set s1(t), s2(t) and orthonormal functions φ1(t), φ2(t).]
115/291
[Figure: Signal space representation, with s1(t) and s2(t) expressed as linear combinations of φ1(t) and φ2(t).]
116/291
[Figure: Decision regions for Example 5.6 under three different prior assumptions - the boundary between "choose s1(t)" and "choose s2(t)" shifts as the ratio P1/P2 changes.]
117/291
Example 5.7
[Figure: Orthonormal functions φ1(t), φ2(t), and the decision regions - the boundary between "choose s1(t)" and "choose s2(t)" is offset from the midpoint by (N0/4) ln(P1/P2).]
119/291
[Figure: Receiver implementations for Example 5.7 - a correlator with φ2(t), or a matched filter h2(t) = φ2(Tb − t), sampled at t = Tb, gives r2; decide s2(t) if r2 ≥ T and s1(t) if r2 < T, where T = (N0/4) ln(P1/P2).]
120/291
[Figure: Rotating the basis so that the signal difference lies along one axis - in the new basis s1(t) and s2(t) have equal first components, ŝ11 = ŝ21.]
The rotated basis is
[φ̂1(t); φ̂2(t)] = [cos θ, sin θ; −sin θ, cos θ][φ1(t); φ2(t)].
121/291
[Figure: In the rotated coordinates only the second component distinguishes s1(t) and s2(t).]
Since ŝ11 = ŝ21, the likelihood ratio
f(r̂1, r̂2, r̂3, ... |1T)/f(r̂1, r̂2, r̂3, ... |0T) = [f(r̂1 − ŝ21)f(r̂2 − ŝ22)f(r̂3)···]/[f(r̂1 − ŝ11)f(r̂2 − ŝ12)f(r̂3)···]
has all factors except the one involving r̂2 cancel, and the rule reduces to
r̂2 ≷ (ŝ22 + ŝ12)/2 + [(N0/2)/(ŝ22 − ŝ12)] ln(P1/P2) ≜ T  (decide 1D if greater, 0D if smaller).
122/291
[Figure: Single-correlator receiver with φ̂2(t), or matched filter h(t) = φ̂2(Tb − t) sampled at t = Tb - compute r̂2 and compare with threshold T: r̂2 ≥ T → 1D, r̂2 < T → 0D.]
Here
φ̂2(t) = [s2(t) − s1(t)] / (E1 − 2ρ√(E1E2) + E2)^(1/2),
T = (ŝ22 + ŝ12)/2 + [(N0/2)/(ŝ22 − ŝ12)] ln(P1/P2).
123/291
Example 5.8
[Figure: Signal space with θ = π/4 between s1(t) and s2(t), each of energy E (ŝ11 = ŝ21), and the rotated basis φ̂1(t), φ̂2(t) formed from (1/√2)[φ1(t) ± φ2(t)].]
124/291
[Figure: Receiver for Example 5.8 - correlate r(t) with φ̂2(t), or use the matched filter h(t) = φ̂2(Tb − t), a two-level pulse of amplitude √(2/Tb) with a transition at Tb/2; sample at t = Tb and compare r̂2 with the threshold T: r̂2 ≥ T → 1D, r̂2 < T → 0D.]
125/291
Receiver Performance
To detect bk, compare r̂2 = ∫((k−1)Tb..kTb) r(t)φ̂2(t)dt against the threshold
T = (ŝ12 + ŝ22)/2 + [N0/(2(ŝ22 − ŝ12))] ln(P1/P2).
[Figure: Conditional pdfs f(r̂2|0T), centered at ŝ12, and f(r̂2|1T), centered at ŝ22, with the threshold T between them; choose 0T below T and 1T above.]
P[error] = P[(0 transmitted and 1 decided) or (1 transmitted and 0 decided)]
126/291
[Figure: The conditional pdfs f(r̂2|0T) and f(r̂2|1T) with the tail areas A and B on either side of the threshold T.]
P[error] = P1 Q( (T − ŝ12)/√(N0/2) ) + P2 [ 1 − Q( (T − ŝ22)/√(N0/2) ) ].
127/291
Q-function (review)
Q(x) = (1/√(2π)) ∫(x..∞) exp(−λ²/2) dλ.
[Figure: The zero-mean, unit-variance Gaussian pdf with the shaded tail area Q(x), and a log-scale plot of Q(x) versus x.]
128/291
Performance when P1 = P2
P[error] = Q( (ŝ22 − ŝ12)/(2√(N0/2)) ) = Q( d21/√(2N0) ).
Probability of error decreases as either the two signals become more dissimilar (increasing the distance between them) or the noise power becomes smaller.
To maximize the distance between the two signals one chooses them so that they are placed 180° from each other: s2(t) = −s1(t), i.e., antipodal signaling.
The error probability does not depend on the signal shapes but only on the distance between them.
129/291
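The antipodal result P[error] = Q(√(2Eb/N0)) (from d21 = 2√Eb) can be checked by Monte Carlo. This is a hedged Python sketch with arbitrary parameters (Eb = N0 = 1); Q is computed exactly via the complementary error function.

```python
import numpy as np
from math import erfc, sqrt

def Q(x):
    """Gaussian tail probability via erfc."""
    return 0.5 * erfc(x / sqrt(2.0))

rng = np.random.default_rng(3)
Eb, N0, n = 1.0, 1.0, 500_000
bits = rng.integers(0, 2, n)
s = np.where(bits == 1, sqrt(Eb), -sqrt(Eb))   # antipodal points +-sqrt(Eb)
r = s + rng.normal(0.0, sqrt(N0 / 2), n)       # AWGN of variance N0/2
ber = float(np.mean((r > 0).astype(int) != bits))

print(abs(ber - Q(sqrt(2 * Eb / N0))) < 5e-3)  # simulation matches theory
```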
130/291
Example 5.9
[Figure: Signal space diagram showing s1(t) (bit 0T) and s2(t) (bit 1T).]
(a) Determine and sketch the two signals s1(t) and s2(t).
(b) The two signals s1(t) and s2(t) are used for the transmission of equally likely bits 0 and 1, respectively, over an additive white Gaussian noise (AWGN) channel. Clearly draw the decision boundary and the decision regions of the optimum receiver. Write the expression for the optimum decision rule.
(c) Find and sketch the two orthonormal basis functions φ̂1(t) and φ̂2(t) such that the optimum receiver can be implemented using only the projection r̂2 of the received signal r(t) onto the basis function φ̂2(t). Draw the block diagram of such a receiver that uses a matched filter.
(d) Consider now the following argument put forth by your classmate. She reasons that since the component of the signals along φ̂1(t) is not useful at the receiver in determining which bit was transmitted, one should not even transmit this component of the signal. Thus she modifies the transmitted signals as follows:
s1^(M)(t) = s1(t) − component of s1(t) along φ̂1(t),
s2^(M)(t) = s2(t) − component of s2(t) along φ̂1(t).
Clearly identify the locations of s1^(M)(t) and s2^(M)(t) in the signal space diagram. What is the average energy of this signal set? Compare it to the average energy of the original set. Comment.
132/291
[Figure: (a) Sketches of s1(t) and s2(t). (b) Decision boundary between regions 0D and 1D, with the modified signals s1^(M)(t) and s2^(M)(t) lying on the φ̂2(t) axis.]
(c) The rotated basis functions (θ = π/4) are
[φ̂1(t); φ̂2(t)] = [cos(π/4), −sin(π/4); sin(π/4), cos(π/4)][φ1(t); φ2(t)],
i.e., φ̂1(t) = (1/√2)[φ1(t) − φ2(t)],  φ̂2(t) = (1/√2)[φ1(t) + φ2(t)].
[Figure: Sketches of φ̂1(t) and φ̂2(t) with levels ±1/√2, and the matched-filter receiver h(t) = φ̂2(1 − t) sampled at t = 1, with decision rule r̂2 ≥ 0 → 0D, r̂2 < 0 → 1D.]
135/291
[Figure: A transmitted sequence built from s1(t) and s2(t) over successive bit intervals 0, Tb, 2Tb, 3Tb, 4Tb, with s2(t − 3Tb) highlighted.]
The transmitted signal is the random process
sT(t) = Σ(k=−∞..∞) gk(t),  gk(t) = s1(t − kTb) with probability P1, or s2(t − kTb) with probability P2.
Its PSD is
S_sT(f) = (P1P2/Tb)|S1(f) − S2(f)|² + (1/Tb²) Σ(n=−∞..∞) |P1 S1(n/Tb) + P2 S2(n/Tb)|² δ(f − n/Tb).
136/291
Split sT(t) into its periodic (average) component v(t) and the zero-mean AC remainder. For the periodic component,
Sv(f) = Σ(n=−∞..∞) |Dn|² δ(f − n/Tb),  Dn = (1/Tb)[P1 S1(n/Tb) + P2 S2(n/Tb)],
so Sv(f) = (1/Tb²) Σn |P1 S1(n/Tb) + P2 S2(n/Tb)|² δ(f − n/Tb).
For the AC component,
Sg(f) = E{|GT(f)|²}/T = (P1P2/Tb)|S1(f) − S2(f)|².
Adding the two gives
S_sT(f) = (P1P2/Tb)|S1(f) − S2(f)|² + (1/Tb²) Σn |P1 S1(n/Tb) + P2 S2(n/Tb)|² δ(f − n/Tb).
For the special, but important, case of antipodal signalling, s2(t) = −s1(t) = p(t), and equally likely bits, P1 = P2 = 0.5, the PSD of the transmitted signal is solely determined by the Fourier transform of p(t):
S_sT(f) = |P(f)|²/Tb.
137/291
[Figure: Digital transmitter - serial-to-parallel conversion and bits-to-levels mapping, a pulse-shaping filter hT[n] = p(nT) with N = Tsym/T samples per symbol, DAC and correction filter, and an NCO generating cos(ωc n) at ωc = 2πfc/Fs for upconversion.]
[Figure: Digital receiver - the received signal s(t) is downconverted with cos(ωc n) and sin(ωc n), passed through the matched filter hR[n] = p(−nT), and downsampled by Tsym/T to give x[n − d].]
[Figure: Outputs of the transmit pulse-shaping filter versus t/Tb for three pulses - rectangular, half-sine, and SRRC (β = 0.5).]
140/291
[Figure: The bit stream bk shaped by p(t) and received through the matched filter h(t) = p(Tb − t).]
[Figure: Outputs of HS/HS matched filter (red) and HS/rect mismatched filter (pink).]
143/291
Bits are mapped into two voltage levels for direct transmission without any frequency translation.
Various baseband signaling techniques (line codes) were developed to satisfy typical design criteria.
144/291
[Figure: Line-code waveforms for a binary data sequence and its clock (bit interval Tb, amplitude V): (a) NRZ code, (b) NRZ-L, (c) RZ code, (d) RZ-L, (e) Bi-Phase, (f) Bi-Phase-L, (g) Miller code, (h) Miller-L.]
147/291
Miller Code
Has at least one transition every two bit intervals, and there are never more than two transitions every two bit intervals.
Bit 1 is encoded by a transition in the middle of the bit interval. Depending on the previous bit, this transition may be either upward or downward.
Bit 0 is encoded by a transition at the beginning of the bit interval if the previous bit is 0. If the previous bit is 1, then there is no transition.
[Figure: Waveforms of (f) Bi-Phase-L, (g) Miller code and (h) Miller-L.]
148/291
NRZ-L Code
[Figure: (a) Signals s1(t) ("one") and s2(t) ("zero"), rectangular pulses of amplitude ±V on [0, Tb]. (b) Basis function φ1(t) = 1/√Tb. (c) Signal space - antipodal points at ±√E_NRZ-L with the boundary at 0 between "choose 0T" and "choose 1T".]
P[error]_NRZ-L = Q(√(2E_NRZ-L/N0)).
149/291
RZ-L Code
[Figure: (a) Signals s1(t) ("zero") and s2(t) ("one") and basis functions φ1(t), φ2(t) of amplitude 1/√Tb with return to zero at Tb/2. (b) Signal space with the boundary between "choose 0T" and "choose 1T".]
P[error]_RZ-L = Q(√(E_RZ-L/N0)).
150/291
Bi-Phase-L Code
[Figure: Signals s1(t) ("one") and s2(t) ("zero"), split-phase pulses of amplitude ±V, the basis function of amplitude 1/√Tb, and the antipodal signal space at ±√E_Bi-L.]
P[error]_Bi-L = Q(√(2E_Bi-L/N0)).
151/291
Miller-Level (M-L)
[Figure: The four signals s1(t), s2(t), s3(t), s4(t) of the Miller-L code over [0, Tb], and the basis functions φ1(t), φ2(t) of amplitude 1/√Tb.]
[Figure: Signal space - s1(t), s2(t), s3(t), s4(t) in the φ1(t)-φ2(t) plane.]
153/291
[Figure: Decision regions for Miller-L - the four signal points at distance √E_M-L from the origin, with decision boundaries at 45°.]
P[error]_M-L = 1 − [1 − Q(√(E_M-L/N0))]².
154/291
Performance Comparison
[Figure: P[error] versus Eb/N0 (dB) for the NRZ-L/Bi-Φ-L, RZ-L and M-L codes.]
With Eb (joules/bit) the transmitted energy per bit:
P[error]_NRZ-L = P[error]_Bi-L = Q(√(2Eb/N0)),
P[error]_RZ-L = Q(√(Eb/N0)),  P[error]_M-L ≈ 2Q(√(Eb/N0)).
155/291
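The comparison above can be read off numerically. This hedged Python sketch evaluates the three expressions at an arbitrary operating point; it confirms the ordering NRZ-L/Bi-Φ-L < RZ-L < M-L and the 3 dB advantage of the antipodal codes.

```python
from math import erfc, sqrt

def Q(x):
    """Gaussian tail probability via erfc (exact)."""
    return 0.5 * erfc(x / sqrt(2.0))

def line_code_ber(EbN0_dB):
    g = 10.0 ** (EbN0_dB / 10.0)     # Eb/N0 as a ratio
    return {
        "NRZ-L": Q(sqrt(2.0 * g)),   # same expression for Bi-Phase-L
        "RZ-L": Q(sqrt(g)),
        "M-L": 2.0 * Q(sqrt(g)),     # approximation from the slide
    }

ber = line_code_ber(8.0)             # illustrative operating point
print(ber["NRZ-L"] < ber["RZ-L"] < ber["M-L"])   # True
```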
Spectrum
S_NRZ-L(f)/E = (1 − 2P)² δ(f) + 4P(1 − P) sin²(πf Tb)/(πf Tb)².
S_Bi-Φ(f)/E = (1 − 2P)² Σ(n ≠ 0) [sin⁴(nπ/2)/(nπ/2)²] δ(f − n/Tb) + 4P(1 − P) sin⁴(πf Tb/2)/(πf Tb/2)².
S_M-L(f)/E = [1/(2θ²(17 + 8 cos 8θ))] (23 − 2 cos θ − 22 cos 2θ − 12 cos 3θ + 5 cos 4θ + 12 cos 5θ + 2 cos 6θ − 8 cos 7θ + 2 cos 8θ),
where θ = πf Tb and, for the M-L expression, P2 = P1 = 0.5.
156/291
[Figure: Normalized PSDs of the Miller-L, NRZ-L and Bi-Φ-L codes versus normalized frequency f Tb.]
157/291
158/291
[Figure: An example line-code waveform of amplitude ±V over 0 to 9Tb.]
159/291
Binary amplitude-shift keying (BASK):
s1(t) = 0 (0T),  s2(t) = V cos(2πfc t) (1T),  0 < t ≤ Tb, fc = n/Tb.
[Figure: (a) Signal space - s1(t) at the origin and s2(t) at √E_BASK along φ1(t), with the boundary Th between "choose 0T" and "choose 1T". (b) Receiver - r1 = ∫(0..Tb) r(t)φ1(t)dt is compared with Th: r1 ≥ Th → 1D, r1 < Th → 0D, where
Th = √E_BASK/2 + (N0/(2√E_BASK)) ln(P1/P2).]
For equally likely bits,
P[error]_BASK = Q(√(E_BASK/(2N0))).
160/291
PSD of BASK
S_BASK(f) = (V^2/16) [ delta(f - fc) + delta(f + fc) + sin^2(pi Tb (f - fc))/(pi^2 Tb (f - fc)^2) + sin^2(pi Tb (f + fc))/(pi^2 Tb (f + fc)^2) ].

[Figure: S_BASK(f) versus f: an impulse of weight V^2/16 at fc plus a sinc-squared lobe with first nulls at fc - 1/Tb and fc + 1/Tb.]

Approximately 95% of the total transmitted power lies in a band of 3/Tb (Hz), centered at fc.
161/291
BPSK:

s1(t) = V cos(2 pi fc t) if 0T, s2(t) = -V cos(2 pi fc t) if 1T, 0 < t <= Tb,

with E_BPSK = V^2 Tb/2 and basis function phi1(t) = sqrt(2/Tb) cos(2 pi fc t).

P[error]_BPSK = Q( sqrt( 2 E_BPSK / N0 ) ).

S_BPSK(f) = (V^2/4) [ sin^2(pi (f - fc) Tb)/(pi^2 (f - fc)^2 Tb) + sin^2(pi (f + fc) Tb)/(pi^2 (f + fc)^2 Tb) ].
162/291
BFSK:

s1(t) = V cos(2 pi f1 t + theta1) if 0T,
s2(t) = V cos(2 pi f2 t + theta2) if 1T, 0 < t <= Tb.

For the two signals to be orthogonal, the minimum frequency separation is

(Delta f)_min = 1/(2 Tb) (coherent demodulation), (Delta f)_min = 1/Tb (noncoherent demodulation).
163/291
phi1(t) = s1(t)/sqrt(E_BFSK), phi2(t) = s2(t)/sqrt(E_BFSK).

[Figure: BFSK signal space: s1(t) at (sqrt(E_BFSK), 0) and s2(t) at (0, sqrt(E_BFSK)); when P2 = P1 the decision boundary is the line r2 = r1, i.e., the direction phi2(t) - phi1(t).]

P[error]_BFSK = Q( sqrt( E_BFSK / N0 ) ).
164/291
PSD of BFSK
S_BFSK(f) = (V^2/16) [ delta(f - f1) + delta(f + f1) + sin^2(pi Tb (f - f1))/(pi^2 Tb (f - f1)^2) + sin^2(pi Tb (f + f1))/(pi^2 Tb (f + f1)^2) ]
+ (V^2/16) [ delta(f - f2) + delta(f + f2) + sin^2(pi Tb (f - f2))/(pi^2 Tb (f - f2)^2) + sin^2(pi Tb (f + f2))/(pi^2 Tb (f + f2)^2) ].

[Figure: S_BFSK(f): impulses of weight V^2/16 at f1 and f2 with overlapping sinc-squared lobes; the spectrum extends roughly from f1 - 1.5/Tb to f2 + 1.5/Tb.]

Bandwidth W = (f2 - f1) + 3/Tb.
165/291
[Figure: P[error] versus Eb/N0 (dB) for coherent BASK, BPSK and BFSK; the BPSK curve lies 3 dB to the left of the other two.]

P[error]_BPSK = Q( sqrt( 2 Eb / N0 ) ), P[error]_BASK = P[error]_BFSK = Q( sqrt( Eb / N0 ) ).
166/291
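The 3 dB gap between BPSK and BASK/BFSK on this slide can be checked numerically: doubling Eb/N0 for BASK or BFSK reproduces the BPSK curve exactly. A small sketch (not part of the slides):

```python
from math import erfc, sqrt

def qfunc(x):
    return 0.5 * erfc(x / sqrt(2.0))

def pe_bpsk(g):   # g = Eb/N0 (linear)
    return qfunc(sqrt(2.0 * g))

def pe_bask(g):   # identical to coherent BFSK
    return qfunc(sqrt(g))

g = 10.0 ** (9.0 / 10.0)          # Eb/N0 = 9 dB
# BASK at twice the SNR matches BPSK -> the curves are 3 dB apart
assert abs(pe_bask(2.0 * g) - pe_bpsk(g)) < 1e-15
```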
Basic idea behind QPSK: cos(2 pi fc t) and sin(2 pi fc t) are orthogonal over [0, Tb] when fc = k/Tb, k integer, so two different bits can be transmitted over the same frequency band at the same time.
The symbol signaling rate (i.e., the baud rate) is rs = 1/Ts = 1/(2Tb) = rb/2
(symbols/sec), i.e., halved.
Message   Bit Pattern   Signal Transmitted (0 < t <= Ts = 2Tb)
m1        00            s1(t) = V cos(2 pi fc t)
m2        01            s2(t) = V sin(2 pi fc t)
m3        11            s3(t) = -V cos(2 pi fc t)
m4        10            s4(t) = -V sin(2 pi fc t)

[Figure: An example QPSK waveform for the symbol sequence m1 = 00, m2 = 01, m3 = 11, m4 = 10 over 0 to 8Tb.]
167/291
Each signal has energy

Es = integral_0^Ts si^2(t) dt = (V^2/2) Ts = V^2 Tb,

and the orthonormal basis functions are phi1(t) = s1(t)/sqrt(Es), phi2(t) = s2(t)/sqrt(Es).

[Figure: QPSK constellation: s1(t), s2(t), s3(t), s4(t) at distance sqrt(Es) from the origin on the +/-phi1 and +/-phi2 axes.]
168/291
Based on the observation vector r = (r1, r2, ..., rm):

[Figure: The four decision regions of the phi1-phi2 plane, one per message mi.]

P[correct] = integral_Region1 P1 f(r|s1(t)) dr + integral_Region2 P2 f(r|s2(t)) dr + integral_Region3 P3 f(r|s3(t)) dr + integral_Region4 P4 f(r|s4(t)) dr.

Choose si(t) if Pi f(r|si(t)) > Pj f(r|sj(t)), j = 1, 2, 3, 4; j != i.
169/291
Choose si(t) if

(N0/2) ln Pi + r1 si1 + r2 si2 > (N0/2) ln Pj + r1 sj1 + r2 sj2, j = 1, 2, 3, 4; j != i.

[Figure: QPSK receiver: r(t) is correlated with phi1(t) and phi2(t) over 0 <= t <= Ts = 2Tb to form r1 and r2; the receiver computes r1 sj1 + r2 sj2 + (N0/2) ln(Pj) for j = 1, 2, 3, 4 and chooses the largest.]
170/291
Minimum-Distance Receiver
Choose si(t) if (r1 - si1)^2 + (r2 - si2)^2 is the smallest.

[Figure: Minimum-distance decision regions for the QPSK constellation; each boundary bisects the angle (pi/4 offsets) between adjacent signals.]
171/291
[Figure: Rotated coordinates for computing the error probability given s1(t): the boundaries lie at angle pi/2 on either side, each at distance sqrt(Es/2) from the signal.]

P[correct|s1(t)] = [ 1 - Q( sqrt( Es / N0 ) ) ]^2.
172/291
[Figure: Joint pdf f(r1, r2) of the observations given one transmitted signal, and its circular contours in the (r1, r2) plane.]
173/291
[Figure: Conditional error events given s1(t): P[m2|m1], P[m3|m1] and P[m4|m1] correspond to (r1, r2) falling in the regions of s2(t), s3(t) and s4(t).]

P[symbol error] = 1 - [ 1 - Q( sqrt( Es / N0 ) ) ]^2 ~= 2 Q( sqrt( Es / N0 ) ).

With Gray mapping, counting the bit errors per symbol error,

P[bit error] = Q( sqrt( Es/N0 ) ) [ 1 - Q( sqrt( Es/N0 ) ) ] + Q^2( sqrt( Es/N0 ) ) = Q( sqrt( Es / N0 ) ) = Q( sqrt( 2 Eb / N0 ) ),

the same as BPSK.
Gray mapping: Nearest neighbors are mapped to the bit pairs that differ in only one bit.
174/291
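A small numerical sketch (not part of the slides; it assumes the Gray map 00/01/11/10 used here) confirming that the average number of bit errors per QPSK symbol collapses to Q(sqrt(Es/N0)):

```python
from math import erfc, sqrt

def qfunc(x):
    return 0.5 * erfc(x / sqrt(2.0))

es_over_n0 = 10.0 ** (8.0 / 10.0)  # Es/N0 = 8 dB (linear)
q = qfunc(sqrt(es_over_n0))

# Conditional symbol-error probabilities given m1 = 00 (Gray mapping)
p_nearest = q * (1.0 - q)   # m2 = 01 or m4 = 10: one bit differs
p_diagonal = q * q          # m3 = 11: both bits differ

# Average number of bit errors per symbol, divided by 2 bits/symbol
p_bit = (1 * p_nearest + 1 * p_nearest + 2 * p_diagonal) / 2.0

assert abs(p_bit - q) < 1e-12   # identical to Q(sqrt(Es/N0))
```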
[Figure: QPSK modulator: the bit sequence a(t) = a0, a1, a2, ... is demultiplexed into the even-indexed (inphase) stream aI(t) and the odd-indexed (quadrature) stream aQ(t); aI(t) multiplies V cos(2 pi fc t), aQ(t) multiplies V sin(2 pi fc t), and the two products are summed to form s(t). Each of aI(t) and aQ(t) holds its +/-1 value for 2Tb.]
175/291
[Figure: The component waveforms aI(t) V cos(2 pi fc t), aQ(t) V sin(2 pi fc t), and their sum, the QPSK signal s(t), over 0 to 10Tb.]

s(t) = aI(t) V cos(2 pi fc t) + aQ(t) V sin(2 pi fc t)
     = sqrt( aI^2(t) + aQ^2(t) ) V cos( 2 pi fc t - tan^{-1}( aQ(t)/aI(t) ) )
     = sqrt(2) V cos( 2 pi fc t - theta(t) ),

where

theta(t) = pi/4 if aI = +1, aQ = +1;
theta(t) = -pi/4 if aI = +1, aQ = -1;
theta(t) = 3pi/4 if aI = -1, aQ = +1;
theta(t) = -3pi/4 if aI = -1, aQ = -1.
176/291
The orthonormal basis functions are

phi1(t) = V cos(2 pi fc t)/sqrt(V^2 Tb) = sqrt(2/Ts) cos(2 pi fc t),
phi2(t) = V sin(2 pi fc t)/sqrt(V^2 Tb) = sqrt(2/Ts) sin(2 pi fc t),

and the four sign patterns map to the four phases:

aI = 1, aQ = 1 -> theta1(t) = pi/4;  aI = 1, aQ = -1 -> theta2(t) = -pi/4;
aI = -1, aQ = 1 -> theta3(t) = 3pi/4;  aI = -1, aQ = -1 -> theta4(t) = -3pi/4.

[Figure: The resulting four-point constellation in the phi1-phi2 plane, with per-axis coordinates +/- V sqrt(Ts/2).]
177/291
[Figure: QPSK demodulator: r(t) is correlated with phi1(t) and phi2(t) over Ts; each output is compared with a threshold of 0 to recover aI(t) and aQ(t), which are multiplexed back into the bit stream a(t). The per-axis signal amplitudes are +/- V sqrt(Ts/2).]

P[bit error] = Q( sqrt( V^2 Ts / N0 ) ) = Q( sqrt( 2 Eb / N0 ) ).
178/291
M -ARY COMMUNICATION
SYSTEMS FOR AWGN CHANNELS
179/291
There are benefits to be gained when M-ary (M > 2) signaling methods are used rather than straightforward binary signaling.
In general, M-ary communication is used when one needs to design a communication system that is bandwidth efficient.
Unlike QPSK and its variations, the gain in bandwidth is accomplished at the expense of error performance.
To use M-ary modulation, the bit stream is blocked into groups of lambda bits; the number of bit patterns is M = 2^lambda.
The symbol transmission rate is rs = 1/Ts = 1/(lambda Tb) = rb/lambda (symbols/sec), so there is a bandwidth saving of 1/lambda compared to binary modulation.
Shall consider M-ary ASK, PSK, QAM (quadrature amplitude modulation) and FSK.
180/291
[Figure: M-ary system: the source emits message mi, the modulator (transmitter) sends si(t) over the channel, and the demodulator (receiver) outputs the estimate m_hat_i; w(t) is AWGN with two-sided PSD N0/2 watts/Hz.]
Receiver needs to make the decision on the transmitted signal based on the
received signal r(t) = si (t) + w(t).
The determination of the optimum receiver (with minimum error) proceeds in a
manner analogous to that for the binary case.
181/291
182/291
Based on r = (r1, r2, ..., rN):

[Figure: The M decision regions, one per message.]

Choose mi if Sum_{k=1}^{N} (rk - sik)^2 < Sum_{k=1}^{N} (rk - sjk)^2, j = 1, 2, ..., M; j != i.
183/291
M-ary ASK:

si(t) = Vi sqrt(2/Ts) cos(2 pi fc t) = [(i - 1) Delta] phi1(t), 0 <= t <= Ts, i = 1, 2, ..., M,

where phi1(t) = sqrt(2/Ts) cos(2 pi fc t).

[Figure: M-ASK constellation on the phi1-axis: s1(t) at 0, s2(t) at Delta, ..., sk(t) at (k-1)Delta, ..., sM(t) at (M-1)Delta.]

[Figure: Receiver: r(t) = si(t) + w(t) is correlated with phi1(t) over each symbol interval (k-1)Ts to kTs to produce r1, which drives the decision device; w(t) is WGN of strength N0/2 watts/Hz.]
184/291
Choose sk(t) if (k - 3/2) Delta < r1 < (k - 1/2) Delta, k = 2, 3, ..., M - 1;
choose s1(t) if r1 < Delta/2; choose sM(t) if r1 > (M - 3/2) Delta.

[Figure: Conditional pdf f(r1|sk(t)) centered at (k-1)Delta and the decision intervals along the r1-axis.]
185/291
P[error] = Sum_{i=1}^{M} P[si(t)] P[error|si(t)], with

P[error|si(t)] = 2 Q( Delta/sqrt(2 N0) ), i = 2, 3, ..., M - 1,
P[error|si(t)] = Q( Delta/sqrt(2 N0) ), i = 1, M.

For equally likely signals,

P[error] = ( 2(M - 1)/M ) Q( Delta/sqrt(2 N0) ).
186/291
Shifting the constellation to be symmetric about the origin minimizes the transmitted energy:

si(t) = (2i - 1 - M)(Delta/2) sqrt(2/Ts) cos(2 pi fc t), 0 <= t <= Ts, i = 1, 2, ..., M,

with Vi = (2i - 1 - M)(Delta/2).

[Figure: The shifted constellations for M = 2 and M = 4 on the phi1-axis.]

Es = (1/M) Sum_{i=1}^{M} Ei = (Delta^2/(4M)) Sum_{i=1}^{M} (2i - 1 - M)^2 = (M^2 - 1) Delta^2/12,

Eb = Es/log2 M = (M^2 - 1) Delta^2/(12 log2 M).
187/291
P[symbol error] = ( 2(M - 1)/M ) Q( sqrt( 6 Es/((M^2 - 1) N0) ) ) = ( 2(M - 1)/M ) Q( sqrt( 6 log2 M Eb/((M^2 - 1) N0) ) ).

With Gray mapping,

P[bit error] ~= ( 2(M - 1)/(M log2 M) ) Q( sqrt( 6 log2 M Eb/((M^2 - 1) N0) ) ).

[Figure: P[symbol error] versus Eb/N0 (dB) for M = 2 (W = 1/Tb), M = 4 (W = 1/2Tb), M = 8 (W = 1/3Tb) and M = 16 (W = 1/4Tb).]
W is obtained by using the W Ts = 1 rule-of-thumb. Here 1/Tb is the bit rate (bits/s).
188/291
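The M-ASK symbol-error expression above is easy to evaluate; a minimal sketch (not part of the slides) showing the bandwidth-versus-power trade at a fixed Eb/N0:

```python
from math import erfc, sqrt, log2

def qfunc(x):
    return 0.5 * erfc(x / sqrt(2.0))

def ser_mask(M, eb_over_n0_db):
    """Symbol-error rate of symmetric M-ary ASK at a given Eb/N0 (dB)."""
    g = 10.0 ** (eb_over_n0_db / 10.0)
    arg = sqrt(6.0 * log2(M) * g / (M * M - 1.0))
    return 2.0 * (M - 1.0) / M * qfunc(arg)

# Larger M saves bandwidth but costs error performance at fixed Eb/N0
assert ser_mask(2, 10.0) < ser_mask(4, 10.0) < ser_mask(8, 10.0) < ser_mask(16, 10.0)
```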
[Figure: Example waveforms over 0 to 10Tb: BPSK signalling (one bit per Tb) versus 4-ASK signalling (two bits per symbol, symbol duration 2Tb).]
189/291
M-ary PSK:

si(t) = V cos( 2 pi fc t - (i - 1) 2pi/M ), 0 <= t <= Ts, i = 1, 2, ..., M,

with phi1(t) = sqrt(2/Ts) cos(2 pi fc t), phi2(t) = sqrt(2/Ts) sin(2 pi fc t), and coordinates

si1 = sqrt(Es) cos( (i - 1) 2pi/M ), si2 = sqrt(Es) sin( (i - 1) 2pi/M ).

The signals lie on a circle of radius sqrt(Es), and are spaced every 2pi/M radians around the circle.
190/291
[Figure: 8-PSK constellation, 0 <= t <= Ts, with Gray labelling around the circle of radius sqrt(Es): s1(t) = 000, s2(t) = 001, s3(t) = 011, s4(t) = 010, s5(t) = 110, s6(t) = 111, s7(t) = 101, s8(t) = 100.]
191/291
[Figure: General M-PSK constellation: s1(t) on the phi1-axis, with adjacent signals s2(t) and sM(t) at angles +/- 2pi/M.]
192/291
[Figure: M-PSK receiver: r(t) is correlated with phi1(t) and phi2(t) over Ts to form (r1, r2); the receiver computes the distance to each si(t), i = 1, 2, ..., M, and chooses the smallest.]

P[error] = P[error|s1(t)] = 1 - integral integral_{(r1, r2) in Region 1} f(r1, r2|s1(t)) dr1 dr2.

[Figure: Decision regions of the adjacent signals s1(t) and s2(t), separated in angle by 2pi/M.]
193/291
[Figure: Lower bound: the error region given s1(t) contains a half-plane whose boundary is at distance sqrt(Es) sin(pi/M) from s1(t) = (sqrt(Es), 0); hence

P[error|s1(t)] > Q( sqrt(2 Es/N0) sin(pi/M) ).]
194/291
[Figure: Upper bound: the error region is contained in the union of the two half-planes bounding Region 1 (toward s2(t) and sM(t)), each at distance sqrt(Es) sin(pi/M); hence by the union bound

P[error] < 2 Q( sqrt(2 Es/N0) sin(pi/M) ).]
195/291
[Figure: P[symbol error] of M-PSK versus Eb/N0 (dB) for M = 2, 4, 8, 16, 32: exact curves together with the lower and upper bounds.]

With Gray mapping,

P[bit error]_M-PSK ~= ( 2/log2 M ) Q( sqrt( 2 log2 M Eb/N0 ) sin(pi/M) ).
196/291
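The two M-PSK bounds above differ only by the union-bound factor of 2; a quick numerical sketch (not part of the slides):

```python
from math import erfc, sqrt, sin, pi

def qfunc(x):
    return 0.5 * erfc(x / sqrt(2.0))

def ser_mpsk_bounds(M, es_over_n0_db):
    """Lower and upper bounds on the M-PSK symbol-error rate."""
    g = 10.0 ** (es_over_n0_db / 10.0)
    q = qfunc(sqrt(2.0 * g) * sin(pi / M))
    return q, 2.0 * q   # the exact SER is squeezed between these

lo, hi = ser_mpsk_bounds(8, 14.0)
assert lo < hi and hi == 2.0 * lo
```

At high SNR the two bounds pinch together (on a log scale), which is why the bound is used in place of the exact integral.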
M-QAM constellations are two-dimensional and they involve inphase (I) and quadrature (Q)
carriers:
phi_I(t) = sqrt(2/Ts) cos(2 pi fc t), 0 <= t <= Ts,
phi_Q(t) = sqrt(2/Ts) sin(2 pi fc t), 0 <= t <= Ts.

The ith transmitted M-QAM signal is:

si(t) = V_I,i sqrt(2/Ts) cos(2 pi fc t) + V_Q,i sqrt(2/Ts) sin(2 pi fc t)
      = sqrt(Ei) sqrt(2/Ts) cos(2 pi fc t - theta_i), 0 <= t <= Ts, i = 1, 2, ..., M.

V_I,i and V_Q,i are the information-bearing discrete amplitudes of the two quadrature carriers, Ei = V_I,i^2 + V_Q,i^2 and theta_i = tan^{-1}(V_Q,i/V_I,i).
197/291
[Figure: Example constellations for M = 4 (rectangle, triangle) and M = 8 ((1,7), (4,4) and (1,3)-rectangle ring arrangements).]
198/291
[Figure: Example M = 16 constellations: rectangle, triangle, (4,12), (8,8), (1,5,10) and hexagonal arrangements, on rings of radii R1 and R2.]
199/291
With the same minimum distance of all the constellations, a more efficient signal
constellation is the one that has smaller average transmitted energy.
[Figure: The four M = 8 constellations compared: rectangular, triangular, (1,7) and (4,4).]

With minimum distance Delta, Es for the rectangular, triangular, (1,7) and (4,4) constellations are found to be 1.50 Delta^2, 1.125 Delta^2, 1.162 Delta^2 and 1.183 Delta^2, respectively.
200/291
Rectangular M -QAM
[Figure: Nested rectangular M-QAM constellations in the phi_I-phi_Q plane for M = 4, 8, 16, 32, 64.]

The signal components take values from the discrete set { (2i - 1 - sqrt(M)) Delta/2, i = 1, 2, ..., sqrt(M) }.
201/291
[Figure: (a) Transmitter: the information bits are split into inphase bits (selecting V_I,i, modulating sqrt(2/Ts) cos(2 pi fc t)) and quadrature bits (selecting V_Q,i, modulating sqrt(2/Ts) sin(2 pi fc t)); the two ASK signals are summed to form si(t).]
202/291
[Figure: (b) Receiver: r(t) = si(t) + w(t) is correlated with phi_I(t) = sqrt(2/Ts) cos(2 pi fc t) and phi_Q(t) = sqrt(2/Ts) sin(2 pi fc t) over Ts; each branch makes an independent ASK decision, and the outputs are multiplexed.]

The most practical rectangular QAM constellation is one in which lambda_I = lambda_Q = lambda/2, i.e., M is a perfect square and the rectangle is a square.
203/291
For square M-QAM, each quadrature branch is a sqrt(M)-ary ASK system with branch error probability 2(1 - 1/sqrt(M)) Q( sqrt( 3 Es/((M - 1) N0) ) ), so

P[symbol error] = 1 - [ 1 - 2 (1 - 1/sqrt(M)) Q( sqrt( 3 Es/((M - 1) N0) ) ) ]^2 <= 4 Q( sqrt( 3 log2 M Eb/((M - 1) N0) ) ).
204/291
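The square-QAM expression above can be evaluated directly; a minimal sketch (not part of the slides):

```python
from math import erfc, sqrt, log2

def qfunc(x):
    return 0.5 * erfc(x / sqrt(2.0))

def ser_square_qam(M, eb_over_n0_db):
    """Exact symbol-error rate of square M-QAM (M a perfect square)."""
    g = 10.0 ** (eb_over_n0_db / 10.0) * log2(M)      # Es/N0 = (log2 M) * Eb/N0
    p_branch = 2.0 * (1.0 - 1.0 / sqrt(M)) * qfunc(sqrt(3.0 * g / (M - 1.0)))
    return 1.0 - (1.0 - p_branch) ** 2   # both quadrature branches must be correct

# 16-QAM needs more Eb/N0 than 4-QAM (QPSK) for the same symbol-error rate
assert ser_square_qam(4, 10.0) < ser_square_qam(16, 10.0)
```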
[Figure: Exact P[symbol error] and the upper bound for square M-QAM, M = 2, 4, 16, 64, 256, versus Eb/N0 (dB).]
205/291
[Figure: P[symbol error] of M-ASK, M-PSK and M-QAM compared for M = 4, 8, 16, 32 versus Eb/N0 (dB); at M = 4, QAM coincides with QPSK.]
206/291
M-ary FSK:

si(t) = V cos(2 pi fi t), 0 <= t <= Ts (0 elsewhere), i = 1, 2, ..., M,

where, for integers k != i, the frequencies satisfy

fk - fi = (k - i)/(2 Ts) (coherently orthogonal), or fk - fi = (k - i)/Ts (noncoherently orthogonal).

[Figure: The first three dimensions of the M-FSK signal space: s1(t), s2(t), s3(t) on the orthogonal axes phi1(t), phi2(t), phi3(t), each at distance sqrt(Es) from the origin.]
207/291
Choose mi if Sum_{k=1}^{M} (rk - sik)^2 < Sum_{k=1}^{M} (rk - sjk)^2, j = 1, 2, ..., M; j != i,

which, since all signals have equal energy, is equivalent to:

Choose mi if ri > rj, j = 1, 2, ..., M; j != i.

[Figure: M-FSK receiver: r(t) is correlated with each phi_i(t) = si(t)/sqrt(Es) over Ts to form r1, ..., rM, and the decision selects the largest.]
208/291
P[correct|s1(t)] = P[(r2 < r1) and ... and (rM < r1) | s1(t) sent]
= integral_{r1 = -inf}^{inf} P[(r2 < r1) and ... and (rM < r1) | {r1 = r1, s1(t)}] f(r1|s1(t)) dr1.

P[correct] = integral_{r1 = -inf}^{inf} [ integral_{-inf}^{r1} (1/sqrt(pi N0)) exp(-lambda^2/N0) dlambda ]^{M-1} (1/sqrt(pi N0)) exp( -(r1 - sqrt(Es))^2/N0 ) dr1.
209/291
[Figure: P[symbol error] of M-FSK versus Eb/N0 (dB) for M = 2, 4, 8, 16, 32, 64; unlike ASK, PSK and QAM, the performance improves as M grows.]

P[error] = 1 - integral_{-inf}^{inf} [ (1/sqrt(2 pi)) integral_{-inf}^{y} e^{-x^2/2} dx ]^{M-1} (1/sqrt(2 pi)) exp{ -(1/2) ( y - sqrt(2 log2 M Eb/N0) )^2 } dy.
210/291
For orthogonal signalling, all symbol errors are equally likely, so with k = log2 M bits per symbol,

Pr[bit error] = ( 2^{k-1}/(2^k - 1) ) Pr[symbol error] ~= (1/2) Pr[symbol error] for large k.
211/291
Introduction
We have considered that the transmitted signal is only degraded by AWGN and it
might be subjected to filtering.
There are communication channels (e.g., mobile wireless channels) where the received signal is subjected to a transformation or filtering that is not known precisely.
Typically the gain and/or phase of a digitally modulated transmitted signal is not
known precisely at the receiver.
It is common to model these parameters as random.
Shall consider channel models where the amplitude and/or phase of the received
signal is random.
212/291
[Figure: Effect of a random amplitude a on antipodal and orthogonal binary constellations: the received points +/- a sqrt(E) (antipodal) or (a sqrt(E), 0) and (0, a sqrt(E)) (orthogonal) slide along the signal axes, but the decision boundaries (r1 = 0, or the line r1 = r2) do not move.]
213/291
Conditioned on the amplitude a, and averaged over its pdf fa(a):

P[error|a] = Q( a sqrt( 2 Eb/N0 ) ) (antipodal), P[error|a] = Q( a sqrt( Eb/N0 ) ) (orthogonal);

E{P[error]} = integral_a Q( a sqrt( 2 Eb/N0 ) ) fa(a) da (antipodal),
E{P[error]} = integral_a Q( a sqrt( Eb/N0 ) ) fa(a) da (orthogonal).
214/291
215/291
If all the signal points lie at distance sqrt(Es) from the origin (i.e., equal energy), then the optimum decision regions are invariant to any scaling by a, provided that a >= 0.
The matched-filter or correlation receiver structure is still optimum, and one does not even need to know fa(a).
The error performance, however, depends crucially on a and fa(a).

[Figure: An equal-energy constellation scaled by a = 1 and by a = 0.5: the points move toward the origin but stay in the same decision regions.]
216/291
[Figure: Effect of a random phase theta on BASK and BFSK: each transmitted point si(t) rotates to a received point siR(t) on a circle of radius sqrt(E) in the (phi_I, phi_Q) plane; the locus of s2R(t) is the whole circle. For BFSK each frequency rotates within its own two-dimensional subspace, spanned by phi_{1,I}, phi_{1,Q} and phi_{2,I}, phi_{2,Q}.]
217/291
For noncoherent BASK the received signal is

r(t) = w(t) if 0T; r(t) = sqrt(E) sqrt(2/Tb) cos(2 pi fc t - theta) + w(t) if 1T,

so the sufficient statistics are

rI = wI (0T) or sqrt(E) cos theta + wI (1T), rQ = wQ (0T) or sqrt(E) sin theta + wQ (1T).

Averaging over the uniform phase theta:

f(rI, rQ|1T) = integral_0^{2pi} (1/(pi N0)) exp( -[ (rI - sqrt(E) cos theta)^2 + (rQ - sqrt(E) sin theta)^2 ]/N0 ) (1/(2 pi)) dtheta
= (1/(pi N0)) e^{-(rI^2 + rQ^2)/N0} e^{-E/N0} I0( (2 sqrt(E)/N0) sqrt(rI^2 + rQ^2) ).

Because I0(.) is monotonic, the likelihood ratio test reduces to an envelope comparison:

sqrt(rI^2 + rQ^2) >< Th (choose 1D if larger, 0D if smaller).
218/291
[Figure: Noncoherent BASK correlation receiver: r(t) is correlated with phi_I(t) = sqrt(2/Tb) cos(2 pi fc t) and phi_Q(t) = sqrt(2/Tb) sin(2 pi fc t) over each bit interval; the envelope sqrt(rI^2 + rQ^2) is compared against the threshold Th.]

The optimum threshold satisfies

I0( 2 sqrt(E) Th/N0 ) = e^{E/N0}, i.e., Th = ( N0/(2 sqrt(E)) ) I0^{-1}( e^{E/N0} ).
219/291
Equivalently, the correlators can be replaced by a filter matched to the carrier followed by an envelope detector:

y(t) = integral r(lambda) sqrt(2/Tb) cos(2 pi fc (t - lambda)) [ u(t - lambda) - u(t - lambda - Tb) ] dlambda
     = sqrt( yI^2(t) + yQ^2(t) ) cos( 2 pi fc t - tan^{-1}( yQ(t)/yI(t) ) ),

where sqrt(yI^2(t) + yQ^2(t)) is the envelope. At the sampling instant t = kTb, yI(kTb) = rI and yQ(kTb) = rQ.

[Figure: Bandpass-filter/envelope-detector implementation: r(t) passes through h(t) = sqrt(2/Tb) cos(2 pi fc t), 0 <= t <= Tb (0 elsewhere); the envelope sampled at t = kTb is compared with Th.]
220/291
Error Performance
P[error|0T] = integral integral_{sqrt(rI^2 + rQ^2) > Th} (1/(pi N0)) e^{-(rI^2 + rQ^2)/N0} drI drQ = integral_{phi=0}^{2pi} integral_{rho=Th}^{inf} (rho/(pi N0)) e^{-rho^2/N0} drho dphi = e^{-Th^2/N0}.

P[error|1T] = 1 - P[correct|1T] = 1 - Q( sqrt(2E/N0), sqrt(2/N0) Th ),

where Q(alpha, beta) = integral_beta^inf x e^{-(x^2 + alpha^2)/2} I0(alpha x) dx is the Marcum Q-function.
221/291
There is about a 0.3 dB penalty in power when using the simpler suboptimum threshold Th = sqrt(E)/2.

[Figure: P[error] versus Eb/N0 (dB): noncoherent BASK with threshold 0.5 sqrt(E) and with the optimum threshold, noncoherent BFSK, noncoherent DBPSK, and coherent BPSK.]
222/291
For noncoherent BFSK the received signal is

r(t) = sqrt(E) sqrt(2/Tb) cos(2 pi f1 t - theta) + w(t) if 0T; r(t) = sqrt(E) sqrt(2/Tb) cos(2 pi f2 t - theta) + w(t) if 1T.

The four sufficient statistics are, for 0T,

r1,I = sqrt(E) cos theta + w1,I, r1,Q = sqrt(E) sin theta + w1,Q, r2,I = w2,I, r2,Q = w2,Q,

with the roles of the two frequencies swapped for 1T. Averaging over theta:

f(r1,I, r1,Q, r2,I, r2,Q|0T) = (1/(pi N0))^2 e^{-(r1,I^2 + r1,Q^2 + r2,I^2 + r2,Q^2)/N0} e^{-E/N0} I0( (2 sqrt(E)/N0) sqrt(r1,I^2 + r1,Q^2) ),

f(r1,I, r1,Q, r2,I, r2,Q|1T) = (1/(pi N0))^2 e^{-(r1,I^2 + r1,Q^2 + r2,I^2 + r2,Q^2)/N0} e^{-E/N0} I0( (2 sqrt(E)/N0) sqrt(r2,I^2 + r2,Q^2) ).
223/291
Since I0(.) is monotonically increasing, the likelihood ratio test

I0( (2 sqrt(E)/N0) sqrt(r2,I^2 + r2,Q^2) ) >< I0( (2 sqrt(E)/N0) sqrt(r1,I^2 + r1,Q^2) )

reduces to the envelope comparison

r2,I^2 + r2,Q^2 >< r1,I^2 + r1,Q^2 (choose 1D if the left side is larger, 0D otherwise).

[Figure: Square-law receiver: r(t) feeds two bandpass filters, h1(t) = sqrt(2/Tb) cos(2 pi f1 t) and h2(t) = sqrt(2/Tb) cos(2 pi f2 t), each on 0 <= t <= Tb; the squared envelopes sampled at t = kTb are compared.]
The demodulator finds the envelope at the two frequencies and chooses the larger one at the
sampling instant.
224/291
An error occurs (given 0T) when r2,I^2 + r2,Q^2 >= r1,I^2 + r1,Q^2. Fixing r1,I^2 + r1,Q^2 = R^2:

P[error|0T, r1,I^2 + r1,Q^2 = R^2] = P[(r2,I, r2,Q) falls outside the circle of radius R|0T] = e^{-R^2/N0}.

Averaging over the Gaussian statistics of r1,I = sqrt(E) cos theta + w1,I and r1,Q = sqrt(E) sin theta + w1,Q, the remaining integrals evaluate in closed form:

P[error] = (1/2) e^{-E/(2 N0)} = (1/2) e^{-Eb/(2 N0)}.
225/291
Noncoherent BASK is about 0.3 dB more power efficient than noncoherent BFSK.
[Figure: P[error] versus Eb/N0 (dB): noncoherent BASK (suboptimum and optimum thresholds), noncoherent BFSK, noncoherent DBPSK, and coherent BPSK.]
226/291
Differential BPSK
Coherent BPSK is 3 dB better than coherent BASK or BFSK: Is it possible to use BPSK on
a channel with phase uncertainty?
Possible if a phase reference can be established at the receiver that is matched to the
received signal.
If the phase uncertainty changes relatively slowly with time, the received signal in one bit
interval can act as a phase reference for the succeeding bit interval.
[Figure: DBPSK receiver: r(t) = sqrt(Eb) sqrt(2/Tb) cos(2 pi fc t - theta) + w(t) is multiplied by its Tb-delayed (and normalized) version and integrated over each bit interval to form rk.]
227/291
Differential encoding: 0T means no phase change, 1T means a phase change. The decision rule is: choose 0D if rk > 0, 1D if rk < 0, and the resulting error probability is

P[error]_DBPSK = (1/2) e^{-Eb/N0}.

Compared with noncoherent BFSK, the only difference is that, rather than Eb joules/bit, the effective energy in DBPSK becomes 2Eb. This is because the received signal over two bit intervals is used to make a decision.
228/291
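The noncoherent/differential error expressions above place DBPSK between coherent BPSK and noncoherent BFSK; a minimal sketch (not part of the slides):

```python
from math import erfc, exp, sqrt

def qfunc(x):
    return 0.5 * erfc(x / sqrt(2.0))

def pe_coherent_bpsk(g):      # g = Eb/N0 (linear)
    return qfunc(sqrt(2.0 * g))

def pe_dbpsk(g):
    return 0.5 * exp(-g)

def pe_noncoherent_bfsk(g):
    return 0.5 * exp(-g / 2.0)

g = 10.0 ** (10.0 / 10.0)     # Eb/N0 = 10 dB
# DBPSK sits between coherent BPSK and noncoherent BFSK
assert pe_coherent_bpsk(g) < pe_dbpsk(g) < pe_noncoherent_bfsk(g)
```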
[Figure: The same P[error] comparison as on the previous slides, highlighting the noncoherent DBPSK curve.]
229/291
[Figure: A mobile wireless channel: the transmitted signal reaches the receiver over many reflected paths.]
230/291
Consider the transmitted signal sT(t) = s(t) cos(2 pi fc t), where s(t) = sqrt(Eb) sqrt(2/Tb) over a bit interval. The received signal is

r(t) = Sum_j alpha_j s(t - tj) cos(2 pi fc (t - tj)) ~= s(t) Sum_j alpha_j cos(2 pi fc t - theta_j),

where alpha_j represents the attenuation and tj the delay along the jth path, which are random variables. Also, because s(t) is lowpass, we approximate s(t - tj) ~= s(t).

Since tj >> 1/fc, the random phase theta_j lies in the range [0, 2 pi). Now

r(t) = s(t) [ ( Sum_j alpha_j cos theta_j ) cos(2 pi fc t) + ( Sum_j alpha_j sin theta_j ) sin(2 pi fc t) ].
231/291
Define nF,I = Sum_j alpha_j cos theta_j and nF,Q = Sum_j alpha_j sin theta_j. Then

E{nF,I} = Sum_j E{alpha_j} E{cos theta_j} = 0, E{nF,Q} = Sum_j E{alpha_j} E{sin theta_j} = 0,

E{nF,I^2} = Sum_j E{alpha_j^2} E{cos^2 theta_j} = sigmaF^2/2, E{nF,Q^2} = Sum_j E{alpha_j^2} E{sin^2 theta_j} = sigmaF^2/2,

E{nF,I nF,Q} = Sum_j Sum_k E{alpha_j alpha_k} E{cos theta_j sin theta_k} = 0.
Since the number of multipaths is large, the central limit theorem says that nF,I , nF,Q are
Gaussian random variables.
232/291
nF,I and nF,Q are statistically independent Gaussian random variables, zero-mean, variance sigmaF^2/2:

f_{nI,nQ}(nI, nQ) = f_{nI}(nI) f_{nQ}(nQ) = N(0, sigmaF^2/2) N(0, sigmaF^2/2).

The received signal is therefore:

r(t) = alpha s(t) cos(2 pi fc t - theta),

where alpha = sqrt(nF,I^2 + nF,Q^2), theta = tan^{-1}(nF,Q/nF,I), and

f_theta(theta) = 1/(2 pi) (uniform), f_alpha(alpha) = (2 alpha/sigmaF^2) exp( -alpha^2/sigmaF^2 ) u(alpha) (Rayleigh).
233/291
For BFSK, s(t) = sqrt(Eb) sqrt(2/Tb) cos(2 pi f1 t) if 0T, and sqrt(Eb) sqrt(2/Tb) cos(2 pi f2 t) if 1T. Through the fading channel,

r(t) = sqrt(Eb) [ nF,I sqrt(2/Tb) cos(2 pi f1 t) + nF,Q sqrt(2/Tb) sin(2 pi f1 t) ] + w(t) for 0T,
r(t) = sqrt(Eb) [ nF,I sqrt(2/Tb) cos(2 pi f2 t) + nF,Q sqrt(2/Tb) sin(2 pi f2 t) ] + w(t) for 1T,

where the bracketed carriers are the basis functions phi_{1,I}(t), phi_{1,Q}(t), phi_{2,I}(t), phi_{2,Q}(t).

The transmitted signal lies entirely within the signal space spanned by phi_{1,I}(t), phi_{1,Q}(t), phi_{2,I}(t) and phi_{2,Q}(t).
234/291
The sufficient statistics are, for 0T,

r1,I = sqrt(Eb) nF,I + w1,I, r1,Q = sqrt(Eb) nF,Q + w1,Q, r2,I = w2,I, r2,Q = w2,Q,

with the roles of the two frequency subspaces swapped for 1T.

w1,I, w1,Q, w2,I, w2,Q are due to thermal noise, and are Gaussian, statistically independent, zero-mean, with variance N0/2.

nF,I and nF,Q are also Gaussian, statistically independent, zero-mean, with variance sigmaF^2/2.

The sufficient statistics are therefore Gaussian, statistically independent, zero-mean, with a variance of either N0/2 or Eb sigmaF^2/2 + N0/2, depending on whether a 0T or 1T was sent.

Computing the likelihood ratio gives the following decision rule:

r2,I^2 + r2,Q^2 >< r1,I^2 + r1,Q^2 (choose 1D if the left side is larger, 0D otherwise).
235/291
[Figure: Square-law receiver for BFSK over Rayleigh fading: two bandpass filters h1(t) = sqrt(2/Tb) cos(2 pi f1 t) and h2(t) = sqrt(2/Tb) cos(2 pi f2 t), each on 0 <= t <= Tb, followed by envelope detectors sampled at t = kTb; r1,I^2 + r1,Q^2 is compared with r2,I^2 + r2,Q^2.]
236/291
An error occurs (given 0T) when sqrt(r2,I^2 + r2,Q^2) >= sqrt(r1,I^2 + r1,Q^2). Fix the value of r1,I^2 + r1,Q^2 at a specific value, say R^2, and compute

P[ sqrt(r2,I^2 + r2,Q^2) >= R | 0T, r1,I^2 + r1,Q^2 = R^2 ] = integral integral_{r2,I^2 + r2,Q^2 >= R^2} (1/(pi N0)) e^{-(r2,I^2 + r2,Q^2)/N0} dr2,I dr2,Q = e^{-R^2/N0}.
237/291
Averaging e^{-(r1,I^2 + r1,Q^2)/N0} over r1,I and r1,Q, which given 0T are zero-mean Gaussians of variance (Eb sigmaF^2 + N0)/2:

P[error] = E{ e^{-(r1,I^2 + r1,Q^2)/N0} | 0T } = 1/(2 + sigmaF^2 Eb/N0) proportional to 1/SNR.

Eb sigmaF^2 can be interpreted as the received energy per bit.

In the log-log plot of P[error] versus SNR in dB, the error-performance curve appears to be a straight line of slope -1 in the high-SNR region.
238/291
[Figure: P[error] versus received SNR per bit (dB) over Rayleigh fading: coherent PSK, coherent ASK & FSK, noncoherent DPSK and noncoherent FSK all decay only as 1/SNR, in contrast to the waterfall curves of the random-phase-only channel.]
239/291
If the random phase introduced by fading can be perfectly estimated, then coherent demodulation can be achieved; the situation is then the same as detection with a random amplitude.

With a Rayleigh fading channel, alpha is a Rayleigh random variable, and averaging the conditional error probabilities gives

P[error]_BPSK = E_alpha{ Q( alpha sqrt(2 Eb/N0) ) } = (1/2) [ 1 - sqrt( (sigmaF^2 Eb/N0)/(1 + sigmaF^2 Eb/N0) ) ],

P[error]_BFSK = E_alpha{ Q( alpha sqrt(Eb/N0) ) } = (1/2) [ 1 - sqrt( (sigmaF^2 Eb/N0)/(2 + sigmaF^2 Eb/N0) ) ].
240/291
Coherent BPSK is 3 dB more efficient than coherent BFSK, which in turn is 3 dB more efficient than noncoherent BFSK.

[Figure: P[error] versus received SNR per bit (dB) over Rayleigh fading: coherent PSK, coherent ASK & FSK, noncoherent DPSK and noncoherent FSK, with the random-phase-only curves shown for comparison.]
241/291
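The two closed-form Rayleigh-fading expressions above can be checked numerically, including the exact 3 dB relationship between them and the 1/SNR high-SNR slope. A sketch (not part of the slides):

```python
from math import sqrt

def pe_rayleigh_bpsk(g):   # g = sigmaF^2 * Eb/N0 (linear average SNR)
    return 0.5 * (1.0 - sqrt(g / (1.0 + g)))

def pe_rayleigh_bfsk(g):
    return 0.5 * (1.0 - sqrt(g / (2.0 + g)))

g = 10.0 ** (30.0 / 10.0)   # 30 dB average SNR
# BFSK at twice the SNR matches BPSK exactly: a 3 dB gap
assert abs(pe_rayleigh_bfsk(2.0 * g) - pe_rayleigh_bpsk(g)) < 1e-12
# High-SNR slope of -1: doubling SNR roughly halves the error rate
ratio = pe_rayleigh_bpsk(g) / pe_rayleigh_bpsk(2.0 * g)
assert 1.9 < ratio < 2.1
```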
Diversity
All communication schemes over a Rayleigh fading channel have the same
1
discouraging performance behavior of P [error] SNR
.
The reason is that it is very probable for the channel to exhibit what is called a deep fade, i.e., the received signal amplitude becomes very small.
Diversity technique: multiple copies of the same message are transmitted over
independent fading channels in the hope that at least one of them will not
experience a deep fade.
Time diversity: Achieved by transmitting the same message in different time slots.
Frequency diversity: Accomplished by sending the message copies in different frequency
slots.
Antenna diversity: Achieved with the use of antenna arrays.
242/291
With N-fold diversity, each bit is transmitted N times over independently faded branches. For BFSK,

s(t) = sqrt(Eb) sqrt(2/Tb) cos(2 pi f1 t) if 0T, sqrt(Eb) sqrt(2/Tb) cos(2 pi f2 t) if 1T,

and the jth received copy is

rj(t) = sqrt(Eb) alpha_j cos(2 pi fi t - theta_j) + w(t)
      = sqrt(Eb) [ nj,I sqrt(2/Tb) cos(2 pi fi t) + nj,Q sqrt(2/Tb) sin(2 pi fi t) ] + w(t),

with i = 1 for 0T and i = 2 for 1T; the bracketed carriers are the basis functions phi(i)_{j,I}(t) and phi(i)_{j,Q}(t).
243/291
The orthonormal basis functions on the jth diversity branch are

phi(1)_{j,I}(t) = sqrt(2/Tb) cos(2 pi f1 t), phi(1)_{j,Q}(t) = sqrt(2/Tb) sin(2 pi f1 t),
phi(2)_{j,I}(t) = sqrt(2/Tb) cos(2 pi f2 t), phi(2)_{j,Q}(t) = sqrt(2/Tb) sin(2 pi f2 t),
(j - 1)Tb <= t <= jTb, j = 1, ..., N.

For 0T the sufficient statistics are

r(1)_{j,I} = sqrt(Eb) n_{j,I} + w(1)_{j,I}, r(1)_{j,Q} = sqrt(Eb) n_{j,Q} + w(1)_{j,Q}, r(2)_{j,I} = w(2)_{j,I}, r(2)_{j,Q} = w(2)_{j,Q}, j = 1, ..., N,

and for 1T the roles of the two frequencies are swapped:

r(1)_{j,I} = w(1)_{j,I}, r(1)_{j,Q} = w(1)_{j,Q}, r(2)_{j,I} = sqrt(Eb) n_{j,I} + w(2)_{j,I}, r(2)_{j,Q} = sqrt(Eb) n_{j,Q} + w(2)_{j,Q}, j = 1, ..., N.
244/291
Relabel the 4N statistics so that r1, ..., r2N are the components at frequency f1 and r2N+1, ..., r4N those at f2. With sigmaw^2 = N0/2 and sigmat^2 = (Eb sigmaF^2 + N0)/2, the likelihood ratio

[ Prod_{j=1}^{2N} (1/(sqrt(2 pi) sigmat)) e^{-rj^2/(2 sigmat^2)} Prod_{j=2N+1}^{4N} (1/(sqrt(2 pi) sigmaw)) e^{-rj^2/(2 sigmaw^2)} ]
/ [ Prod_{j=1}^{2N} (1/(sqrt(2 pi) sigmaw)) e^{-rj^2/(2 sigmaw^2)} Prod_{j=2N+1}^{4N} (1/(sqrt(2 pi) sigmat)) e^{-rj^2/(2 sigmat^2)} ] >< 1

reduces to the square-law combining rule:

Sum_{j=2N+1}^{4N} rj^2 >< Sum_{j=1}^{2N} rj^2 (choose 1D if the left side is larger, 0D otherwise).
245/291
Consider y = x1^2 + x2^2 + ... + xN^2, where the xi's are zero-mean, statistically independent Gaussian random variables with identical variances sigma^2. To find fy(y), determine the characteristic function Phi_y(f) and then inverse transform it.

Phi_y(f) = E{ e^{j 2 pi f y} } = E{ e^{j 2 pi f Sum_{k=1}^{N} xk^2} } = Prod_{k=1}^{N} E{ e^{j 2 pi f xk^2} }.

E{ e^{j 2 pi f xk^2} } = integral_{-inf}^{inf} (1/(sqrt(2 pi) sigma)) e^{j 2 pi f xk^2} e^{-xk^2/(2 sigma^2)} dxk = 1/sqrt(1 - j 4 pi sigma^2 f). Therefore

Phi_y(f) = 1/(1 - j 4 pi sigma^2 f)^{N/2} and fy(y) = integral_{-inf}^{inf} (1/(1 - j 4 pi sigma^2 f)^{N/2}) e^{-j 2 pi y f} df, where y >= 0. From

the identity integral_{-inf}^{inf} (beta - ix)^{-nu} e^{-ipx} dx = ( 2 pi p^{nu - 1} e^{-beta p}/Gamma(nu) ) u(p), where R(beta) > 0 and R(nu) > 0,

the pdf is

fy(y) = ( y^{N/2 - 1} e^{-y/(2 sigma^2)} / (2^{N/2} sigma^N Gamma(N/2)) ) u(y),

where Gamma(x) = integral_0^inf t^{x-1} e^{-t} dt = (x - 1)! for x integer.
246/291
Define gamma1 = Sum_{j=2N+1}^{4N} rj^2 and gamma0 = Sum_{j=1}^{2N} rj^2; the decision is gamma1 >< gamma0 (1D if larger). Then

P[error] = P[error|0T] = integral_0^inf f(gamma0|0T) [ integral_{gamma0}^inf f(gamma1|0T) dgamma1 ] dgamma0,

with the chi-square densities

f(gamma1|0T) = ( gamma1^{N-1} e^{-gamma1/(2 sigmaw^2)} / ((2 sigmaw^2)^N Gamma(N)) ) u(gamma1), f(gamma0|0T) = ( gamma0^{N-1} e^{-gamma0/(2 sigmat^2)} / ((2 sigmat^2)^N Gamma(N)) ) u(gamma0).

Carrying out the integration yields a finite sum of Gamma-function terms in the ratio sigmaw^2/sigmat^2 (equivalently, in the per-transmission SNR), which can be rearranged into a compact closed form.
247/291
P[error] = ( 1/(2 + gammaT)^N ) Sum_{k=0}^{N-1} C(N-1+k, k) ( (1 + gammaT)/(2 + gammaT) )^k ~= C(2N-1, N)/gammaT^N (large gammaT),

where gammaT is the received SNR per transmission and C(n, k) is the binomial coefficient.
The error performance now decays inversely with the N th power of the received
SNR.
The exponent N of the SNR is generally referred to as the diversity order of the
modulation scheme.
248/291
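The closed-form diversity expression above can be checked for its high-SNR slope: raising the SNR tenfold should reduce P[error] by roughly 10^N. A sketch (not part of the slides):

```python
from math import comb

def pe_diversity(snr_per_transmission, N):
    """Square-law-combined noncoherent BFSK with N-fold diversity (Rayleigh fading)."""
    g = snr_per_transmission
    s = sum(comb(N - 1 + k, k) * ((1.0 + g) / (2.0 + g)) ** k for k in range(N))
    return s / (2.0 + g) ** N

g = 10.0 ** (15.0 / 10.0)   # 15 dB per transmission
# High-SNR slope: a 10x SNR increase reduces P[error] by roughly 10^N
for N in (1, 2, 4):
    ratio = pe_diversity(g, N) / pe_diversity(10.0 * g, N)
    assert 0.5 * 10.0 ** N < ratio < 2.0 * 10.0 ** N
```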
[Figure: P[error] versus received SNR per transmission (dB) for square-law combining with diversity orders N = 2, 4, 6, 8, together with noncoherent FSK in random phase only; the slope steepens with N.]
249/291
Optimum Diversity
250/291
When the total received energy per bit Eb sigmaF^2/N0 is fixed, increasing N splits that energy over more transmissions, so there is an optimum diversity order for each SNR.

[Figure: P[error] versus received SNR per bit (dB) for N = 2, 4, 6, 8 and noncoherent FSK in random phase only; the curves cross, showing that the best N grows with SNR.]
251/291
[Figure: (a) P[error] versus diversity order N (number of transmissions per bit) for SNR per bit from 2 to 16 dB in 1 dB steps; each curve has a minimum at the optimum N. (b) P[error] versus average received SNR per bit (dB).]
252/291
Central limit theorem states that under certain general conditions the sum of n
statistically independent continuous random variables has a pdf that approaches a
Gaussian pdf as n increases.
Let x = Sum_{i=1}^{n} xi, where the xi's are statistically independent random variables with means E{xi} = mi and variances E{(xi - mi)^2} = sigmai^2. Then x is a random variable with mean mx = Sum_{i=1}^{n} mi, variance sigmax^2 = Sum_{i=1}^{n} sigmai^2 and a pdf of

fx(x) = fx1(x) * fx2(x) * ... * fxn(x) (convolution).

By the central limit theorem, fx(x) approaches a Gaussian pdf as n increases, i.e.,

fx(x) ~= ( 1/(sqrt(2 pi) sigmax) ) exp( -(x - mx)^2/(2 sigmax^2) ).
253/291
[Figure: Actual distribution of x versus its Gaussian fit for n = 2, 4, 6, 10 summands; the fit improves as n grows.]
254/291
[Figure: The same comparison for a second choice of summand distribution, n = 2, 4, 6, 10.]
255/291
INTRODUCTION TO SPACE-TIME
CODING
256/291
[Figure: QAM over a slowly varying flat-fading channel h(t): information bits are mapped to symbols VI[m], VQ[m], pulse-shaped by p(t), modulated onto cos(2 pi fc t) and sin(2 pi fc t), and transmitted as s(t); the receiver sees r(t) = h(t) s(t) + w(t), applies matched filters p(Ts - t) sampled at t = mTs, makes inphase and quadrature decisions, and multiplexes the detected bits.]
257/291
Input/Output Model
If h[m] ~ CN(0, sigmah^2) (many small reflectors), the magnitude and the power of the gain have densities

f_{|h|}(x) = (2x/sigmah^2) exp( -x^2/sigmah^2 ), x >= 0, (3)

f_{|h|^2}(x) = (1/sigmah^2) exp( -x/sigmah^2 ), x >= 0. (4)

The Rayleigh fading model applies if there are many small reflectors.
258/291
If the line-of-sight path (or a specular path) is large and has a known magnitude,
and that there are also a large number of independent paths, then h[m] can be
modeled as
h[m] = sqrt( kappa/(kappa + 1) ) sigmah e^{j theta} + sqrt( 1/(kappa + 1) ) CN(0, sigmah^2), (5)

kappa = energy in the specular path/energy in the scattered paths (called the K-factor).

The magnitude of h[m] is said to have a Rician distribution:

f_{|h|}(x) = ( 2x(kappa + 1)/sigmah^2 ) exp( -kappa - (kappa + 1)x^2/sigmah^2 ) I0( 2x sqrt( kappa(kappa + 1) )/sigmah ), x >= 0. (6)
259/291
I0(z) = (1/(2 pi)) integral_{-pi}^{pi} e^{z cos theta} dtheta ~= e^z/sqrt(2 pi z) for z >> 1, and I0(z) ~= 1 + z^2/4 for z << 1. (7)

Nakagami-m fading is an even more general model, which includes both Rayleigh and Rician (approximately) fading as special cases.

[Figure: I0(z) versus z, growing exponentially.]
260/291
Rician Distribution
[Figure: Rician pdfs of |h[m]| for several values of the K-factor.]
261/291
Nakagami-m Distributions
f_{|h|}(x) = ( 2 m^m x^{2m-1}/(Gamma(m) sigmah^{2m}) ) exp( -m x^2/sigmah^2 ), (8)

Gamma(m) = integral_0^inf t^{m-1} e^{-t} dt, (9)

m >= 0.5. (10)

[Figure: The Gamma function Gamma(m) versus m.]
262/291
Nakagami-m Distributions
The case m = 1 reduces to Rayleigh fading; m = (kappa + 1)^2/(2 kappa + 1) gives approximately Rician fading with K-factor kappa; and m -> inf yields an AWGN channel.

[Figure: Nakagami-m pdfs for m = 1, 2, 4, 8.]
263/291
For antipodal (BPSK) signalling over AWGN, x[m] = +/- a and the decision statistic is u = x[m] + w[m]. (11)

Then

Pe = (1/2) P(u > 0 | x[m] = -a) + (1/2) P(u < 0 | x[m] = +a)
   = Q( a/sqrt(N0/2) ) = Q( sqrt(2 SNR) ), where SNR = a^2/N0 = Eb/N0.
264/291
Q-function
[Figure: The standard Gaussian density (1/sqrt(2 pi)) e^{-xi^2/2} with the tail area Q(x) shaded.]

Q(x) = ( 1/sqrt(2 pi) ) integral_x^inf exp( -xi^2/2 ) dxi.

[Figure: Q(x) versus x on a logarithmic scale.]
265/291
Over flat fading, y[m] = h x[m] + w[m], (12)

so conditioned on the gain, Pe|h = Q( sqrt( 2 |h|^2 SNR ) ). (13)

To find the overall error performance, we need to average over the random gain h.
For Rayleigh fading, model h as h ~ CN(0, 1) so that the average received SNR is a^2/N0, the same as in the case of the AWGN channel.
The distribution of |h|^2 is exponential: f(x) = e^{-x}, x >= 0.
266/291
Let v = 2 SNR |h|^2; then fv(v) = ( 1/(2 SNR) ) exp( -v/(2 SNR) ), and by Craig's form of the Q-function

Pe|h = Pe|v = Q( sqrt(v) ) = (1/pi) integral_0^{pi/2} exp( -v/(2 sin^2 theta) ) dtheta.
267/291
Pe = integral_0^inf Q( sqrt(v) ) fv(v) dv
= (1/pi) integral_0^{pi/2} integral_0^inf exp( -v [ 1/(2 sin^2 theta) + 1/(2 SNR) ] ) ( 1/(2 SNR) ) dv dtheta
= (1/pi) integral_0^{pi/2} sin^2 theta/(SNR + sin^2 theta) dtheta.

Writing sin^2 theta = (1 - cos 2 theta)/2 and using integral_0^{pi} dx/(1 + a cos x) = pi/sqrt(1 - a^2), we obtain:

Pe = (1/2) [ 1 - sqrt( SNR/(1 + SNR) ) ] ~= 1/(4 SNR). (14)
268/291
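The closed form (14) and its 1/(4 SNR) approximation can be cross-checked against a direct numerical integration of the angular integral. A sketch (not part of the slides):

```python
from math import sqrt, sin, pi

def pe_rayleigh_closed(snr):
    """Closed form (14): Pe = 0.5 * (1 - sqrt(SNR/(1 + SNR)))."""
    return 0.5 * (1.0 - sqrt(snr / (1.0 + snr)))

def pe_rayleigh_numeric(snr, steps=20000):
    """Midpoint-rule integration of (1/pi) int_0^{pi/2} sin^2/(SNR + sin^2) dtheta."""
    h = (pi / 2.0) / steps
    total = 0.0
    for k in range(steps):
        th = (k + 0.5) * h
        s2 = sin(th) ** 2
        total += s2 / (snr + s2)
    return total * h / pi

snr = 10.0 ** (20.0 / 10.0)   # 20 dB
assert abs(pe_rayleigh_closed(snr) - pe_rayleigh_numeric(snr)) < 1e-6
# the 1/(4 SNR) high-SNR approximation is within a couple of percent here
assert abs(pe_rayleigh_closed(snr) - 1.0 / (4.0 * snr)) / pe_rayleigh_closed(snr) < 0.02
```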
[Figure: Pe versus SNR (dB) for BPSK over Rayleigh fading, showing the 1/SNR slope.]
269/291
Over fading channels, the root cause of poor performance is that there is only
one signal path & there is a significant probability that this path will be in a deep
fade.
Diversity solution: Ensure that the information symbols pass through multiple signal paths, each of which fades independently.
Typical diversity techniques:
Time Diversity (e.g., coding and interleaving)
Frequency Diversity (e.g., OFDM)
Space Diversity (e.g., MIMO)
Signal Space Diversity
Macro Diversity (e.g, in cellular network)
270/291
Time Diversity
Time diversity is achieved by averaging the fading over time.
Typically, the coherence time is of the order of hundreds of symbols. Therefore,
the channel is highly correlated across consecutive symbols.
To ensure the symbols are transmitted through independent fading gains,
interleaving over codewords is required.
[Figure: Interleaving over L = 4 branches: codeword symbols x0, x1, x2, x3 are spread across time so each experiences an independent gain hl.]
271/291
yl = hl xl + wl, l = 1, ..., L. (15)

Using interleaving, {xl} are transmitted sufficiently far apart in time and we can assume the hl's are independent.
L is called the number of diversity branches.
The simplest code is the repetition code, where xl = x1 for all l, so in vector form y = h x1 + w.
The detection problem becomes the scalar detection: projecting onto h/||h||,

( h^H/||h|| ) y = ||h|| x1 + w_tilde, where w_tilde = ( h^H/||h|| ) w ~ CN(0, N0). (16)
272/291
||h||^2 is chi-squared with 2L degrees of freedom. (17)

[Figure: chi-squared_{2L} densities for L = 1, 2, 3, 4, 5.]

Averaging Q( sqrt(2 ||h||^2 SNR) ) over this density gives

Pe = ( (1 - mu)/2 )^L Sum_{l=0}^{L-1} C(L-1+l, l) ( (1 + mu)/2 )^l, with mu = sqrt( SNR/(1 + SNR) ).
273/291
[Figure: Pe versus SNR (dB) for repetition coding with L = 1, 2, 3, 4 diversity branches; the high-SNR slope is -L.]
274/291
275/291
Receive Diversity
yl[m] = hl[m] x[m] + wl[m], l = 1, ..., L.
276/291
Transmit Diversity
If both antennas simply send the same symbol, y[m] = ( h1[m] + h2[m] ) x[m] + w[m], and the two gains can add destructively.

[Figure: Two transmit antennas with gains h1[m], h2[m] to a single receive antenna.]

If only the receiver knows the channel (h1[m] and h2[m]), only a diversity order of 1 is achieved!
If the transmitter also knows the channel, transmit beamforming can be done to achieve diversity order of 2:

x[m] = ( h[m]/||h[m]|| ) x_tilde[m]  =>  y[m] = ||h[m]|| x_tilde[m] + w[m].
277/291
278/291
Alamouti Scheme
[Figure: Antenna 1 transmits x1[1] = u1 in time slot 1 (m = 1) and x1[2] = -u2* in time slot 2 (m = 2); antenna 2 transmits x2[1] = u2 then x2[2] = u1*; the channel gains are h1, h2.]

[ y[1]  y[2] ] = [ h1  h2 ] [ u1  -u2* ; u2  u1* ] + [ w[1]  w[2] ],

or, conjugating the second observation,

[ y[1] ; y*[2] ] = [ h1  h2 ; h2*  -h1* ] [ u1 ; u2 ] + [ w[1] ; w*[2] ].
Transmit two complex symbols (u1 and u2 ) over two time slots.
279/291
Alamouti Scheme
The effective channel matrix

H = [ h1  h2 ; h2*  -h1* ]

has orthogonal columns: H^H H = ( |h1|^2 + |h2|^2 ) I2, so the two symbols can be decoupled by projecting [ y[1] ; y*[2] ] onto the columns hi, i = 1, 2.
280/291
Alamouti Scheme
Projecting onto the two columns of H:

r1 = [ h1*  h2 ] [ y[1] ; y*[2] ] = ( |h1|^2 + |h2|^2 ) u1 + w1_tilde,
r2 = [ h2*  -h1 ] [ y[1] ; y*[2] ] = ( |h1|^2 + |h2|^2 ) u2 + w2_tilde,

so each symbol sees a scalar channel with gain |h1|^2 + |h2|^2: diversity order 2.
281/291
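The Alamouti decoupling above can be demonstrated end to end in a few lines. This noise-free round trip (a sketch, not part of the slides; the function name is illustrative) encodes two symbols over two slots and recovers them exactly via the column projections:

```python
import random

def alamouti_demo(u1, u2, h1, h2):
    """Noise-free Alamouti round trip with one receive antenna."""
    # slot 1: antennas send (u1, u2); slot 2: antennas send (-conj(u2), conj(u1))
    y1 = h1 * u1 + h2 * u2
    y2 = h1 * (-u2.conjugate()) + h2 * u1.conjugate()
    # project [y1, conj(y2)] onto the two orthogonal columns of H
    r1 = h1.conjugate() * y1 + h2 * y2.conjugate()
    r2 = h2.conjugate() * y1 - h1 * y2.conjugate()
    gain = abs(h1) ** 2 + abs(h2) ** 2       # |h1|^2 + |h2|^2
    return r1 / gain, r2 / gain              # recovered symbols

random.seed(1)
h1 = complex(random.gauss(0, 1), random.gauss(0, 1))  # random complex gains
h2 = complex(random.gauss(0, 1), random.gauss(0, 1))
u1, u2 = (1 + 1j) / abs(1 + 1j), (1 - 1j) / abs(1 - 1j)   # two QPSK symbols
est1, est2 = alamouti_demo(u1, u2, h1, h2)
assert abs(est1 - u1) < 1e-12 and abs(est2 - u2) < 1e-12
```

With noise present, the same projections yield scalar channels of gain |h1|^2 + |h2|^2, which is where the diversity order of 2 comes from.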
[Figure: Alamouti with two receive antennas: each antenna observes both time slots (y1[1], y1[2], y2[1], y2[2]); the code matrix is X = [ u1  -u2* ; u2  u1* ].]
282/291
The pairwise error probability between codeword matrices XA and XB is

Pr(XA -> XB) = E{ Q( sqrt( SNR ||h^H (XA - XB)||^2/2 ) ) }.

Apply a unitary transformation: (XA - XB)(XA - XB)^H = U Lambda U^H, where U is unitary and Lambda = diag(lambda1^2, lambda2^2, ..., lambdaL^2), with the lambdal's the singular values of (XA - XB).
283/291
Pr(XA -> XB) = E{ Q( sqrt( SNR Sum_{l=1}^{L} |h_tilde_l|^2 lambdal^2/2 ) ) } = Prod_{l=1}^{L} 1/(1 + SNR lambdal^2/4)
<= 4^L/( SNR^L Prod_{l=1}^{L} lambdal^2 ) = 4^L/( SNR^L det[ (XA - XB)(XA - XB)^H ] ),

where h_tilde = U^H h.
Rank Criterion: If all the lambdal^2 are strictly positive, the maximum diversity order of L is achieved, so one should maximize the number of positive eigenvalues lambdal^2, which is the rank of the codeword difference matrix.
Determinant Criterion: The coding gain is determined by the minimum of the determinant det[ (XA - XB)(XA - XB)^H ] over all codeword pairs, so one should maximize the minimum determinant.
284/291
(a) It has been shown that the coherent ML detector is the maximum
ratio combiner (MRC). Show that the instantaneous SNR at the
output of the MRC is given by gamma = Sum_{l=1}^{L} gammal. How does the above
result depend on the fading statistics of the diversity branches?
(b) Equal gain combiner (EGC) is simpler than MRC since it only
co-phases the signal on each diversity branch and then combines
them with equal weighting. Find the expression for the SNR at the
output of the EGC.
285/291
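The claim in (a) can be sanity-checked numerically: with MRC weights $h_l^*$ applied to $y_l = h_l u + w_l$, the combined signal power over the combined noise power collapses to $\sum_l \gamma_l$. A short Python sketch (my own notation; the values of `L`, `Es`, `N0` are arbitrary illustrations):

```python
import numpy as np

rng = np.random.default_rng(1)
L, Es, N0 = 4, 1.0, 0.5

# L Rayleigh branch gains, h_l ~ CN(0, 1)
h = (rng.normal(size=L) + 1j * rng.normal(size=L)) / np.sqrt(2)

# Per-branch instantaneous SNRs for the model y_l = h_l u + w_l
gamma_l = np.abs(h) ** 2 * Es / N0

# MRC output with weights conj(h_l): signal amplitude scales by sum|h_l|^2,
# noise variance scales by sum|h_l|^2 * N0.
signal_power = (np.sum(np.abs(h) ** 2)) ** 2 * Es
noise_power = np.sum(np.abs(h) ** 2) * N0
gamma_mrc = signal_power / noise_power

print(np.isclose(gamma_mrc, gamma_l.sum()))  # True: gamma_MRC = sum of branch SNRs
```

Note the identity holds for any realization of the $h_l$; only the *distribution* of $\gamma$ depends on the fading statistics.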
(c) Now consider the selection combiner (SC), which picks the diversity
branch with the highest instantaneous SNR and uses it for detection.
Such a combining scheme requires only one RF chain (as opposed to
$L$ RF chains in both MRC and EGC) in the receiver, hence reducing
the implementation cost. Assume that the fading on all diversity
branches is identically distributed with $h_l \sim \mathcal{CN}(0,1)$ (i.e., Rayleigh
fading). Show that the pdf of the instantaneous SNR at the output
of the SC is:

$$f(\gamma) = \frac{L}{\mathrm{SNR}}\left[1 - e^{-\gamma/\mathrm{SNR}}\right]^{L-1} e^{-\gamma/\mathrm{SNR}} \qquad (19)$$
286/291
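The pdf (19) can be checked by Monte Carlo: under Rayleigh fading each branch SNR is exponential with mean SNR, the SC output is the maximum of $L$ such variables, and integrating (19) gives the mean $\mathrm{SNR}\sum_{l=1}^{L} 1/l$. A sketch (Python rather than Matlab; `snr_bar` plays the role of SNR in (19) and is set to 1 for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
L, snr_bar, n = 4, 1.0, 200_000

# Each branch SNR is exponential with mean snr_bar; the SC keeps the largest.
gammas = rng.exponential(snr_bar, size=(n, L))
gamma_sc = gammas.max(axis=1)

analytic_mean = snr_bar * sum(1.0 / l for l in range(1, L + 1))  # from pdf (19)
print(round(analytic_mean, 4))                       # 2.0833 for L = 4
print(abs(gamma_sc.mean() - analytic_mean) < 0.02)   # True: simulation agrees
```

The diminishing harmonic-sum growth of the mean shows why SC gains little from each additional branch beyond the first few.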
(e) Finally, consider the hybrid combiner (HC), which uses $J$ RF chains,
where $1 < J < L$ and $L > 2$. The HC chooses the $J$ received signals with
the highest instantaneous SNRs and combines them using MRC. It
can be shown that for Rayleigh fading with $h_l \sim \mathcal{CN}(0,1)$ on all
branches, the average SNR at the output of the HC is:

$$\overline{\gamma}_{\mathrm{HC}} = \mathrm{SNR} \cdot J\left[1 + \sum_{m=J+1}^{L} \frac{1}{m}\right] \qquad (20)$$
287/291
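Equation (20) can likewise be verified by Monte Carlo: MRC-combining the $J$ strongest branches means their SNRs simply add. A Python sketch (again, `snr_bar`, `L`, `J` are illustrative choices of mine):

```python
import numpy as np

rng = np.random.default_rng(3)
L, J, snr_bar, n = 4, 2, 1.0, 200_000

gammas = rng.exponential(snr_bar, size=(n, L))   # exponential branch SNRs
# HC: keep the J largest branch SNRs; MRC on them means the SNRs add.
top_j = np.sort(gammas, axis=1)[:, -J:]
gamma_hc = top_j.sum(axis=1)

analytic = snr_bar * J * (1 + sum(1.0 / m for m in range(J + 1, L + 1)))  # eq. (20)
print(round(analytic, 4))                      # 3.1667 for L = 4, J = 2
print(abs(gamma_hc.mean() - analytic) < 0.03)  # True: simulation agrees
```

This matches the order-statistics derivation: the $k$th largest of $L$ unit-mean exponentials has mean $\sum_{i=k}^{L} 1/i$, and summing over $k = 1, \ldots, J$ gives exactly the bracketed expression in (20).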
[Figure: average combiner output SNR versus L (# of diversity branches) for MRC, EGC, SC, and HC with J = 2 and J = 4.]
288/291
Laboratory Exercise
This laboratory exercise is concerned with Matlab simulation of a wireless
communication system using QPSK modulation together with the
Alamouti space-time code.
You will be provided with a complete Matlab program that simulates QPSK
modulation over an AWGN channel. Please study the Matlab program
carefully and understand all of its major functionalities (such as setting SNR
values, generating information bits, forming QPSK symbols, simulating the
AWGN channel, performing demodulation, counting bit errors, etc.).
The specific tasks are as follows:
1. Modify the program to simulate the transmission and detection of
QPSK signals over a Rayleigh fading channel. The only changes are to
include a random channel gain h(m) to simulate Rayleigh fading and
to compensate for this gain at the receiver. Note that you are assumed
to know h(m) at the receiver.
As a specific requirement for this part, you need to obtain and plot
the BER curves, both theoretical and simulated, on the same graph
for SNR = [0 : 5 : 30] dB. If your program is correct, the results
should look the same as the red curve on page 13 of the lecture slides.
289/291
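The lab itself is to be done in Matlab; the Python/NumPy sketch below shows the same structure (Gray-mapped QPSK, known gain h(m) compensated at the receiver) and may help organize the Matlab code. All function names are my own; the theoretical Rayleigh BER for Gray-coded QPSK used here is the standard result $P_b = \tfrac{1}{2}\bigl(1 - \sqrt{\bar{\gamma}_b/(1+\bar{\gamma}_b)}\bigr)$:

```python
import numpy as np

rng = np.random.default_rng(4)

def qpsk_rayleigh_ber(ebn0_db, n_bits=100_000):
    """Simulated BER of Gray-mapped QPSK over flat Rayleigh fading,
    with the channel gain h known (and divided out) at the receiver."""
    ebn0 = 10 ** (ebn0_db / 10)
    n_sym = n_bits // 2
    bits = rng.integers(0, 2, size=(n_sym, 2))
    # Gray mapping: bit 0 sets the I sign, bit 1 the Q sign; Es = 1, Eb = 1/2.
    s = ((1 - 2 * bits[:, 0]) + 1j * (1 - 2 * bits[:, 1])) / np.sqrt(2)
    h = (rng.normal(size=n_sym) + 1j * rng.normal(size=n_sym)) / np.sqrt(2)
    n0 = 1.0 / (2 * ebn0)                    # N0 from Eb/N0 with Eb = 1/2
    w = np.sqrt(n0 / 2) * (rng.normal(size=n_sym) + 1j * rng.normal(size=n_sym))
    y = h * s + w
    z = y / h                                # compensate the known gain h(m)
    bhat = np.stack([z.real < 0, z.imag < 0], axis=1).astype(int)
    return np.mean(bhat != bits)

def qpsk_rayleigh_ber_theory(ebn0_db):
    g = 10 ** (ebn0_db / 10)
    return 0.5 * (1 - np.sqrt(g / (1 + g)))

for snr in [0, 10, 20]:
    print(snr, round(float(qpsk_rayleigh_ber_theory(snr)), 4))
```

Plotting the simulated and theoretical curves over SNR = [0 : 5 : 30] dB on a semilog axis mirrors the required lab deliverable.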
290/291
THANK YOU!
291/291