Equalization in a Wideband TDMA System: Three Basic Equalization Methods


Equalization in a wideband TDMA system


Three basic equalization methods
Linear equalization (LE)
Decision feedback equalization (DFE)
Sequence estimation (MLSE-VA)
Example of channel estimation circuit

Three basic equalization methods (1)


Linear equalization (LE):
Performance is not very good when the frequency response
of the frequency selective channel contains deep fades.
Zero-forcing algorithm aims to eliminate the intersymbol
interference (ISI) at decision time instants (i.e. at the
center of the bit/symbol interval).
Least-mean-square (LMS) algorithm will be investigated in
greater detail in this presentation.
Recursive least-squares (RLS) algorithm offers faster
convergence, but is computationally more complex than
LMS (since matrix inversion is required).

Three basic equalization methods (2)


Decision feedback equalization (DFE):
Performance better than LE, due to ISI cancellation of tails
of previously received symbols.
Decision feedback equalizer structure (block diagram): the input is filtered by a feed-forward filter (FFF), summed (+) with the output of a feed-back filter (FBF) driven by previous symbol decisions, and passed to the symbol decision circuit, which produces the output; an adaptation block performs the adjustment of the filter coefficients.

Three basic equalization methods (3)


Maximum Likelihood Sequence Estimation using
the Viterbi Algorithm (MLSE-VA):
Best performance. Operation of the Viterbi algorithm can be
visualized by means of a trellis diagram with $m^{K-1}$ states,
where m is the symbol alphabet size and K is the length of
the overall channel impulse response (in samples).

(State trellis diagram: states on the vertical axis, sample time instants on the horizontal axis; lines indicate the allowed transitions between states.)

Linear equalization, zero-forcing algorithm


Basic idea: choose the equalizer so that the overall spectrum

$$Z(f) = B(f)\, H(f)\, E(f)$$

is a raised cosine spectrum, where B(f) is the transmitted symbol spectrum, H(f) is the channel frequency response (incl. T & R filters), and E(f) is the equalizer frequency response.

(Figure: magnitude responses B(f), H(f), E(f) and the resulting raised-cosine Z(f), plotted from 0 up to the symbol rate $f_s = 1/T$.)
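A small numerical sketch of this relation (the raised-cosine roll-off, channel shape, and frequency grid are hypothetical assumptions, not from the slides): it computes the equalizer response E(f) that forces the cascade to the target spectrum, and shows the large equalizer gain, and hence noise enhancement, that appears where H(f) has a deep fade.

```python
import numpy as np

# Frequency grid over one symbol-rate bandwidth (fs = 1/T, normalized to 1).
f = np.linspace(0.0, 1.0, 512)

# Hypothetical raised-cosine target spectrum Z(f) with roll-off 0.5 -- assumption.
beta = 0.5
Z = np.where(f <= (1 - beta) / 2, 1.0,
             np.where(f <= (1 + beta) / 2,
                      0.5 * (1 + np.cos(np.pi / beta * (f - (1 - beta) / 2))),
                      0.0))

# Hypothetical flat transmit spectrum and a channel with a deep fade near f = 0.3.
B = np.ones_like(f)
H = 1.0 - 0.95 * np.exp(-((f - 0.3) / 0.02) ** 2)

# Zero-forcing idea: choose E(f) so that Z(f) = B(f) H(f) E(f).
E = Z / (B * H)

print("max |E(f)| =", np.abs(E).max())  # large gain at the fade -> noise enhancement
```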

Zero-forcing equalizer
(Block diagram:) The transmitted impulse sequence passes through the communication channel, modelled as an FIR filter with 2N+1 coefficients and channel impulse response $h_k$, producing the received samples $r_k$. The equalizer is an FIR filter with 2M+1 coefficients and equalizer impulse response $c_k$; its output $z_k$ is the input to the decision circuit.

Coefficients of the equivalent (overall channel) FIR filter:

$$f_k = \sum_{m=-M}^{M} c_m\, h_{k-m}, \qquad -M \le k \le M$$

Equalizer output:

$$z_k = \sum_{m=-M}^{M} c_m\, r_{k-m}$$

(In fact the equivalent FIR filter consists of 2M+1+2N coefficients, but the equalizer can only handle 2M+1 equations.)

Zero-forcing equalizer
We want the overall filter response to be non-zero at the decision time instant k = 0 and zero at all other sampling instants k ≠ 0:

$$f_k = \sum_{m=-M}^{M} c_m\, h_{k-m} =
\begin{cases} 1, & k = 0 \\ 0, & k \ne 0 \end{cases}$$

This leads to a set of 2M+1 equations:

$$\begin{aligned}
h_0 c_{-M} + h_{-1} c_{-M+1} + \dots + h_{-2M} c_{M} &= 0 \qquad (k = -M)\\
h_1 c_{-M} + h_0 c_{-M+1} + \dots + h_{-2M+1} c_{M} &= 0\\
&\;\;\vdots\\
h_M c_{-M} + h_{M-1} c_{-M+1} + \dots + h_{-M} c_{M} &= 1 \qquad (k = 0)\\
&\;\;\vdots\\
h_{2M-1} c_{-M} + h_{2M-2} c_{-M+1} + \dots + h_{-1} c_{M} &= 0\\
h_{2M} c_{-M} + h_{2M-1} c_{-M+1} + \dots + h_{0} c_{M} &= 0 \qquad (k = M)
\end{aligned}$$
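A minimal numerical sketch of solving this system (the channel taps and equalizer length below are hypothetical assumptions): the 2M+1 equations form a Toeplitz linear system in the coefficients $c_{-M}, \dots, c_M$, which can be solved directly.

```python
import numpy as np

# Hypothetical sampled channel impulse response h_{-N}..h_{N} (N = 1) -- assumption.
h = np.array([0.2, 1.0, -0.3])
N = (len(h) - 1) // 2
M = 3                              # equalizer has 2M+1 taps c_{-M}..c_{M}

# Build the (2M+1) x (2M+1) system: row k (k = -M..M) contains h_{k-m} for m = -M..M.
ks = np.arange(-M, M + 1)
A = np.zeros((2 * M + 1, 2 * M + 1))
for i, k in enumerate(ks):
    for j, m in enumerate(ks):
        idx = k - m
        if -N <= idx <= N:
            A[i, j] = h[idx + N]

# Right-hand side: overall response f_k = 1 at k = 0, 0 elsewhere.
f_target = np.zeros(2 * M + 1)
f_target[M] = 1.0

c = np.linalg.solve(A, f_target)   # zero-forcing equalizer coefficients c_{-M}..c_{M}

# Check: overall response at the constrained instants k = -M..M.
f_check = A @ c
print(np.round(f_check, 6))
```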

Minimum Mean Square Error (MMSE)


The aim is to minimize

$$J = E\left\{\left|e_k\right|^{2}\right\},
\qquad e_k = z_k - b_k \;\;\text{(or } b_k - z_k \text{, depending on the source)}$$

(Block diagram:) The transmitted symbols s(k) pass through the channel, producing r(k); the equalizer output $z_k$ is the input to the decision circuit, which gives $\hat b_k$, the estimate of the k:th symbol; the error $e_k$ is formed between $z_k$ and the symbol $b_k$ (or its estimate).

MSE vs. equalizer coefficients


$J = E\left\{\left|e_k\right|^{2}\right\}$ is a quadratic multi-dimensional function of the equalizer coefficient values.

(Figure: the error surface J plotted over two coefficients $c_1$ and $c_2$ — an illustration of the case of two real-valued equalizer coefficients, or one complex-valued coefficient.)

MMSE aim: find the minimum value directly (Wiener solution), or use an algorithm that recursively changes the equalizer coefficients in the correct direction (towards the minimum value of J).
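To see why J is quadratic, and where the Wiener-Hopf equations on the next slide come from, one standard derivation (assuming the equalizer output is written as $z_k = \mathbf{c}^{H}\mathbf{r}(k)$, a notational convention not stated explicitly on the slides) is:

$$J(\mathbf{c}) = E\left\{\left|\mathbf{c}^{H}\mathbf{r}(k) - b_k\right|^{2}\right\}
= \mathbf{c}^{H}\mathbf{R}\,\mathbf{c} - \mathbf{c}^{H}\mathbf{p} - \mathbf{p}^{H}\mathbf{c} + E\left\{|b_k|^{2}\right\}$$

with R and p as defined two slides ahead; setting the gradient with respect to c to zero gives $\mathbf{R}\,\mathbf{c}_{\text{opt}} = \mathbf{p}$.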

Wiener solution
We start with the Wiener-Hopf equations in matrix form:

$$\mathbf{R}\,\mathbf{c}_{\text{opt}} = \mathbf{p}$$

R = correlation matrix (M x M) of the received (sampled) signal values $r_k$

p = vector (of length M) indicating the cross-correlation between the received signal values $r_k$ and the estimate of the received symbol $\hat b_k$

$\mathbf{c}_{\text{opt}}$ = vector (of length M) consisting of the optimal equalizer coefficient values

(We assume here that the equalizer contains M taps, not 2M+1 taps as in other parts of this presentation.)

Correlation matrix R & vector p


$$\mathbf{R} = E\left\{\mathbf{r}(k)\,\mathbf{r}^{*T}(k)\right\},
\qquad
\mathbf{p} = E\left\{\mathbf{r}(k)\,\hat b_k^{*}\right\}$$

where

$$\mathbf{r}(k) = \left[\, r_k,\; r_{k-1},\; \dots,\; r_{k-M+1} \,\right]^{T} \qquad \text{(M samples)}$$

Before we can perform the stochastic expectation operation, we must know the stochastic properties of the transmitted signal (and of the channel, if it is changing). Usually we do not have this information => some non-stochastic algorithm like least-mean-square (LMS) must be used.

Algorithms
Stochastic information (R and p) is available:

1. Direct solution of the Wiener-Hopf equations:
$$\mathbf{R}\,\mathbf{c}_{\text{opt}} = \mathbf{p}
\;\;\Rightarrow\;\;
\mathbf{c}_{\text{opt}} = \mathbf{R}^{-1}\mathbf{p}$$
(Inverting a large matrix is difficult!)

2. Newton's algorithm (fast iterative algorithm)

3. Method of steepest descent (this iterative algorithm is slow but easier to implement)

R and p are not available:

Use an algorithm that is based on the received signal sequence directly. One such algorithm is Least-Mean-Square (LMS).
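A minimal numerical sketch of option 1, assuming R and p are estimated by sample averages over a known training sequence (the symbols, channel, and noise level below are hypothetical, not from the presentation):

```python
import numpy as np

rng = np.random.default_rng(0)
M = 7                                   # equalizer length (M taps, as on the previous slides)

# Hypothetical training data: BPSK symbols through a 3-tap channel plus noise -- assumption.
b = rng.choice([-1.0, 1.0], size=2000)
h = np.array([0.9, 0.4, -0.2])
r = np.convolve(b, h)[:len(b)] + 0.05 * rng.standard_normal(len(b))

# Estimate R = E{ r(k) r(k)^H } and p = E{ r(k) b_k* } by sample averages.
R = np.zeros((M, M))
p = np.zeros(M)
for k in range(M - 1, len(b)):
    rk = r[k - np.arange(M)]            # r(k) = [r_k, r_{k-1}, ..., r_{k-M+1}]
    R += np.outer(rk, rk)
    p += rk * b[k]                      # real-valued example, so no conjugate needed
R /= (len(b) - M + 1)
p /= (len(b) - M + 1)

c_opt = np.linalg.solve(R, p)           # Wiener solution: solve R c_opt = p
print(np.round(c_opt, 3))
```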

Conventional linear equalizer of LMS type


(Block diagram:) The received complex signal samples $r_{k+M}, \dots, r_{k-M}$ are fed into a transversal FIR filter with 2M+1 filter taps and complex-valued tap coefficients $c_{-M}, \dots, c_{M}$. The filter output $z_k$ goes to the symbol decision, which produces $\hat b_k$, the estimate of the k:th symbol after the symbol decision. The error $e_k$ between $z_k$ and $\hat b_k$ drives the LMS algorithm for adjustment of the tap coefficients.

Widrow

Joint optimization of coefficients and phase


(Block diagram: the received signal r(k) passes through the equalizer filter and is multiplied by $e^{-j\theta}$ to give $z_k$; the error $e_k$, formed against $\hat b_k$, drives both the coefficient updating and the phase synchronization blocks.)

Godard
Proakis, Ed.3, Section 11-5-2

Minimize:

$$J = E\left\{\left|e_k\right|^{2}\right\},
\qquad
e_k = z_k - \hat b_k = \left(\sum_{m=-M}^{M} c_m\, r_{k-m}\right) e^{-j\theta} - \hat b_k$$

Least-mean-square (LMS) algorithm


(derived from the method of steepest descent)
for convergence towards the minimum mean square error (MMSE)

Real part of the n:th coefficient:
$$\operatorname{Re}\, c_n(i+1) = \operatorname{Re}\, c_n(i) - \Delta\,\frac{\partial\, |e_k|^{2}}{\partial \operatorname{Re}\, c_n}$$

Imaginary part of the n:th coefficient:
$$\operatorname{Im}\, c_n(i+1) = \operatorname{Im}\, c_n(i) - \Delta\,\frac{\partial\, |e_k|^{2}}{\partial \operatorname{Im}\, c_n}$$

Phase:
$$\theta(i+1) = \theta(i) - \Delta\,\frac{\partial\, |e_k|^{2}}{\partial \theta}$$

where $|e_k|^{2} = e_k e_k^{*}$, i is the iteration index and $\Delta$ is the step size of the iteration. Altogether $2(2M+1)+1$ equations.

LMS algorithm (cont.)


After some calculation, the recursion equations are obtained in the form

$$\operatorname{Re}\, c_n(i+1) = \operatorname{Re}\, c_n(i)
- 2\Delta\,\operatorname{Re}\left\{ \left( e^{-j\theta}\sum_{m=-M}^{M} c_m r_{k-m} - \hat b_k \right) r^{*}_{k-n}\, e^{j\theta} \right\}$$

$$\operatorname{Im}\, c_n(i+1) = \operatorname{Im}\, c_n(i)
- 2\Delta\,\operatorname{Im}\left\{ \left( e^{-j\theta}\sum_{m=-M}^{M} c_m r_{k-m} - \hat b_k \right) r^{*}_{k-n}\, e^{j\theta} \right\}$$

$$\theta(i+1) = \theta(i) + 2\Delta\,\operatorname{Im}\left\{ \hat b_k^{*}\, e^{-j\theta} \sum_{m=-M}^{M} c_m r_{k-m} \right\}$$

where the term in parentheses is $e_k$.
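A minimal sketch of these recursions in code, run sample by sample over a hypothetical training sequence (the channel, modulation, step size, and function name are assumptions, not from the presentation):

```python
import numpy as np

def lms_equalizer(r, b_train, M=3, delta=0.01):
    """LMS adaptation of a (2M+1)-tap linear equalizer with joint phase tracking
    (a sketch of the recursions above; all data handling is illustrative)."""
    c = np.zeros(2 * M + 1, dtype=complex)
    c[M] = 1.0                           # centre tap initialised to 1 (pass-through)
    theta = 0.0
    for k in range(M, len(b_train) - M):
        rk = r[k - M:k + M + 1][::-1]    # [r_{k+M}, ..., r_{k-M}], aligned with c_{-M}..c_{M}
        y = np.dot(c, rk)                # equalizer filter output, sum_m c_m r_{k-m}
        z = y * np.exp(-1j * theta)      # derotated output z_k
        e = z - b_train[k]               # error e_k against the known training symbol
        # Gradient-descent updates corresponding to the three recursions above:
        c -= 2 * delta * e * np.conj(rk) * np.exp(1j * theta)
        theta += 2 * delta * np.imag(np.conj(b_train[k]) * y * np.exp(-1j * theta))
    return c, theta

# Hypothetical QPSK training data through a mildly dispersive channel -- assumption.
rng = np.random.default_rng(1)
b = (rng.choice([-1, 1], 500) + 1j * rng.choice([-1, 1], 500)) / np.sqrt(2)
h = np.array([1.0, 0.3 + 0.2j])
r = np.convolve(b, h)[:len(b)] * np.exp(1j * 0.4) + 0.02 * rng.standard_normal(len(b))
c, theta = lms_equalizer(r, b)
print(np.round(theta, 2))
```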

Effect of iteration step size

smaller $\Delta$: slow acquisition, poor tracking performance

larger $\Delta$: poor stability, large variation around the optimum value

Decision feedback equalizer


(Block diagram:) The received samples $r_{k+M}, \dots, r_{k-M}$ feed the feed-forward filter (FFF) with taps $c_{-M}, \dots, c_{M}$; the previous decisions $\hat b_{k-1}, \dots, \hat b_{k-Q}$ feed the feedback filter (FBF) with taps $q_1, \dots, q_Q$. The two filter outputs are combined to form $z_k$, which goes to the symbol decision giving $\hat b_k$; the error $e_k$ drives the LMS algorithm for tap coefficient adjustment.

Decision feedback equalizer (cont.)


The purpose is again to minimize

$$J = E\left\{\left|e_k\right|^{2}\right\},
\qquad
e_k = z_k - b_k = \sum_{m=-M}^{M} c_m\, r_{k-m} + \sum_{n=1}^{Q} q_n\, \hat b_{k-n} - b_k$$

The feedforward filter (FFF) is similar to the filter in the linear equalizer:
tap spacing smaller than the symbol interval is allowed
=> fractionally spaced equalizer
=> oversampling by a factor of 2 or 4 is common

The feedback filter (FBF) is used for either reducing or cancelling (difference: see next slide) samples of previous symbols at decision time instants:
tap spacing must be equal to the symbol interval
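A minimal sketch of the symbol-by-symbol detection implied by this error definition, for BPSK and fixed (already adapted) coefficients; the function name, sign handling and data layout are illustrative assumptions, not from the presentation:

```python
import numpy as np

def dfe_detect(r, c, q):
    """Symbol-by-symbol DFE detection following the slide's convention:
    z_k = sum_m c_m r_{k-m} + sum_n q_n b_hat_{k-n}, then a hard BPSK decision."""
    M = (len(c) - 1) // 2
    Q = len(q)
    b_hat = np.zeros(len(r))
    for k in range(max(M, Q), len(r) - M):
        rk = r[k - M:k + M + 1][::-1]            # [r_{k+M}, ..., r_{k-M}]
        fff_out = np.dot(c, rk)                  # feed-forward contribution
        past = b_hat[k - np.arange(1, Q + 1)]    # previous decisions b_hat_{k-1..k-Q}
        fbf_out = np.dot(q, past)                # feedback contribution from past decisions
        z = fff_out + fbf_out
        b_hat[k] = 1.0 if z >= 0 else -1.0       # symbol decision
    return b_hat
```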

Decision feedback equalizer (cont.)


The coefficients of the feedback filter (FBF) can be obtained in either of two ways:

Recursively (using the LMS algorithm) in a similar fashion as the FFF coefficients
(Proakis, Ed.3, Section 11-2)

By calculation from the FFF coefficients and the channel coefficients (we achieve exact ISI cancellation in this way, but channel estimation is necessary):

$$q_n = \sum_{m=-M}^{M} c_m\, h_{n-m}, \qquad n = 1, 2, \dots, Q$$

Proakis, Ed.3, Section 10-3-1
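A small numerical sketch of this calculation (the FFF taps and channel taps below are hypothetical assumptions): $q_n$ is the n:th postcursor sample of the overall response obtained by convolving the FFF with the channel.

```python
import numpy as np

# Hypothetical FFF taps c_{-M}..c_{M} and channel taps (h[0] taken as h_0) -- assumptions.
c = np.array([0.05, -0.15, 1.0, -0.15, 0.05])   # M = 2
h = np.array([0.1, 1.0, 0.4, -0.2])
Q = 3

# Overall response f = c * h; q_n = sum_m c_m h_{n-m} is its n:th postcursor sample.
f = np.convolve(c, h)
M = (len(c) - 1) // 2
n0 = M                      # with h[0] = h_0, f[M + n] corresponds to overall tap n
q = f[n0 + 1: n0 + 1 + Q]   # q_1 .. q_Q
print(np.round(q, 3))
```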

Channel estimation circuit


Proakis, Ed.3, Section 11-3

(Block diagram:) The estimated symbols $\hat b_k, \dots, \hat b_{k-M}$ are held in a tapped delay line (delays T) feeding a transversal filter with coefficients $c_0, c_1, \dots, c_{M-1}, c_M$; the filter length equals the channel impulse response (CIR) length. The filter output is subtracted from $r_k$, the k:th sample of the received signal, and the resulting error drives the LMS algorithm that adjusts the coefficients. The estimated channel coefficients are $\hat h_m = c_m$.

Channel estimation circuit (cont.)


1. Acquisition phase
Uses a training sequence
Symbols are known at the receiver: $\hat b_k = b_k$.
2. Tracking phase
Uses estimated symbols (decision-directed mode)
Symbol estimates are obtained from the decision circuit (note the delay in the feedback loop!)
Since the estimation circuit is adaptive, time-varying channel coefficients can be tracked to some extent.
Alternatively: blind estimation (no training sequence)
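A minimal sketch of the adaptive estimator described above, usable in both phases (during acquisition b holds the known training symbols, during tracking the decided symbols); the channel, symbols, and step size are hypothetical assumptions, not from the presentation:

```python
import numpy as np

def lms_channel_estimate(r, b, L, delta=0.02):
    """LMS channel estimator sketch: a transversal filter driven by the (known or
    decided) symbols b is adapted so that its output tracks the received samples r.
    L is the assumed CIR length."""
    c = np.zeros(L, dtype=complex)                 # estimated channel coefficients h_hat
    for k in range(L - 1, len(r)):
        bk = b[k - np.arange(L)]                   # [b_k, b_{k-1}, ..., b_{k-L+1}]
        e = r[k] - np.dot(c, bk)                   # error between received sample and replica
        c += 2 * delta * e * np.conj(bk)           # LMS update of the filter taps
    return c

# Hypothetical example: BPSK training symbols through a 3-tap channel plus noise.
rng = np.random.default_rng(2)
b = rng.choice([-1.0, 1.0], 1000)
h_true = np.array([0.8, 0.5 + 0.3j, -0.2])
r = np.convolve(b, h_true)[:len(b)] + 0.05 * rng.standard_normal(len(b))
print(np.round(lms_channel_estimate(r, b, L=3), 2))   # should approach h_true
```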

Channel estimation circuit in receiver


Mandatory for MLSE-VA, optional for DFE.

(Block diagram:) The channel estimation circuit is fed either with training symbols $b_k$ (no errors) or with symbol estimates $\hat b_k$ (with errors) and produces the estimated channel coefficients $\hat h_m$. These are passed to the equalizer & decision circuit, which processes the received signal samples r(k) and delivers the clean output symbols $\hat b_k$.

Theoretical ISI cancellation receiver


(extension of the DFE, for simulation of the matched filter bound)

(Block diagram:) The received samples are passed through a filter matched to the sampled channel impulse response. Before the decision on $\hat b_k$, the ISI contributions of the future symbols $\hat b_{k+1}, \dots, \hat b_{k+P}$ (precursor cancellation) and of the previous symbols $\hat b_{k-1}, \dots, \hat b_{k-Q}$ (postcursor cancellation) are subtracted.

If previous and future symbols can be estimated without error (impossible in a practical system), matched filter performance can be achieved.

MLSE-VA receiver structure


(Block diagram:) r(t) → matched filter → NW filter → y(k) → MLSE (VA) → $\hat b(k)$. The channel estimation circuit provides the estimated overall channel response $\hat f(k)$ to the MLSE (VA) detector and to the preceding filter stages.

The MLSE-VA circuit causes a delay of the estimated symbol sequence before it is available for channel estimation
=> channel estimates may be out-of-date (in a fast time-varying channel)

MLSE-VA receiver structure (cont.)


The probability of receiving the sample sequence y (note: vector form) of length N, conditioned on a certain symbol sequence estimate and overall channel estimate, is (since we have AWGN):

$$p\!\left(\mathbf{y}\,\middle|\,\hat{\mathbf{b}}, \hat{\mathbf{f}}\right)
= \prod_{k=1}^{N} p\!\left(y_k \,\middle|\, \hat{\mathbf{b}}, \hat{\mathbf{f}}\right)
= \left(2\pi\sigma^{2}\right)^{-N/2}
\exp\!\left( -\frac{1}{2\sigma^{2}} \sum_{k=1}^{N}
\left| y_k - \sum_{n=0}^{K-1} \hat f_n\, \hat b_{k-n} \right|^{2} \right)$$

Objective: find the symbol sequence that maximizes this probability.

The factorization into a product is allowed since the noise samples are uncorrelated due to the NW (= noise whitening) filter.

K is the length of $\hat f(k)$ (the overall channel estimate), and

$$\sum_{k=1}^{N} \left| y_k - \sum_{n=0}^{K-1} \hat f_n\, \hat b_{k-n} \right|^{2}$$

is the metric to be minimized (select the best $\hat{\mathbf{b}}$ using the VA).

MLSE-VA receiver structure (cont.)


We want to choose that symbol sequence estimate and overall
channel estimate which maximizes the conditional probability.
Since product of exponentials <=> sum of exponents, the
metric to be minimized is a sum expression.
If the length of the overall channel impulse response in
samples (or channel coefficients) is K, in other words the time
span of the channel is (K-1)T, the next step is to construct a
state trellis where a state is defined as a certain combination
of K-1 previous symbols causing ISI on the k:th symbol.

(Figure: the overall channel impulse response $\hat f(k)$, samples $k = 0, \dots, K-1$. Note: this is the overall CIR, including the response of the matched filter and the NW filter.)

MLSE-VA receiver structure (cont.)


At adjacent time instants, the symbol sequences causing ISI are correlated. As an example (m = 2, K = 5):

At time k-3:  1 0 0 1 0
At time k-2:  1 0 0 1 0 0
At time k-1:  1 0 0 1 0 0 1
At time k:    1 0 0 1 0 0 1 1

In each row, the last bit is the bit detected at that time instant; the K-1 = 4 bits immediately before it are the bits causing ISI (2^4 = 16 states), and the earlier bits no longer cause ISI at that time instant.

MLSE-VA receiver structure (cont.)


(State trellis diagram: states drawn at the sample time instants k-3, k-2, k-1, k, k+1; the number of states is $m^{K-1}$, where m is the alphabet size.)

The best state sequence is estimated by means of the Viterbi algorithm (VA).

Of the transitions terminating in a certain state at a certain time instant, the VA selects the transition associated with the highest accumulated probability (up to that time instant) for further processing.

Proakis, Ed.3, Section 10-1-3
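A compact sketch of MLSE with the Viterbi algorithm for a binary alphabet (m = 2): it enumerates the $m^{K-1}$ states formed by the K-1 previous symbols and minimizes the accumulated squared-error metric from the earlier slide. The channel, data, and function name are illustrative assumptions, not taken from the presentation.

```python
import numpy as np
from itertools import product

def mlse_viterbi(y, f, alphabet=(-1.0, 1.0)):
    """MLSE of a symbol sequence from received samples y, given the overall
    channel impulse response f (length K); binary alphabet -> 2**(K-1) states."""
    K = len(f)
    states = list(product(alphabet, repeat=K - 1))   # each state = (b_{k-1}, ..., b_{k-K+1})
    cost = {s: 0.0 for s in states}                  # accumulated metric per state
    paths = {s: [] for s in states}                  # surviving symbol sequence per state
    for yk in y:
        new_cost, new_paths = {}, {}
        for s in states:
            for b in alphabet:                       # hypothesised current symbol b_k
                window = (b,) + s                    # (b_k, b_{k-1}, ..., b_{k-K+1})
                metric = abs(yk - np.dot(f, window)) ** 2   # branch metric |y_k - sum f_n b_{k-n}|^2
                s_next = (b,) + s[:-1]               # next state keeps the K-1 newest symbols
                total = cost[s] + metric
                if s_next not in new_cost or total < new_cost[s_next]:
                    new_cost[s_next] = total         # keep only the best transition into s_next
                    new_paths[s_next] = paths[s] + [b]
        cost, paths = new_cost, new_paths
    best = min(cost, key=cost.get)                   # state with the smallest accumulated metric
    return np.array(paths[best])

# Hypothetical example: BPSK through a 3-tap overall channel (K = 3 -> 4 states).
# Edge effects at the very start of the burst are ignored in this sketch.
rng = np.random.default_rng(3)
b = rng.choice([-1.0, 1.0], 200)
f = np.array([1.0, 0.5, -0.3])
y = np.convolve(b, f)[:len(b)] + 0.1 * rng.standard_normal(len(b))
b_hat = mlse_viterbi(y, f)
print("symbol errors:", int(np.sum(b_hat != b)))
```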
