

Adaptive Filters and Applications

Prepared by:
Ali Shaban & Dhurgham Mohammed

Supervised By
Prof. Dr. Ehab A. Hussein
Outline
 Introduction
 Concept of adaptive filtering
 Wiener Filter and Least Mean Square Algorithm
 LMS algorithm by using the steepest descent algorithm
 LMS algorithm in terms of sample-based processing
 The Recursive Least-Squares (RLS) Algorithm
 Applications of adaptive filters
1. Noise Cancellation
2. System Modeling
3. Line Enhancement Using Linear Prediction
 Other Application Examples
1. Canceling Periodic Interferences Using Linear Prediction
2. Electrocardiography Interference Cancellation
3. Echo Cancellation in Long-Distance Telephone Circuits
 References
Introduction
 An adaptive filter is a digital filter with self-adjusting characteristics. It is capable of
adjusting its filter coefficients automatically to adapt to the input signal via an adaptive
algorithm.

 Adaptive filters play an important role in modern digital signal processing (DSP) products
in areas such as telephone echo cancellation, noise cancellation, equalization of
communications channels, biomedical signal enhancement, active noise control, and
adaptive control systems.

 Adaptive filters are generally used to cope with signal-changing environments, spectral
overlap between noise and signal, and unknown or time-varying noise.
Concept of adaptive filtering
 First, look at an illustrative example of the simplest noise canceler to see how an adaptive filter works.
Concept of adaptive filtering
 The first microphone with ADC is used to capture the desired speech s(n). However, due to the noisy
environment, the signal is contaminated and the ADC channel produces the signal with noise;
that is, d(n) = s(n) + n(n). The second microphone is placed where only noise is picked up, and the second ADC
channel captures noise x(n), which is fed to the adaptive filter.

 Note that the corrupting noise n(n) in the first channel is uncorrelated to the desired signal s(n), so
that separation between them is possible. The noise signal x(n) from the second channel is
correlated to the corrupting noise n(n) in the first channel, since both come from the same noise
source. Similarly, the noise signal x(n) is not correlated to the desired speech signal s(n).

 The filter's adjustable coefficient w(n) is updated by the LMS algorithm as follows:

w(n + 1) = w(n) + 0.01 · e(n) · x(n)

where w(n) is the coefficient currently in use, while w(n + 1) is the coefficient obtained from the LMS algorithm
and will be used for the next incoming input sample. The value 0.01 controls the speed of the coefficient
change.
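To make the update concrete, here is a minimal Python sketch of this one-coefficient noise canceler. The update rule and signal names follow the description above; the test signals (a sine wave for s(n) and a scaled copy of the reference noise as n(n)) are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative signals (assumed): desired speech s(n) modeled as a sine
# wave; the corrupting noise n(n) is a scaled copy of the reference x(n).
N = 500
s = np.sin(2 * np.pi * 0.05 * np.arange(N))   # desired signal s(n)
x = rng.standard_normal(N)                    # reference noise x(n)
d = s + 0.8 * x                               # corrupted signal d(n) = s(n) + n(n)

w = 0.0                                       # single adjustable coefficient
e = np.zeros(N)
for n in range(N):
    y = w * x[n]                              # filter output y(n) = w(n) x(n)
    e[n] = d[n] - y                           # error e(n) = d(n) - y(n): the cleaned speech
    w += 0.01 * e[n] * x[n]                   # LMS update with the 0.01 step size

print(f"final coefficient w = {w:.3f} (true leakage gain is 0.8)")

After convergence, w approaches 0.8, y(n) approaches the corrupting noise, and e(n) approaches the desired signal s(n).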
Concept of adaptive filtering
 For example, given an initial coefficient w(0), the adaptive filter computes, for each input sample:

y(n) = w(n)x(n)
e(n) = d(n) − y(n)
w(n + 1) = w(n) + 0.01 · e(n) · x(n)

 The corrupted signal is generated by adding noise to a sine wave.
The corrupted signal and the noise reference are shown in the figure below.
Concept of adaptive filtering
Example 1: Perform adaptive filtering for several samples, using the values of the corrupted signal and
reference noise given in the table, where the LMS algorithm has its initial coefficient set to w(0) = 0.3.

Solution:
Concept of adaptive filtering
In general, an FIR filter with multiple taps is used, which has the
following format:

y(n) = w(0)x(n) + w(1)x(n−1) + w(2)x(n−2) + ··· + w(N−1)x(n−N+1)

Wiener Filter and Least Mean Square Algorithm
Wiener Filter and Least Mean Square Algorithm
Many adaptive algorithms can be viewed as approximations of the discrete Wiener filter
shown in Figure below:

 Consider a single-weight case of y(n) = w·x(n), and note that the error signal e(n) is given by

e(n) = d(n) − w·x(n)

Now let us solve for the best weight w. Taking the square of the output error leads to

e²(n) = d²(n) − 2·d(n)·w·x(n) + w²·x²(n)

Taking the statistical expectation of this equation, we have

E[e²(n)] = E[d²(n)] − 2w·E[d(n)x(n)] + w²·E[x²(n)]
 Using the notation of statistics, we define

J = E[e²(n)] = MSE (mean squared error)
σ² = E[d²(n)] = power of the corrupted signal
P = E[d(n)x(n)] = cross-correlation between d(n) and x(n)
R = E[x²(n)] = autocorrelation of x(n)
 We can view the statistical expectation as an average of N signal terms, each being a product of two
individual signal samples:

E[d(n)x(n)] ≈ (d(0)x(0) + d(1)x(1) + ··· + d(N−1)x(N−1)) / N

for a sufficiently large number of samples N. We can then write the MSE as

J = σ² − 2wP + w²R
 Since σ², P, and R are constants, J is a quadratic function of w, as plotted in the figure below.

 The best (optimal) weight w* is at the location where the minimum MSE is achieved. To
obtain w*, taking the derivative of J and setting it to zero leads to

dJ/dw = −2P + 2wR = 0

Solving this equation gives the optimal weight and the minimum MSE:

w* = P/R and J_min = σ² − P²/R
 Example 2: Given a quadratic MSE function for the Wiener filter, J = 40 − 20w + 10w², find the optimal solution
for w* to achieve the minimum MSE and determine J_min.

Solution

Taking the derivative of the MSE function and setting it to zero, we have

dJ/dw = −20 + 10 × 2w = 0

Solving the equation leads to w* = 1.
Finally, substituting w* = 1 into the MSE function, we get the minimum MSE as

J_min = 40 − 20 × 1 + 10 × 1² = 30
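As a quick numerical check (an illustrative addition), the sketch below scans the quadratic J(w) from Example 2 and compares the grid minimum with the closed-form w* = P/R and J_min = σ² − P²/R:

import numpy as np

# MSE surface from Example 2: J(w) = 40 - 20 w + 10 w^2,
# i.e., sigma^2 = 40, P = 10, R = 10.
sigma2, P, R = 40.0, 10.0, 10.0
w = np.linspace(-2.0, 4.0, 6001)       # grid with step 0.001
J = sigma2 - 2 * P * w + R * w ** 2

i = np.argmin(J)
print(f"numerical:  w* = {w[i]:.3f}, Jmin = {J[i]:.3f}")        # 1.000, 30.000
print(f"analytical: w* = {P / R:.3f}, Jmin = {sigma2 - P ** 2 / R:.3f}")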

Note
 If a larger number of coefficients (weights) is used, computing the inverse of the autocorrelation matrix R
may require a large number of computations. This will make real-time implementation impossible.

 The optimal solution is based on statistics, assuming that the size of the data block, N, is sufficiently large.
This will cause a long processing delay that will make real-time implementation impossible.
LMS algorithm by using the steepest descent algorithm
 To minimize the MSE sample by sample and locate the filter coefficient(s), we first study the steepest descent
algorithm, illustrated by the equation

w(n + 1) = w(n) − μ · (dJ/dw(n))

where μ is a constant controlling the speed of convergence.

As shown in the figure below (first plot), if dJ/dw(n) < 0, then −μ · dJ/dw(n) > 0, and the new coefficient will be
increased to approach the optimal value w*. On the other hand, if dJ/dw(n) > 0, as shown in the second plot,
−μ · dJ/dw(n) < 0, and the new coefficient will be decreased to approach the optimal value w*. When dJ/dw(n) = 0, the
best coefficient w* is reached.
 Example 3: Given a quadratic MSE function for the Wiener filter,

J = 40 − 20w + 10w²

a. use the steepest descent method with an initial guess w(0) = 0 and μ = 0.04 to find the optimal solution for w,
and determine J_min by iterating three times.

Solution: Taking the derivative of the MSE function gives

dJ/dw = −20 + 10 × 2w = −20 + 20w

When n = 0, we calculate

μ · dJ/dw(0) = 0.04 × (−20 + 20 × 0) = −0.8

Applying the steepest descent algorithm, it follows that

w(1) = w(0) − μ · dJ/dw(0) = 0 − (−0.8) = 0.8

Similarly, for n = 1 we get

μ · dJ/dw(1) = 0.04 × (−20 + 20 × 0.8) = −0.16
w(2) = 0.8 − (−0.16) = 0.96

and for n = 2, it follows that

μ · dJ/dw(2) = 0.04 × (−20 + 20 × 0.96) = −0.032
w(3) = 0.96 − (−0.032) = 0.992

Finally, substituting w(3) = 0.992 into the MSE function, we get the minimum as

J_min = 40 − 20 × 0.992 + 10 × 0.992² ≈ 30.0006

 As we can see, after three iterations, the filter coefficient and minimum MSE values are very close to the
theoretical values obtained in Example 2.
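The three iterations can be reproduced with a few lines of Python (a direct transcription of the arithmetic above):

# Steepest descent on J(w) = 40 - 20 w + 10 w^2 from Example 3.
mu = 0.04
w = 0.0                                # initial guess w(0) = 0
for n in range(3):
    dJ = -20 + 20 * w                  # derivative dJ/dw at the current w
    w = w - mu * dJ                    # w(n+1) = w(n) - mu * dJ/dw
    print(f"w({n + 1}) = {w:.3f}")     # prints 0.800, 0.960, 0.992

print(f"J(w(3)) = {40 - 20 * w + 10 * w ** 2:.4f}")   # about 30.0006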

 Application of the steepest descent algorithm still needs an estimate of the derivative of the MSE
function, which could involve statistical calculation over a block of data. To change the algorithm to
sample-based processing, the LMS algorithm must be used.
LMS algorithm in terms of sample-based processing
 To develop the LMS algorithm in terms of sample-based processing, we take the statistical expectation out of
J and then take the derivative to obtain an approximation of dJ/dw; that is,

J ≈ e²(n) and dJ/dw ≈ −2e(n)x(n)

Substituting this into the steepest descent algorithm

w(n + 1) = w(n) − μ · dJ/dw(n)

we achieve the LMS algorithm for updating a single-weight case:

w(n + 1) = w(n) + 2μ · e(n) · x(n)

The convergence factor is chosen to satisfy

0 < μ < 1 / (N · Px)

 where Px is the input signal power and N the number of filter coefficients. In practice, if the ADC has 16-bit data,
the maximum signal amplitude is A = 2¹⁵. Then the maximum input power must be less than A² = 2³⁰.
Hence, we may make a conservative selection of the convergence parameter as

μ = 1 / (N · 2³⁰)

We further neglect the time index for the coefficients and use the notation w(i), since only the currently updated
coefficients are needed for the next sample adaptation.

We conclude the implementation of the LMS algorithm with the following steps:

1. Initialize w(0), w(1), ..., w(N−1) to arbitrary values.
2. Read d(n), x(n), and perform digital filtering:
y(n) = w(0)x(n) + w(1)x(n−1) + ··· + w(N−1)x(n−N+1)
3. Compute the output error:
e(n) = d(n) − y(n)
4. Update each filter coefficient using the LMS algorithm:
w(i) = w(i) + 2μ · e(n) · x(n−i), for i = 0, ..., N−1
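These four steps translate directly into a short NumPy routine; the sketch below is one possible implementation (the function name and signature are my own):

import numpy as np

def lms_filter(d, x, num_taps, mu):
    """Sample-based LMS filtering; returns output y, error e, final weights w."""
    w = np.zeros(num_taps)                 # step 1: initialize w(0)..w(N-1)
    buf = np.zeros(num_taps)               # holds x(n), x(n-1), ..., x(n-N+1)
    y = np.zeros(len(x))
    e = np.zeros(len(x))
    for n in range(len(x)):
        buf = np.roll(buf, 1)
        buf[0] = x[n]                      # step 2: read x(n) ...
        y[n] = w @ buf                     # ... and filter: y(n) = sum_i w(i) x(n - i)
        e[n] = d[n] - y[n]                 # step 3: output error e(n) = d(n) - y(n)
        w += 2 * mu * e[n] * buf           # step 4: w(i) += 2 mu e(n) x(n - i)
    return y, e, w

For example, lms_filter(d, x, num_taps=8, mu=0.01) applied to the two-microphone noise-canceler signals drives e(n) toward the desired speech.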
The Recursive Least-Squares (RLS) Algorithm
 The LMS algorithm is computationally simple, but it has the following drawbacks:
1. LMS convergence is slow, particularly when the eigenvalues of the input autocorrelation matrix
are widely spread.
2. The step parameter μ must be properly chosen.
3. The excess mean-square error is high.

 The RLS algorithm converges much faster than the LMS algorithm. In the least-squares solution, the
matrix inversion required to solve for the coefficients makes the direct solution computationally
complex; the RLS algorithm finds this inverse recursively.
 Convergence speed is not strongly dependent on the input statistics.
 Weighted sum of squared errors (SSE):

J(n) = Σ_{k=0}^{n} λ^(n−k) e²(k)

where e(k) is the filtering error at instant k due to the filter coefficients w(n), for k = 0, 1, ..., n, and

0 < λ < 1 is a weighting factor known as the forgetting factor.

Past errors are given smaller and smaller weights; this allows the filter coefficients to adapt.

Minimizing J(n) with respect to w(n) gives

R(n) w(n) = p(n)

 where the weighted autocorrelation matrix is

R(n) = Σ_{k=0}^{n} λ^(n−k) x(k)xᵀ(k)

It can be written as follows:

R(n) = λ R(n−1) + x(n)xᵀ(n)

This shows that the autocorrelation matrix can be recursively computed from its previous value and the present
data vector.

Similarly, the cross-correlation vector is given by

p(n) = λ p(n−1) + d(n)x(n)

To obtain the inverse of the sum matrix, the matrix inversion lemma is used.
 Denote P(n) = R⁻¹(n). Applying the matrix inversion lemma yields

P(n) = λ⁻¹ [P(n−1) − k(n) xᵀ(n) P(n−1)]

where k(n) is the gain vector, given by

k(n) = P(n−1) x(n) / (λ + xᵀ(n) P(n−1) x(n))

k(n) is important to interpret the adaptation. It is also related to the current data vector x(n) by

k(n) = P(n) x(n)

 Filter update:

w(n) = w(n−1) + k(n) e(n), where e(n) = d(n) − wᵀ(n−1) x(n)

 We have to initialize w(−1) = 0 and P(−1) = δ⁻¹I; for faster convergence, a large δ⁻¹ (small δ) is chosen.
 RLS algorithm steps:
 Initialization
1. w(−1) = 0; choose P(−1) = δ⁻¹I, with δ a small positive constant.

 Operation
For n = 0, 1, 2, ... do
1. Get d(n), x(n).
2. Get the gain vector k(n) = P(n−1)x(n) / (λ + xᵀ(n)P(n−1)x(n)).
3. Calculate the error e(n) = d(n) − wᵀ(n−1)x(n).
4. Update the filter parameters: w(n) = w(n−1) + k(n) e(n).

5. Update the matrix: P(n) = λ⁻¹ [P(n−1) − k(n)xᵀ(n)P(n−1)].

6. Repeat from step 1 for the next sample until w(n) converges, or continue indefinitely.
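A compact NumPy version of these steps might look as follows (a sketch under the stated initialization; the function name and the choices λ = 0.98, δ = 0.01 are mine):

import numpy as np

def rls_filter(d, x, num_taps, lam=0.98, delta=0.01):
    """RLS adaptive filter following the steps above."""
    w = np.zeros(num_taps)                     # w(-1) = 0
    P = np.eye(num_taps) / delta               # P(-1) = I/delta; large for fast convergence
    buf = np.zeros(num_taps)
    e = np.zeros(len(x))
    for n in range(len(x)):
        buf = np.roll(buf, 1)
        buf[0] = x[n]                          # step 1: get d(n), x(n)
        Px = P @ buf
        k = Px / (lam + buf @ Px)              # step 2: gain vector k(n)
        e[n] = d[n] - w @ buf                  # step 3: error e(n) = d(n) - w^T(n-1) x(n)
        w = w + k * e[n]                       # step 4: w(n) = w(n-1) + k(n) e(n)
        P = (P - np.outer(k, buf @ P)) / lam   # step 5: P(n) = (P(n-1) - k(n) x^T(n) P(n-1)) / lam
    return e, w

Compared with lms_filter above, rls_filter typically converges in far fewer samples, at the cost of O(N²) operations per sample.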
Applications of adaptive filters
 Noise Cancellation
Example 4: Given the DSP system for the noise-cancellation application using an adaptive filter with two
coefficients, shown in the figure below:
a. Set up the LMS algorithm for the adaptive filter.
b. Perform adaptive filtering to obtain the outputs e(n) for n = 0, 1, 2, given the inputs and outputs in the
table, with initial weights w(0) = w(1) = 0

and the convergence factor set to μ = 0.1.
 Solution
a. The adaptive LMS algorithm is set up as:

Initialization: w(0) = 0, w(1) = 0

Digital filtering: y(n) = w(0)x(n) + w(1)x(n−1)
Computing the output error: e(n) = d(n) − y(n)
Updating each weight for the next incoming sample:
w(i) = w(i) + 2μ · e(n) · x(n−i), for i = 0, 1; with 2μ = 0.2, this gives
w(0) = w(0) + 0.2 · e(n) · x(n)
w(1) = w(1) + 0.2 · e(n) · x(n−1)
b. We can see the adaptive filtering operations as follows:


For n = 0
Digital filtering:
y(0) = w(0)x(0) + w(1)x(−1) = 0 × 1 + 0 × 0 = 0
Computing the output:
 Updating coefficients:

For 𝑛=1
Digital filtering:

Computing the output:

Updating coefficients:

For 𝑛=2
Digital filtering:

Computing the output:

Updating coefficients:

Hence, the adaptive filter outputs for the first three samples are listed in the table below.
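Since the table of sample values did not survive extraction, the sketch below runs the same two-coefficient procedure on hypothetical d(n) and x(n) values, purely to show the mechanics (only x(0) = 1 and x(−1) = 0 are visible in the n = 0 step above; the rest are made up):

# Two-coefficient LMS with 2*mu = 0.2, as in part (a).
# The d(n), x(n) values are hypothetical stand-ins for the lost table.
d = [3.0, -2.0, 1.0]
x = [1.0, 2.0, -1.0]
w0, w1, x_prev = 0.0, 0.0, 0.0               # w(0) = w(1) = 0 and x(-1) = 0
for n in range(3):
    y = w0 * x[n] + w1 * x_prev              # digital filtering
    e = d[n] - y                             # output error
    w0 += 0.2 * e * x[n]                     # coefficient updates
    w1 += 0.2 * e * x_prev
    x_prev = x[n]
    print(f"n={n}: y={y:.3f}, e={e:.3f}, w0={w0:.3f}, w1={w1:.3f}")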
System Modeling
 Another application of the adaptive filter is system modeling. The adaptive filter can keep tracking the behavior
of an unknown system by using the unknown system's input and output, as depicted in the figure below.

 As shown in the figure, y(n) is driven to be as close as possible to the unknown system's output. Since both the
unknown system and the adaptive filter use the same input, the transfer function of the adaptive filter will
approximate that of the unknown system.
 Example 5: Given the system modeling task described above, and using the single-weight adaptive filter y(n) = w·x(n)
to perform it:
a. Set up the LMS algorithm to implement the adaptive filter, assuming the initial w = 0 and μ = 0.5.
b. Perform adaptive filtering to obtain y(0), y(1), y(2), and y(3), given the unknown system's input and output samples.

Solution
a. The adaptive filtering equations are set up with w = 0 and 2μ = 2 × 0.5 = 1:

y(n) = w · x(n)
e(n) = d(n) − y(n)
w = w + e(n) · x(n)

b. Adaptive filtering:

 For this particular case, the system is actually a digital amplifier with a gain of 2.
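The gain-of-2 conclusion is easy to reproduce in Python (the input samples are illustrative; the setup w = 0 and 2μ = 1 follows the example):

# Single-weight system modeling: the unknown system is a gain-of-2 amplifier.
x = [1.0, 0.5, -1.0, 2.0]          # illustrative input samples
w = 0.0                            # initial weight; 2*mu = 1 as in the example
for n, xn in enumerate(x):
    dn = 2.0 * xn                  # unknown system output d(n)
    y = w * xn                     # adaptive filter output y(n)
    e = dn - y                     # error e(n) = d(n) - y(n)
    w += 1.0 * e * xn              # LMS update with 2*mu = 1
    print(f"y({n}) = {y:.1f}, w -> {w:.1f}")

Here w jumps to 2 after the first update and stays there, matching the amplifier's gain.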
Line Enhancement Using Linear Prediction

If a signal's frequency content is very narrow compared with the bandwidth and changes with time, then the
signal can be efficiently enhanced by the adaptive filter; this is line enhancement. The figure below shows line
enhancement using the adaptive filter where the LMS algorithm is used. The input is delayed by Δ samples to
form the filter's reference: the delay decorrelates the wideband noise while the narrowband signal remains
correlated with itself, so the filter output becomes the enhanced line. The value of Δ is usually determined
by experiment, or by experience in practice, to achieve the best enhanced signal.
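A sketch of the line enhancer in Python (the delay Δ = 5, the filter length, and the step size are illustrative assumptions):

import numpy as np

rng = np.random.default_rng(1)
N, delay = 2000, 5                           # delay plays the role of Delta
t = np.arange(N)
tone = np.sin(2 * np.pi * 0.1 * t)           # narrowband signal to be enhanced
x = tone + rng.standard_normal(N)            # noisy wideband input

num_taps, mu = 16, 0.001
w = np.zeros(num_taps)
y = np.zeros(N)
for n in range(num_taps + delay, N):
    xin = x[n - delay:n - delay - num_taps:-1]   # delayed input vector
    y[n] = w @ xin                               # enhanced narrowband output
    e = x[n] - y[n]                              # error: mostly the wideband noise
    w += 2 * mu * e * xin                        # LMS update

half = N // 2
print(f"error power at input:  {np.var(x[half:] - tone[half:]):.2f}")
print(f"error power at output: {np.var(y[half:] - tone[half:]):.2f}")

After convergence, the output y(n) tracks the sine wave with noticeably less noise than the input.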
Other Application Examples
 The topics include periodic interference cancellation, ECG interference cancellation, and echo
cancellation in long-distance telephone circuits.

1. Canceling Periodic Interferences Using Linear Prediction


An audio signal may be corrupted by periodic interference when no noise reference is available. Such
examples include the playback of speech or music with the interference of tape hum, turntable
rumble, or vehicle engine or power-line interference. We can use the modified line enhancement
structure shown in the figure below.
 The adaptive filter uses the delayed version of the corrupted signal x(n) to predict the
periodic interference. The number of delayed samples is selected by experimenting with the
adaptive filter's performance. Note that a two-tap adaptive FIR filter can predict one
sinusoid, as noted earlier. After convergence, the adaptive filter output y(n) would predict the
interference.

 Therefore, the error signal e(n) = x(n) − y(n) contains only the desired audio signal.
2. Electrocardiography Interference Cancellation

 In electrocardiogram (ECG) recording, there often exists unwanted 60-Hz interference, along with its
harmonics, in the recorded data. This interference comes from the power line, including effects from
magnetic induction, displacement currents in the leads or in the body of the patient, and equipment
interconnections and imperfections.
 Figure below illustrates the application of adaptive noise canceling in ECG.
 
 The primary input is taken from the ECG preamplifier, while a 60-Hz reference input is taken from a
wall outlet with proper attenuation. After proper signal conditioning, the digital interference x(n) is
acquired by the digital signal (DS) processor. The digital adaptive filter uses this reference input signal
to produce an estimate, which approximates the 60-Hz interference n(n) sensed from the ECG amplifier.

 Here, an FIR adaptive filter with N taps and the LMS algorithm can be used for this application.

 Then, after convergence of the adaptive filter, the estimated interference y(n) is subtracted from the primary
signal of the ECG preamplifier to produce the output signal e(n), in which the 60-Hz interference is
canceled:

e(n) = d(n) − y(n)
 With enhanced ECG recording, doctors in clinics can give more accurate diagnoses for patients.
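A synthetic demonstration of this setup (the waveform model, the fs = 500 Hz sampling rate, and the LMS parameters are illustrative assumptions, not values from the source):

import numpy as np

fs = 500                                          # assumed sampling rate in Hz
t = np.arange(2 * fs) / fs
ecg = np.sin(2 * np.pi * 1.2 * t) ** 31           # crude synthetic stand-in for an ECG trace
d = ecg + 0.5 * np.sin(2 * np.pi * 60 * t + 0.7)  # primary input with 60-Hz interference
x = np.sin(2 * np.pi * 60 * t)                    # reference taken from the wall outlet

num_taps, mu = 2, 0.02                 # two taps can match one sinusoid's amplitude and phase
w = np.zeros(num_taps)
buf = np.zeros(num_taps)
e = np.zeros(len(d))
for n in range(len(d)):
    buf = np.roll(buf, 1)
    buf[0] = x[n]
    y = w @ buf                        # estimated 60-Hz interference
    e[n] = d[n] - y                    # enhanced ECG output
    w += 2 * mu * e[n] * buf

print(f"residual interference power: {np.var(e[fs:] - ecg[fs:]):.4f}")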
3. Echo Cancellation in Long-Distance Telephone Circuits
 For example, in the figure below, if speaker B talks, the speech will pass down the transmission line to
reach user A; a portion of it leaks through the hybrid circuit at site A and is transmitted back to user B,
forcing caller B to hear his or her own voice. This is known as an echo for speaker B. A similar echo
illustration can be made for speaker A. When the telephone call is made over a long distance (more than
1,000 miles, such as with geostationary satellites), the echo can be delayed by as much as 540 ms.

 To circumvent the problem of echo in long-distance communications, an adaptive filter is applied at
each end of the communication system.
 The incoming signal is the speech from speaker B, while the outgoing signal contains the speech from speaker A
and a portion of the speaker B speech leaked through the hybrid circuit. If the leakage returns to speaker B, it
becomes an annoying echo. To prevent the echo, the adaptive filter at the speaker A site uses the incoming
signal from speaker B as its input and, by adjusting its filter coefficients, makes its output y(n) approximate
the leaked speaker B signal.

The estimated echo y(n) is subtracted from the outgoing signal, thus producing a signal e(n) that contains only
the speech of speaker A. As a result, the echo of speaker B is removed. Similar operations can be
illustrated for the adaptive filter used at the speaker B site. In practice, an FIR adaptive
filter with several hundred coefficients or more is commonly used to cancel the echo effectively.
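To illustrate, here is a toy echo-canceler simulation in Python (the echo path, signal models, and parameters are hypothetical):

import numpy as np

rng = np.random.default_rng(2)
N = 5000
far = rng.standard_normal(N)                  # incoming signal from speaker B (illustrative)
near = 0.1 * rng.standard_normal(N)           # outgoing speech of speaker A (illustrative)

# Hypothetical hybrid leakage: a short FIR echo path applied to the far-end signal.
echo_path = np.array([0.0, 0.4, -0.2, 0.1])
outgoing = near + np.convolve(far, echo_path)[:N]

num_taps, mu = 8, 0.005
w = np.zeros(num_taps)
buf = np.zeros(num_taps)
e = np.zeros(N)
for n in range(N):
    buf = np.roll(buf, 1)
    buf[0] = far[n]                           # adaptive filter input: far-end signal
    y = w @ buf                               # estimated echo
    e[n] = outgoing[n] - y                    # echo-canceled outgoing signal
    w += 2 * mu * e[n] * buf

print("learned echo path:", np.round(w[:4], 2))   # approaches [0, 0.4, -0.2, 0.1]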
References:

[1] Li Tan, "Digital Signal Processing: Fundamentals and Applications," Chapter 10, DeVry University, Decatur, Georgia; Elsevier, 2008.

[2] A. Zaknich, "Principles of Adaptive Filters and Self-Learning Systems," Part IV, p. 240, Springer-Verlag London Limited, 2005.
