Recursive Least-Squares (RLS) Adaptive Filters
The method of exponentially weighted least squares involves two time indices: n, the outer index (the current time), and i, the inner summation index. The cost to be minimised is

  E(n) = Σ_{i=1}^{n} λ^{n−i} |e(i)|² + δ λ^n ||w(n)||²

Here λ^{n−i} is the weighting factor and λ (0 < λ ≤ 1) is the forgetting factor. The second term is a regularisation term, which smooths and stabilises the solution; δ > 0 is the regularisation parameter. Prewindowing is assumed (not the covariance method)!
Hence, the optimum (in the LS sense) filter coefficients should satisfy the normal equations

  Φ(n) ŵ(n) = z(n)

where Φ(n) = Σ_{i=1}^{n} λ^{n−i} u(i) u^H(i) + δ λ^n I and z(n) = Σ_{i=1}^{n} λ^{n−i} u(i) d*(i), so that

  ŵ(n) = Φ⁻¹(n) z(n)

(thanks to the regularisation term, Φ⁻¹(n) always exists!)
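As a concrete sketch, the regularised, exponentially weighted normal equations can be solved directly with NumPy. This is only an illustration of the batch (non-recursive) solution; the function name and parameter values are made up for the example:

```python
import numpy as np

def batch_ew_ls(u, d, M, lam=0.99, delta=1e-2):
    """Direct solution of the regularised exponentially weighted LS problem.

    u : input samples (1-D array), prewindowed (u(i) = 0 for i < 0)
    d : desired-response samples
    M : number of filter taps
    Returns the optimum tap-weight vector w_hat at time n = len(u).
    """
    n = len(u)
    Phi = delta * lam**n * np.eye(M)   # regularisation term: delta * lam^n * I
    z = np.zeros(M)
    for i in range(n):
        # tap-input vector u(i) = [u(i), u(i-1), ..., u(i-M+1)]^T
        ui = np.array([u[i - k] if i - k >= 0 else 0.0 for k in range(M)])
        wgt = lam ** (n - 1 - i)       # weighting factor lam^(n-i), 0-based i
        Phi += wgt * np.outer(ui, ui)
        z += wgt * ui * d[i]
    # Phi is always invertible thanks to the regularisation term
    return np.linalg.solve(Phi, z)
```

Solving the normal equations from scratch like this costs O(nM²) per time step; the point of the RLS recursions below is to avoid exactly that.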
The correlation matrix can be computed recursively:

  Φ(n) = λ Φ(n−1) + u(n) u^H(n)

Similarly, for the cross-correlation vector:

  z(n) = λ z(n−1) + u(n) d*(n)
Let P(n) = Φ⁻¹(n). Applying the matrix inversion lemma to the recursion for Φ(n) gives the Riccati equation

  P(n) = λ⁻¹ P(n−1) − λ⁻¹ k(n) u^H(n) P(n−1)

Now, letting

  k(n) = λ⁻¹ P(n−1) u(n) / (1 + λ⁻¹ u^H(n) P(n−1) u(n))

denote the gain vector, we can rearrange its definition as

  k(n) = [λ⁻¹ P(n−1) − λ⁻¹ k(n) u^H(n) P(n−1)] u(n)

After substituting the recursion for P(n) into the first term we obtain

  k(n) = P(n) u(n) = Φ⁻¹(n) u(n)
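A quick numerical sanity check (illustrative values; P, k, u name the standard RLS quantities) confirms that the matrix-inversion-lemma update for P(n) agrees with direct inversion of the Φ(n) recursion, and that the gain vector satisfies k(n) = P(n) u(n):

```python
import numpy as np

rng = np.random.default_rng(1)
M, lam = 4, 0.98
Phi = np.eye(M)                  # Phi(n-1), positive definite
P = np.linalg.inv(Phi)           # P(n-1) = Phi(n-1)^{-1}
u = rng.standard_normal(M)       # tap-input vector u(n)

# gain vector k(n) = lam^{-1} P(n-1) u(n) / (1 + lam^{-1} u^H(n) P(n-1) u(n))
Pu = P @ u / lam
k = Pu / (1.0 + u @ Pu)

# Riccati update: P(n) = lam^{-1} P(n-1) - lam^{-1} k(n) u^H(n) P(n-1)
P_new = (P - np.outer(k, u @ P)) / lam

# direct route: Phi(n) = lam * Phi(n-1) + u(n) u^H(n), then invert
P_direct = np.linalg.inv(lam * Phi + np.outer(u, u))

assert np.allclose(P_new, P_direct)   # lemma update matches the direct inverse
assert np.allclose(k, P_new @ u)      # k(n) = P(n) u(n)
```

The whole point of the lemma is that the update costs O(M²) per sample instead of the O(M³) matrix inversion.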
Then the time update of the weight vector follows:

  ŵ(n) = Φ⁻¹(n) z(n) = P(n) z(n) = λ P(n) z(n−1) + P(n) u(n) d*(n)

Hence, substituting the Riccati equation for P(n) into the first term,

  ŵ(n) = ŵ(n−1) − k(n) u^H(n) ŵ(n−1) + P(n) u(n) d*(n)

Then, using k(n) = P(n) u(n),

  ŵ(n) = ŵ(n−1) + k(n) [d*(n) − u^H(n) ŵ(n−1)] = ŵ(n−1) + k(n) ξ*(n)

where ξ(n) = d(n) − ŵ^H(n−1) u(n) is the a priori estimation error.
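Putting the gain-vector, weight-update, and Riccati recursions together gives one RLS pass per sample. A minimal real-valued sketch (the function name and default parameters are illustrative, not prescriptive):

```python
import numpy as np

def rls(u, d, M, lam=0.99, delta=1e-2):
    """RLS adaptive filter; returns final tap weights and a priori errors."""
    w = np.zeros(M)                  # w_hat(0) = 0
    P = np.eye(M) / delta            # P(0) = delta^{-1} I
    xi = np.zeros(len(u))
    for n in range(len(u)):
        # tap-input vector, prewindowed
        un = np.array([u[n - k] if n - k >= 0 else 0.0 for k in range(M)])
        Pu = P @ un / lam
        k = Pu / (1.0 + un @ Pu)           # gain vector k(n)
        xi[n] = d[n] - w @ un              # a priori error xi(n)
        w = w + k * xi[n]                  # w(n) = w(n-1) + k(n) xi(n)
        P = (P - np.outer(k, un @ P)) / lam  # Riccati update for P(n)
    return w, xi
```

A typical use is system identification: feed in the input u and the noisy output d of an unknown FIR system, and w converges to the system's impulse response.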
But the mean-square deviation is

  D(n) = E[ ||ε(n)||² ],  ε(n) = w_o − ŵ(n)

We know that, for λ = 1 and n large, D(n) ≈ (σ_o²/n) Σ_{i=1}^{M} 1/λ_i, where the λ_i are the eigenvalues of R; hence D(n) decays roughly as 1/n, although it is magnified when λ_min is small.
Observations
The ensemble-average learning curve of the RLS algorithm converges in about 2M iterations, typically an order of magnitude faster than LMS.

As the number of iterations n → ∞, the MSE J'(n) approaches the final value σ_o², which is the variance of the measurement error e_o(n): in theory, RLS produces zero excess MSE!
Convergence of the RLS algorithm in the mean square is
independent of the eigenvalues of the ensemble-average
correlation matrix R of the input vector u(n).
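These observations can be illustrated numerically. The sketch below is a single-run approximation rather than a true ensemble-average learning curve, and all parameter values are illustrative: with growing memory (λ = 1), the tail-averaged squared a priori error settles at the measurement-noise variance σ_o², i.e. essentially zero excess MSE.

```python
import numpy as np

rng = np.random.default_rng(3)
M, lam, delta = 4, 1.0, 1e-2           # lam = 1: growing-memory RLS
N, sigma_o = 2000, 0.1                 # measurement-noise std sigma_o
w_true = rng.standard_normal(M)        # unknown system to identify
u = rng.standard_normal(N)
d = np.convolve(u, w_true)[:N] + sigma_o * rng.standard_normal(N)

w, P = np.zeros(M), np.eye(M) / delta
xi2 = np.zeros(N)
for n in range(N):
    un = np.array([u[n - k] if n - k >= 0 else 0.0 for k in range(M)])
    Pu = P @ un / lam
    k = Pu / (1.0 + un @ Pu)
    e = d[n] - w @ un                  # a priori error xi(n)
    w += k * e
    P = (P - np.outer(k, un @ P)) / lam
    xi2[n] = e**2

tail_mse = xi2[-500:].mean()           # estimate of J'(n) for large n
# tail_mse is close to sigma_o**2 = 0.01: no excess MSE
```

With correlated (eigenvalue-spread) inputs instead of white noise, the convergence rate of this curve stays essentially the same, in line with the last observation; for LMS it would slow down markedly.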