Efficient Recursive Total Least Mean Fourth Algorithm
Abstract—In this work, an iterative total least mean fourth (TLMF) algorithm is devised to solve adaptive filtering problems when both the input and output signals are corrupted with noise. The proposed algorithm is based on a stochastic approach related to the existing total least mean square (TLMS) algorithm. The cost function of the TLMF is defined in terms of the fourth power of the error and minimized iteratively to reach the optimal weight solution. The unknown system is evaluated at various levels of signal-to-noise ratio (SNR). The simulation results showed that the proposed algorithm performs well whether the noise is white or coloured.

Index Terms—Adaptive filtering, cost function, total least mean fourth, total least mean square, least mean squares.
I. INTRODUCTION
In the past decades, several researchers and scientists have
proposed different adaptive filtering algorithms and offered
various least mean square-based (LMS) solutions in an attempt
to improve their efficiency and effectiveness [1], [2].

Fig. 1. Schematic block diagram of unknown system identification with noisy input signal.
The conventional LMS algorithm has been modified to form other adaptive algorithms. The least mean fourth (LMF) algorithm [3] is considered a special case of the LMS algorithm in which the cost function, i.e., the mean fourth error, is minimized. Particularly in noisy environments with non-Gaussian noise, the LMF converges faster and performs better than its LMS counterpart [3]–[5]. It has been shown that the LMF algorithm produces an optimal filter coefficient when compared to the LMS algorithm at the same speed of convergence [3].

The conventional least mean fourth algorithm is popularly known for its ability to estimate the parameters of an unknown system from the input and output data [6]–[9]. This technique often produces an optimal solution to the system identification problem when interference is present in the output only, or absent from both the input and the output of the unknown system [10], [11]. However, the conventional least mean fourth algorithm becomes biased when interference occurs in both the input and the output of the unknown system [6], as shown in Fig. 1. Several approaches have been reported in the literature for analyzing different parameters that enhance the convergence of the LMF algorithm. The authors in [12] presented a statistical investigation of the convergence performance of the LMF algorithm and found that the convergence of their algorithm depends on the initial conditions used.

Also, the authors of [10] presented a new mean-square convergence analysis of the LMF algorithm under non-Gaussian noise and showed that the algorithm's stability depends strongly on the weights considered. In [6], the authors devised an LMF analysis in non-Gaussian conditions, derived from the combined idea of the LMF and fractional-order calculus. In addition, the method eliminates the Gamma function, which tends to increase the computational complexity of the system. Hence, the algorithm converges faster, with minimal error and less computational complexity. In a similar manner, the author in [13] presented for the first time the convergence and steady-state behaviors of the normalized LMF (NLMF) algorithm. However, sub-optimal solutions of the adaptive coefficient were obtained when both the system input and output are contaminated [1]. Later, other appropriate modifications were incorporated into the NLMF algorithm to improve its stability and convergence properties [11], [14]–[17].

The authors in [18] and [19] employed the total least squares (TLS) algorithm, with the help of the singular value decomposition (SVD), to analyze an unknown system with interference in both the input and output. Similarly, [20]
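Since the discussion above builds on the standard LMF recursion of Walach and Widrow [3], a minimal sketch may help fix notation; the step size, filter length, and signal statistics below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def lmf_update(h, x_vec, d, mu=1e-3):
    """One least mean fourth (LMF) iteration [3]: the weights move
    along the regressor scaled by the *cubed* instantaneous error."""
    e = d - h @ x_vec            # a-priori estimation error
    return h + mu * (e ** 3) * x_vec

# Illustrative system identification with a clean input (no input noise):
rng = np.random.default_rng(0)
h_true = np.array([0.8, -0.4, 0.2])   # unknown FIR system (assumed for the demo)
h_est = np.zeros(3)
for _ in range(5000):
    x_vec = rng.standard_normal(3)
    d = h_true @ x_vec + 0.01 * rng.standard_normal()  # mildly noisy output
    h_est = lmf_update(h_est, x_vec, d)
```

With interference on the output only, the estimate approaches the true weights; the introduction's point is that once the input is also noisy, this recursion becomes biased, which motivates the total-least-squares formulation studied here.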
\tilde{d}(i) = d(i) + \Delta d(i), \qquad (2)

where d(i) = h_i^T x_i is the system's output, and \Delta d(i) is an AWGN with zero mean and variance \sigma_{\Delta d}^2.

Initially, in an adaptive filtering problem, it is assumed that the input signal is known and that interference occurs only at the system's output [23], [24]. However, this assumption is impractical because interference may also occur at the input due to sampling, quantization, or modeling. In such cases, it is more practical to incorporate interference into both the input and output signals. Figure 1 represents the unknown adaptive system with interference at both input and output. The corrupted input vector can be expressed as

\tilde{x}_i = x_i + \Delta x_i, \qquad (3)

where \Delta x_i = [\Delta x(i), \Delta x(i-1), \ldots, \Delta x(i-N+1)]^T \in \mathbb{R}^N denotes the noise in the input vector x_i, and \Delta x(i) is an AWGN, uncorrelated with the output noise, with zero mean and variance \sigma_{\Delta x}^2. It can be deduced from Fig. 1 that the system estimates an output y(i) = \tilde{h}_i^T \tilde{x}_i from each input signal \tilde{x}(i) = x(i) + \Delta x(i). The system's error is estimated by comparing the output y(i) with \tilde{d}(i); the autocorrelation matrix of the corrupted input vector is R_{\tilde{x}_i} = E\{\tilde{x}_i \tilde{x}_i^T\}, and the cross-correlation with the desired output signal is P_{\tilde{x}_i} = E\{\tilde{d}(i)\,\tilde{x}_i\}.

where \mu is the step size and the gradient \nabla_{\tilde{h}_i}\tilde{J} is obtained as follows:

\nabla_{\tilde{h}_i}\tilde{J} = \frac{\partial}{\partial \tilde{h}_i}\,\zeta^4(i)
= 4\zeta^3(i)\,\frac{\partial}{\partial \tilde{h}_i}\,\zeta(i)
= 4\zeta^3(i)\,\frac{\partial}{\partial \tilde{h}_i}\,\frac{\tilde{h}_i^T \tilde{g}_i}{\sqrt{\tilde{h}_i^T \tilde{h}_i}}
= \frac{4\zeta^3(i)}{(\tilde{h}_i^T \tilde{h}_i)^{3/2}}\left(\tilde{g}_i\,\tilde{h}_i^T \tilde{h}_i - \tilde{h}_i^T \tilde{g}_i\,\tilde{h}_i\right). \qquad (10)

Now, using e(i) = \tilde{g}_i^T \tilde{h}_i = \tilde{h}_i^T \tilde{g}_i and \|\tilde{h}_i\| = \sqrt{\tilde{h}_i^T \tilde{h}_i}, equation (10) becomes

\nabla_{\tilde{h}_i}\tilde{J} = \frac{4}{\|\tilde{h}_i\|^6}\left(\|\tilde{h}_i\|^2\, e^3(i)\,\tilde{g}_i - e^4(i)\,\tilde{h}_i\right). \qquad (11)

Substituting (11) into (9), the recursive equation of the TLMF algorithm becomes

\tilde{h}_{i+1} = \tilde{h}_i + \frac{4 e^3(i)\,\mu}{\|\tilde{h}_i\|^6}\left(e(i)\,\tilde{h}_i - \|\tilde{h}_i\|^2\,\tilde{g}_i\right). \qquad (12)
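The recursion in (12) can be sketched directly in code. A minimal sketch, under two assumptions flagged below: the augmented data vector g̃_i is taken to stack the noisy regressor and the noisy output (its definition falls outside this excerpt), and the step size is illustrative:

```python
import numpy as np

def tlmf_update(h_aug, x_noisy, d_noisy, mu=1e-3):
    """One TLMF iteration, eq. (12):
    h <- h + (4 e^3(i) mu / ||h||^6) * (e(i) h - ||h||^2 g).
    h_aug has N+1 entries; the augmented vector g is *assumed* here
    to stack the noisy input vector and the noisy desired output."""
    g = np.append(x_noisy, d_noisy)   # assumed form of the augmented vector
    e = g @ h_aug                     # e(i) = g^T h
    nrm2 = h_aug @ h_aug              # ||h||^2
    return h_aug + (4.0 * e**3 * mu / nrm2**3) * (e * h_aug - nrm2 * g)

def tlmf_solution(h_aug):
    """Recover the N-tap filter estimate from the augmented weights, eq. (13)."""
    return -h_aug[:-1] / h_aug[-1]

# Deterministic smoke check of one update step:
h_aug = np.array([0.1, 0.2, -1.0])
h_aug = tlmf_update(h_aug, np.array([1.0, 0.5]), 0.3)
```

Note that the division by ∥h̃_i∥⁶ means the augmented weight vector must be kept away from zero, which is why a nonzero initialization (here with last entry −1) is used in the sketch.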
Authorized licensed use limited to: King Fahd University of Petroleum and Minerals. Downloaded on February 07,2024 at 10:43:26 UTC from IEEE Xplore. Restrictions apply.
The updated TLMF solution is obtained using

h_{i+1} = -\frac{\tilde{h}_{i+1}(1:N)}{\tilde{h}_{i+1}(N+1)}. \qquad (13)

Finally, Table I presents the computational complexity of the TLMF algorithm and of its related algorithms (LMS, TLMS, LMF).

TABLE I
COMPUTATIONAL COMPLEXITY OF DIFFERENT ADAPTIVE FILTERING ALGORITHMS

Algorithm | Multiplications | Additions
LMS       | N               | N
TLMS      | 2N + 2          | 2N + 1
LMF       | 3N              | 3N
TLMF      | 3N + 7          | N + 2

Next, the convergence properties of the devised algorithm are investigated through simulations.

Fig. 2. NWD of LMS, TLMS and TLMF at 10 dB SNR.

[Figure: Normalized Weight Difference (dB) versus iterations for TLMF, TLMS and LMS.]
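The NWD curves reported in the figures measure how far each algorithm's estimate is from the true system. A common definition, assumed here since the excerpt does not spell it out, is the weight-error norm normalized by the norm of the true weights, in dB:

```python
import numpy as np

def nwd_db(h_est, h_true):
    """Normalized weight difference in dB (assumed definition):
    NWD = 20*log10( ||h_true - h_est|| / ||h_true|| )."""
    return 20.0 * np.log10(np.linalg.norm(h_true - h_est)
                           / np.linalg.norm(h_true))

h_true = np.array([0.8, -0.4, 0.2])   # illustrative system, not from the paper
start = nwd_db(np.zeros(3), h_true)   # zero (uninitialized) estimate: 0 dB
better = nwd_db(0.9 * h_true, h_true) # 10% weight error: -20 dB
```

Under this definition a curve settling at −20 dB corresponds to a weight-error norm of 10% of the true weight norm, which is how the convergence floors in the NWD figures can be read.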
III. RESULTS AND DISCUSSION
This section illustrates the theoretical analysis of the pro- -10
Fig. 4. NWD of LMS, TLMS and TLMF at 14 dB SNR.

Fig. 6. NWD of LMS, TLMS and TLMF at 20 dB SNR using uniform noise.
Fig. 5. NWD of LMS, TLMS and TLMF at 20 dB SNR using uniform noise.

of paramount importance. All of these issues and timely ones will be considered during the study of this algorithm.

ACKNOWLEDGMENT

The authors would like to acknowledge the support provided by the DROC at KFUPM for funding under the Interdisciplinary Research Center for CSS through project No. INCS2101.

REFERENCES

[1] S. Haykin, Adaptive Filter Theory. Englewood Cliffs, NJ, USA: Prentice-Hall, 1991.
[2] A. H. Sayed, Fundamentals of Adaptive Filtering. John Wiley & Sons, 2003.
[3] E. Walach and B. Widrow, "The least mean fourth (LMF) adaptive algorithm and its family," IEEE Transactions on Information Theory, vol. 30, no. 2, pp. 275–283, 1984.
[4] M. O. Sayin, N. D. Vanli, and S. S. Kozat, "A novel family of adaptive filtering algorithms based on the logarithmic cost," IEEE Transactions on Signal Processing, vol. 62, no. 17, pp. 4411–4424, 2014.
[5] P. Hubscher and J. Bermudez, "An improved stochastic model for the least mean fourth (LMF) adaptive algorithm," in 2002 IEEE International Symposium on Circuits and Systems (ISCAS), vol. 1, 2002, pp. I–I.
[6] S. Khan, N. Ahmed, M. A. Malik, I. Naseem, R. Togneri, and M. Bennamoun, "FLMF: Fractional least mean fourth algorithm for channel estimation in non-Gaussian environment," in 2017 International Conference on Information and Communication Technology Convergence (ICTC), 2017, pp. 466–470.
[7] S. Chaithanya, P. Jayaprakash et al., "Power quality improvement strategy for DSTATCOM using adaptive sign regressor LMF control," in 2020 IEEE International Conference on Power Electronics, Drives and Energy Systems (PEDES). IEEE, 2020, pp. 1–6.
[8] Y. He, R. Wang, X. Wang, J. Zhou, and Y. Yan, "Novel adaptive filtering algorithms based on higher-order statistics and geometric algebra," IEEE Access, vol. 8, pp. 73767–73779, 2020.
[11] E. Eweda, "Mean-square stability analysis of a normalized least mean fourth algorithm for a Markov plant," IEEE Transactions on Signal Processing, vol. 62, no. 24, pp. 6545–6553, 2014.
[12] S. H. Cho, S. D. Kim, and K. Y. Jeon, "Statistical convergence of the adaptive least mean fourth algorithm," in Proceedings of Third International Conference on Signal Processing (ICSP'96), vol. 1, 1996, pp. 610–613.
[13] A. Zerguine, "Convergence and steady-state analysis of the normalized least mean fourth algorithm," Digital Signal Processing, vol. 17, no. 1, pp. 17–31, 2007.
[14] A. Zerguine, M. K. Chan, T. Y. Al-Naffouri, M. Moinuddin, and C. F. Cowan, "Convergence and tracking analysis of a variable normalised LMF (XE-NLMF) algorithm," Signal Processing, vol. 89, no. 5, pp. 778–790, 2009.
[15] A. Zerguine, M. Moinuddin, and S. A. A. Imam, "A noise constrained least mean fourth (NCLMF) adaptive algorithm," Signal Processing, vol. 91, no. 1, pp. 136–149, 2011.
[16] E. Eweda and A. Zerguine, "New insights into the normalization of the least mean fourth algorithm," Signal, Image and Video Processing, vol. 7, no. 2, pp. 255–262, 2013.
[17] S. M. Asad, M. Moinuddin, A. Zerguine, and J. Chambers, "A robust and stable variable step-size design for the least-mean fourth algorithm using quotient form," Signal Processing, vol. 162, pp. 196–210, 2019.
[18] G. H. Golub and C. F. Van Loan, "An analysis of the total least squares problem," SIAM Journal on Numerical Analysis, vol. 17, no. 6, pp. 883–893, 1980.
[19] S. Van Huffel and J. Vandewalle, The Total Least Squares Problem: Computational Aspects and Analysis. SIAM, 1991.
[20] K. Gao, M. O. Ahmad, and M. Swamy, "Learning algorithm for total least-squares adaptive signal processing," Electronics Letters, vol. 28, no. 4, pp. 430–432, 1992.
[21] D.-Z. Feng, Z. Bao, and L.-C. Jiao, "Total least mean squares algorithm," IEEE Transactions on Signal Processing, vol. 46, no. 8, pp. 2122–2130, 1998.
[22] S. Javed and N. A. Ahmad, "A stochastic total least squares solution of adaptive filtering problem," The Scientific World Journal, vol. 2014, 2014.
[23] X. Wang and J. Han, "Affine projection algorithm based on least mean fourth algorithm for system identification," IEEE Access, vol. 8, pp. 11930–11938, 2020.
[24] M. M. U. Faiz and I. Kale, "Removal of multiple artifacts from ECG signal using cascaded multistage adaptive noise cancellers," Array, vol. 14, p. 100133, 2022.
[25] C. E. Davila, "An efficient recursive total least squares algorithm for FIR adaptive filtering," IEEE Transactions on Signal Processing, vol. 42, no. 2, pp. 268–280, 1994.
[26] D.-Z. Feng, X.-D. Zhang, D.-X. Chang, and W. X. Zheng, "A fast recursive total least squares algorithm for adaptive FIR filtering," IEEE Transactions on Signal Processing, vol. 52, no. 10, pp. 2729–2737, 2004.