
2023 20th International Multi-Conference on Systems, Signals & Devices (SSD)

Efficient Recursive Total Least Mean Fourth Algorithm

Kabiru N. Aliyu∗, Mohamed Hafez Mohamed∗, Abdulmajid Lawal∗, Ali Muqaibel∗, Muhammad Moinuddin†, and Azzedine Zerguine∗

∗Department of Electrical Engineering and the Center for Communication Systems and Sensing, King Fahd University of Petroleum and Minerals, Dhahran 31261, Saudi Arabia
†Department of Electrical and Computer Engineering, King Abdulaziz University, Jeddah 21859, Saudi Arabia

979-8-3503-3256-8/23/$31.00 ©2023 IEEE | DOI: 10.1109/SSD58187.2023.10411306

Abstract—In this work, an iterative total least mean fourth (TLMF) algorithm is devised to solve adaptive filtering problems when both the input and output signals are corrupted with noise. The proposed algorithm is based on a stochastic approach related to the existing total least mean square (TLMS) algorithm. The cost function of the TLMF is defined in terms of the fourth power of the error and minimized iteratively to reach the optimal weight solution. The unknown system is evaluated at various levels of signal-to-noise ratio (SNR). The simulation results show that the proposed algorithm produces good results whether the noise is white or coloured.

Index Terms—Adaptive filtering, cost function, total least mean fourth, total least mean square, least mean squares.

[Fig. 1. Schematic block diagram of unknown system identification with a noisy input signal: x(i) drives the unknown system to produce d(i), which is corrupted by Δd(i) into d̃(i); the adaptive filter operates on x̃(i) = x(i) + Δx(i), and its output y(i) is compared with d̃(i) to form the error e(i).]

I. INTRODUCTION

In the past decades, several researchers have proposed different adaptive filtering algorithms and offered various least mean squares (LMS)-based solutions in an attempt to improve their efficiency and effectiveness [1], [2].

The conventional LMS algorithm has been modified to form other adaptive algorithms. The least mean fourth (LMF) algorithm [3] is a higher-order counterpart of the LMS algorithm in which the cost function, i.e., the mean fourth error, is minimized. Particularly in noisy environments with non-Gaussian noise, the LMF converges faster and performs better than its LMS counterpart [3]–[5]. It has been shown that the LMF algorithm produces more accurate filter coefficients than the LMS algorithm at the same speed of convergence [3].

The conventional LMF algorithm is popularly known for its ability to estimate the parameters of an unknown system from input and output data [6]–[9]. This technique often produces an optimal solution to the system identification problem when interference is present in the output only, or absent from both the input and the output of the unknown system [10], [11]. However, the conventional LMF algorithm becomes biased when interference occurs in both the input and the output of the unknown system [6], as shown in Fig. 1. Several approaches have been reported in the literature for analyzing the parameters that govern the convergence of the LMF algorithm. The authors in [12] presented a statistical investigation of the convergence performance of the LMF algorithm and found that the convergence of their algorithm depends on the initial conditions used.

Also, the authors of [10] presented a new mean-square convergence analysis of the LMF algorithm under non-Gaussian noise and showed that the algorithm's stability depends strongly on the weights considered. In [6], the authors devised an LMF analysis for non-Gaussian conditions derived from combining the LMF with fractional-order calculus. In addition, their method eliminates the Gamma function, which tends to increase the computational complexity of the system; hence, the algorithm converges faster with minimal error and less computational complexity. In a similar manner, the author in [13] presented, for the first time, the convergence and steady-state behavior of the normalized LMF (NLMF) algorithm. However, sub-optimal solutions for the adaptive coefficients are obtained when both the system input and output are contaminated [1]. Later, other appropriate modifications were incorporated into the NLMF algorithm to improve its stability and convergence properties [11], [14]–[17].

The authors in [18] and [19] employed the total least squares (TLS) algorithm, with the help of the singular value decomposition (SVD), to analyze an unknown system with interference in both the input and output. Similarly, [20]



proposed a TLS solution for both finite impulse response (FIR) and infinite impulse response (IIR) adaptive filters. The proposed method outperformed the LMS and the recursive least squares algorithms. Also, [21] and [22] presented a statistical approach for the total least mean squares (TLMS) algorithm and carried out a comparative analysis of its convergence.

This work proposes an iterative total least mean fourth (TLMF) algorithm to solve an adaptive filtering problem where both the input and the output are corrupted with noise. A stochastic approach related to the TLMS algorithm is adopted to develop the TLMF algorithm. In the experimental analysis, both Gaussian and non-Gaussian noise are considered in evaluating the devised algorithm's convergence performance.

The rest of the paper is arranged as follows: the problem formulation of the TLMF algorithm is presented in Section II, Section III discusses the devised algorithm's results, and Section IV summarizes the findings.

II. PROBLEM FORMULATION

Total Least Mean Fourth (TLMF) Algorithm

Consider an unknown adaptive FIR system of filter length $N$ and weight response $h_i$ in which both the input and the output are contaminated with noise, as shown in Fig. 1. The noise-free input data vector $x_i$ is generated as

$$x_i = \left[ x(i), x(i-1), \ldots, x(i-N+1) \right]^T \in \mathbb{R}^N, \qquad (1)$$

where $x(i)$ is the input signal. Meanwhile, the contaminated desired output is written as

$$\tilde{d}(i) = d(i) + \Delta d(i), \qquad (2)$$

where $d(i) = h_i^T x_i$ is the system's output, and $\Delta d(i)$ is an AWGN with zero mean and variance $\sigma^2_{\Delta d}$.

Initially, in an adaptive filtering problem, it is assumed that the input signal is known and that interference occurs only at the system's output [23], [24]. However, this assumption is often impractical because interference may also occur at the input due to sampling, quantization, or modeling. In such cases, it is more realistic to incorporate interference into both the input and output signals. Figure 1 represents the unknown adaptive system with interference at both the input and the output. The corrupted input vector can be expressed as

$$\tilde{x}_i = x_i + \Delta x_i, \qquad (3)$$

where $\Delta x_i = \left[ \Delta x(i), \Delta x(i-1), \ldots, \Delta x(i-N+1) \right]^T \in \mathbb{R}^N$ denotes the noise in the input vector $x_i$, and $\Delta x(i)$ is an AWGN, uncorrelated with the output noise, with zero mean and variance $\sigma^2_{\Delta x}$. It can be deduced from Fig. 1 that the system estimates an output $y(i) = h_i^T \tilde{x}_i$ from each input signal $\tilde{x}(i) = x(i) + \Delta x(i)$, and the system's error is estimated by comparing the output $y(i)$ with $\tilde{d}(i)$. The autocorrelation matrix of the corrupted input vector is $R_{\tilde{x}_i} = E\{\tilde{x}_i \tilde{x}_i^T\}$, and the cross-correlation with the desired output signal is $P_{\tilde{x}_i} = E\{\tilde{d}(i)\, \tilde{x}_i\}$.

The LMF algorithm minimizes the cost function $J = E\{e^4(i)\}$ and, because of the corrupted input data, generates only a suboptimal estimate of the solution to the adaptive filtering problem. Here, transforming the adaptive filtering problem into a TLMF problem helps to improve the estimate under a corrupted input [25], [26]. The following $(N+1) \times 1$ extended data vector $\tilde{g}_i$ provides a more general form of the TLMF signal model. The extended data vector can be defined as

$$\tilde{g}_i = \left[ \tilde{x}_i^T \;\; \tilde{d}(i) \right]^T = \begin{pmatrix} \tilde{x}_i \\ \tilde{d}(i) \end{pmatrix}. \qquad (4)$$

Equally, the error $e(i)$ in terms of the extended data vector is written as

$$e(i) = y(i) - \tilde{d}(i) = h_i^T \tilde{x}_i - \tilde{d}(i) \qquad (5)$$
$$= \left[ h_i^T \;\; -1 \right] \begin{pmatrix} \tilde{x}_i \\ \tilde{d}(i) \end{pmatrix} \qquad (6)$$
$$= \tilde{h}_i^T \tilde{g}_i, \qquad (7)$$

where $\tilde{h}_i = \left[ h_i^T \;\; -1 \right]^T$ is an $(N+1) \times 1$ vector. The cost function of the TLMF algorithm is $J(\tilde{h}_i) = E\{\zeta^4(i)\}$, where $\zeta(i)$ is the total error, written as

$$\zeta(i) = \frac{e(i)}{\sqrt{\tilde{h}_i^T \tilde{h}_i}} = \frac{\tilde{h}_i^T \tilde{g}_i}{\sqrt{\tilde{h}_i^T \tilde{h}_i}} = \frac{\tilde{g}_i^T \tilde{h}_i}{\sqrt{\tilde{h}_i^T \tilde{h}_i}}. \qquad (8)$$

Accordingly, the TLMF recursive weight update equation is written as

$$\tilde{h}_{i+1} = \tilde{h}_i - \mu \nabla_{\tilde{h}_i} \tilde{J}, \qquad (9)$$

where $\mu$ is the step size and the gradient $\nabla_{\tilde{h}_i} \tilde{J}$ is obtained as follows:

$$\nabla_{\tilde{h}_i} \tilde{J} = \frac{\partial}{\partial \tilde{h}_i} \left[ \zeta^4(i) \right]
= 4 \zeta^3(i)\, \frac{\partial \zeta(i)}{\partial \tilde{h}_i}
= 4 \zeta^3(i)\, \frac{\partial}{\partial \tilde{h}_i} \left[ \frac{\tilde{h}_i^T \tilde{g}_i}{\sqrt{\tilde{h}_i^T \tilde{h}_i}} \right]
= \frac{4 \zeta^3(i)}{\tilde{h}_i^T \tilde{h}_i} \left[ \tilde{g}_i \sqrt{\tilde{h}_i^T \tilde{h}_i} - \tilde{h}_i^T \tilde{g}_i \frac{\tilde{h}_i}{\sqrt{\tilde{h}_i^T \tilde{h}_i}} \right]
= \frac{4 \zeta^3(i)}{(\tilde{h}_i^T \tilde{h}_i)^{3/2}} \left[ \tilde{g}_i \cdot \tilde{h}_i^T \tilde{h}_i - \tilde{h}_i^T \tilde{g}_i \cdot \tilde{h}_i \right]. \qquad (10)$$

Now, using $e(i) = \tilde{g}_i^T \tilde{h}_i = \tilde{h}_i^T \tilde{g}_i$ and $\|\tilde{h}_i\| = \sqrt{\tilde{h}_i^T \tilde{h}_i}$, equation (10) becomes

$$\nabla_{\tilde{h}_i} \tilde{J} = \frac{4}{\|\tilde{h}_i\|^6} \left[ \|\tilde{h}_i\|^2 e^3(i)\, \tilde{g}_i - e^4(i)\, \tilde{h}_i \right]. \qquad (11)$$

Substituting (11) into (9), the recursive equation of the TLMF algorithm becomes

$$\tilde{h}_{i+1} = \tilde{h}_i + \frac{4 e^3(i)\, \mu}{\|\tilde{h}_i\|^6} \left[ e(i)\, \tilde{h}_i - \|\tilde{h}_i\|^2\, \tilde{g}_i \right]. \qquad (12)$$

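Assembled from (4)–(12), one iteration of the recursion can be sketched in NumPy as follows. This is an illustrative reimplementation under our own variable names, not the authors' code; the rescaling that recovers the filter estimate from the extended vector is included for completeness:

```python
import numpy as np

def tlmf_step(h_ext, x_noisy, d_noisy, mu):
    """One TLMF update of the extended weight vector (an (N+1)-vector of the
    form [h; -1]), following
        h_ext <- h_ext + (4 mu e^3 / ||h_ext||^6) (e h_ext - ||h_ext||^2 g)."""
    g = np.append(x_noisy, d_noisy)   # extended data vector g = [x_noisy; d_noisy]
    e = h_ext @ g                     # extended error e = h_ext^T g
    nrm2 = h_ext @ h_ext              # ||h_ext||^2
    return h_ext + (4.0 * mu * e**3 / nrm2**3) * (e * h_ext - nrm2 * g)

def extract_weights(h_ext):
    """Recover the N-tap filter estimate by rescaling so the last entry is -1."""
    return -h_ext[:-1] / h_ext[-1]
```

A useful sanity property: when the extended weight equals the true solution and the data sample is noiseless, the extended error vanishes, so the update leaves the weights unchanged; i.e., the true solution is a fixed point of the recursion.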
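As a quick end-to-end sanity check of the recursion, the self-contained sketch below simulates the noisy-input/noisy-output model of Fig. 1 and runs the TLMF update, reporting the normalized weight mismatch in dB. All parameter values here (test filter, step size, noise variance, iteration count) are our own illustrative choices, not the paper's exact setup, and independent regressors are drawn instead of a tapped-delay line for simplicity:

```python
import numpy as np

def run_tlmf_demo(h_true, mu=0.002, n_iter=50000, noise_var=1e-3, seed=0):
    """Identify h_true when BOTH the input and the output are noisy, using the
    TLMF recursion on the extended weight vector; returns the final normalized
    weight difference in dB (more negative is better)."""
    rng = np.random.default_rng(seed)
    N = len(h_true)
    h_ext = np.append(np.zeros(N), -1.0)           # extended weight, initialized as [0; -1]
    for _ in range(n_iter):
        x = rng.standard_normal(N)                 # clean regressor (i.i.d. for simplicity)
        x_noisy = x + rng.normal(0.0, np.sqrt(noise_var), N)        # corrupted input
        d_noisy = h_true @ x + rng.normal(0.0, np.sqrt(noise_var))  # corrupted output
        g = np.append(x_noisy, d_noisy)            # extended data vector
        e = h_ext @ g                              # extended error
        nrm2 = h_ext @ h_ext
        h_ext = h_ext + (4.0 * mu * e**3 / nrm2**3) * (e * h_ext - nrm2 * g)
    h_est = -h_ext[:N] / h_ext[N]                  # rescale so the last entry is -1
    return 10.0 * np.log10(np.sum((h_est - h_true)**2) / np.sum(h_true**2))

if __name__ == "__main__":
    print(run_tlmf_demo(np.array([0.5, -0.3, 0.2])))
```

With a small step size and low noise variance, the mismatch should end up well below the 0 dB of the all-zero initialization.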
The updated TLMF solution is obtained using

$$h_{i+1} = -\frac{\tilde{h}_{i+1}(1:N)}{\tilde{h}_{i+1}(N+1)}. \qquad (13)$$

Finally, Table I presents the computational complexity of the TLMF algorithm together with that of the related algorithms (LMS, TLMS, LMF).

TABLE I
COMPUTATIONAL COMPLEXITY OF DIFFERENT ADAPTIVE FILTERING ALGORITHMS

Algorithm   Multiplications   Additions
LMS         N                 N
TLMS        2N + 2            2N + 1
LMF         3N                3N
TLMF        3N + 7            N + 2

Next, the convergence properties of the devised algorithm are investigated through simulations.

III. RESULTS AND DISCUSSION

This section presents the simulation analysis of the proposed TLMF algorithm. The devised algorithm is compared with existing adaptive algorithms, namely the LMS and the TLMS. The evaluation criterion adopted here is the normalized weight difference (NWD), expressed as

$$\mathrm{NWD}\,(\mathrm{dB}) = 10 \log_{10} \left( \frac{\|\hat{h} - h_{opt}\|^2}{\|h_{opt}\|^2} \right), \qquad (14)$$

where $\hat{h}$ is the estimated weight and $h_{opt}$ is the optimal weight. In this experiment, the unknown impulse response of the system considered is $h_{opt} = \left[ -0.3, -0.9, 0.8, -0.7, 0.6 \right]^T$ with a length $N = 5$. The input and output signals were contaminated with AWGN with zero mean and equal variance, i.e., $\sigma^2_{\Delta x} = \sigma^2_{\Delta d} = 0.001$. The maximum and minimum learning rates used are $\mu_{max} = 0.3$ and $\mu_{min} = 0.01$. Four different SNR levels of 10 dB, 12 dB, 14 dB, and 20 dB were considered in the simulations. The Monte Carlo simulation results are the average of 100 independent runs.

Figures 2, 3, and 4 present the comparative results for the LMS, TLMS, and proposed TLMF algorithms at SNR levels of 10 dB, 12 dB, and 14 dB, respectively. In these figures, the proposed TLMF algorithm reaches the best convergence state, with an NWD of −12.5 dB, −14 dB, and −16.3 dB at around 1000, 1500, and 2500 iterations, respectively. Meanwhile, the LMS and the TLMS attain a steady state at the same iterations: the LMS algorithm converges to an NWD of −7 dB, −9 dB, and −11 dB, while the TLMS reaches an NWD of −9.5 dB, −10.5 dB, and −11.5 dB, respectively. Thus, the LMS and TLMS converge faster, but to a higher NWD, than the proposed TLMF.

[Fig. 2. NWD of LMS, TLMS and TLMF at 10 dB SNR.]

[Fig. 3. NWD of LMS, TLMS and TLMF at 12 dB SNR.]

Figure 5 depicts the convergence behavior of the LMS, TLMS, and TLMF algorithms when uniform noise is used with a learning rate of 0.2. The unknown system response used is $h_{opt} = \left[ -0.9, -0.3, -0.2, -0.2, 0.1 \right]^T$, and the variances of the input and output noise are 0.1 and 0.099, respectively. The SNR used is 20 dB. This figure shows that the TLMF outperforms the LMS and TLMS, with a lower steady state.

Finally, Fig. 6 shows the convergence behavior of the LMS, TLMS, and TLMF algorithms when uniform noise is used with a different learning rate for each algorithm. The learning rates for the LMS, TLMS, and TLMF algorithms are 0.006, 0.02, and 0.089, respectively; these are chosen to achieve the same convergence rate. The remaining parameters are the same as those used in Fig. 5. This figure shows that the TLMF performance surpasses that of the LMS and TLMS, with a lower steady state.

IV. CONCLUSION

In this paper, we presented an efficient recursive TLMF algorithm in which the input and output signals are corrupted with the same level of AWGN. A stochastic TLMS approach was adopted to devise a TLMF algorithm that minimizes a fourth-power cost function. The proposed TLMF algorithm demonstrated good performance compared to the existing LMS and TLMS algorithms. Moreover, the proposed TLMF algorithm performs better than its counterparts, with a lower steady state, when uniform noise is used. Future work will concentrate on the convergence analysis and on the effect of the noise on the performance of the TLMF algorithm. The level of the noise at the input is also an important parameter that must be looked at. The application of the newly devised algorithm to real applications is also

of paramount importance. All of these issues, and timely related ones, will be considered in future studies of this algorithm.

[Fig. 4. NWD of LMS, TLMS and TLMF at 14 dB SNR.]

[Fig. 5. NWD of LMS, TLMS and TLMF at 20 dB SNR using uniform noise.]

[Fig. 6. NWD of LMS, TLMS and TLMF at 20 dB SNR using uniform noise.]

ACKNOWLEDGMENT

The authors would like to acknowledge the support provided by the DROC at KFUPM for funding under the Interdisciplinary Research Center for CSS through project No. INCS2101.

REFERENCES

[1] S. Haykin, Adaptive Filter Theory. Englewood Cliffs, NJ, USA: Prentice-Hall, 1991.
[2] A. H. Sayed, Fundamentals of Adaptive Filtering. John Wiley & Sons, 2003.
[3] E. Walach and B. Widrow, "The least mean fourth (LMF) adaptive algorithm and its family," IEEE Transactions on Information Theory, vol. 30, no. 2, pp. 275–283, 1984.
[4] M. O. Sayin, N. D. Vanli, and S. S. Kozat, "A novel family of adaptive filtering algorithms based on the logarithmic cost," IEEE Transactions on Signal Processing, vol. 62, no. 17, pp. 4411–4424, 2014.
[5] P. Hubscher and J. Bermudez, "An improved stochastic model for the least mean fourth (LMF) adaptive algorithm," in 2002 IEEE International Symposium on Circuits and Systems (ISCAS), vol. 1, 2002.
[6] S. Khan, N. Ahmed, M. A. Malik, I. Naseem, R. Togneri, and M. Bennamoun, "FLMF: Fractional least mean fourth algorithm for channel estimation in non-Gaussian environment," in 2017 International Conference on Information and Communication Technology Convergence (ICTC), 2017, pp. 466–470.
[7] S. Chaithanya, P. Jayaprakash et al., "Power quality improvement strategy for DSTATCOM using adaptive sign regressor LMF control," in 2020 IEEE International Conference on Power Electronics, Drives and Energy Systems (PEDES), 2020, pp. 1–6.
[8] Y. He, R. Wang, X. Wang, J. Zhou, and Y. Yan, "Novel adaptive filtering algorithms based on higher-order statistics and geometric algebra," IEEE Access, vol. 8, pp. 73767–73779, 2020.
[9] S. Guan, Q. Cheng, and F. Liu, "One optimized LMF algorithm in low SNR," Procedia Computer Science, vol. 199, pp. 26–33, 2022.
[10] P. I. Hubscher, J. C. M. Bermudez, and V. H. Nascimento, "A mean-square stability analysis of the least mean fourth adaptive algorithm," IEEE Transactions on Signal Processing, vol. 55, no. 8, pp. 4018–4028, 2007.
[11] E. Eweda, "Mean-square stability analysis of a normalized least mean fourth algorithm for a Markov plant," IEEE Transactions on Signal Processing, vol. 62, no. 24, pp. 6545–6553, 2014.
[12] S. H. Cho, S. D. Kim, and K. Y. Jeon, "Statistical convergence of the adaptive least mean fourth algorithm," in Proceedings of Third International Conference on Signal Processing (ICSP'96), vol. 1, 1996, pp. 610–613.
[13] A. Zerguine, "Convergence and steady-state analysis of the normalized least mean fourth algorithm," Digital Signal Processing, vol. 17, no. 1, pp. 17–31, 2007.
[14] A. Zerguine, M. K. Chan, T. Y. Al-Naffouri, M. Moinuddin, and C. F. Cowan, "Convergence and tracking analysis of a variable normalised LMF (XE-NLMF) algorithm," Signal Processing, vol. 89, no. 5, pp. 778–790, 2009.
[15] A. Zerguine, M. Moinuddin, and S. A. A. Imam, "A noise constrained least mean fourth (NCLMF) adaptive algorithm," Signal Processing, vol. 91, no. 1, pp. 136–149, 2011.
[16] E. Eweda and A. Zerguine, "New insights into the normalization of the least mean fourth algorithm," Signal, Image and Video Processing, vol. 7, no. 2, pp. 255–262, 2013.
[17] S. M. Asad, M. Moinuddin, A. Zerguine, and J. Chambers, "A robust and stable variable step-size design for the least-mean fourth algorithm using quotient form," Signal Processing, vol. 162, pp. 196–210, 2019.
[18] G. H. Golub and C. F. Van Loan, "An analysis of the total least squares problem," SIAM Journal on Numerical Analysis, vol. 17, no. 6, pp. 883–893, 1980.
[19] S. Van Huffel and J. Vandewalle, The Total Least Squares Problem: Computational Aspects and Analysis. SIAM, 1991.
[20] K. Gao, M. O. Ahmad, and M. Swamy, "Learning algorithm for total least-squares adaptive signal processing," Electronics Letters, vol. 28, no. 4, pp. 430–432, 1992.
[21] D.-Z. Feng, Z. Bao, and L.-C. Jiao, "Total least mean squares algorithm," IEEE Transactions on Signal Processing, vol. 46, no. 8, pp. 2122–2130, 1998.
[22] S. Javed and N. A. Ahmad, "A stochastic total least squares solution of adaptive filtering problem," The Scientific World Journal, vol. 2014, 2014.
[23] X. Wang and J. Han, "Affine projection algorithm based on least mean fourth algorithm for system identification," IEEE Access, vol. 8, pp. 11930–11938, 2020.
[24] M. M. U. Faiz and I. Kale, "Removal of multiple artifacts from ECG signal using cascaded multistage adaptive noise cancellers," Array, vol. 14, p. 100133, 2022.
[25] C. E. Davila, "An efficient recursive total least squares algorithm for FIR adaptive filtering," IEEE Transactions on Signal Processing, vol. 42, no. 2, pp. 268–280, 1994.
[26] D.-Z. Feng, X.-D. Zhang, D.-X. Chang, and W. X. Zheng, "A fast recursive total least squares algorithm for adaptive FIR filtering," IEEE Transactions on Signal Processing, vol. 52, no. 10, pp. 2729–2737, 2004.

