
IEEE SIGNAL PROCESSING LETTERS, VOL. 30, 2023

Generalized Soft-Root-Sign Based Robust Sparsity-Aware Adaptive Filters

Vinal Patel, Sankha Subhra Bhattacharjee, and Mads Græsbøll Christensen, Senior Member, IEEE

Abstract—Robust adaptive filters utilizing hyperbolic cosine and correntropy functions have been successfully employed in non-Gaussian noisy environments. However, these filters suffer from high steady-state misalignment due to significant weight updates in the presence of outliers. In addition, several practical systems exhibit sparse characteristics, which is not taken into account by these filters. In this paper, a generalized soft-root-sign (GSRS) function is proposed and the corresponding GSRS adaptive filter is designed. The proposed GSRS provides a negligible weight update in the occurrence of large outliers and thereby results in lower steady-state misalignment. To further improve modelling performance for sparse systems while retaining robustness, sparsity-aware GSRS algorithms are also developed in this paper. The bound on the learning rate and the computational complexity of the proposed algorithms are also investigated. Simulation studies confirm the improved convergence characteristics achieved by the proposed algorithms over existing algorithms.

Index Terms—Robust adaptive filter, l0-norm, hyperbolic cosine functions, non-Gaussian noise, system identification.

I. INTRODUCTION

ADAPTIVE filters have found a wide range of applications, such as system identification (SI), echo cancellation, room equalization, feedback cancellation, and power electronics, to name a few [1], [2], [3]. The most commonly used adaptive algorithm for these applications is the least mean square (LMS) algorithm, which is the simplest algorithm in the class of stochastic gradient descent (GD) algorithms minimizing the mean square error (MSE) cost. In many practical applications involving adaptive filters, such as underwater acoustics, switched-mode power supplies, dimmers, uninsulated electrical switches, sinusoidal estimation, and communication [3], [4], [5], [6], the background noise can be of a non-Gaussian or impulsive nature. In such cases, adaptive algorithms based on the MSE cost, like the LMS algorithm, do not provide optimal performance and may even diverge [7].

To tackle this issue, several robust cost functions have been proposed recently. The maximum correntropy criterion (MCC), which maximizes the similarity of the desired and output signals, was proposed in [8] and was later generalized for different non-Gaussian noise distributions as the generalized MCC (GMCC) in [9], with several applications [10], [11], [12]. Recently, the logarithmic hyperbolic cosine adaptive filter (LHCAF) was proposed [13], [14], which has become quite popular due to its improved robust performance and has been adopted in several scenarios [4], [15], [16]. However, as we will see, the gradient of the logarithmic hyperbolic cost (LHC) saturates for high values of the error (which correspond to high-amplitude impulsive disturbances), which limits the robustness of the LHC. Several other robust learning methods have also been reported [17], [18], [19]; however, these methods also suffer from high steady-state misalignment in the presence of large outliers.

In several applications, such as television transmission channels [20], underwater acoustics [21], and feedback paths [22], it has been observed that the system to be modelled has sparse characteristics, and incorporating a sparsity-aware penalty in the cost function allows this a priori knowledge to be exploited to improve adaptive filter convergence and steady state [2], [20], [23]. In this class of algorithms, the zero attraction based sparsity-aware algorithms, namely the zero attraction LMS (ZA-LMS) and reweighted ZA-LMS (RZA-LMS) algorithms, were proposed in [24] and have been used extensively in several applications [21], [22]. To achieve sparsity-aware modelling while having robustness to non-Gaussian and impulsive disturbances, [25] incorporated the ZA and RZA penalties into the GMCC based robust algorithm.

To improve the achieved robustness and to reduce steady-state misalignment, we propose a generalized soft-root-sign (GSRS) cost function and derive the corresponding robust adaptive algorithm. The gradient of the GSRS function dies down for very high values of the error and hence has the ability to achieve a high degree of robustness. Moreover, to model sparse systems while achieving robustness, we incorporate the ZA and RZA zero attraction penalties in the proposed GSRS cost function and design the ZA-GSRS and RZA-GSRS algorithms. To further improve the performance in sparse scenarios, a novel sparsity-inducing norm, namely the multivariate Laplace function (LF) [26], is combined with the GSRS robust cost and the sparsity-aware and robust LF-GSRS algorithm is proposed.

Manuscript received 6 January 2023; revised 19 February 2023; accepted 19 February 2023. Date of publication 3 March 2023; date of current version 13 March 2023. The associate editor coordinating the review of this manuscript and approving it for publication was Dr. Yunlong Cai. (Corresponding author: Vinal Patel.)
Vinal Patel is with the Department of Electrical and Electronics Engineering, Indian Institute of Information Technology and Management, Gwalior 474015, India (e-mail: [email protected]).
Sankha Subhra Bhattacharjee and Mads Græsbøll Christensen are with the Audio Analysis Lab, Department of Architecture, Design and Media Technology, Aalborg University, 9220 Aalborg, Denmark (e-mail: [email protected]; [email protected]).
Digital Object Identifier 10.1109/LSP.2023.3252412

II. PROPOSED METHOD

Let us consider a SI problem, where x(n) is the input signal and x(n) ∈ R^{L×1} is the tap-delayed input signal vector, with n being the discrete sample index. The unknown system is modelled using an adaptive filter with the weights w ∈ R^{L×1}, where L is the filter length. The schematic of the SI scenario used to model an acoustic path is shown in Fig. 1. In a conventional LMS algorithm, the cost function e^2(n) is used, which is an

1070-9908 © 2023 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission.
See https://www.ieee.org/publications/rights/index.html for more information.

Authorized licensed use limited to: INDIAN INST OF INFO TECH AND MANAGEMENT. Downloaded on May 03,2025 at 10:21:30 UTC from IEEE Xplore. Restrictions apply.

Fig. 1. Schematic of acoustic path SI using adaptive filter.

instantaneous and stochastic approximation of the mean square error (MSE), defined as E{e^2(n)}, where the error signal e(n) is given as e(n) = d(n) − y(n) = d(n) − x^T(n)w(n), where d(n) = w_opt^T x(n) + v(n) is the output of the unknown system, w_opt is the optimal vector representing the impulse response of the unknown system, and v(n) is the background noise. However, the conventional LMS algorithm based on E[e^2(n)] ≈ e^2(n) is not robust to non-Gaussian and impulsive noises [9], [15], [27], [28], [29], [30].
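The SI setup and LMS baseline described above can be sketched as follows (a minimal NumPy sketch; the filter length, step size, noise level, and random unknown system are illustrative choices, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

L = 8                                  # filter length (illustrative)
w_opt = rng.standard_normal(L)         # unknown system impulse response w_opt
w = np.zeros(L)                        # adaptive filter weights w(n)
mu = 0.01                              # LMS step size (illustrative)

x_sig = rng.standard_normal(5000)      # input signal x(n) ~ N(0, 1)
for n in range(L, len(x_sig)):
    x = x_sig[n - L:n][::-1]           # tap-delayed input vector x(n)
    v = 0.01 * rng.standard_normal()   # Gaussian background noise v(n)
    d = w_opt @ x + v                  # d(n) = w_opt^T x(n) + v(n)
    e = d - w @ x                      # e(n) = d(n) - y(n)
    w = w + mu * e * x                 # LMS update minimizing e^2(n)

print(np.linalg.norm(w_opt - w))       # small misalignment under Gaussian noise
```

Under purely Gaussian noise this converges cleanly; the point of the sections that follow is that a single large outlier in e(n) produces a proportionally large update mu * e * x, which is what the GSRS cost is designed to suppress.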

A. Generalized Soft-Root-Sign Based Robust Adaptive Filter

To overcome the issue of sub-optimal steady-state performance present in conventional robust learning methods, a generalized robust cost function based on the soft-root-sign (SRS) function, called the generalized SRS (GSRS), is presented in this paper. The proposed cost function is given by

J[e(n)] = |e(n)|^γ / ( |e(n)|^γ/α + exp(−β|e(n)|^γ) )    (1)

where α is the scaling parameter, β is the slope parameter, and γ is the shape parameter. By using the cost function given in (1), a generalized soft-root-sign (GSRS) algorithm is proposed in this paper. By applying the stochastic GD approach [31] to (1), the weight update of the GSRS is derived as

w(n + 1) = w(n) − μ′ ∂J[e(n)]/∂w(n)    (2)
         = w(n) + μ ξ(n) x(n)    (3)

ξ(n) = exp(β|e(n)|^γ) |e(n)|^(γ−1) sgn[e(n)] (β|e(n)|^γ + 1) / [exp(β|e(n)|^γ)|e(n)|^γ + α]^2    (4)

and μ = μ′α^2γ is the learning rate, with μ′ the step size in (2).

Fig. 2. Comparison of: (a) J[e(n)] and (b) J′[e(n)] with β = 1, γ = 2 and different α; (c) J[e(n)] and (d) J′[e(n)] with α = 1, γ = 2 and different β; (e) J[e(n)] and (f) J′[e(n)] with α = 1, β = 1 and different γ; for LHCAF and GSRS.
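The cost (1) and the error nonlinearity (4) can be sketched numerically as follows (a sketch, not the authors' code; the overflow-safe rewriting of (4) and the default parameters α = β = 1, γ = 2 are our own choices):

```python
import numpy as np

def gsrs_cost(e, alpha=1.0, beta=1.0, gamma=2.0):
    """GSRS cost (1): J[e] = |e|^g / (|e|^g / a + exp(-b |e|^g))."""
    t = np.abs(e) ** gamma
    return t / (t / alpha + np.exp(-beta * t))

def gsrs_xi(e, alpha=1.0, beta=1.0, gamma=2.0):
    """Error nonlinearity xi(n) of (4), rewritten by multiplying numerator
    and denominator by exp(-2 beta |e|^g) so large errors cannot overflow."""
    t = np.abs(e) ** gamma
    num = np.exp(-beta * t) * np.abs(e) ** (gamma - 1) * np.sign(e) * (beta * t + 1)
    return num / (t + alpha * np.exp(-beta * t)) ** 2

# The gradient dies down for large errors, so an outlier produces an
# almost-zero weight update while moderate errors still adapt normally.
print(gsrs_xi(0.5), gsrs_xi(100.0))
```

A finite-difference check of dJ/de against α^2·γ·ξ(e) confirms that ξ(n) as implemented here is the derivative of (1) up to the constant α^2γ absorbed into the learning rate.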
To illustrate the robust behaviour of the GSRS algorithm proposed in (3), we compare the cost function (1) and its derivative, J′[e(n)], with those of the recently proposed LHC function [13] for different parameter variations in Fig. 2. It can be seen in Fig. 2(a) that the cost function based on the GSRS shows less steepness for large error values, i.e., in the presence of large outliers/impulsive disturbances [32]. From Fig. 2(b), it can be seen that the derivative of the LHC cost function saturates to a large non-zero value for large outliers in the error. On the other hand, the derivative of the GSRS is strictly increasing for small and moderate values of the error, where outliers are much less likely to be present, and when the error signal reaches a very high value due to the presence of large outliers, the derivative of the GSRS gradually decreases towards zero. Hence, when outliers are less likely to be present in the error signal, the GSRS behaves similarly to an MSE based algorithm, where the adaptive filter weights are updated as usual; but when large spurious outliers are present in the error signal, the GSRS has a very small gradient (→ 0), which leads to a very small or insignificant change in the adaptive filter weights. When large outliers are present, the gradient of the LHCAF is saturated and hence the adaptive filter weights change significantly due to spurious impulsive disturbances, which causes the filter weights to diverge significantly from the optimal weights of the SI problem. Hence, the proposed GSRS algorithm can provide improved robustness over the LHCAF. The rate at which the derivative of the GSRS goes to zero for large error values is controlled by the parameter β, as shown in Fig. 2(c) and (d), i.e., a larger β value shows slower convergence to zero for large errors. Moreover, the sensitivity for small errors is controlled by the parameter γ, as shown in Fig. 2(e) and (f). The proposed GSRS algorithm thus provides more degrees of freedom: the bandwidth (range of error values) is selected by the parameter α, the steepness for large error values is controlled by the parameter β, and the sensitivity (slope) for small error values is controlled by the parameter γ, as shown in Fig. 2.

B. Sparsity-Aware Generalized Soft-Root-Sign Based Robust Adaptive Filter

The LHCAF [13] and its sparsity-aware versions ZA-LHCAF and RZA-LHCAF [15] suffer from high steady-state misalignment due to significant weight updates in the presence of large outliers. To improve the robustness of the filter while exploiting sparsity, we apply zero-attraction techniques to the GSRS and propose a set of sparsity-aware GSRS algorithms. The cost function for the zero attraction GSRS (ZA-GSRS) algorithm is given as

Js[e(n)] = J[e(n)] + φ ||w(n)||_1,    (5)

where J[e(n)] is defined in (1) and φ is the regularization parameter. Applying the stochastic GD method to (5), the weight


update expression of the ZA-GSRS algorithm is written as

w(n + 1) = w(n) + μ ξ(n) x(n) − ρ sgn[w(n)],    (6)

where ξ(n) is the same as given in (4) and ρ = μφ is the zero attraction control parameter. However, the performance of zero-attracted sparsity-aware algorithms deteriorates when the system characteristics change from sparse to non-sparse [7], [24], [25]. To enhance the SI performance for both sparse and non-sparse systems, a log-sum based sparse penalty function known as re-weighted zero-attraction (RZA) was introduced in [24]. By incorporating the RZA penalty term with the GSRS cost, the cost function for the RZA-GSRS can be designed as

Js[e(n)] = J[e(n)] + φ Σ_{j=0}^{L−1} log[1 + |w_j(n)|/δ′]    (7)

and, using stochastic GD, the corresponding weight update expression for the RZA-GSRS can be written as

w(n + 1) = w(n) + μ x(n) ξ(n) − ρ sgn[w(n)] / (1 + δ|w(n)|),    (8)

where ξ(n) is given in (4), ρ = μφ/δ′, and δ is the shrinkage parameter. As stated, the l0-norm is an excellent candidate for the sparse penalty function. However, as direct minimization of the l0-norm is known to be NP-hard [20], [23], several approximations of the l0-norm have been widely used as an alternative [23]. One widely used approximation is the multivariate Laplace function (LF) [23], [26]. By incorporating the LF penalty function in the GSRS cost function, the overall cost function is

Js[e(n)] = J[e(n)] + φ Σ_{j=0}^{L−1} [1 − exp(−η|w_j(n)|)]    (9)

where η controls the smoothness of the approximation. The weight update expression obtained using the stochastic GD approach is

w(n + 1) = w(n) + μ x(n) ξ(n) − ρ sgn[w(n)] exp(−η|w(n)|),    (10)

where ξ(n) is as given in (4), ρ = μηφ, and (10) is referred to as the LF-GSRS algorithm.
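The three zero-attraction terms appearing in the ZA-GSRS, RZA-GSRS, and LF-GSRS updates (6), (8), and (10) can be isolated as follows (a sketch; the coefficient and parameter values are illustrative, not from the paper):

```python
import numpy as np

def za_attractor(w, rho):
    """Zero-attraction term of the ZA update (6): -rho sgn[w(n)]."""
    return -rho * np.sign(w)

def rza_attractor(w, rho, delta):
    """Reweighted term of the RZA update (8): -rho sgn[w(n)] / (1 + delta |w(n)|)."""
    return -rho * np.sign(w) / (1.0 + delta * np.abs(w))

def lf_attractor(w, rho, eta):
    """Laplace-function term of the LF update (10): -rho sgn[w(n)] exp(-eta |w(n)|)."""
    return -rho * np.sign(w) * np.exp(-eta * np.abs(w))

# For RZA and LF the pull toward zero is strong on near-zero (inactive) taps
# and weak on large (active) taps; plain ZA pulls every tap equally hard.
w = np.array([1e-3, 0.8])
print(rza_attractor(w, rho=1e-4, delta=10.0))
print(lf_attractor(w, rho=1e-4, eta=10.0))
```

This selectivity is why the RZA and LF variants degrade less than plain ZA when the system turns out not to be sparse: large, genuinely active coefficients are left comparatively untouched.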
III. BOUND ON μ AND COMPUTATIONAL COMPLEXITY

The following assumptions are considered for the derivations [14], [31]: (A1) the additive noise v(n) is i.i.d. and independent of x(n); (A2) x(n) and d(n) are zero mean and jointly Gaussian, hence e(n) is zero mean Gaussian; (A3) the weight error vector w̃(n) = w_opt − w(n) is uncorrelated with d(n) and x(n). Subtracting (3) from the optimal weight vector w_opt and taking the expectation E[·] on both sides, we get

E[w̃(n + 1)] = E[w̃(n)] − μ E[x(n) f{e(n)}]    (11)

where f{e(n)} = ξ(n) is a nonlinear function of e(n) as given in (4). Using A2 and Lemma 1 of [14], we get

E[x(n) f{e(n)}] = E[e(n) f{e(n)}] E[x(n) e(n)] / E[e^2(n)]    (12)

Since e(n) is zero mean Gaussian and f{e(n)} is memoryless, using the Bussgang theorem [33], E[e(n) f{e(n)}] = E[f′{e(n)}] E[e^2(n)]. Hence, (12) can be written as E[x(n) f{e(n)}] = E[f′{e(n)}] E[x(n) e(n)]. Now, assuming w̃(n) is uncorrelated with d(n) and x(n) [31], we get E[x(n) e(n)] = R_xx E[w̃(n)]. Hence, (11) can be written as

E[w̃(n + 1)] = [I − μ E[f′{e(n)}] R_xx] E[w̃(n)]    (13)

Hence, to ensure convergence of the GSRS algorithm in the mean sense, μ should be bounded as

0 < μ < 2 / (E[f′{e(n)}] tr[R_xx]) ≤ 2 / (E[f′{e(n)}] λ_max[R_xx])    (14)

where tr[·] and λ_max[·] denote the trace and the largest eigenvalue of [·]. The bound on μ for the proposed ZA-GSRS, RZA-GSRS, and LF-GSRS is the same as in (14), since the expectations of the zero attraction terms, i.e., E[sgn[w(n)]], E[sgn[w(n)]/(1 + δ|w(n)|)], and E[sgn[w(n)]exp(−η|w(n)|)], respectively, are bounded for all values of w(n) [15], [22], and since ρ is a finite, bounded, and very small positive number.

TABLE I: COMPUTATIONAL COMPLEXITY COMPARISON

Table I reports the computational complexity of the proposed robust GSRS and its sparsity-aware robust versions along with other competing robust and sparsity-aware robust algorithms. The proposed GSRS algorithm and its sparsity-aware versions require γ + 2 extra multiplications compared with the LHCAF and its corresponding sparsity-aware versions, as seen from Table I. The value of γ in the GSRS is usually small (0 < γ ≤ 4), as can be seen from the simulation studies in Section IV (Table II). Hence, this slight increase in the computational burden of the proposed algorithms is compensated by the improvement in convergence achieved, as will be seen in the simulation studies.

IV. SIMULATION STUDY

A simulation study is presented to investigate the performance of the proposed algorithms for acoustic path SI scenarios. The performance is evaluated for different Gaussian and non-Gaussian noises in the presence of outliers. The mean square deviation, MSD (dB) = 10 log10[||w_opt − w(n)||^2], has been considered as the performance metric.

To investigate the performance of the proposed robust algorithm for the SI problem, the feedback path impulse response of a behind-the-ear (BTE) hearing aid, having a length of 90 samples, is considered as w_opt [22]. The output of the unknown system has been corrupted by a background noise given by the model v(n) = [1 − p(n)]F(n) + p(n)G(n), where p(n) is a binary i.i.d. process with Pr{p(n) = 1} = τ and Pr{p(n) = 0} = 1 − τ, 0 ≤ τ ≤ 1. F(n) and G(n) are mutually independent components, where the variance of F(n) is smaller than that of G(n), which corresponds to the large impulsive interferences [9]. Moreover, both noise processes are independent of p(n) [15].
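The contaminated-Gaussian noise model and the MSD metric used in this study can be sketched as follows (a sketch; the fixed variance scale of 1000 for G(n) stands in for the signal-dependent variance 1000 E[(x^T(n)ŵ)^2] used in the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

def msd_db(w_opt, w):
    """MSD (dB) = 10 log10 ||w_opt - w||^2, the performance metric above."""
    return 10.0 * np.log10(np.sum((w_opt - w) ** 2))

tau = 0.06                                    # Pr{p(n) = 1}, as in the paper
N = 100_000                                   # simulation length in samples

p = rng.random(N) < tau                       # binary i.i.d. switching process p(n)
F = rng.standard_normal(N)                    # nominal noise F(n) ~ N(0, 1)
G = np.sqrt(1000.0) * rng.standard_normal(N)  # impulsive component G(n); fixed
                                              # variance 1000 is illustrative here
v = np.where(p, G, F)                         # v(n) = [1 - p(n)]F(n) + p(n)G(n)

print(p.mean(), v.var())                      # ~6% impulsive samples; variance >> 1
```

Roughly a τ-fraction of the noise samples carry high-variance bursts, which is exactly the regime in which the MSE-based LMS diverges and the saturating-gradient GSRS update stays near the optimal weights.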


TABLE II: PARAMETERS USED IN SIMULATION STUDY AND STEADY-STATE MSD VALUES IN dB

Fig. 3. I) MSD (dB) curves for LMS [2], GMCC [9], LLAD [19], LHCAF [13], [14], EHCAF [34] and GSRS. II) MSD (dB) curves for (i) RZA-LMS [24], (ii) ZA-GMCC [25], (iii) RZA-GMCC [25], (iv) ZA-LHCAF [15], (v) RZA-LHCAF [15], (vi) ZA-GSRS, (vii) RZA-GSRS and (viii) LF-GSRS; for different noise distributions G(n): (a) Gaussian, (b) Laplace, (c) Uniform and (d) Binary.

The parameter τ has been taken as 0.06 [9]. The input signal is considered as x(n) ∼ N(0, 1). For the noise component F(n), four different distributions are considered, as follows: (a) Gaussian, ∼ N(0, 1); (b) Laplacian with zero mean and unit variance; (c) uniformly distributed in [−√3, √3]; (d) binary distributed in the set {−1, 1} with Pr{x = −1} = Pr{x = 1} = 0.5. In the simulation, G(n) has been considered as having zero mean and variance 1000 E[(x^T(n)ŵ)^2] [35]. Apart from this background noise, two outliers of magnitude 100 were added at the 40000th and 80000th samples, respectively. The length of the simulation is 100000 samples and 100 independent trials are considered. Fig. 3(I) shows the convergence characteristics of the proposed GSRS algorithm along with other existing algorithms for different noise distributions. The enhanced steady-state performance of the GSRS algorithm over other existing algorithms, for different non-Gaussian plus impulsive disturbances, can be observed from Fig. 3(I). The simulation parameters and steady-state MSD values for all algorithms are tabulated in Table II.

The performance of the proposed sparsity-aware robust ZA-GSRS, RZA-GSRS, and LF-GSRS algorithms is compared with that of other existing algorithms. A sparse SI scenario has been considered with a network echo path taken from [36], with x(n) and v(n) the same as in the robust SI case. An adaptive filter of length 500 is used to model the network echo path. Fig. 3(II)(a), (b), (c), and (d) shows the convergence behaviour of the proposed ZA-GSRS, RZA-GSRS, and LF-GSRS algorithms for Gaussian, Laplacian, uniform, and binary noise distributions plus impulsive disturbances, respectively. Fig. 3(II) illustrates the improved performance of the proposed RZA-GSRS and LF-GSRS algorithms in terms of the lower MSD achieved over other algorithms; we can observe that ZA-GSRS achieves a lower steady-state MSD compared with ZA-LHCAF, RZA-GSRS achieves a lower steady-state MSD compared with RZA-LHCAF, and the LF-GSRS algorithm achieves the lowest steady-state MSD. The simulation parameters used, along with the steady-state MSD values achieved by all algorithms, are given in Table II.

V. CONCLUSION

This paper presents a novel robust cost function based on the generalized soft-root-sign function, and the corresponding adaptive filter, GSRS, was developed. The proposed GSRS algorithm provides a negligible weight update in the presence of outliers, thereby bringing down the misalignment present in steady state and improving robustness. The performance of the GSRS algorithm was tested for an acoustic path SI scenario for different noise distributions along with high-amplitude outliers. The simulation studies confirm the enhanced performance of the proposed GSRS over existing algorithms. Moreover, to exploit the sparsity of an unknown sparse system while retaining robustness to outliers, the ZA-GSRS, RZA-GSRS, and LF-GSRS algorithms were developed, which show superior performance over existing sparsity-aware robust algorithms. The bound on the learning rate along with the computational complexity of the proposed algorithms is also discussed in this paper.


REFERENCES

[1] A. H. Sayed, Adaptive Filters. Hoboken, NJ, USA: Wiley, 2011.
[2] P. S. Diniz, Adaptive Filtering. Berlin, Germany: Springer, 2020.
[3] K. Kumar, R. Pandey, M. Karthik, S. S. Bhattacharjee, and N. V. George, "Robust and sparsity-aware adaptive filters: A review," Signal Process., vol. 189, 2021, Art. no. 108276.
[4] D. Liu and H. Zhao, "Sparsity-aware logarithmic hyperbolic cosine normalized subband adaptive filter algorithm with step-size optimization," IEEE Trans. Circuits Syst. II: Exp. Briefs, vol. 69, no. 9, pp. 3964–3968, Sep. 2022.
[5] Z. Zhou, L. Huang, M. G. Christensen, and S. Zhang, "Robust spectral analysis of multi-channel sinusoidal signals in impulsive noise environments," IEEE Trans. Signal Process., vol. 70, pp. 919–935, 2022.
[6] W. Zhu, L. Luo, J. Sun, and M. G. Christensen, "A new variable step size algorithm based hybrid active noise control system for gaussian noise with impulsive interference," in Proc. IEEE 6th Int. Conf. Comput. Commun., 2020, pp. 1072–1076.
[7] W. Ma, H. Qu, G. Gui, L. Xu, J. Zhao, and B. Chen, "Maximum correntropy criterion based sparse adaptive filtering algorithms for robust channel estimation under non-Gaussian environments," J. Franklin Inst., vol. 352, no. 7, pp. 2708–2727, 2015.
[8] W. Liu, P. P. Pokharel, and J. C. Principe, "Correntropy: Properties and applications in non-Gaussian signal processing," IEEE Trans. Signal Process., vol. 55, no. 11, pp. 5286–5298, Nov. 2007.
[9] B. Chen, L. Xing, H. Zhao, N. Zheng, and J. C. Príncipe, "Generalized correntropy for robust adaptive filtering," IEEE Trans. Signal Process., vol. 64, no. 13, pp. 3376–3387, Jul. 2016.
[10] S. Chen, Q. Zhang, T. Zhang, L. Zhang, L. Peng, and S. Wang, "Robust state estimation with maximum correntropy rotating geometric unscented Kalman filter," IEEE Trans. Instrum. Meas., vol. 71, 2022, Art. no. 2501714.
[11] Y. Yu, H. He, R. C. de Lamare, and B. Chen, "General robust subband adaptive filtering: Algorithms and applications," IEEE/ACM Trans. Audio, Speech, Lang. Process., vol. 30, pp. 2128–2140, 2022.
[12] Y. Chen and H. Zhao, "Improved robust total least squares adaptive filter algorithms using hyperbolic secant function," IEEE Trans. Circuits Syst. II: Exp. Briefs, vol. 69, no. 9, pp. 3944–3948, Sep. 2022.
[13] S. Wang, W. Wang, K. Xiong, H. H. Iu, and C. K. Tse, "Logarithmic hyperbolic cosine adaptive filter and its performance analysis," IEEE Trans. Syst., Man, Cybern. Syst., vol. 51, no. 4, pp. 2512–2524, Apr. 2021.
[14] C. Liu and M. Jiang, "Robust adaptive filter with lncosh cost," Signal Process., vol. 168, 2020, Art. no. 107348.
[15] K. Kumar, S. S. Bhattacharjee, and N. V. George, "Joint logarithmic hyperbolic cosine robust sparse adaptive algorithms," IEEE Trans. Circuits Syst. II: Exp. Briefs, vol. 68, no. 1, pp. 526–530, Jan. 2021.
[16] T. Liang, Y. Li, Y. V. Zakharov, W. Xue, and J. Qi, "Constrained least lncosh adaptive filtering algorithm," Signal Process., vol. 183, 2021, Art. no. 108044.
[17] R. L. Das and M. Narwaria, "Lorentzian based adaptive filters for impulsive noise environments," IEEE Trans. Circuits Syst. I: Regular Papers, vol. 64, no. 6, pp. 1529–1539, Jun. 2017.
[18] K. Xiong and S. Wang, "Robust least mean logarithmic square adaptive filtering algorithms," J. Franklin Inst., vol. 356, no. 1, pp. 654–674, 2019.
[19] M. O. Sayin, N. D. Vanli, and S. S. Kozat, "A novel family of adaptive filtering algorithms based on the logarithmic cost," IEEE Trans. Signal Process., vol. 62, no. 17, pp. 4411–4424, Sep. 2014.
[20] Y. Gu, J. Jin, and S. Mei, "l0 norm constraint LMS algorithm for sparse system identification," IEEE Signal Process. Lett., vol. 16, no. 9, pp. 774–777, Sep. 2009.
[21] Y. Yu, Z. Huang, H. He, Y. Zakharov, and R. C. de Lamare, "Sparsity-aware robust normalized subband adaptive filtering algorithms with alternating optimization of parameters," IEEE Trans. Circuits Syst. II: Exp. Briefs, vol. 69, no. 9, pp. 3934–3938, Sep. 2022.
[22] S. Pradhan, V. Patel, K. Patel, J. Maheshwari, and N. V. George, "Acoustic feedback cancellation in digital hearing aids: A sparse adaptive filtering approach," Appl. Acoust., vol. 122, pp. 138–145, 2017.
[23] M. V. S. Lima, T. N. Ferreira, W. A. Martins, and P. S. R. Diniz, "Sparsity-aware data-selective adaptive filters," IEEE Trans. Signal Process., vol. 62, no. 17, pp. 4557–4572, Sep. 2014.
[24] Y. Chen, Y. Gu, and A. O. Hero, "Sparse LMS for system identification," in Proc. IEEE Int. Conf. Acoust., Speech Signal Process., 2009, pp. 3125–3128.
[25] W. Ma, J. Duan, B. Chen, G. Gui, and W. Man, "Recursive generalized maximum correntropy criterion algorithm with sparse penalty constraints for system identification," Asian J. Control, vol. 19, no. 3, pp. 1164–1172, 2017.
[26] J. Trzasko and A. Manduca, "Highly undersampled magnetic resonance image reconstruction via homotopic l0-minimization," IEEE Trans. Med. Imag., vol. 28, no. 1, pp. 106–121, Jan. 2009.
[27] H. Zayyani, "Continuous mixed p-norm adaptive algorithm for system identification," IEEE Signal Process. Lett., vol. 21, no. 9, pp. 1108–1110, Sep. 2014.
[28] Z. Wang, H. Zhao, and X. Zeng, "Constrained least mean M-estimation adaptive filtering algorithm," IEEE Trans. Circuits Syst. II: Exp. Briefs, vol. 68, no. 4, pp. 1507–1511, Apr. 2021.
[29] Y. Zhu, H. Zhao, X. Zeng, and B. Chen, "Robust generalized maximum correntropy criterion algorithms for active noise control," IEEE/ACM Trans. Audio, Speech, Lang. Process., vol. 28, pp. 1282–1292, 2020.
[30] F. Huang, J. Zhang, and S. Zhang, "Complex-valued proportionate affine projection versoria algorithms and their combined-step-size variants for sparse system identification under impulsive noises," Digit. Signal Process., vol. 118, 2021, Art. no. 103209.
[31] A. H. Sayed, Fundamentals of Adaptive Filtering. Hoboken, NJ, USA: Wiley, 2003.
[32] F. Chen, X. Li, S. Duan, L. Wang, and J. Wu, "Diffusion generalized maximum correntropy criterion algorithm for distributed estimation over multitask network," Digit. Signal Process., vol. 81, pp. 16–25, 2018.
[33] A. Papoulis, Probability, Random Variables, and Stochastic Processes. New York, NY, USA: McGraw Hill, 1965.
[34] K. Kumar, R. Pandey, S. S. Bhattacharjee, and N. V. George, "Exponential hyperbolic cosine robust adaptive filters for audio signal processing," IEEE Signal Process. Lett., vol. 28, pp. 1410–1414, 2021.
[35] F. Y. Wu, K. Yang, and Y. Hu, "Sparse estimator with l0-norm constraint kernel maximum-correntropy-criterion," IEEE Trans. Circuits Syst. II: Exp. Briefs, vol. 67, no. 2, pp. 400–404, Feb. 2020.
[36] S. S. Bhattacharjee, K. Kumar, and N. V. George, "Nearest Kronecker product decomposition based generalized maximum correntropy and generalized hyperbolic secant robust adaptive filters," IEEE Signal Process. Lett., vol. 27, pp. 1525–1529, 2020.
