2012, IEEE Transactions on Neural Networks and Learning Systems
In this paper, we propose a quantization approach, as an alternative to sparsification, to curb the growth of the radial basis function structure in kernel adaptive filtering. The basic idea behind this method is to quantize and hence compress the input (or feature) space. Different from sparsification, the new approach uses the "redundant" data to update the coefficient of the closest center. In particular, a quantized kernel least mean square (QKLMS) algorithm is developed, which is based on a simple online vector quantization method. An analytical study of the mean square convergence is carried out. The energy conservation relation for QKLMS is established, and on this basis we derive a sufficient condition for mean square convergence and lower and upper bounds on the theoretical steady-state excess mean square error. Static function estimation and short-term chaotic time-series prediction examples are presented to demonstrate the excellent performance of the proposed algorithm.
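A minimal sketch of the quantization idea described above, assuming a Gaussian kernel and a Euclidean quantization threshold in the input space; function and variable names, as well as the step size and threshold values, are illustrative rather than taken from the paper.

```python
import numpy as np

def gauss_kernel(x, y, sigma=1.0):
    """Gaussian kernel between two input vectors."""
    return np.exp(-np.linalg.norm(x - y) ** 2 / (2 * sigma ** 2))

def qklms(inputs, targets, eta=0.5, eps=0.1, sigma=1.0):
    """Quantized KLMS: 'redundant' inputs update the coefficient of the
    closest existing center instead of adding a new center."""
    centers, coeffs, predictions = [], [], []
    for u, d in zip(inputs, targets):
        # Current filter output: weighted sum of kernels over the dictionary.
        y = sum(a * gauss_kernel(c, u, sigma) for c, a in zip(centers, coeffs))
        e = d - y                                  # prediction error
        predictions.append(y)
        if not centers:
            centers.append(u); coeffs.append(eta * e)
            continue
        # Online vector quantization: find the nearest center in input space.
        dists = [np.linalg.norm(u - c) for c in centers]
        j = int(np.argmin(dists))
        if dists[j] <= eps:
            coeffs[j] += eta * e                   # quantize: reuse closest center
        else:
            centers.append(u); coeffs.append(eta * e)  # grow the network
    return centers, coeffs, predictions
```

With eps set to zero this reduces to plain KLMS; a larger eps trades a little accuracy for a much smaller radial basis function network.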
2016 International Joint Conference on Neural Networks (IJCNN), 2016
Kernel least mean square is a simple and effective adaptive algorithm, but it is hampered by a network size that grows without limit. Many schemes have been proposed to reduce the network size, but few take the distribution of the input data into account. The input data distribution is generally important for both model sparsification and improving generalization performance. In this paper, we introduce an online density-dependent vector quantization scheme, which adopts a shrinkage threshold to adapt its output to the input data distribution. This scheme is then incorporated into the quantized kernel least mean square (QKLMS) algorithm to develop a density-dependent QKLMS (DQKLMS). Experiments on static function estimation and short-term chaotic time series prediction are presented to demonstrate the desirable performance of DQKLMS.
Neurocomputing
Kernel adaptive filters (KAF) are a class of powerful nonlinear filters developed in reproducing kernel Hilbert space (RKHS). The Gaussian kernel is usually the default kernel in KAF algorithms, but selecting the proper kernel size (bandwidth) remains an important open issue, especially for learning with small sample sizes. In previous research, the kernel size was set manually or estimated in advance by Silverman's rule based on the sample distribution. This study aims to develop an online technique for optimizing the kernel size of the kernel least mean square (KLMS) algorithm. A sequential optimization strategy is proposed, and a new algorithm is developed, in which the filter weights and the kernel size are both sequentially updated by stochastic gradient algorithms that minimize the mean square error (MSE). Theoretical results on convergence are also presented. The excellent performance of the new algorithm is confirmed by simulations on static function estimation and short-term chaotic time series prediction.
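A sketch of one way to realize the sequential strategy described above: the coefficients follow the usual KLMS update, while the Gaussian kernel size takes a stochastic gradient step on the instantaneous squared error. The gradient expression uses only the stored centers and coefficients; the learning rates and the lower clamp on sigma are illustrative assumptions.

```python
import numpy as np

def klms_adaptive_sigma(inputs, targets, eta=0.5, mu=0.05, sigma0=1.0):
    """KLMS in which both the coefficients and the Gaussian kernel size
    are updated by stochastic gradient descent on the squared error."""
    centers, coeffs = [], []
    sigma = sigma0
    for u, d in zip(inputs, targets):
        sq = np.array([np.linalg.norm(u - c) ** 2 for c in centers])
        k = np.exp(-sq / (2 * sigma ** 2)) if len(centers) else np.array([])
        y = float(np.dot(coeffs, k)) if len(centers) else 0.0
        e = d - y
        # Gradient of the instantaneous squared error w.r.t. the kernel size:
        # d(e^2)/d(sigma) = -2 e * sum_j a_j k_j ||u - c_j||^2 / sigma^3
        if len(centers):
            dy_dsigma = float(np.dot(np.array(coeffs) * k, sq)) / sigma ** 3
            sigma = max(sigma + 2 * mu * e * dy_dsigma, 1e-3)  # keep sigma positive
        # Standard KLMS growth: every sample becomes a new center.
        centers.append(u)
        coeffs.append(eta * e)
    return centers, coeffs, sigma
```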
Signal Processing, 2013
This paper presents a quantized kernel least mean square algorithm with a fixed memory budget, named QKLMS-FB. In order to deal with the growing support inherent in online kernel methods, the proposed algorithm utilizes a pruning criterion, called the significance measure, based on a weighted contribution of the existing data centers. The basic idea of the proposed methodology is to discard the center with the smallest influence on the whole system when a new sample is included in the dictionary. The significance measure can be updated recursively at each step, which makes it suitable for online operation. Furthermore, the proposed methodology does not need any a priori knowledge about the data, and its computational complexity is linear in the number of centers. Experiments show that the proposed algorithm successfully prunes the least "significant" centers and preserves the important ones, resulting in a compact KLMS model with little loss in accuracy.
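The paper defines a specific recursive significance measure; the sketch below only illustrates the fixed-budget discard step, using a simple proxy for significance (a coefficient's magnitude weighted by its kernel evaluations against the rest of the dictionary), with centers assumed to be d-dimensional vectors. It is not the paper's exact criterion.

```python
import numpy as np

def prune_least_significant(centers, coeffs, sigma=1.0):
    """Remove the center whose removal is expected to perturb the filter the
    least. 'Significance' here is an illustrative proxy: |a_j| times the
    summed Gaussian kernel evaluations of center j against the dictionary."""
    C = np.asarray(centers, dtype=float)     # shape (n_centers, dim)
    a = np.asarray(coeffs, dtype=float)
    # Pairwise Gaussian kernel matrix over the dictionary.
    sq = ((C[:, None, :] - C[None, :, :]) ** 2).sum(-1)
    K = np.exp(-sq / (2 * sigma ** 2))
    significance = np.abs(a) * K.sum(axis=1)
    j = int(np.argmin(significance))         # least significant center
    return ([c for i, c in enumerate(centers) if i != j],
            [x for i, x in enumerate(coeffs) if i != j])
```

In a fixed-budget setting this discard step would run only when admitting a new center would exceed the memory budget.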
Procedia Computer Science, 2013
2013 IEEE Digital Signal Processing and Signal Processing Education Meeting (DSP/SPE), 2013
Kernel adaptive filtering is a growing field of signal processing that is concerned with nonlinear adaptive filtering. When implemented naïvely, the time and memory complexities of these algorithms grow at least linearly with the amount of data processed. A large number of practical solutions have been proposed throughout the last decade, based on sparsification or pruning mechanisms. Nevertheless, there is a lack of understanding of their relative merits, which often depend on the data they operate on. We propose to study the quality of the solution as a function of either the time or the memory complexity. We empirically test six different kernel adaptive filtering algorithms on three different benchmark data sets. We make our code available through an open-source toolbox that includes additional algorithms and allows the complexities to be measured explicitly, in numbers of floating-point operations and bytes needed, respectively.
1993 IEEE International Symposium on Circuits and Systems, 1993
The effects of quantization in an LMS-Newton adaptive filtering algorithm are investigated. The algorithm considered uses an optimum convergence factor that forces the a posteriori output error to become zero at each iteration. The propagation of errors due to quantization in the internal variables of the algorithm is investigated, and a closed-form formula for the excess mean square error due to quantization is derived. Fixed-point arithmetic is assumed throughout. Several simulations confirm the accuracy of the formulas presented.
IEEE transactions on neural networks and learning systems, 2015
In this paper, we present a novel kernel adaptive recurrent filtering algorithm based on the autoregressive-moving-average (ARMA) model, which is trained with recurrent stochastic gradient descent in the reproducing kernel Hilbert spaces. This kernelized recurrent system, the kernel adaptive ARMA (KAARMA) algorithm, brings together the theories of adaptive signal processing and recurrent neural networks (RNNs), extending the current theory of kernel adaptive filtering (KAF) using the representer theorem to include feedback. Compared with classical feedforward KAF methods, the KAARMA algorithm provides general nonlinear solutions for complex dynamical systems in a state-space representation, with a deferred teacher signal, by propagating forward the hidden states. We demonstrate its capabilities to provide exact solutions with compact structures by solving a set of benchmark nondeterministic polynomial-complete problems involving grammatical inference. Simulation results show that th...
Signal Processing, 2009
The linear least mean squares (LMS) algorithm has recently been extended to a reproducing kernel Hilbert space, resulting in an adaptive filter built from a weighted sum of kernel functions evaluated at each incoming data sample. With time, the size of the filter as well as the computation and memory requirements increase. In this paper, we propose a new efficient methodology for constraining the growth of the radial basis function (RBF) network resulting from the kernel LMS algorithm without significantly sacrificing performance. The method involves sequential Gaussian elimination steps on the Gram matrix to test the linear dependency of the feature vector corresponding to each new input with all the previous feature vectors. This gives an efficient method of continuing the learning as well as restricting the number of kernel functions used.
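One standard way to phrase the linear-dependency test described above is as an approximate-linear-dependency residual: project the new feature vector onto the span of the stored feature vectors and admit it only if the residual exceeds a threshold. The sketch below solves the regularized Gram system directly rather than through the sequential Gaussian elimination steps of the paper; the threshold value and names are illustrative.

```python
import numpy as np

def gauss_kernel(x, y, sigma=1.0):
    return np.exp(-np.linalg.norm(np.asarray(x) - np.asarray(y)) ** 2 / (2 * sigma ** 2))

def is_linearly_independent(u, centers, sigma=1.0, nu=1e-3):
    """Return True if the feature vector of u is (numerically) linearly
    independent of the feature vectors of the stored centers."""
    if not centers:
        return True
    K = np.array([[gauss_kernel(ci, cj, sigma) for cj in centers] for ci in centers])
    k = np.array([gauss_kernel(c, u, sigma) for c in centers])
    # Projection coefficients of phi(u) onto span{phi(c_j)} (small ridge for stability).
    beta = np.linalg.solve(K + 1e-8 * np.eye(len(centers)), k)
    residual = gauss_kernel(u, u, sigma) - k @ beta
    return residual > nu   # admit the sample only if it adds a new direction
```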
… Speech and Signal …, 2010
We present a kernel-based recursive least-squares (KRLS) algorithm on a fixed memory budget, capable of recursively learning a nonlinear mapping and tracking changes over time. In order to deal with the growing support inherent to online kernel methods, the proposed method uses a combined strategy of growing and pruning the support. In contrast to a previous sliding-window based technique, the presented algorithm does not prune the oldest data point at every time instant but instead aims to prune the least significant data point. We also introduce a label update procedure to equip the algorithm with tracking capability. Simulations show that the proposed method obtains better performance than state-of-the-art kernel adaptive filtering techniques given similar memory requirements.
The 2013 International Joint Conference on Neural Networks (IJCNN), 2013
Instead of using a single kernel, different approaches using multiple kernels have been proposed recently in the kernel learning literature, one of which is multiple kernel learning (MKL). In this paper, we propose an alternative to MKL for selecting the appropriate kernel from a pool of predefined kernels, for a family of online kernel filters called kernel adaptive filters (KAF). The need for an alternative is that, in a sequential learning method where the hypothesis is updated at every incoming sample, MKL would provide a new kernel, and thus a new hypothesis in the new reproducing kernel Hilbert space (RKHS) associated with that kernel. This does not fit well in the KAF framework, as learning a hypothesis in a fixed RKHS is the core of the KAF algorithms. Hence, we introduce an adaptive learning method to address the kernel selection problem for KAF, based on a competitive mixture of models. We propose the mixture kernel least mean square (MxKLMS) adaptive filtering algorithm, in which kernel least mean square (KLMS) filters learned with different kernels act in parallel at each input instance and are competitively combined such that the filter with the best kernel is an expert for each input regime. The competition among these experts is created by using performance-based gating that chooses the appropriate expert locally. Therefore, the individual filter parameters as well as the weights for combining these filters are learned simultaneously in an online fashion. The results obtained suggest that the model not only selects the best kernel, but also significantly improves the prediction accuracy.
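A sketch of the competitive-combination idea: several KLMS filters with different Gaussian kernel sizes run in parallel, and a performance-based gate (here a softmax of recent squared errors) weights their outputs. The specific gating rule, kernel sizes, and parameters below are illustrative assumptions, not the exact MxKLMS formulation.

```python
import numpy as np

class MixtureKLMS:
    """Parallel KLMS experts with different Gaussian kernel sizes,
    combined with performance-based (error-driven) gating."""

    def __init__(self, sigmas=(0.2, 1.0, 5.0), eta=0.5, beta=2.0):
        self.sigmas, self.eta, self.beta = sigmas, eta, beta
        self.centers = [[] for _ in sigmas]
        self.coeffs = [[] for _ in sigmas]
        self.weights = np.ones(len(sigmas)) / len(sigmas)   # gate weights

    def _predict_each(self, u):
        outs = []
        for s, C, A in zip(self.sigmas, self.centers, self.coeffs):
            k = [np.exp(-np.linalg.norm(u - c) ** 2 / (2 * s ** 2)) for c in C]
            outs.append(float(np.dot(A, k)) if C else 0.0)
        return np.array(outs)

    def update(self, u, d):
        outs = self._predict_each(u)
        y = float(self.weights @ outs)          # gated combination
        errs = d - outs
        # Competitive gating: experts with smaller error gain responsibility.
        resp = np.exp(-self.beta * errs ** 2)
        self.weights = self.weights * resp + 1e-12
        self.weights /= self.weights.sum()
        # Each expert performs its own KLMS update and grows its dictionary.
        for i, e in enumerate(errs):
            self.centers[i].append(np.asarray(u))
            self.coeffs[i].append(self.eta * e)
        return y, d - y
```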
Neurocomputing, 2013
As an alternative adaptation criterion, the minimum error entropy (MEE) criterion has been receiving increasing attention due to its success in nonlinear and non-Gaussian signal processing in particular. In this paper, we study the application of error entropy minimization to kernel adaptive filtering, a new and promising technique that implements the conventional linear adaptive filters in reproducing kernel Hilbert space (RKHS) and obtains nonlinear adaptive filters in the original input space. The kernel minimum error entropy (KMEE) algorithm is derived, which is essentially a generalized stochastic information gradient (SIG) algorithm in RKHS. The computational complexity of KMEE is similar to that of the kernel affine projection algorithm (KAPA). We also utilize the quantization approach to constrain the network size growth, and develop the quantized KMEE (QKMEE) algorithm. Further, we analyze the mean square convergence of KMEE. The energy conservation relation is derived and a sufficient condition that ensures the mean square convergence is obtained. The performance of the new algorithm is demonstrated in nonlinear system identification and short-term chaotic time series prediction.
2021 IEEE Statistical Signal Processing Workshop (SSP), 2021
In this paper, two new multi-output kernel adaptive filtering algorithms are developed that exploit the temporal and spatial correlations among the input-output multivariate time series. They are multi-output versions of the popular kernel least mean squares (KLMS) algorithm with two different sparsification criteria. The first one, denoted as MO-QKLMS, uses the coherence criterion in order to limit the dictionary size. The second one, denoted as MO-RFF-KLMS, uses random Fourier features (RFF) to approximate the kernel functions by linear inner products. Simulation results with synthetic and real data are presented to assess convergence speed, steady-state performance and complexities of the proposed algorithms.
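The coherence criterion used by MO-QKLMS to limit the dictionary admits a new input only if its maximum kernel value against the stored centers stays below a threshold. A minimal single-output version of that check is sketched below; the kernel size and coherence threshold mu0 are illustrative.

```python
import numpy as np

def admits_by_coherence(u, centers, sigma=1.0, mu0=0.5):
    """Coherence-based sparsification: add u to the dictionary only if it is
    not too similar (coherent) to any existing center under the kernel."""
    if not centers:
        return True
    coherence = max(
        np.exp(-np.linalg.norm(u - c) ** 2 / (2 * sigma ** 2)) for c in centers
    )
    return coherence <= mu0   # small coherence means u brings new information
```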
EURASIP Journal on Advances in Signal Processing, 2016
This paper presents a model-selection strategy based on minimum description length (MDL) that keeps the kernel least-mean-square (KLMS) model tuned to the complexity of the input data. The proposed KLMS-MDL filter adapts its model order as well as its coefficients online, behaving as a self-organizing system and achieving a good compromise between system accuracy and computational complexity without a priori knowledge. Particularly, in a nonstationary scenario, the model order of the proposed algorithm changes continuously with the input data structure. Experiments show the proposed algorithm successfully builds compact kernel adaptive filters with better accuracy than KLMS with sparsity or fixed-budget algorithms.
2014 International Joint Conference on Neural Networks (IJCNN), 2014
The use of multiple kernels in conventional kernel algorithms addresses the kernel selection problem as well as improving performance, and is therefore gaining popularity. Kernel least mean square (KLMS) has recently been extended to multiple kernels using different approaches, one of which is the mixture kernel least mean square (MxKLMS). Although this method addresses the kernel selection problem and improves performance, it suffers from a linearly growing dictionary, as in KLMS. In this paper, we present the quantized MxKLMS (QMxKLMS) algorithm to achieve sublinear growth of the dictionary. This method quantizes the input space based on the conventional criterion using Euclidean distance in the input space, as well as a new criterion using Euclidean distance in the RKHS induced by the sum kernel. The empirical results suggest that QMxKLMS using the latter metric is suitable in a nonstationary environment with abruptly changing modes, as it is able to utilize information regarding the relative importance of kernels. Moreover, QMxKLMS with both metrics is compared with QKLMS and the existing multi-kernel methods MKLMS and MKNLMS-CS, showing improved performance over these methods.
ArXiv, 2019
Kernel methods form a powerful, versatile, and theoretically grounded unifying framework to solve nonlinear problems in signal processing and machine learning. The standard approach relies on the kernel trick to perform pairwise evaluations of a kernel function, which leads to scalability issues for large datasets due to its linear and superlinear growth with respect to the training data. A popular approach to tackle this problem, known as random Fourier features (RFFs), samples from a distribution to obtain a data-independent basis of a higher finite-dimensional feature space, where the dot product approximates the kernel function. Recently, deterministic rather than random construction has been shown to outperform RFFs by approximating the kernel in the frequency domain using Gaussian quadrature. In this paper, we view the dot product of these explicit mappings not as an approximation, but as an equivalent positive-definite kernel that induces a new finite-dimensional reproduc...
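A minimal sketch of the random Fourier feature construction referred to above, for the Gaussian kernel: frequencies are drawn from the kernel's Gaussian spectral density, and the dot product of the explicit features approximates (or, in the view taken by this paper, exactly defines) a finite-dimensional kernel. The feature dimension D and kernel size are illustrative.

```python
import numpy as np

def rff_map(X, D=200, sigma=1.0, rng=None):
    """Map inputs X (n x d) to D-dimensional random Fourier features whose
    dot products approximate the Gaussian kernel exp(-||x-y||^2 / (2 sigma^2))."""
    rng = np.random.default_rng(rng)
    n, d = X.shape
    W = rng.normal(scale=1.0 / sigma, size=(d, D))   # spectral frequency samples
    b = rng.uniform(0.0, 2 * np.pi, size=D)          # random phases
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

# Usage: Z = rff_map(X); then Z @ Z.T approximates the Gaussian kernel matrix,
# and a linear filter (e.g. LMS) on Z behaves like a kernel filter on X.
```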
Signal Processing, 2012
In this paper, we study the mean square convergence of the kernel least mean square (KLMS) algorithm. The fundamental energy conservation relation is established in feature space. Starting from the energy conservation relation, we carry out the mean square convergence analysis and obtain several important theoretical results, including an upper bound on the step size that guarantees mean square convergence, the theoretical steady-state excess mean square error (EMSE), an optimal step size for the fastest convergence, and an optimal kernel size for the fastest initial convergence. Monte Carlo simulation results agree well with the theoretical analysis.
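As a point of reference, the feature-space energy conservation relation mentioned above takes the same form as the classical relation for linear adaptive filters, with the squared input norm replaced by the kernel evaluation; the version below (with weight-error vector, a priori error e_a, and a posteriori error e_p) is how that analogue is usually written, and should be checked against the paper for the exact notation.

```latex
% Energy conservation relation for KLMS in feature space \mathcal{F};
% for the Gaussian kernel, \kappa(\mathbf{u}(i),\mathbf{u}(i)) = 1.
\|\tilde{\Omega}(i)\|^{2}_{\mathcal{F}}
  + \frac{e_{a}^{2}(i)}{\kappa\bigl(\mathbf{u}(i),\mathbf{u}(i)\bigr)}
  = \|\tilde{\Omega}(i-1)\|^{2}_{\mathcal{F}}
  + \frac{e_{p}^{2}(i)}{\kappa\bigl(\mathbf{u}(i),\mathbf{u}(i)\bigr)}
```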
The 2012 International Joint Conference on Neural Networks (IJCNN), 2012
In a recent work, we proposed the quantized kernel least mean square (QKLMS) algorithm, which is quite effective in sequentially learning a nonlinear mapping online with a slowly growing radial basis function (RBF) structure. In this paper, in order to further reduce the network size, we propose a sparse QKLMS algorithm, which is derived by adding a sparsity-inducing ℓ1-norm penalty on the coefficients to the squared error cost. Simulation examples show that the new algorithm works efficiently and results in a much sparser network while preserving a desirable performance.
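One way to realize the ℓ1-penalized update described above is to follow the usual QKLMS coefficient update with a shrinkage of all coefficients toward zero; the soft-threshold form and the shrinkage amount eta*lam below are illustrative, and the paper's exact subgradient step may differ.

```python
import numpy as np

def l1_shrink(coeffs, eta=0.5, lam=0.01):
    """Shrinkage step for an l1 penalty on the coefficients: pull each
    coefficient toward zero by eta*lam (soft thresholding)."""
    a = np.asarray(coeffs, dtype=float)
    return np.sign(a) * np.maximum(np.abs(a) - eta * lam, 0.0)
```

Centers whose coefficients reach exactly zero can then be dropped, which is what yields the sparser network.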
2012
Abstract of Dissertation Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy. FROM FIXED TO ADAPTIVE BUDGET ROBUST KERNEL ADAPTIVE FILTERING. By Songlin Zhao, December 2012. Chair: Jose C. Principe. Major: Electrical and Computer Engineering. Recently, owing to their universal modeling capacity, convexity of the performance surface, and modest computational complexity, kernel adaptive filters have attracted more and more attention. Even though these methods achieve powerful classification and regression performance in complicated nonlinear problems, they have drawbacks. This work focuses on how to improve kernel adaptive filters' performance in terms of both accuracy and computational complexity. After reviewing the cost functions of some existing adaptive filters, we introduce an information theoretic objective function, the Maximal Correntropy Criterion (MCC), that contains high order statistical information. Here we propose to adopt t...
Neural processing letters, 2003
A method for function approximation in reinforcement learning settings is proposed. The action-value function of the Q-learning method is approximated by a radial basis function neural network and learned by gradient descent. Those radial basis units that are unable to fit the local action-value function accurately enough are decomposed into new units with smaller widths. The local temporal-difference error is modelled by a two-class learning vector quantization algorithm, which approximates the distributions of the positive and negative errors and provides the centers of the new units. This method is especially convenient in the case of smooth value functions with large local variation in certain parts of the state space, such that non-uniform placement of basis functions is required. In comparison with four related methods, it requires the fewest basis functions to achieve comparable accuracy.
IJCRT, 2021
Adaptive filter techniques are central to noise-removal implementations in signal processing systems. The availability of different algorithms makes these filters versatile and effective, and their arithmetic-intensive architecture improves the efficiency of the results. This research paper deals with the implementation of the LMS, NLMS, RLS and affine projection algorithms, along with the kernel versions of LMS and RLS, namely KLMS and KRLS. The experiments performed give a brief overview of how these filters operate in noise and echo cancellation systems. The learning rate plays a very significant role, as it determines the stability and efficiency of the respective filters. Learning rates and step sizes make the use of adaptive filters more application specific. The kernel versions operate on previously iterated values with different regularization parameters. Adaptive filters can be used for diverse application-specific purposes, including smart antenna systems, advanced FPGA signal processors and many other digital analysis machines.
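A minimal NLMS sketch of the kind of adaptive noise canceller discussed above; the filter length and step size mu are illustrative, and the normalization by the input power is what makes the step size choice less sensitive to signal scaling (the stability aspect mentioned).

```python
import numpy as np

def nlms_cancel(reference, desired, L=32, mu=0.5, eps=1e-6):
    """Normalized LMS noise canceller: adapt an L-tap filter so the filtered
    reference tracks the noise component of the desired signal.
    Returns the error signal, i.e. the cleaned output."""
    w = np.zeros(L)
    out = np.zeros(len(desired))
    for n in range(L, len(desired)):
        x = reference[n - L:n][::-1]          # most recent L reference samples
        y = w @ x                             # filter output (noise estimate)
        e = desired[n] - y                    # cleaned sample
        w += mu * e * x / (eps + x @ x)       # normalized step for stability
        out[n] = e
    return out
```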