2016 International Joint Conference on Neural Networks (IJCNN), 2016
Kernel least mean square (KLMS) is a simple and effective adaptive algorithm, but it is hampered by its unbounded, growing network size. Many schemes have been proposed to reduce the network size, but few take the distribution of the input data into account. The input data distribution is generally important for both model sparsification and improving generalization performance. In this paper, we introduce an online density-dependent vector quantization scheme, which adopts a shrinkage threshold to adapt its output to the input data distribution. This scheme is then incorporated into the quantized kernel least mean square (QKLMS) algorithm to develop a density-dependent QKLMS (DQKLMS). Experiments on static function estimation and short-term chaotic time series prediction are presented to demonstrate the desirable performance of DQKLMS.
IEEE Transactions on Neural Networks and Learning Systems, 2012
In this paper, we propose a quantization approach, as an alternative to sparsification, to curb the growth of the radial basis function structure in kernel adaptive filtering. The basic idea behind this method is to quantize and hence compress the input (or feature) space. Different from sparsification, the new approach uses the "redundant" data to update the coefficient of the closest center. In particular, a quantized kernel least mean square (QKLMS) algorithm is developed, which is based on a simple online vector quantization method. The analytical study of the mean square convergence has been carried out. The energy conservation relation for QKLMS is established, and on this basis we arrive at a sufficient condition for mean square convergence, and lower and upper bounds on the theoretical value of the steady-state excess mean square error. Static function estimation and short-term chaotic time-series prediction examples are presented to demonstrate the excellent performance.
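The quantization rule described in this abstract — merge a "redundant" input into its closest center, otherwise add a new center — can be sketched compactly. The class and parameter names below (`step_size`, `quant_size`) are illustrative, not from the paper:

```python
import numpy as np

def gaussian_kernel(x, c, sigma=1.0):
    return np.exp(-np.sum((np.asarray(x) - np.asarray(c)) ** 2) / (2 * sigma ** 2))

class QKLMS:
    """Sketch of quantized kernel LMS with an online vector quantizer."""
    def __init__(self, step_size=0.5, quant_size=0.3, sigma=1.0):
        self.eta = step_size
        self.eps = quant_size      # quantization threshold on input-space distance
        self.sigma = sigma
        self.centers = []          # codebook (dictionary of centers)
        self.alphas = []           # coefficients, one per center

    def predict(self, x):
        return sum(a * gaussian_kernel(x, c, self.sigma)
                   for a, c in zip(self.alphas, self.centers))

    def update(self, x, d):
        e = d - self.predict(x)    # prediction error on the new sample
        x = np.asarray(x, float)
        if self.centers:
            dists = [np.linalg.norm(x - c) for c in self.centers]
            j = int(np.argmin(dists))
            if dists[j] <= self.eps:
                # "redundant" input: update the coefficient of the closest center
                self.alphas[j] += self.eta * e
                return e
        # novel input: allocate a new center
        self.centers.append(x)
        self.alphas.append(self.eta * e)
        return e
```

Because a new center must be farther than `quant_size` from every existing one, the network size is bounded by a packing of the input domain rather than by the number of samples.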
Neurocomputing
Kernel adaptive filters (KAF) are a class of powerful nonlinear filters developed in reproducing kernel Hilbert space (RKHS). The Gaussian kernel is usually the default kernel in KAF algorithms, but selecting the proper kernel size (bandwidth) is still an important open issue, especially for learning with small sample sizes. In previous research, the kernel size was set manually or estimated in advance by Silverman's rule based on the sample distribution. This study aims to develop an online technique for optimizing the kernel size of the kernel least mean square (KLMS) algorithm. A sequential optimization strategy is proposed, and a new algorithm is developed, in which the filter weights and the kernel size are both sequentially updated by stochastic gradient algorithms that minimize the mean square error (MSE). Theoretical results on convergence are also presented. The excellent performance of the new algorithm is confirmed by simulations on static function estimation and short-term chaotic time series prediction.
Signal Processing, 2013
This paper presents a quantized kernel least mean square algorithm with a fixed memory budget, named QKLMS-FB. In order to deal with the growing support inherent in online kernel methods, the proposed algorithm utilizes a pruning criterion, called the significance measure, based on a weighted contribution of the existing data centers. The basic idea of the proposed methodology is to discard the center with the smallest influence on the whole system when a new sample is included in the dictionary. The significance measure can be updated recursively at each step, which makes it suitable for online operation. Furthermore, the proposed methodology does not need any a priori knowledge about the data, and its computational complexity is linear in the number of centers. Experiments show that the proposed algorithm successfully prunes the least "significant" centers and preserves the important ones, resulting in a compact KLMS model with little loss in accuracy.
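The fixed-budget idea — when the dictionary would exceed its budget, evict the least influential center — admits a simple sketch. Note the significance proxy below (coefficient magnitude |alpha|) is a crude stand-in for the paper's recursively updated weighted-contribution measure, which is not reproduced here:

```python
import numpy as np

def gaussian_kernel(a, b, sigma=0.5):
    return np.exp(-np.sum((np.asarray(a) - np.asarray(b)) ** 2) / (2 * sigma ** 2))

class QKLMSFixedBudget:
    """QKLMS-style filter with a hard cap on dictionary size; when the cap
    is exceeded, the center with the smallest significance is pruned."""
    def __init__(self, budget=20, step_size=0.5, quant_size=0.05, sigma=0.5):
        self.budget, self.eta, self.eps, self.sigma = budget, step_size, quant_size, sigma
        self.centers, self.alphas = [], []

    def predict(self, x):
        return sum(a * gaussian_kernel(x, c, self.sigma)
                   for a, c in zip(self.alphas, self.centers))

    def update(self, x, d):
        e = d - self.predict(x)
        x = np.asarray(x, float)
        if self.centers:
            dists = [np.linalg.norm(x - c) for c in self.centers]
            j = int(np.argmin(dists))
            if dists[j] <= self.eps:
                self.alphas[j] += self.eta * e   # quantized update, no growth
                return e
        self.centers.append(x)
        self.alphas.append(self.eta * e)
        if len(self.centers) > self.budget:
            # prune the least "significant" center (|alpha| used as a proxy)
            worst = int(np.argmin(np.abs(self.alphas)))
            del self.centers[worst], self.alphas[worst]
        return e
```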
The 2012 International Joint Conference on Neural Networks (IJCNN), 2012
In a recent work, we proposed the quantized kernel least mean square (QKLMS) algorithm, which is quite effective for sequentially learning a nonlinear mapping online with a slowly growing radial basis function (RBF) structure. In this paper, in order to further reduce the network size, we propose a sparse QKLMS algorithm, which is derived by adding a sparsity-inducing ℓ1-norm penalty on the coefficients to the squared error cost. Simulation examples show that the new algorithm works efficiently and results in a much sparser network while preserving desirable performance.
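An ℓ1 penalty on the coefficients is typically handled online with a soft-threshold (proximal) step after each update; centers whose coefficients are driven to zero can then be dropped. The sketch below assumes that standard construction on top of a QKLMS-style update; parameter names are illustrative:

```python
import numpy as np

def soft_threshold(alphas, t):
    """Proximal step for an l1 penalty: shrink each coefficient toward
    zero by t and zero out the ones that fall below the threshold."""
    a = np.asarray(alphas, float)
    return np.sign(a) * np.maximum(np.abs(a) - t, 0.0)

def sparse_qklms(xs, ds, eta=0.5, eps=0.1, lam=0.01, sigma=0.5):
    """l1-penalized QKLMS sketch: quantized update, then soft-threshold
    all coefficients and prune the centers whose coefficient hits zero."""
    centers, alphas = [], []
    for x, d in zip(xs, ds):
        x = np.asarray(x, float)
        k = [np.exp(-np.sum((x - c) ** 2) / (2 * sigma ** 2)) for c in centers]
        e = d - float(np.dot(alphas, k)) if centers else d
        if centers:
            dists = [np.linalg.norm(x - c) for c in centers]
            j = int(np.argmin(dists))
        if centers and dists[j] <= eps:
            alphas[j] += eta * e            # redundant input: merge
        else:
            centers.append(x)               # novel input: new center
            alphas.append(eta * e)
        shrunk = soft_threshold(alphas, eta * lam)
        keep = np.abs(shrunk) > 0
        centers = [c for c, kf in zip(centers, keep) if kf]
        alphas = list(shrunk[keep])
    return centers, alphas
```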
2003
Classical nonlinear models for time series prediction exhibit improved capabilities compared to linear ones. Nonlinear regression, however, has drawbacks, such as overfitting and local minima problems, user-adjusted parameters, higher computation times, etc. There is thus a need for simple nonlinear models with a restricted number of learning parameters, high performance and reasonable complexity. In this paper, we present a method for nonlinear forecasting based on the quantization of vectors concatenating inputs (regressors) and outputs (predictions). Weighting techniques are applied to give more importance to inputs and outputs respectively. The method is illustrated on standard time series prediction benchmarks.
2014 International Joint Conference on Neural Networks (IJCNN), 2014
Use of multiple kernels in conventional kernel algorithms addresses the kernel selection problem as well as improves performance, and has therefore been gaining popularity recently. Kernel least mean square (KLMS) has recently been extended to multiple kernels using different approaches, one of which is mixture kernel least mean square (MxKLMS). Although this method addresses the kernel selection problem and improves performance, it suffers, like KLMS, from a linearly growing dictionary. In this paper, we present the quantized MxKLMS (QMxKLMS) algorithm to achieve sublinear growth of the dictionary. This method quantizes the input space based on the conventional criterion using Euclidean distance in the input space, as well as a new criterion using Euclidean distance in the RKHS induced by the sum kernel. The empirical results suggest that QMxKLMS using the latter metric is suitable in a nonstationary environment with abruptly changing modes, as it is able to utilize information about the relative importance of kernels. Moreover, QMxKLMS using both metrics is compared with QKLMS and the existing multi-kernel methods MKLMS and MKNLMS-CS, showing improved performance over these methods.
Neurocomputing, 2013
As an alternative adaptation criterion, the minimum error entropy (MEE) criterion has been receiving increasing attention due to its successful use in, especially, nonlinear and non-Gaussian signal processing. In this paper, we study the application of error entropy minimization to kernel adaptive filtering, a new and promising technique that implements the conventional linear adaptive filters in reproducing kernel Hilbert space (RKHS) and obtains nonlinear adaptive filters in the original input space. The kernel minimum error entropy (KMEE) algorithm is derived, which is essentially a generalized stochastic information gradient (SIG) algorithm in RKHS. The computational complexity of KMEE is similar to that of the kernel affine projection algorithm (KAPA). We also utilize the quantization approach to constrain the growth of the network size, and develop the quantized KMEE (QKMEE) algorithm. Further, we analyze the mean square convergence of KMEE. The energy conservation relation is derived, and a sufficient condition that ensures mean square convergence is obtained. The performance of the new algorithm is demonstrated in nonlinear system identification and short-term chaotic time series prediction.
IEEE Transactions on Signal Processing, 2009
Kernel-based algorithms have been a topic of considerable interest in the machine learning community over the last ten years. Their attractiveness resides in their elegant treatment of nonlinear problems. They have been successfully applied to pattern recognition, regression and density estimation. A common characteristic of kernel-based methods is that they deal with kernel expansions whose number of terms equals the number of input data, making them unsuitable for online applications. Recently, several solutions have been proposed to circumvent this computational burden in time series prediction problems. Nevertheless, most of them require excessively elaborate and costly operations. In this paper, we investigate a new model reduction criterion that makes computationally demanding sparsification procedures unnecessary. The increase in the number of variables is controlled by the coherence parameter, a fundamental quantity that characterizes the behavior of dictionaries in sparse approximation problems. We incorporate the coherence criterion into a new kernel-based affine projection algorithm for time series prediction. We also derive the kernel-based normalized LMS algorithm as a particular case. Finally, experiments are conducted to compare our approach to existing methods.
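The coherence-based model reduction criterion described above reduces to a cheap admission test: a new input joins the dictionary only if its largest kernel correlation with the existing dictionary elements stays below a threshold. The sketch below assumes a unit-norm Gaussian kernel (so k(x, x) = 1); the names `mu0` and `coherence_admit` are illustrative:

```python
import numpy as np

def coherence_admit(x, dictionary, mu0=0.5, sigma=1.0):
    """Return True if x should be added to the dictionary, i.e. its
    maximum kernel coherence with existing elements is at most mu0.
    No costly sparsification procedure is needed: one pass of kernel
    evaluations against the current dictionary suffices."""
    if not dictionary:
        return True
    coherence = max(
        np.exp(-np.sum((np.asarray(x) - np.asarray(c)) ** 2) / (2 * sigma ** 2))
        for c in dictionary)
    return coherence <= mu0
```

A smaller `mu0` enforces a more incoherent (hence smaller) dictionary; `mu0 = 1` admits everything.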