The 2013 International Joint Conference on Neural Networks (IJCNN), 2013
Instead of using a single kernel, several approaches that combine multiple kernels have recently been proposed in the kernel learning literature, one of which is multiple kernel learning (MKL). In this paper, we propose an alternative to MKL for selecting the appropriate kernel from a pool of predefined kernels, targeted at a family of online kernel filters called kernel adaptive filters (KAF). An alternative is needed because, in a sequential learning method where the hypothesis is updated at every incoming sample, MKL would produce a new kernel, and thus a new hypothesis in the new reproducing kernel Hilbert space (RKHS) associated with that kernel. This does not fit the KAF framework, whose core is learning a hypothesis in a fixed RKHS. Hence, we introduce an adaptive learning method for the kernel selection problem in KAF, based on a competitive mixture of models. We propose the mixture kernel least mean square (MxKLMS) adaptive filtering algorithm, in which kernel least mean square (KLMS) filters learned with different kernels act in parallel at each input instance and are competitively combined so that the filter with the best kernel serves as the expert for each input regime. The competition among these experts is created by a performance-based gating that chooses the appropriate expert locally. The individual filter parameters as well as the weights for combining the filters are thus learned simultaneously in an online fashion. The results suggest that the model not only selects the best kernel, but also significantly improves prediction accuracy.
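To make the competitive-combination idea concrete, here is a minimal Python sketch: several KLMS experts, each with its own Gaussian kernel bandwidth, are trained in parallel, and a gate formed from each expert's running squared error weights their predictions. The softmax gate, the forgetting factor, and all parameter names are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

class KLMS:
    """Kernel least mean squares with a Gaussian kernel."""
    def __init__(self, sigma, step=0.5):
        self.sigma, self.step = sigma, step
        self.centers, self.alphas = [], []

    def predict(self, x):
        if not self.centers:
            return 0.0
        d2 = np.array([np.sum((c - x) ** 2) for c in self.centers])
        return float(np.dot(self.alphas, np.exp(-d2 / (2 * self.sigma ** 2))))

    def update(self, x, y):
        e = y - self.predict(x)
        self.centers.append(x)          # each sample becomes a center
        self.alphas.append(self.step * e)
        return e

class MixtureKLMS:
    """Competitive combination of KLMS experts, one per candidate kernel.
    Gating by a softmax over running squared errors is an illustrative
    choice, not necessarily the gate used in the paper."""
    def __init__(self, sigmas, step=0.5, beta=5.0, forget=0.9):
        self.experts = [KLMS(s, step) for s in sigmas]
        self.loss = np.zeros(len(sigmas))   # running squared error per expert
        self.beta, self.forget = beta, forget

    def predict(self, x):
        g = np.exp(-self.beta * self.loss)
        g /= g.sum()                        # performance-based gate
        return float(g @ np.array([f.predict(x) for f in self.experts]))

    def update(self, x, y):
        for j, f in enumerate(self.experts):
            e = f.update(x, y)              # experts learn in parallel
            self.loss[j] = self.forget * self.loss[j] + (1 - self.forget) * e ** 2
```

An expert that predicts well in the current input regime accumulates a small running loss and therefore dominates the gate locally, which is the competitive behavior the abstract describes.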
Neurocomputing
Kernel adaptive filters (KAF) are a class of powerful nonlinear filters developed in reproducing kernel Hilbert space (RKHS). The Gaussian kernel is usually the default kernel in KAF algorithms, but selecting the proper kernel size (bandwidth) remains an important open issue, especially for learning with small sample sizes. In previous research, the kernel size was set manually or estimated in advance by Silverman's rule based on the sample distribution. This study aims to develop an online technique for optimizing the kernel size of the kernel least mean square (KLMS) algorithm. A sequential optimization strategy is proposed, and a new algorithm is developed in which the filter weights and the kernel size are both sequentially updated by stochastic gradient algorithms that minimize the mean square error (MSE). Theoretical results on convergence are also presented. The excellent performance of the new algorithm is confirmed by simulations on static function estimation and short-term chaotic time series prediction.
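A minimal sketch of the joint update, assuming a Gaussian kernel: the expansion coefficients follow the usual KLMS rule, while the kernel size takes a stochastic gradient step on the instantaneous squared error, using d(yhat)/d(sigma) = sum_i a_i k_i ||x - c_i||^2 / sigma^3. The step sizes and the floor on sigma are illustrative, not the paper's.

```python
import numpy as np

def klms_adaptive_sigma(X, y, mu=0.5, eta=0.05, sigma0=1.0):
    """KLMS in which both the coefficients and the Gaussian kernel size
    are updated by stochastic gradient descent on the MSE (sketch)."""
    centers, alphas = [], []
    sigma = sigma0
    for x, t in zip(X, y):
        if centers:
            d2 = np.array([np.sum((c - x) ** 2) for c in centers])
            k = np.exp(-d2 / (2 * sigma ** 2))
            e = t - np.dot(alphas, k)
            # d(yhat)/d(sigma): each kernel term scales by d2 / sigma^3
            grad = np.dot(np.array(alphas) * k, d2) / sigma ** 3
            sigma = max(1e-3, sigma + eta * e * grad)  # descend on e^2/2
        else:
            e = t
        centers.append(x)
        alphas.append(mu * e)
    return centers, alphas, sigma
```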
EURASIP Journal on Advances in Signal Processing, 2016
This paper presents a model-selection strategy based on minimum description length (MDL) that keeps the kernel least-mean-square (KLMS) model tuned to the complexity of the input data. The proposed KLMS-MDL filter adapts its model order as well as its coefficients online, behaving as a self-organizing system and achieving a good compromise between system accuracy and computational complexity without a priori knowledge. Particularly, in a nonstationary scenario, the model order of the proposed algorithm changes continuously with the input data structure. Experiments show the proposed algorithm successfully builds compact kernel adaptive filters with better accuracy than KLMS with sparsity or fixed-budget algorithms.
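As a rough illustration of the trade-off the KLMS-MDL filter optimizes, the two-part code below scores a model by its data misfit plus a complexity penalty that grows with the number of kernel centers. This BIC-style surrogate and the accept/reject rule are assumptions for the sketch, not the paper's exact criterion.

```python
import numpy as np

def description_length(sq_errors, k):
    """Two-part code length: data misfit plus model complexity."""
    n = len(sq_errors)
    mse = max(float(np.mean(sq_errors)), 1e-12)
    return 0.5 * n * np.log(mse) + 0.5 * k * np.log(n)

# A candidate center is kept only if it shortens the total description:
# keep = description_length(err_with, k + 1) < description_length(err_without, k)
```

Because the penalty term depends on the data seen so far, the preferred model order can rise or fall as the input statistics change, which is the self-organizing behavior the abstract describes.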
2021 IEEE Statistical Signal Processing Workshop (SSP), 2021
In this paper, two new multi-output kernel adaptive filtering algorithms are developed that exploit the temporal and spatial correlations among the input-output multivariate time series. They are multi-output versions of the popular kernel least mean squares (KLMS) algorithm with two different sparsification criteria. The first one, denoted as MO-QKLMS, uses the coherence criterion in order to limit the dictionary size. The second one, denoted as MO-RFF-KLMS, uses random Fourier features (RFF) to approximate the kernel functions by linear inner products. Simulation results with synthetic and real data are presented to assess convergence speed, steady-state performance and complexities of the proposed algorithms.
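The RFF construction is standard, so a sketch is easy to give: draw frequencies from the Gaussian kernel's spectral density, map inputs through random cosines, and run plain LMS in the resulting fixed-dimensional feature space. The single-output version below omits the multi-output coupling that MO-RFF-KLMS adds.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_rff(dim, D, sigma):
    """Random Fourier features approximating the Gaussian kernel
    exp(-||x-y||^2 / (2 sigma^2)) by the inner product z(x)^T z(y)."""
    W = rng.normal(scale=1.0 / sigma, size=(D, dim))  # spectral samples
    b = rng.uniform(0.0, 2 * np.pi, size=D)
    return lambda x: np.sqrt(2.0 / D) * np.cos(W @ x + b)

def rff_klms(X, y, D=200, sigma=1.0, mu=0.1):
    """With fixed random features, KLMS reduces to linear LMS in R^D,
    so the per-sample cost is O(D) instead of growing with the data."""
    z = make_rff(X.shape[1], D, sigma)
    w = np.zeros(D)
    errors = []
    for x, t in zip(X, y):
        zx = z(x)
        e = t - w @ zx
        w += mu * e * zx
        errors.append(e)
    return w, np.array(errors)
```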
Neural Computation, 2013
This review examines kernel methods for online learning, in particular multiclass classification. We examine margin-based approaches, stemming from Rosenblatt's original perceptron algorithm, as well as nonparametric probabilistic approaches based on the popular Gaussian process framework. We also examine approaches to online learning that use combinations of kernels, namely online multiple kernel learning. We present empirical validation of a wide range of methods on a protein fold recognition data set, where different biological feature types are available, and on two object recognition data sets, Caltech101 and Caltech256, where multiple feature spaces are available in terms of different image feature extraction methods. Neural Computation 25, 567-625 (2013). © 2013 Massachusetts Institute of Technology.
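As a reference point for the margin-based family the review begins with, here is a binary kernelized perceptron in a few lines; the multiclass algorithms surveyed extend this mistake-driven scheme with one prototype set per class. The Gaussian kernel and single-pass training are illustrative choices.

```python
import numpy as np

def kernel_perceptron(X, y, kernel, epochs=1):
    """Online kernelized perceptron for labels y in {-1, +1}.
    A mistake-driven update: store the sample only when the current
    score has the wrong sign (or is zero)."""
    alphas, centers = [], []
    for _ in range(epochs):
        for x, t in zip(X, y):
            s = sum(a * kernel(c, x) for a, c in zip(alphas, centers))
            if t * s <= 0:          # mistake: add a new support point
                alphas.append(t)
                centers.append(x)
    return alphas, centers

# Example kernel (Gaussian, bandwidth s):
gauss = lambda a, b, s=1.0: np.exp(-np.sum((a - b) ** 2) / (2 * s ** 2))
```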
Neurocomputing, 2014
The optimality of the kernel number and kernel centers plays a significant role in determining the approximation power of nearly all kernel methods. However, choosing optimal kernels is usually formulated as a global optimization task, which is hard to accomplish. Recently, an algorithm named improved recursive reduced least squares support vector regression (IRR-LSSVR) was proposed for establishing a global nonparametric offline model; it demonstrates a significant advantage over other methods in choosing fewer, more representative support vectors. Inspired by the IRR-LSSVR, a new adaptive parametric kernel method called WV-LSSVR is proposed in this paper, using the same type of kernels and the same centers as the IRR-LSSVR. Furthermore, inspired by multikernel semiparametric support vector regression, the effect of the kernel extension is investigated in a recursive regression framework, and a recursive kernel method called GPK-LSSVR is proposed using a compound type of kernel of the kind recommended for Gaussian process regression. Numerical experiments on benchmark data sets confirm the validity and effectiveness of the presented algorithms. The WV-LSSVR algorithm shows higher approximation accuracy than the recursive parametric kernel method using centers calculated by the k-means clustering approach. The extended recursive kernel method (i.e., GPK-LSSVR) shows no advantage in global approximation accuracy when the test data set is validated without real-time updating, but it can increase modeling accuracy when real-time identification is involved.
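For readers unfamiliar with compound kernels of the kind used in Gaussian process regression (and in GPK-LSSVR), the sketch below sums a stationary RBF term and a non-stationary linear term. The particular components and weights are illustrative assumptions, not the paper's.

```python
import numpy as np

def rbf(a, b, sigma=1.0):
    """Stationary Gaussian (RBF) component."""
    return np.exp(-np.sum((a - b) ** 2) / (2 * sigma ** 2))

def linear(a, b, c=1.0):
    """Non-stationary linear component with bias c."""
    return np.dot(a, b) + c

def compound(a, b, w=(1.0, 0.1)):
    """Weighted sum of the two components; any nonnegative combination
    of positive-definite kernels is again positive definite."""
    return w[0] * rbf(a, b) + w[1] * linear(a, b)
```

The design rationale is that the RBF term captures local smooth structure while the linear term extrapolates global trends, which a single stationary kernel cannot do.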
2013 IEEE Digital Signal Processing and Signal Processing Education Meeting (DSP/SPE), 2013
Kernel adaptive filtering is a growing field of signal processing concerned with nonlinear adaptive filtering. When implemented naïvely, the time and memory complexities of these algorithms grow at least linearly with the amount of data processed. A large number of practical solutions based on sparsification or pruning mechanisms have been proposed over the last decade. Nevertheless, there is a lack of understanding of their relative merits, which often depend on the data they operate on. We propose to study the quality of the solution as a function of either the time or the memory complexity. We empirically test six different kernel adaptive filtering algorithms on three different benchmark data sets. We make our code available through an open-source toolbox that includes additional algorithms and allows the complexities to be measured explicitly in numbers of floating-point operations and bytes needed, respectively.
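A stripped-down version of the measurement the authors propose might look as follows: after each batch of training samples, record a memory proxy (dictionary size) together with test-set MSE, yielding a quality-versus-complexity curve. The filter object is assumed to expose update, predict, and a centers list, as in the KLMS sketch earlier; the toolbox itself counts floating-point operations and bytes instead of this crude proxy.

```python
import numpy as np

def quality_vs_memory(filt, X_train, y_train, X_test, y_test, every=10):
    """Trace test MSE against dictionary size during online training."""
    sizes, mses = [], []
    for i, (x, t) in enumerate(zip(X_train, y_train)):
        filt.update(x, t)
        if i % every == 0:
            preds = np.array([filt.predict(xt) for xt in X_test])
            sizes.append(len(filt.centers))
            mses.append(float(np.mean((y_test - preds) ** 2)))
    return sizes, mses
```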
2012
Abstract of Dissertation Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy. FROM FIXED TO ADAPTIVE BUDGET ROBUST KERNEL ADAPTIVE FILTERING. By Songlin Zhao, December 2012. Chair: Jose C. Principe. Major: Electrical and Computer Engineering. Recently, owing to their universal modeling capacity, convex performance surface, and modest computational complexity, kernel adaptive filters have attracted more and more attention. Even though these methods achieve powerful classification and regression performance on complicated nonlinear problems, they have drawbacks. This work focuses on improving kernel adaptive filters' performance in both accuracy and computational complexity. After reviewing some existing adaptive filter cost functions, we introduce an information-theoretic objective function, the Maximal Correntropy Criterion (MCC), that contains higher-order statistical information. Here we propose to adopt t...
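The MCC objective has a simple stochastic-gradient form. For a linear filter, ascending E[exp(-e^2 / 2 sigma^2)] yields the LMS update scaled by a Gaussian of the error, so large (likely outlier) errors barely move the weights; that weighting is the source of the robustness the dissertation exploits. The linear case is shown for brevity, as a sketch rather than the dissertation's full kernel algorithm.

```python
import numpy as np

def mcc_lms(X, d, mu=0.1, sigma=1.0):
    """Linear adaptive filter trained under the correntropy criterion:
    the factor exp(-e^2 / 2 sigma^2) shrinks updates driven by large
    errors, making the filter robust to impulsive noise (sketch)."""
    w = np.zeros(X.shape[1])
    for x, t in zip(X, d):
        e = t - w @ x
        w += mu * np.exp(-e ** 2 / (2 * sigma ** 2)) * e * x
    return w
```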
Signal Processing, 2019
In the last decade, a considerable research effort has been devoted to developing adaptive algorithms based on kernel functions. One of the main features of these algorithms is that they form a family of universal approximation techniques, solving problems with nonlinearities elegantly. In this paper, we present data-selective adaptive kernel normalized least-mean-square (KNLMS) algorithms that can increase their learning rate and reduce their computational complexity. These methods deal with kernel expansions, creating a growing structure known as the dictionary, whose size depends on the number of observations and their innovation. The algorithms described herein use an adaptive step size to accelerate learning and can offer an excellent tradeoff between convergence speed and steady-state performance, which allows them to solve nonlinear filtering and estimation problems with a large number of parameters without requiring a large computational cost. The data-selective update scheme also limits the number of operations performed and the size of the dictionary created by the kernel expansion, saving computational resources and addressing one of the major problems of kernel adaptive algorithms. A statistical analysis is carried out along with a computational complexity analysis of the proposed algorithms. Simulations show that the proposed KNLMS algorithms outperform existing algorithms in examples of nonlinear system identification and prediction of a time series originating from a nonlinear difference equation.
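Here is a compact sketch of the two gatekeeping mechanisms such algorithms combine, with illustrative thresholds: a set-membership test skips the update when the error is already small, and a coherence test admits a new dictionary center only when it is sufficiently novel. A normalized step then updates the coefficients over the current dictionary.

```python
import numpy as np

def ds_knlms(X, y, sigma=1.0, mu=0.5, delta0=0.5, ebar=0.1, eps=1e-6):
    """Data-selective KNLMS sketch. Thresholds delta0 (coherence) and
    ebar (error bound) are illustrative, not the paper's values."""
    kern = lambda a, b: np.exp(-np.sum((a - b) ** 2) / (2 * sigma ** 2))
    centers, a = [X[0]], np.array([mu * y[0]])
    for x, t in zip(X[1:], y[1:]):
        k = np.array([kern(c, x) for c in centers])
        e = t - a @ k
        if abs(e) <= ebar:
            continue                      # no innovation: skip the update
        if k.max() <= delta0:             # coherence: center is novel
            centers.append(x)
            k = np.append(k, 1.0)         # kern(x, x) = 1 for the Gaussian
            a = np.append(a, 0.0)
        a += mu * e * k / (eps + k @ k)   # normalized coefficient update
    return centers, a
```

Skipped updates cost only one kernel evaluation per dictionary entry, which is where the computational savings of the data-selective scheme come from.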