2002
This paper describes an on-line method for building ε-insensitive support vector machines for regression as described in (Vapnik, 1995). The method is an extension of the method developed by (Cauwenberghs & Poggio, 2000) for building incremental support vector machines for classification. Machines obtained by using this approach are equivalent to the ones obtained by applying exact methods like quadratic programming, but they are obtained more quickly and allow the incremental addition of new points, removal of existing points and update of target values for existing data. This development opens the application of SVM regression to areas such as on-line prediction of temporal series or generalization of value functions in reinforcement learning.
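As a quick illustration (not taken from the paper itself), the ε-insensitive loss on which this regression formulation rests can be sketched in a few lines; the value of eps below is purely illustrative:

```python
def eps_insensitive_loss(y_true, y_pred, eps=0.1):
    """Vapnik's epsilon-insensitive loss: zero for errors inside the
    eps-tube, linear for errors outside it."""
    return max(0.0, abs(y_true - y_pred) - eps)

# A prediction within the tube costs nothing; one outside it is
# penalized only by the amount it exceeds eps.
print(eps_insensitive_loss(1.0, 1.05))        # inside the tube
print(eps_insensitive_loss(1.0, 1.4))         # 0.4 error, 0.1 forgiven
```

Points with zero loss contribute nothing to the solution, which is what makes the support-vector expansion sparse.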
Proceedings of the 9th International Conference on Neural Information Processing, 2002. ICONIP '02., 2002
Support Vector Machines are a general formulation for machine learning that has been shown to perform extremely well on a number of classification and regression problems. However, in many difficult problems the system dynamics may change with time, and new information arriving incrementally provides additional data. At present, there is limited work on coping with the computational demands of modeling time-varying systems. We therefore develop the concept of adaptive support vector machines that can learn from incremental data. In this paper, results are provided to demonstrate the applicability of adaptive support vector machine techniques to pattern classification and regression problems.
2007
Many approaches for obtaining systems with intelligent behavior are based on components that learn automatically from previous experience. The development of these learning techniques is the objective of the area of research known as machine learning. During the last decade, researchers have produced numerous outstanding advances in this area, boosted by the successful application of machine learning techniques. This thesis presents one of these techniques, an online version of the training algorithm for the support vector machine for regression, and shows how it has been extended to make hyperparameter estimation more flexible. Furthermore, the algorithm has been compared with a batch implementation.
2008
In this work we derive a new on-line parametric model for time series forecasting based on Vapnik-Chervonenkis (VC) theory. Using the strong connection between support vector machines (SVM) and regularization theory (RT), we propose a regularization operator in order to obtain a suitable expansion of radial basis functions (RBFs), with the corresponding expressions for updating the neural parameters. This operator seeks the "flattest" function in a feature space, minimizing the risk functional. Finally, we mention some modifications and extensions that can be applied to control neural resources and to select a relevant input space, in order to avoid the high computational effort of batch learning.
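The model class produced by such an approach is a weighted expansion of radial basis functions. A minimal sketch of evaluating one (centers, weights and sigma here are illustrative placeholders, not values from the paper):

```python
import math

def rbf_expansion(x, centers, weights, sigma=1.0):
    """Evaluate f(x) = sum_i w_i * exp(-(x - c_i)^2 / (2*sigma^2)),
    a one-dimensional RBF expansion."""
    return sum(w * math.exp(-(x - c) ** 2 / (2.0 * sigma ** 2))
               for c, w in zip(centers, weights))

# At a center with weight 1 and no other contribution, f equals 1.
print(rbf_expansion(0.0, centers=[0.0], weights=[1.0]))
```

On-line schemes of this kind typically add, adjust or prune the (center, weight) pairs as new samples arrive.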
2003
In this paper, the problem of simultaneously approximating a function and its derivative is formulated within the Support Vector Machine (SVM) framework. The problem has been solved by using the ε-insensitive loss function and introducing new linear constraints in the approximation of the derivative. The resulting quadratic problem can be solved by Quadratic Programming (QP) techniques. Moreover, a computationally efficient Iterative Re-Weighted Least Square (IRWLS) procedure has been derived to solve the problem in large data sets. The performance of the method has been compared with the conventional SVM for regression, providing outstanding results.
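To make the idea of simultaneous approximation concrete, here is a toy stand-in (ordinary least squares rather than the paper's ε-insensitive QP): the derivative samples enter as extra linear rows of the same system, exactly as extra linear constraints enter the SVM formulation. All data values below are invented for illustration:

```python
import numpy as np

# Jointly fit f(x) = a*x + b to samples of both the function and its
# derivative. Derivative rows constrain the slope a directly.
x  = np.array([0.0, 1.0, 2.0])
y  = np.array([1.0, 3.0, 5.0])   # function samples of f(x) = 2x + 1
dy = np.array([2.0, 2.0, 2.0])   # derivative samples, f'(x) = 2

A_f = np.column_stack([x, np.ones_like(x)])                  # f(x_i) = a*x_i + b
A_d = np.column_stack([np.ones_like(x), np.zeros_like(x)])   # f'(x_i) = a
A   = np.vstack([A_f, A_d])
t   = np.concatenate([y, dy])

(a, b), *_ = np.linalg.lstsq(A, t, rcond=None)
print(a, b)   # slope and intercept recovered from the stacked system
```

The SVM version replaces the squared loss with the ε-insensitive one and solves the resulting QP, but the stacking of value and derivative equations is the same.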
Advances in neural information …, 2001
An on-line recursive algorithm for training support vector machines, one vector at a time, is presented. Adiabatic increments retain the Kuhn-Tucker conditions on all previously seen training data, in a number of steps each computed analytically. The incremental procedure is reversible, and decremental "unlearning" offers an efficient method to exactly evaluate leave-one-out generalization performance. Interpretation of decremental unlearning in feature space sheds light on the relationship between generalization and geometry of the data.
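The invariant the adiabatic increments maintain is that every training point satisfies the soft-margin KKT conditions. A small checker for that partition (thresholds and tolerance are illustrative, not from the paper):

```python
def kkt_violations(alphas, margins, C=1.0, tol=1e-6):
    """Return indices of points violating the soft-margin SVM KKT
    conditions, where margins[i] = y_i * f(x_i). The incremental
    procedure keeps this list empty after every update."""
    bad = []
    for i, (a, g) in enumerate(zip(alphas, margins)):
        if a < tol and g < 1.0 - tol:                       # alpha = 0   => margin >= 1
            bad.append(i)
        elif tol <= a <= C - tol and abs(g - 1.0) > tol:    # 0 < a < C   => margin == 1
            bad.append(i)
        elif a > C - tol and g > 1.0 + tol:                 # alpha = C   => margin <= 1
            bad.append(i)
    return bad

# A non-SV, a margin SV, and an error SV, all KKT-consistent:
print(kkt_violations([0.0, 0.5, 1.0], [1.5, 1.0, 0.4]))
```

Decremental "unlearning" runs the same bookkeeping in reverse, which is what makes exact leave-one-out evaluation cheap.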
2013
Support vector-based learning methods are an important part of computational intelligence techniques. Recent efforts have dealt with the problem of learning from very large datasets. This paper reviews the most commonly used formulations of support vector machines for regression (SVRs), aiming to emphasize their usability in large-scale applications. We review the general concept of support vector machines (SVMs), address the state of the art in training methods for SVMs, and explain the fundamental principle of SVRs. The most common learning methods for SVRs are introduced, and linear programming-based SVR formulations are explained, emphasizing their suitability for large-scale learning. Finally, this paper also discusses some open problems and current trends.
1999
In this report we show that the ε-tube size in Support Vector Machine (SVM) for regression is 2ε/√(1 + ||w||²). Using this result we show that, in the case where all the data points are inside the ε-tube, minimizing ||w||² in SVM for regression is equivalent to maximizing the distance between the approximating hyperplane and the farthest points in the training set. Moreover, in the most general setting, in which the data points also live outside the ε-tube, we show that, for a fixed value of ε, minimizing ||w||² is equivalent to maximizing the sparsity of the representation of the optimal approximating hyperplane, that is, equivalent to minimizing the number of coefficients different from zero in the expression of the optimal w. Thus, the solution found by SVM for regression is a trade-off between sparsity of the representation and closeness to the data. We also include a complete derivation of SVM for regression in the case of linear approximation.
Lecture Notes in Computer Science, 2001
In this paper, we propose and study a new on-line algorithm for learning an SVM based on the radial basis function kernel: Local Incremental Learning of SVM, or LISVM. Our method exploits the "locality" of RBF kernels to update the current machine by considering only a subset of support candidates in the neighbourhood of the input. The determination of this subset is conditioned by the computation of the variation of the error estimate. The implementation is based on the SMO algorithm introduced and developed by Platt [13]. We study the behaviour of the algorithm during learning when using different generalization error estimates. Experiments on three data sets (batch problems transformed into on-line ones) have been conducted and analyzed.
Computational Statistics, 2014
According to statistical learning theory, the support vectors represent the most informative data points and compress the information contained in the training set. However, a basic problem with the standard support vector machine is that, when the data is noisy, there is no guaranteed scheme in its formulation to dissuade the machine from learning the noise. Therefore, the noise that is typically present in financial time series data may be taken up as support vectors; in turn, noisy support vectors are built into the estimated function. As such, the inclusion of noise in the support vectors may lead to over-fitting and, in turn, to poor generalization. The standard support vector regression (SVR) is reformulated in this article in such a way that the large errors which correspond to noise are restricted by a new parameter E. The simulation and real-world experiments indicate that the novel SVR machine performs meaningfully better than the standard SVR in terms of accuracy and precision, especially where the data is noisy, but at the expense of a longer computation time.
Neural networks : the official journal of the International Neural Network Society, 2015
The ν-Support Vector Regression (ν-SVR) is an effective regression learning algorithm, which has the advantage of using a parameter ν to control the number of support vectors and to adjust the width of the tube automatically. However, compared to ν-Support Vector Classification (ν-SVC) (Schölkopf et al., 2000), ν-SVR introduces an additional linear term into its objective function, so directly applying the accurate on-line ν-SVC algorithm (AONSVM) to ν-SVR will not generate an effective initial solution. This is the main challenge in designing an incremental ν-SVR learning algorithm. To overcome this challenge, we propose a special procedure called initial adjustments in this paper. This procedure adjusts the weights of ν-SVC based on the Karush-Kuhn-Tucker (KKT) conditions to prepare an initial solution for the incremental learning. Combining the initial adjustments with the two steps of AONSVM produces an exact and effective incremental ν-SVR learning algorithm (INSVR). Theoreti...
European Journal of Control, 2001
In recent years, neural networks such as multilayer perceptrons and radial basis function networks have been frequently used in a wide range of fields, including control theory, signal processing and nonlinear modelling. A promising new methodology is Support Vector Machines (SVM), originally introduced by Vapnik within the area of statistical learning theory and structural risk minimization. SVM approaches to classification, nonlinear function estimation and density estimation lead to convex optimization problems, typically quadratic programming. However, due to their non-parametric nature, the present SVM methods have been basically restricted to static problems. We discuss a method of least squares support vector machines (LS-SVM), which has been extended to recurrent models and to use in optimal control problems. We explain how robust nonlinear estimation and sparse approximation can be done by means of this kernel-based technique. A short overview of hyperparameter tuning methods is given. SVM methods are able to learn and generalize well in large-dimensional input spaces and have outperformed many existing methods on benchmark data sets. Their full potential in a dynamical systems and control context remains to be explored.
1999
In this report we show some consequences of the work done by Pontil et al. in [1]. In particular, we show that under the same hypotheses as the theorem proved in their paper, the optimal approximating hyperplane f_R found by SVM regression classifies the data. This means that y_i f_R(x_i) > 0 for points which live externally to the margin between the two classes, or points which live internally to the margin but are correctly classified by SVM classification. Moreover, y_i f_R(x_i) < 0 for incorrectly classified points. Finally, the zero-level curve of the optimal approximating hyperplane determined by SVMR and the optimal separating hyperplane determined by SVMC coincide.
Lecture Notes in Computer Science, 2007
We present a method to find the exact maximal margin hyperplane for linear Support Vector Machines when a new (existing) component is added (removed) to (from) the inner product. The maximal margin hyperplane with the new inner product is obtained in terms of that for the old inner product, without re-computing it from scratch and the procedure is reversible. An algorithm to implement the proposed method is presented, which avoids matrix inversions from scratch. Among the possible applications, we find feature selection and the design of kernels out of similarity measures.
Proceedings of the International Joint Conference on Neural Networks, 2003., 2003
The objective of machine learning is to identify a model that yields good generalization performance. This involves repeatedly selecting a hypothesis class, searching the hypothesis class by minimizing a given objective function over the model's parameter space, and evaluating the generalization performance of the resulting model. This search can be computationally intensive as training data continuously arrives, or as one needs to tune hyperparameters in the hypothesis class and the objective function. In this paper, we present a framework for exact incremental learning and adaptation of support vector machine (SVM) classifiers. The approach is general and allows one to learn and unlearn individual or multiple examples, adapt the current SVM to changes in regularization and kernel parameters, and evaluate generalization performance through exact leave-one-out error estimation.