2010, The 2010 International Joint Conference on Neural Networks (IJCNN)
Support Vector Machines (SVMs) with various kernels have played a dominant role in machine learning for many years, finding numerous applications. Although they have many attractive features, interpretation of their solutions is quite difficult, a single kernel type may not be appropriate in all areas of the input space, convergence problems for some kernels are not uncommon, and the standard quadratic programming solution has O(m³) time and O(m²) space complexity for m training patterns. Kernel methods work because they implicitly provide new, useful features. Such features, derived from various kernels and other vector transformations, may be used directly in any machine learning algorithm, facilitating multiresolution, heterogeneous models of data. Therefore Support Feature Machines (SFMs), based on linear models in the extended feature spaces and enabling control over the selection of support features, give at least as good results as any kernel-based SVM while removing all problems related to interpretation, scaling and convergence. This is demonstrated for a number of benchmark datasets analyzed with linear discrimination, SVMs, decision trees and nearest neighbor methods.
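A minimal sketch of the support-feature idea, assuming scikit-learn (the dataset, the RBF gamma and the L1-regularized logistic regression are illustrative choices, not the authors' SFM algorithm): kernel evaluations against training points are exposed as explicit features for an ordinary sparse linear model, rather than used implicitly inside an SVM.

```python
# Sketch: kernel-derived "support features" fed to a plain linear model.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
scaler = StandardScaler().fit(X_tr)
X_tr, X_te = scaler.transform(X_tr), scaler.transform(X_te)

# Each column of the kernel matrix is one candidate support feature:
# the similarity of a sample to one particular training point.
Z_tr = rbf_kernel(X_tr, X_tr, gamma=0.01)
Z_te = rbf_kernel(X_te, X_tr, gamma=0.01)

# An L1 penalty lets the linear model select few kernel features explicitly.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=1.0)
clf.fit(Z_tr, y_tr)
print("test accuracy:", clf.score(Z_te, y_te))
print("support features kept:", np.count_nonzero(clf.coef_))
```

Because the features are explicit, the selected columns can be inspected directly, which is the interpretability advantage the abstract claims over implicit kernel machines.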
2021
A support vector machine (SVM) is a state-of-the-art machine learning model rooted in structural risk minimization. SVM is underestimated with regard to its application to real-world problems because of the difficulties associated with its use. We aim to show that the performance of SVM depends strongly on which kernel function is used. To achieve this, after providing a summary of support vector machines and kernel functions, we constructed experiments with various benchmark datasets to compare the performance of various kernel functions. For evaluating the performance of SVM, the F1-score and its standard deviation under 10-fold cross-validation were used. Furthermore, we used Taylor diagrams to reveal the differences between kernels. Finally, we provided Python code for all our experiments to enable re-implementation of the experiments. 1. Motivation and Goal: SVMs are state-of-the-art machine learning techniques with their roots in structural risk minimization [57, 58]. Additionally, SVM...
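A small sketch of the kind of comparison described, assuming scikit-learn; the dataset and default kernel parameters are illustrative, not those used in the paper.

```python
# Sketch: compare SVM kernels by F1-score under 10-fold cross-validation.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)

for kernel in ["linear", "poly", "rbf", "sigmoid"]:
    model = make_pipeline(StandardScaler(), SVC(kernel=kernel))
    scores = cross_val_score(model, X, y, cv=10, scoring="f1")
    print(f"{kernel:8s}  F1 = {scores.mean():.3f} +/- {scores.std():.3f}")
```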
Bulletin of Electrical Engineering and Informatics, 2021
Artificial intelligence (AI) and machine learning (ML) have influenced every part of our day-to-day activities in this era of technological advancement, making life more comfortable. Among the several AI and ML algorithms, the support vector machine (SVM) has become one of the most widely used algorithms for data mining, prediction and other AI and ML activities in several domains. The SVM's performance is significantly centred on the kernel function (KF); nonetheless, there is no universally accepted ground for selecting an optimal KF for a specific domain. In this paper, we empirically investigate the effect of different KFs on SVM performance in various fields. We illustrate the performance of the SVM with different KFs through extensive experimental results. Our empirical results show that no single KF is always suitable for achieving high accuracy and generalisation in all domains. However, the Gaussian radial basis function (RBF) kernel is often the default choice. Also, if the KF parameters of the RBF and exponential RBF are optimised, they outperform the linear and sigmoid KF-based SVM methods in terms of accuracy. Besides, the linear KF is more suitable for linearly separable datasets.
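The finding that a tuned RBF kernel tends to win suggests the usual grid search over its parameters; a hedged sketch follows, assuming scikit-learn (the dataset and grid values are illustrative, not the paper's).

```python
# Sketch: tune the RBF kernel parameters (C, gamma) by cross-validated grid search.
from sklearn.datasets import load_wine
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_wine(return_X_y=True)

pipe = Pipeline([("scale", StandardScaler()), ("svc", SVC(kernel="rbf"))])
grid = {"svc__C": [0.1, 1, 10, 100], "svc__gamma": [1e-3, 1e-2, 1e-1, 1]}

search = GridSearchCV(pipe, grid, cv=5, scoring="accuracy")
search.fit(X, y)
print("best params:     ", search.best_params_)
print("best CV accuracy:", round(search.best_score_, 3))
```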
Wiley Encyclopedia of Operations Research and Management Science, 2010
Kernel methods and support vector machines have become among the most popular learning-from-examples paradigms. Several areas of applied research make use of SVM approaches, for instance handwritten character recognition, text categorization, face detection, pharmaceutical data analysis and drug design. Adapted SVMs have also been proposed for time series forecasting and, in computational neuroscience, as a tool for detecting symmetry when eye movement is connected with attention and visual perception. The aim of the paper is to investigate the potential of SVMs in solving classification and regression tasks, as well as to analyze the computational complexity of the different methodologies for solving the series of arising sub-problems.
Statistical Science, 2006
Support vector machines (SVMs) appeared in the early nineties as optimal margin classifiers in the context of Vapnik's statistical learning theory. Since then SVMs have been successfully applied to real-world data analysis problems, often providing improved results compared with other techniques. The SVMs operate within the framework of regularization theory by minimizing an empirical risk in a well-posed and consistent way. A clear advantage of the support vector approach is that sparse solutions to classification and regression problems are usually obtained: only a few samples are involved in the determination of the classification or regression functions. This fact facilitates the application of SVMs to problems that involve a large amount of data, such as text processing and bioinformatics tasks. This paper is intended as an introduction to SVMs and their applications, emphasizing their key features. In addition, some algorithmic extensions and illustrative real-world applications of SVMs are shown.
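The sparsity property described above is easy to observe in practice; a quick illustration, assuming scikit-learn (synthetic data, default parameters):

```python
# Sketch: the fitted decision function depends only on the support vectors,
# usually a small fraction of the training set.
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X, y)

print("training samples:", len(X))
print("support vectors: ", clf.n_support_.sum())  # per-class counts summed
```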
IRJET, 2022
The support vector machine (SVM) can outperform many other learning algorithms in terms of accuracy and other performance metrics, thanks to its projection of data into a high-dimensional space for classification. Nevertheless, the performance of the support vector machine is greatly affected by the choice of the kernel function that performs this projection. This paper discusses the working of SVM and its dependency on the kernel function, along with an explanation of the types of kernels. The focus is on choosing the optimal kernel for three datasets that vary in the number of features and classes, to conclude the optimal choice of kernel for each of the three datasets. For performance measures, we used metrics such as accuracy, kappa, specificity and sensitivity. This study statistically examines and compares each type of kernel against the mentioned metrics.
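For one kernel, the four metrics named above can be computed as in this sketch, assuming scikit-learn and a binary dataset (data and split are illustrative choices):

```python
# Sketch: accuracy, Cohen's kappa, sensitivity and specificity for an RBF SVM.
from sklearn.datasets import load_breast_cancer
from sklearn.metrics import accuracy_score, cohen_kappa_score, confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

pred = SVC(kernel="rbf", gamma="scale").fit(X_tr, y_tr).predict(X_te)

tn, fp, fn, tp = confusion_matrix(y_te, pred).ravel()
print("accuracy:   ", accuracy_score(y_te, pred))
print("kappa:      ", cohen_kappa_score(y_te, pred))
print("sensitivity:", tp / (tp + fn))  # true-positive rate
print("specificity:", tn / (tn + fp))  # true-negative rate
```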
Neurocomputing, 2003
Support vector machines (SVMs) are currently a very active research area within machine learning. Motivated by statistical learning theory, SVMs have been successfully applied to numerous tasks, among others in data mining, computer vision, and bioinformatics. SVMs are examples of a broader category of learning approaches which utilize the concept of kernel substitution, which makes the task of learning more tractable by exploiting an implicit mapping into a high-dimensional space. SVMs have many appealing properties for machine learning. For example, the classic SVM learning task involves convex quadratic programming, a problem that does not suffer from the 'local minima' problem and whose solution may easily be found by using one of the many especially efficient algorithms developed for it in optimization theory. Furthermore, recently developed model selection strategies can be applied, so that few, if any, learning parameters need to be set by the operator. Above all, they have been found to work very well in practice.
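For reference, the convex quadratic program in question is the standard soft-margin SVM dual, stated for m training pairs (x_i, y_i) with kernel K and box constraint C:

```latex
\max_{\alpha} \; \sum_{i=1}^{m} \alpha_i
  - \frac{1}{2} \sum_{i,j=1}^{m} \alpha_i \alpha_j \, y_i y_j \, K(x_i, x_j)
\quad \text{s.t.} \quad
0 \le \alpha_i \le C, \qquad \sum_{i=1}^{m} \alpha_i y_i = 0 .
```

Because the objective is concave in alpha and the constraints are linear, any local optimum is global, which is exactly the 'no local minima' property the abstract highlights.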
Data Mining and Knowledge Discovery, 1998
The tutorial starts with an overview of the concepts of VC dimension and structural risk minimization. We then describe linear Support Vector Machines (SVMs) for separable and non-separable data, working through a non-trivial example in detail. We describe a mechanical analogy, and discuss when SVM solutions are unique and when they are global. We describe how support vector training can be practically implemented, and discuss in detail the kernel mapping technique which is used to construct SVM solutions which are nonlinear in the data. We show how Support Vector machines can have very large (even infinite) VC dimension by computing the VC dimension for homogeneous polynomial and Gaussian radial basis function kernels. While very high VC dimension would normally bode ill for generalization performance, and while at present there exists no theory which shows that good generalization performance is guaranteed for SVMs, there are several arguments which support the observed high accuracy of SVMs, which we review. Results of some experiments which were inspired by these arguments are also presented. We give numerous examples and proofs of most of the key theorems. There is new material, and I hope that the reader will find that even old material is cast in a fresh light.
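The two kernel families whose VC dimension the tutorial computes can be written explicitly (d is the polynomial degree, sigma the RBF width):

```latex
K_{\text{poly}}(x, x') = (x \cdot x')^{d},
\qquad
K_{\text{RBF}}(x, x') = \exp\!\left(-\frac{\lVert x - x' \rVert^{2}}{2\sigma^{2}}\right).
```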
Wiley Interdisciplinary Reviews: Computational Statistics, 2009
Support vector machines (SVMs) are a family of machine learning methods, originally introduced for the problem of classification and later generalized to various other situations. They are based on principles of statistical learning theory and convex optimization, and are currently used in various domains of application, including bioinformatics, text categorization, and computer vision.
Support Vector Machines have acquired a central position in the field of Machine Learning and Pattern Recognition in the past decade and are known to deliver state-of-the-art performance in applications such as text categorization, hand-written character recognition, bio-sequence analysis, etc. In this article we provide a gentle introduction to the workings of Support Vector Machines (also known as SVMs) and attempt to provide some insight into the learning mechanisms involved. We begin with a general introduction to mathematical learning and move on to discuss the learning framework used by the SVM architecture.
18th International Conference on Pattern Recognition (ICPR'06), 2006
In many classification applications, Support Vector Machines (SVMs) have proven to be high-performing and easy-to-handle classifiers with very good generalization abilities. However, one drawback of the SVM is its rather high classification complexity, which scales linearly with the number of Support Vectors (SVs). This is due to the fact that for the classification of one sample, the kernel function has to be evaluated for all SVs. To speed up classification, different approaches have been published, most of which try to reduce the number of SVs. In our work, which is especially suitable for very large datasets, we follow a different approach: as we showed in [12], it is possible to approximate large SVM problems effectively by decomposing the original problem into linear subproblems, where each subproblem can be evaluated in O(1). This approach is especially successful when the assumption holds that a large classification problem can be split into mainly easy and only a few hard subproblems. On standard benchmark datasets, this approach achieved great speedups while suffering only slightly in terms of classification accuracy and generalization ability. In this contribution, we extend the methods introduced in [12] using not only linear but also non-linear subproblems for the decomposition of the original problem, which further increases the classification performance with only a small loss in terms of speed. An implementation of our method is available in [13]. Due to page limitations, we had to move some of the theoretical details (e.g. proofs) and extensive experimental results to a technical report [14].
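A hedged sketch of a two-stage cascade in the same spirit, assuming scikit-learn; this is not the decomposition algorithm of [12], merely an illustration of the easy/hard split: a cheap linear SVM answers samples far from its boundary, and only the remaining hard samples are passed to the slower kernel SVM.

```python
# Sketch: confident linear decisions are kept; hard samples fall through
# to the kernel SVM, so its expensive evaluation runs on few samples.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC, LinearSVC

X, y = make_classification(n_samples=3000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

linear = LinearSVC(dual=False).fit(X_tr, y_tr)
kernel = SVC(kernel="rbf", gamma="scale").fit(X_tr, y_tr)

margin = linear.decision_function(X_te)
easy = np.abs(margin) > 1.0                # confident linear decisions
pred = (margin > 0).astype(int)            # linear answer everywhere...
pred[~easy] = kernel.predict(X_te[~easy])  # ...overridden on hard samples

print("fraction handled linearly:", easy.mean())
print("cascade accuracy:         ", (pred == y_te).mean())
```

The margin threshold of 1.0 is an assumed tuning knob: raising it sends more samples to the kernel stage, trading speed for accuracy.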
Support vector machines (SVMs) are machines that build support vectors for the classification process. SVMs can formulate both linear and non-linear decision boundaries with good generalization ability, and they are based on Statistical Learning Theory (SLT). Classification using SVMs amounts to solving an optimization problem, because the strength of SVMs lies in optimization techniques. At present SVMs are an active research area and a powerful tool for most machine learning tasks. The optimization problems involved are not only convex but also non-convex, such as semi-infinite programming, bi-level programming and integer programming. The goal of this paper is to thoroughly review SVMs from the optimization point of view. Examining the many aspects of SVM optimization problems makes it easier to understand and apply this popular data mining technique. Two themes are covered: the first surveys SVM models and algorithms for standard classification from the optimization point of view, where the main reason for constructing several optimization problems is to help the SVM generalize well and reduce overfitting; the second presents models and enhancements that make SVMs more accurate and build better, faster and easier-to-understand learning machines. Since research in SVMs and research in optimization have become increasingly coupled, new challenges are systematically explored to construct new optimization models using SVMs.
This set of notes presents the Support Vector Machine (SVM) learning algorithm. SVMs are among the best (and many believe are indeed the best) "off-the-shelf" supervised learning algorithms. To tell the SVM story, we'll need to first talk about margins and the idea of separating data with a large "gap." Next, we'll talk about the optimal margin classifier, which will lead us into a digression on Lagrange duality. We'll also see kernels, which give a way to apply SVMs efficiently in very high dimensional (such as infinite-dimensional) feature spaces, and finally, we'll close off the story with the SMO algorithm, which gives an efficient implementation of SVMs.
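The optimal margin classifier these notes build up to is, in its soft-margin form, the primal problem

```latex
\min_{w,\, b,\, \xi} \;\; \frac{1}{2}\lVert w \rVert^{2} + C \sum_{i=1}^{m} \xi_i
\quad \text{s.t.} \quad
y^{(i)}\bigl(w^{\top} x^{(i)} + b\bigr) \ge 1 - \xi_i, \qquad \xi_i \ge 0,
```

whose Lagrange dual is the quadratic program that SMO solves by optimizing two multipliers at a time.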
2011
Linear support vector machines (SVMs) have become popular for solving classification tasks due to their fast and simple online application to large-scale data sets. However, many problems are not linearly separable. For these problems kernel-based SVMs are often used, but unlike their linear variant they suffer from various drawbacks in terms of computational and memory efficiency.
International Journal of Advanced Computer Science and Applications, 2020
In this paper, we introduce a new classification approach that learns class-dependent Gaussian kernels and the belongingness likelihood of the data points with respect to each class. The proposed Support Kernel Classification (SKC) is designed to characterize and discriminate between the data instances from the different classes. It relies on the maximization of the inter-class distances and the minimization of the intra-class distances to learn the optimal Gaussian parameters. In fact, a novel objective function is proposed to model each class using one Gaussian function. The experiments conducted using synthetic datasets demonstrated the effectiveness of the proposed algorithm. Moreover, the results obtained using real datasets proved that the proposed classifier outperforms the relevant state-of-the-art approaches.
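A heavily hedged sketch of the general idea (one Gaussian per class, classification by highest belongingness); this is not the paper's learned objective, just a simple per-class Gaussian kernel classifier with assumed class means and bandwidths:

```python
# Sketch: model each class with one Gaussian and classify by max response.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

classes = np.unique(y_tr)
means = np.array([X_tr[y_tr == c].mean(axis=0) for c in classes])
# Assumed per-class bandwidth: mean squared distance to the class centre.
widths = np.array([((X_tr[y_tr == c] - means[i]) ** 2).sum(axis=1).mean()
                   for i, c in enumerate(classes)])

# Gaussian "belongingness" of each test point to each class.
d2 = ((X_te[:, None, :] - means[None, :, :]) ** 2).sum(axis=2)
lik = np.exp(-d2 / (2 * widths))
pred = classes[lik.argmax(axis=1)]
print("accuracy:", (pred == y_te).mean())
```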