
SVM slides

Prof [Link] for ML Class


Support Vector Machines for
Learning Linear Discriminants
Maximum Margin Classification
SVM as a 2-class linear classifier
Optimal Separating Hyperplane
(Maximal Margin)
Finding optimal values of parameters
Solving the dual problem: maximize Lp w.r.t. αt, subject to the
gradients of Lp w.r.t. w and w0 being zero
Calculating the parameters w and w0
• Quadratic optimization methods are used to solve for the
Lagrange multipliers αt, most of which turn out to be zero.
• The data points corresponding to the non-zero α's are
identified as the support vectors (SV).

• Once the weight vector w is found, w0 can be computed by
applying the formula below to any of the support vectors.
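A minimal sketch of that step (the slide's formula itself did not survive extraction; this assumes the multipliers α come from a QP solver as described above, labels r in {−1, +1}, inputs X, and the standard support-vector relation r_t (w·x_t + w0) = 1; names are hypothetical):

import numpy as np

def recover_hyperplane(alpha, X, r, tol=1e-8):
    """Recover (w, w0) from solved Lagrange multipliers.

    alpha: (N,) multipliers from the dual QP; X: (N, d) inputs; r: (N,) labels in {-1, +1}.
    """
    sv = alpha > tol                       # non-zero alphas mark the support vectors
    w = (alpha[sv] * r[sv]) @ X[sv]        # w = sum_t alpha_t r_t x_t
    # Any support vector satisfies r_t (w . x_t + w0) = 1, i.e. w0 = r_t - w . x_t;
    # averaging over all support vectors is numerically safer than using a single one.
    w0 = float(np.mean(r[sv] - X[sv] @ w))
    return w, w0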
Classifying query points using
Support vectors
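A hedged sketch of the standard decision rule written over the support vectors only (names are hypothetical; in the linear case the kernel is simply the dot product):

import numpy as np

def classify(x, sv_alpha, sv_r, sv_X, w0):
    """g(x) = sum over SVs of alpha_t r_t (x_t . x) + w0; the predicted label is sign(g)."""
    g = np.sum(sv_alpha * sv_r * (sv_X @ x)) + w0
    return 1 if g >= 0 else -1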
Non-Linearly Separable Training Data
Soft-margin Hyperplane using
slack variables, ξ


• Case 2 deals with misclassified points.
• Case 3 deals with points that are not sufficiently far from
the boundary because they lie inside the margin (see the
sketch after this list).
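A small sketch of how the slack values separate these cases (assuming w, w0 come from the soft-margin solution and labels are in {−1, +1}; Case 1, not listed above, is the standard case of correctly classified points outside the margin with ξ = 0):

import numpy as np

def slack_cases(X, r, w, w0):
    """xi_t = max(0, 1 - r_t (w . x_t + w0)) and the case each training point falls in."""
    margin = r * (X @ w + w0)
    xi = np.maximum(0.0, 1.0 - margin)
    cases = np.where(xi == 0, "case 1: outside the margin",
             np.where(xi > 1, "case 2: misclassified",
                              "case 3: inside the margin"))
    return xi, cases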
Finding Soft Margin Hyperplane
using Slack Variables contd…
Lagrange multipliers are found, giving better generalization
Non-linearly separable problems
Transforming the input feature space
to make it linearly separable
Kernel Machines for classification
Kernel Functions
Examples of Kernel Functions
The new dimensionality k is larger, but
does not affect complexity
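A small sketch of why the larger dimensionality does not matter: a degree-2 polynomial kernel gives the same inner product as the explicit feature map φ, yet is evaluated in the original space (the 2-D example and names are illustrative, not taken from the slides):

import numpy as np

def phi(x):
    """Explicit degree-2 feature map for a 2-D input (dimension grows from 2 to 6)."""
    x1, x2 = x
    return np.array([1.0,
                     np.sqrt(2) * x1, np.sqrt(2) * x2,
                     x1 ** 2, x2 ** 2,
                     np.sqrt(2) * x1 * x2])

def poly_kernel(x, y):
    """Same inner product computed in the original space: K(x, y) = (x . y + 1)^2."""
    return (x @ y + 1.0) ** 2

x, y = np.array([1.0, 2.0]), np.array([3.0, 0.5])
assert np.isclose(phi(x) @ phi(y), poly_kernel(x, y))  # identical value, no explicit mapping needed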
Merits of SVM for Classification
SVMs for Regression
• SVM regression uses the ε-insensitive loss function
defined below:
$e_\varepsilon\big(r^t, f(x^t)\big) = \begin{cases} 0 & \text{if } \lvert r^t - f(x^t) \rvert < \varepsilon \\ \lvert r^t - f(x^t) \rvert - \varepsilon & \text{otherwise} \end{cases}$
• Robust regression: minor errors up to ε are tolerated, and
errors beyond that threshold have a linear effect rather than
the quadratic effect of squared-error metrics. Hence this
error function is more robust to outliers.
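A one-function sketch of this loss (vectorised with NumPy; the name eps_insensitive_loss is hypothetical):

import numpy as np

def eps_insensitive_loss(r, f_x, eps=0.1):
    """e_eps(r, f(x)) = 0 if |r - f(x)| < eps, else |r - f(x)| - eps."""
    abs_err = np.abs(np.asarray(r) - np.asarray(f_x))
    return np.where(abs_err < eps, 0.0, abs_err - eps)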
Soft margin Optimization
equations for SVM Regression

• The soft-margin hyperplane is defined using two slack
variables to account for deviations from the ε-zone.
• As in the classification case, the solution selects some
training points as support vectors, and the regression line
is written as a weighted sum of those support vectors (see
the sketch below).
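A hedged sketch of that last point using scikit-learn's SVR (the library and the toy data are assumptions, not named in the slides); with a linear kernel the prediction is exactly the weighted sum over the fitted support vectors:

import numpy as np
from sklearn.svm import SVR

# Illustrative 1-D toy data
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(80, 1))
y = np.sin(X).ravel() + 0.1 * rng.normal(size=80)

model = SVR(kernel="linear", C=1.0, epsilon=0.1).fit(X, y)

# f(x) = sum over SVs of (alpha_t - alpha_t*) <x_t, x> + w0,
# i.e. the dual coefficients times the kernel values plus the intercept.
x_query = np.array([[0.5]])
manual = model.dual_coef_ @ (model.support_vectors_ @ x_query.T) + model.intercept_
assert np.allclose(manual.ravel(), model.predict(x_query))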
