SVM Classifiers
Outline
• Support Vector Machines for Classification
– Linear Discrimination
– Nonlinear Discrimination
• SVM Mathematically
• Extensions
• Data Classification
• Kernel Functions
Definition
• ‘Support Vector Machine is a system for
efficiently training linear learning machines in
kernel-induced feature spaces, while respecting
the insights of generalisation theory and
exploiting optimisation theory.’
Inner (dot) product: a · b = |a| |b| cos θ
Decision Function
for binary classification
f(x) ∈ ℝ
f(xi) ≥ 0 ⇒ yi = +1
f(xi) < 0 ⇒ yi = −1
Support Vector Machines
• SVMs pick best separating hyperplane according to
some criterion
– e.g. maximum margin
• Training process is an optimisation
• Training set is effectively reduced to a relatively
small number of support vectors
Feature Spaces
• We may separate data by mapping to a higher-
dimensional feature space
– The feature space may even have an infinite
number of dimensions!
• We need not explicitly construct the new feature
space
Kernels
• We may use Kernel functions to implicitly map to a
new feature space
• Kernel function: K(x1, x2) ∈ ℝ
• Kernel must be equivalent to an inner product in
some feature space
Example Kernels
Linear: K(x, z) = xTz
Polynomial: K(x, z) = (xTz + 1)^P
Gaussian: K(x, z) = exp(−‖x − z‖² / 2σ²)
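These can be written down directly. Below is a minimal NumPy sketch of the three example kernels; the polynomial degree P and the Gaussian width σ are free parameters, and the +1 offset in the polynomial kernel is one common convention rather than something fixed by the slide.

```python
import numpy as np

def linear_kernel(x, z):
    """Linear kernel: plain inner product x·z."""
    return np.dot(x, z)

def polynomial_kernel(x, z, p=2):
    """Polynomial kernel of degree p: (x·z + 1)^p."""
    return (np.dot(x, z) + 1.0) ** p

def gaussian_kernel(x, z, sigma=1.0):
    """Gaussian (RBF) kernel: exp(-||x - z||^2 / (2*sigma^2))."""
    diff = np.asarray(x) - np.asarray(z)
    return np.exp(-np.dot(diff, diff) / (2.0 * sigma ** 2))

# Example: evaluate the kernels on two small vectors
x = np.array([1.0, 2.0])
z = np.array([0.5, -1.0])
print(linear_kernel(x, z), polynomial_kernel(x, z), gaussian_kernel(x, z))
```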
Perceptron Revisited: Linear Separators
wTx + b = 0
wTx + b > 0
wTx + b < 0
f(x) = sign(wTx + b)
Which of the linear separators is optimal?
Best Linear Separator?
Find Closest Points in Convex Hulls
d and c are the closest points of the two class convex hulls.
Plane Bisect Closest Points
wTx + b = 0
w = d − c
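A small sketch of the geometry on this slide: given the closest points d and c of the two convex hulls (hypothetical values below), the separating plane has normal w = d − c; placing it through the midpoint (d + c)/2 is one natural choice of b, an assumption added here rather than stated on the slide.

```python
import numpy as np

# Hypothetical closest points of the two class convex hulls
d = np.array([2.0, 1.0])   # closest point of the positive hull
c = np.array([0.0, 0.0])   # closest point of the negative hull

w = d - c                          # normal of the bisecting hyperplane
midpoint = (d + c) / 2.0
b = -np.dot(w, midpoint)           # choose b so the plane passes through the midpoint

# The midpoint lies on the plane w^T x + b = 0, and d, c fall on opposite sides
print(np.dot(w, midpoint) + b)                 # ~0
print(np.dot(w, d) + b, np.dot(w, c) + b)      # positive, negative
```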
Classification Margin
• Distance from an example x to the separator is r = |wTx + b| / ‖w‖
• Examples closest to the hyperplane are support vectors.
• Margin ρ of the separator is the width of separation between the classes.
Maximum Margin Classification
• Maximizing the margin is good according to intuition and
theory.
• Implies that only support vectors are important; other training
examples are ignorable.
Statistical Learning Theory
Margins and Complexity
A skinny margin is more flexible and thus more complex.
Margins and Complexity
A fat margin is less complex.
Linear SVM Mathematically
wTxi + b ≥ 1 if yi = 1
wTxi + b ≤ -1 if yi = -1
• For support vectors, the inequality becomes an equality, so their distance to the hyperplane is r = |wTx + b| / ‖w‖ = 1/‖w‖
• The margin is therefore ρ = 2/‖w‖
Linear SVMs Mathematically (cont.)
• Can formulate as a quadratic optimization problem: find w and b such that ½wTw is minimized, and for all (xi, yi): yi(wTxi + b) ≥ 1
Solving the Optimization Problem
f(x) = ΣαiyixiTx + b
• Notice that it relies on an inner product between the test point x and the
support vectors xi – we will return to this later!
• Also keep in mind that solving the optimization problem involved
computing the inner products xiTxj between all training points!
Soft Margin Classification
• If the training set is not linearly separable, slack variables ξi can be added to allow misclassification of difficult or noisy examples.
Soft Margin Classification Mathematically
• The formulation incorporates the slack variables: find w and b such that ½wTw + CΣξi is minimized, and for all (xi, yi): yi(wTxi + b) ≥ 1 − ξi with ξi ≥ 0
• The parameter C trades margin width against training error.
Soft Margin Classification – Solution
• Neither slack variables ξi nor their Lagrange multipliers appear in the dual
problem!
• Again, xi with non-zero αi will be support vectors.
• Solution to the dual problem is:
    w = Σαiyixi
    b = yk(1 − ξk) − wTxk   where k = argmaxk αk
• But neither w nor b are needed explicitly for classification:
    f(x) = ΣαiyixiTx + b
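A minimal NumPy sketch of these solution formulas. The training data and multipliers below are made up to stand in for a solver's output, and the toy case is separable so the slack term ξk is zero.

```python
import numpy as np

# Toy training data (hypothetical) and multipliers assumed to come from a QP solver
X = np.array([[2.0, 2.0], [1.0, 1.0], [-1.0, -1.0], [-2.0, -2.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
alpha = np.array([0.0, 0.25, 0.25, 0.0])   # non-zero entries mark the support vectors

w = (alpha * y) @ X                        # w = sum_i alpha_i y_i x_i
k = np.argmax(alpha)                       # index of a support vector
b = y[k] - np.dot(w, X[k])                 # b = y_k - w^T x_k  (slack xi_k = 0 here)

def f(x):
    """Decision function f(x) = sum_i alpha_i y_i x_i^T x + b = w^T x + b."""
    return np.dot(w, x) + b

print(w, b, np.sign(f(np.array([1.5, 0.5]))))
```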
Theoretical Justification for Maximum Margins
Linear SVMs: Overview
• Most “important” training points are support vectors; they define the
hyperplane.
• Both in the dual formulation of the problem and in the solution training
points appear only inside inner products:
Find α1…αN such that
  Q(α) = Σαi − ½ΣΣαiαjyiyjxiTxj is maximized and
  (1) Σαiyi = 0
  (2) 0 ≤ αi ≤ C for all αi
Classifier: f(x) = ΣαiyixiTx + b
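This dual is a small quadratic program, so for toy data it can be handed to a generic constrained optimizer. The sketch below uses scipy.optimize.minimize with SLSQP, not one of the specialised SVM solvers mentioned later; the data and the value of C are made up.

```python
import numpy as np
from scipy.optimize import minimize

# Toy 2-D training set (hypothetical)
X = np.array([[2.0, 2.0], [2.0, 0.0], [0.0, 0.0], [0.0, 2.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
C = 10.0
N = len(y)

G = (y[:, None] * X) @ (y[:, None] * X).T    # G_ij = y_i y_j x_i^T x_j

def neg_Q(a):
    """Negative dual objective: -(sum a_i - 1/2 sum_ij a_i a_j y_i y_j x_i^T x_j)."""
    return 0.5 * a @ G @ a - a.sum()

constraints = {"type": "eq", "fun": lambda a: a @ y}   # (1) sum_i a_i y_i = 0
bounds = [(0.0, C)] * N                                # (2) 0 <= a_i <= C

res = minimize(neg_Q, np.zeros(N), method="SLSQP",
               bounds=bounds, constraints=constraints)
alpha = res.x

w = (alpha * y) @ X
sv = np.argmax(alpha)                                  # a support vector
b = y[sv] - w @ X[sv]
print(alpha.round(3), w, b)
```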
Non-linear SVMs
• Datasets that are linearly separable with some noise work out great
• But what if the dataset is too hard for a linear separator?
• How about… mapping data to a higher-dimensional space, e.g. x → (x, x²)?
Nonlinear Classification
x = (a, b)
x · w = w1a + w2b
q(x) = (a, b, ab, a², b²)
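A quick numerical check of the earlier claim that a kernel is an inner product in some feature space: with the explicit quadratic map φ(x) = (1, √2a, √2b, √2ab, a², b²) (a standard textbook choice, closely related to q(x) above but not taken from the slide), the degree-2 polynomial kernel (xTz + 1)² equals φ(x)Tφ(z) exactly.

```python
import numpy as np

def phi(x):
    """Explicit quadratic feature map for 2-D input x = (a, b)."""
    a, b = x
    return np.array([1.0, np.sqrt(2) * a, np.sqrt(2) * b,
                     np.sqrt(2) * a * b, a ** 2, b ** 2])

def poly2_kernel(x, z):
    """Degree-2 polynomial kernel (x·z + 1)^2."""
    return (np.dot(x, z) + 1.0) ** 2

x = np.array([1.0, 2.0])
z = np.array([3.0, -1.0])

# Kernel value and inner product in feature space agree (up to rounding)
print(poly2_kernel(x, z), np.dot(phi(x), phi(z)))
```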
Non-linear SVMs: Feature spaces
• General idea: the original feature space can always be mapped to some
higher-dimensional feature space where the training set is separable:
Φ: x → φ(x)
The “Kernel Trick”
• The linear SVM uses the training data only through inner products xiTxj
• Replace every inner product with a kernel evaluation K(xi, xj) = φ(xi)Tφ(xj) and the SVM operates implicitly in the feature space defined by φ, without ever computing φ(x) explicitly
Positive Definite Matrices
• A symmetric matrix M is positive semi-definite if zTMz ≥ 0 for every vector z; equivalently, all its eigenvalues are non-negative.
What Functions are Kernels?
• For some functions K(xi,xj) checking that K(xi,xj)= φ(xi) Tφ(xj) can be
cumbersome.
• Mercer’s theorem:
Every positive semi-definite symmetric function is a kernel
• Positive semi-definite symmetric functions correspond to a positive semi-definite symmetric Gram matrix:
K =
  K(x1,x1)  K(x1,x2)  K(x1,x3)  …  K(x1,xN)
  K(x2,x1)  K(x2,x2)  K(x2,x3)  …  K(x2,xN)
  …         …         …         …  …
  K(xN,x1)  K(xN,x2)  K(xN,x3)  …  K(xN,xN)
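This condition can be checked numerically for a finite sample: build the Gram matrix and confirm it is symmetric with non-negative eigenvalues. A small sketch using the Gaussian kernel on made-up points.

```python
import numpy as np

def gaussian_kernel(x, z, sigma=1.0):
    return np.exp(-np.sum((x - z) ** 2) / (2.0 * sigma ** 2))

# Hypothetical sample points
X = np.array([[0.0, 0.0], [1.0, 0.5], [2.0, -1.0], [0.5, 2.0]])
N = len(X)

# Gram matrix K_ij = K(x_i, x_j)
K = np.array([[gaussian_kernel(X[i], X[j]) for j in range(N)] for i in range(N)])

# A valid (Mercer) kernel gives a symmetric, positive semi-definite Gram matrix
print(np.allclose(K, K.T))
print(np.all(np.linalg.eigvalsh(K) >= -1e-10))   # eigenvalues non-negative up to rounding
```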
Examples of Kernel Functions
Non-linear SVMs Mathematically
• Dual problem: find α1…αN such that Q(α) = Σαi − ½ΣΣαiαjyiyjK(xi,xj) is maximized, subject to (1) Σαiyi = 0 and (2) 0 ≤ αi ≤ C
• Decision function: f(x) = ΣαiyiK(xi,x) + b
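A minimal sketch of the kernelized decision function, assuming the αi and b have already been obtained from a solver (the support vectors, multipliers and bias below are made up) and using the Gaussian kernel.

```python
import numpy as np

def gaussian_kernel(x, z, sigma=1.0):
    return np.exp(-np.sum((np.asarray(x) - np.asarray(z)) ** 2) / (2.0 * sigma ** 2))

# Hypothetical support vectors, labels, multipliers and bias from some dual solver
sv_X = np.array([[1.0, 1.0], [-1.0, -1.0]])
sv_y = np.array([1.0, -1.0])
alpha = np.array([0.7, 0.7])
b = 0.0

def f(x):
    """Kernelized decision function f(x) = sum_i alpha_i y_i K(x_i, x) + b."""
    return sum(a * yi * gaussian_kernel(xi, x)
               for a, yi, xi in zip(alpha, sv_y, sv_X)) + b

print(np.sign(f(np.array([0.8, 1.2]))), np.sign(f(np.array([-2.0, 0.0]))))
```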
SVM applications
• SVMs were originally proposed by Boser, Guyon and Vapnik in 1992 and
gained increasing popularity in late 1990s.
• Most popular optimization algorithms for SVMs are SMO [Platt '99] and SVMlight [Joachims '99]; both use decomposition to hill-climb over a subset of the αi's at a time (a brief library-usage sketch follows after this list).
• Regression
• Variable Selection
• Boosting
• Density Estimation
• Unsupervised Learning
– Novelty/Outlier Detection
– Feature Detection
– Clustering
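In practice the optimization is rarely coded by hand; for example, scikit-learn's SVC wraps libsvm, an SMO-style solver. A brief usage sketch, assuming scikit-learn is installed; the data and parameters are made up.

```python
import numpy as np
from sklearn.svm import SVC

# Toy 2-D dataset (hypothetical): class depends on distance from the origin
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (np.linalg.norm(X, axis=1) > 1.0).astype(int)

# RBF-kernel SVM; C and gamma are the usual regularisation / kernel-width parameters
clf = SVC(kernel="rbf", C=1.0, gamma=0.5)
clf.fit(X, y)

print(clf.n_support_)                         # number of support vectors per class
print(clf.predict([[0.0, 0.0], [2.0, 2.0]]))  # inner point vs. outer point
```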
Support Vector Machine Resources
• SVM Application List
http://www.clopinet.com/isabelle/Projects/SVM/applist.html
• Kernel machines
http://www.kernel-machines.org/
• Pattern Classification and Machine Learning
http://clopinet.com/isabelle/#projects
• R - a language and environment for statistical computing and graphics
http://www.r-project.org/
• Kernel Methods for Pattern Analysis – 2004
http://www.kernel-methods.net/
• An Introduction to Support Vector Machines
(and other kernel-based learning methods)
http://www.support-vector.net/
• Kristin P. Bennett web page
http://www.rpi.edu/~bennek
• Isabelle Guyon's home page
http://clopinet.com/isabelle
Support Vector Machine References
• R. O. Duda and P. E. Hart. Pattern Classification and Scene Analysis. Wiley, 1973.
• T. M. Cover. Geometrical and statistical properties of systems of linear inequalities with applications in pattern recognition. IEEE Transactions on Electronic Computers, 14:326-334, 1965.
• V. Vapnik and A. Lerner. Pattern recognition using generalized portrait method. Automation and Remote Control, 24, 1963.
• V. Vapnik and A. Chervonenkis. A note on one class of perceptrons. Automation and Remote Control, 25, 1964.
• J. K. Anlauf and M. Biehl. The AdaTron: an adaptive perceptron algorithm. Europhysics Letters, 10:687-692, 1989.
• N. Aronszajn. Theory of reproducing kernels. Transactions of the American Mathematical Society, 68:337-404, 1950.
• M. Aizerman, E. Braverman, and L. Rozonoer. Theoretical foundations of the potential function method in pattern recognition learning. Automation and Remote Control, 25:821-837, 1964.
• O. L. Mangasarian. Linear and nonlinear separation of patterns by linear programming. Operations Research, 13:444-452, 1965.
• F. W. Smith. Pattern classifier design by linear programming. IEEE Transactions on Computers, C-17:367-372, 1968.
• C. Cortes and V. Vapnik. Support vector networks. Machine Learning, 20:273-297, 1995.
• V. Vapnik. The Nature of Statistical Learning Theory. Springer Verlag, 1995.
• V. Vapnik. Statistical Learning Theory. Wiley, 1998.
• A. N. Tikhonov and V. Y. Arsenin. Solutions of Ill-posed Problems. W. H. Winston, 1977.
• B. Schoelkopf, C. J. C. Burges, and A. J. Smola. Advances in Kernel Methods - Support Vector Learning. MIT Press, Cambridge, MA, 1999.
• A. J. Smola, P. Bartlett, B. Schoelkopf, and C. Schuurmans. Advances in Large Margin Classifiers. MIT Press, Cambridge, MA, 1999.