
Support Vector Machine Classifiers

1
Outline
• Support Vector Machines for Classification
– Linear Discrimination
– Nonlinear Discrimination
• SVM Mathematically
• Extensions
• Data Classification
• Kernel Functions

2
Definition
• ‘Support Vector Machine is a system for
efficiently training linear learning machines in
kernel-induced feature spaces, while respecting
the insights of generalisation theory and
exploiting optimisation theory.’

– An Introduction to Support Vector Machines (and other kernel-based learning methods), N. Cristianini and J. Shawe-Taylor, Cambridge University Press, 2000. ISBN: 0 521 78019 5
– Kernel Methods for Pattern Analysis, J. Shawe-Taylor and N. Cristianini, Cambridge University Press, 2004
3
The Scalar Product

[Figure: two vectors a and b separated by angle θ]

a · b = ‖a‖ ‖b‖ cos θ

The scalar or dot product is, in some sense, a measure of similarity.

4
Decision Function
for binary classification

f(x) ∈ ℝ

f(xi) ≥ 0 ⇒ yi = +1
f(xi) < 0 ⇒ yi = −1

5
Support Vector Machines
• SVMs pick best separating hyperplane according to
some criterion
– e.g. maximum margin
• Training process is an optimisation
• Training set is effectively reduced to a relatively
small number of support vectors

6
Feature Spaces
• We may separate data by mapping to a higher-dimensional feature space
– The feature space may even have an infinite
number of dimensions!
• We need not explicitly construct the new feature
space

7
Kernels
• We may use Kernel functions to implicitly map to a
new feature space
• Kernel fn: K(x1, x2) ∈ ℝ
• Kernel must be equivalent to an inner product in
some feature space

8
Example Kernels

Linear: x · z

Polynomial: P(x · z)

Gaussian: exp(−‖x − z‖² / σ²)
9
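As a concrete illustration (not part of the original slides), here is a minimal NumPy sketch of these three kernels; the polynomial P and the width σ are assumed example choices:

```python
import numpy as np

def linear_kernel(x, z):
    # K(x, z) = x . z
    return np.dot(x, z)

def polynomial_kernel(x, z, p=2):
    # One common polynomial form: K(x, z) = (1 + x . z)^p
    return (1.0 + np.dot(x, z)) ** p

def gaussian_kernel(x, z, sigma=1.0):
    # K(x, z) = exp(-||x - z||^2 / sigma^2)
    diff = x - z
    return np.exp(-np.dot(diff, diff) / sigma**2)
```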
Perceptron Revisited: Linear Separators

• Binary classification can be viewed as the task of separating classes in feature space:

wTx + b = 0
wTx + b > 0
wTx + b < 0

f(x) = sign(wTx + b)

10
Which of the linear separators is optimal?

11
Best Linear Separator?

12
Best Linear Separator?

13
Best Linear Separator?

14
Best Linear Separator?

15
Find Closest Points in Convex Hulls

[Figure: convex hulls of the two classes, with closest points labelled c and d]

16
Plane Bisect Closest Points

[Figure: the plane wTx + b = 0 bisects the line joining the closest points c and d, with normal w = d − c]

17
Classification Margin
• Distance from an example x to the separator is r = (wTx + b) / ‖w‖
• Data closest to the hyperplane are support vectors.
• Margin ρ of the separator is the width of separation between the classes.

18
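A small NumPy sketch of the distance formula above (the values of w, b and the data points are illustrative, not from the slides):

```python
import numpy as np

w = np.array([2.0, 1.0])                  # hypothetical hyperplane normal
b = -1.0                                  # hypothetical offset
X = np.array([[1.0, 0.5], [0.0, 2.0]])    # two example points, one per row

# Signed distance of each point to the hyperplane w.x + b = 0
r = (X @ w + b) / np.linalg.norm(w)
print(r)
```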
Maximum Margin Classification
• Maximizing the margin is good according to intuition and
theory.
• Implies that only support vectors are important; other training
examples are ignorable.

19
Statistical Learning Theory

• Misclassification error and the function complexity bound generalization error.
• Maximizing margins minimizes complexity.
• “Eliminates” overfitting.
• Solution depends only on support vectors, not on the number of attributes.

20
Margins and Complexity

A skinny margin is more flexible, and thus more complex.

21
Margins and Complexity

A fat margin is less complex.

22
Linear SVM Mathematically

• Assuming all data points lie at distance at least 1 from the hyperplane, the following two constraints hold for a training set {(xi, yi)}:

wTxi + b ≥ 1 if yi = 1
wTxi + b ≤ −1 if yi = −1

• For support vectors, the inequality becomes an equality; then, since each example's distance from the hyperplane is r = (wTxi + b) / ‖w‖, the margin is:

ρ = 2 / ‖w‖

23
Linear SVMs Mathematically (cont.)

• Then we can formulate the quadratic optimization problem:

Find w and b such that
ρ = 2 / ‖w‖ is maximized and for all {(xi, yi)}:
wTxi + b ≥ 1 if yi = 1; wTxi + b ≤ −1 if yi = −1

A better formulation:

Find w and b such that
Φ(w) = ½ wTw is minimized and for all {(xi, yi)}:
yi(wTxi + b) ≥ 1

24
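A minimal sketch of this "better formulation" as a quadratic program. The cvxpy library and the toy separable dataset are assumptions of mine, not part of the slides:

```python
import numpy as np
import cvxpy as cp

# Toy linearly separable data (illustrative values)
X = np.array([[2.0, 2.0], [2.5, 1.5], [0.5, 0.5], [1.0, 0.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])

w = cp.Variable(2)
b = cp.Variable()

# Minimize (1/2) w'w subject to y_i (w'x_i + b) >= 1 for all i
constraints = [cp.multiply(y, X @ w + b) >= 1]
prob = cp.Problem(cp.Minimize(0.5 * cp.sum_squares(w)), constraints)
prob.solve()

print("w =", w.value, "b =", b.value,
      "margin =", 2 / np.linalg.norm(w.value))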
Solving the Optimization Problem

Find w and b such that
Φ(w) = ½ wTw is minimized and for all {(xi, yi)}:
yi(wTxi + b) ≥ 1

• Need to optimize a quadratic function subject to linear constraints.


• Quadratic optimization problems are a well-known class of mathematical
programming problems, and many (rather intricate) algorithms exist for
solving them.
• The solution involves constructing a dual problem in which a Lagrange multiplier αi is associated with every constraint in the primal problem:
Find α1…αN such that
Q(α) =Σαi - ½ΣΣαiαjyiyjxiTxj is maximized and
(1) Σαiyi = 0
(2) αi ≥ 0 for all αi
25
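A corresponding sketch of the dual (again assuming cvxpy and toy data). It uses the identity ½ΣΣαiαjyiyjxiTxj = ½‖Σαiyixi‖² to keep the objective in a form the solver accepts directly:

```python
import numpy as np
import cvxpy as cp

# Same illustrative toy data as the primal sketch
X = np.array([[2.0, 2.0], [2.5, 1.5], [0.5, 0.5], [1.0, 0.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])

alpha = cp.Variable(4)
w_expr = cp.multiply(alpha, y) @ X   # sum_i alpha_i y_i x_i
objective = cp.Maximize(cp.sum(alpha) - 0.5 * cp.sum_squares(w_expr))
constraints = [alpha >= 0, y @ alpha == 0]
cp.Problem(objective, constraints).solve()

print("alphas:", np.round(alpha.value, 3))  # non-zero entries mark support vectors
```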
The Optimization Problem Solution

• The solution has the form:

w = Σαiyixi    b = yk − wTxk for any xk such that αk ≠ 0

• Each non-zero αi indicates that corresponding xi is a support vector.


• Then the classifying function will have the form:

f(x) = ΣαiyixiTx + b

• Notice that it relies on an inner product between the test point x and the
support vectors xi – we will return to this later!
• Also keep in mind that solving the optimization problem involved
computing the inner products xiTxj between all training points!

26
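A sketch of recovering the classifier from a dual solution, assuming the αi have already been computed by some QP solver; the tolerance tol is an assumed cut-off for treating an αi as non-zero:

```python
import numpy as np

def linear_svm_classifier(alphas, X, y, tol=1e-8):
    w = (alphas * y) @ X              # w = sum_i alpha_i y_i x_i
    sv = np.where(alphas > tol)[0]    # indices of the support vectors
    k = sv[0]                         # any support vector gives b
    b = y[k] - w @ X[k]               # b = y_k - w'x_k
    return lambda x: np.sign(w @ x + b)
```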
Soft Margin Classification

• What if the training set is not linearly separable?


• Slack variables ξi can be added to allow misclassification of difficult or
noisy examples.

[Figure: separating hyperplane with slack variables ξi marking margin violations]

27
Soft Margin Classification Mathematically

• The old formulation:

Find w and b such that
Φ(w) = ½ wTw is minimized and for all {(xi, yi)}:
yi(wTxi + b) ≥ 1

• The new formulation incorporating slack variables:

Find w and b such that
Φ(w) = ½ wTw + CΣξi is minimized and for all {(xi, yi)}:
yi(wTxi + b) ≥ 1 − ξi and ξi ≥ 0 for all i

• Parameter C can be viewed as a way to control overfitting.

28
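A hedged sketch of soft-margin training in practice, using scikit-learn's SVC; the library choice, the data, and the value of C are my assumptions, not prescribed by the slides:

```python
import numpy as np
from sklearn.svm import SVC

# Illustrative noisy data: flip a few labels so no hyperplane separates it
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)
y[:3] = -y[:3]

clf = SVC(kernel="linear", C=1.0)   # smaller C -> fatter, more tolerant margin
clf.fit(X, y)
print("support vectors:", len(clf.support_vectors_))
```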
Soft Margin Classification – Solution

• The dual problem for soft margin classification:


Find α1…αN such that
Q(α) =Σαi - ½ΣΣαiαjyiyjxiTxj is maximized and
(1) Σαiyi = 0
(2) 0 ≤ αi ≤ C for all αi

• Neither slack variables ξi nor their Lagrange multipliers appear in the dual
problem!
• Again, xi with non-zero αi will be support vectors.
• Solution to the dual problem is:
w = Σαiyixi
b = yk(1 − ξk) − wTxk where k = argmaxk αk
• But neither w nor b is needed explicitly for classification:
f(x) = ΣαiyixiTx + b

29
Theoretical Justification for Maximum Margins

• Vapnik has proved the following:


The class of optimal linear separators has VC dimension h bounded from above as

h ≤ min(⌈D²/ρ²⌉, m0) + 1

where ρ is the margin, D is the diameter of the smallest sphere that can enclose all of the training examples, and m0 is the dimensionality.

• Intuitively, this implies that regardless of dimensionality m0 we can minimize the VC dimension by maximizing the margin ρ.

• Thus, the complexity of the classifier is kept small regardless of dimensionality.

30
Linear SVMs: Overview

• The classifier is a separating hyperplane.

• Most “important” training points are support vectors; they define the
hyperplane.

• Quadratic optimization algorithms can identify which training points xi are support vectors with non-zero Lagrangian multipliers αi.

• Both in the dual formulation of the problem and in the solution training
points appear only inside inner products:
Find α1…αN such that f(x) = ΣαiyixiTx + b
Q(α) =Σαi - ½ΣΣαiαjyiyjxiTxj is maximized and
(1) Σαiyi = 0
(2) 0 ≤ αi ≤ C for all αi

31
Non-linear SVMs

• Datasets that are linearly separable with some noise work out great:

[Figure: 1-D data on the x axis, separable by a single threshold]

• But what are we going to do if the dataset is just too hard?

[Figure: 1-D data on the x axis, not separable by any threshold]

• How about… mapping data to a higher-dimensional space:

[Figure: the same data mapped to (x, x²), now linearly separable]
32
Nonlinear Classification

x = (a, b)
x · w = w1a + w2b

θ(x) = (a, b, ab, a², b²)
θ(x) · w = w1a + w2b + w3ab + w4a² + w5b²

33
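A tiny sketch of this explicit feature map; the weights w1..w5 and the input are illustrative values, not from the slides:

```python
import numpy as np

def theta(x):
    # Explicit feature map (a, b) -> (a, b, ab, a^2, b^2)
    a, b = x
    return np.array([a, b, a * b, a**2, b**2])

w = np.array([1.0, -0.5, 2.0, 0.3, 0.3])  # hypothetical weights w1..w5
x = np.array([1.5, -2.0])
score = theta(x) @ w                      # w1*a + w2*b + w3*ab + w4*a^2 + w5*b^2
print(score)
```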
Non-linear SVMs: Feature spaces

• General idea: the original feature space can always be mapped to some
higher-dimensional feature space where the training set is separable:

Φ: x → φ(x)

34
The “Kernel Trick”

• The linear classifier relies on inner product between vectors K(xi,xj)=xiTxj


• If every datapoint is mapped into high-dimensional space via some
transformation Φ: x → φ(x), the inner product becomes:
K(xi,xj)= φ(xi) Tφ(xj)
• A kernel function is some function that corresponds to an inner product into
some feature space.
• Example:
2-dimensional vectors x = [x1 x2]; let K(xi, xj) = (1 + xiTxj)²
Need to show that K(xi, xj) = φ(xi)Tφ(xj):
K(xi, xj) = (1 + xiTxj)² = 1 + xi1²xj1² + 2xi1xj1xi2xj2 + xi2²xj2² + 2xi1xj1 + 2xi2xj2
= [1 xi1² √2xi1xi2 xi2² √2xi1 √2xi2]T [1 xj1² √2xj1xj2 xj2² √2xj1 √2xj2]
= φ(xi)Tφ(xj), where φ(x) = [1 x1² √2x1x2 x2² √2x1 √2x2]

35
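A quick numeric check of the identity above (a sketch; the test vectors are arbitrary):

```python
import numpy as np

def phi(x):
    # The feature map from the slide: [1, x1^2, sqrt(2)x1x2, x2^2, sqrt(2)x1, sqrt(2)x2]
    x1, x2 = x
    return np.array([1, x1**2, np.sqrt(2) * x1 * x2, x2**2,
                     np.sqrt(2) * x1, np.sqrt(2) * x2])

x = np.array([0.7, -1.2])
z = np.array([2.0, 0.3])
assert np.isclose((1 + x @ z) ** 2, phi(x) @ phi(z))  # kernel == inner product
```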
36
Positive Definite Matrices

A square matrix A is positive definite if xTAx > 0 for all nonzero column vectors x.

It is negative definite if xTAx < 0 for all nonzero x.

It is positive semi-definite if xTAx ≥ 0 for all x, and negative semi-definite if xTAx ≤ 0 for all x.

37
What Functions are Kernels?

• For some functions K(xi,xj) checking that K(xi,xj)= φ(xi) Tφ(xj) can be
cumbersome.
• Mercer’s theorem:
Every positive semi-definite symmetric function is a kernel
• Positive semi-definite symmetric functions correspond to a positive semi-definite symmetric Gram matrix:

K =
K(x1,x1) K(x1,x2) K(x1,x3) … K(x1,xN)
K(x2,x1) K(x2,x2) K(x2,x3) … K(x2,xN)
…        …        …        …  …
K(xN,x1) K(xN,x2) K(xN,x3) … K(xN,xN)

38
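A sketch of checking this property empirically: build a Gaussian-kernel Gram matrix for random data (the data and σ are assumed) and inspect its eigenvalues, which should be non-negative up to round-off:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(10, 3))
sigma = 1.0

# Gram matrix K_ij = exp(-||x_i - x_j||^2 / (2 sigma^2))
sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
K = np.exp(-sq_dists / (2 * sigma**2))

eigvals = np.linalg.eigvalsh(K)               # K is symmetric, so eigvalsh
print("smallest eigenvalue:", eigvals.min())  # >= 0 up to numerical error
```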
Examples of Kernel Functions

• Linear: K(xi,xj)= xi Txj

• Polynomial of power p: K(xi,xj) = (1 + xiTxj)p

• Gaussian (radial-basis function network): K(xi,xj) = exp(−‖xi − xj‖² / (2σ²))

• Two-layer perceptron: K(xi,xj)= tanh(β0xi Txj + β1)

39
Non-linear SVMs Mathematically

• Dual problem formulation:


Find α1…αN such that
Q(α) =Σαi - ½ΣΣαiαjyiyjK(xi, xj) is maximized and
(1) Σαiyi = 0
(2) αi ≥ 0 for all αi

• The solution is:

f(x) = ΣαiyiK(xi, x) + b

• Optimization techniques for finding αi’s remain the same!

40
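A hedged end-to-end sketch using scikit-learn's RBF SVC on a dataset no line can separate; the library, dataset, and parameters are my assumptions, not from the slides. Note that sklearn's gamma plays the role of 1/(2σ²) in the Gaussian kernel above:

```python
from sklearn.datasets import make_circles
from sklearn.svm import SVC

# Two concentric rings: linearly inseparable in the original space
X, y = make_circles(n_samples=200, factor=0.4, noise=0.05, random_state=0)

clf = SVC(kernel="rbf", gamma=1.0, C=1.0)  # gamma = 1/(2 sigma^2)
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```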
SVM applications
• SVMs were originally proposed by Boser, Guyon and Vapnik in 1992 and
gained increasing popularity in late 1990s.

• SVMs are currently among the best performers for a number of classification tasks ranging from text to genomic data.

• SVM techniques have been extended to a number of tasks such as regression [Vapnik et al. ’97], principal component analysis [Schölkopf et al. ’99], etc.

• The most popular optimization algorithms for SVMs are SMO [Platt ’99] and SVMlight [Joachims ’99]; both use decomposition to hill-climb over a subset of αi’s at a time.

• Tuning SVMs remains a black art: selecting a specific kernel and its parameters is usually done by trial and error.
41
SVM Extensions

• Regression
• Variable Selection
• Boosting
• Density Estimation
• Unsupervised Learning
– Novelty/Outlier Detection
– Feature Detection
– Clustering

42
Support Vector Machine Resources
• SVM Application List
http://www.clopinet.com/isabelle/Projects/SVM/applist.html
• Kernel machines
http://www.kernel-machines.org/
• Pattern Classification and Machine Learning
http://clopinet.com/isabelle/#projects
• R, a language and environment for statistical computing and graphics
http://www.r-project.org/
• Kernel Methods for Pattern Analysis – 2004
http://www.kernel-methods.net/
• An Introduction to Support Vector Machines
(and other kernel-based learning methods)
http://www.support-vector.net/
• Kristin P. Bennett web page
http://www.rpi.edu/~bennek
• Isabelle Guyon's home page
http://clopinet.com/isabelle

43
Support Vector Machine References
• R.O. Duda and P.E. Hart. Pattern Classification and Scene Analysis. Wiley, 1973.
• T.M. Cover. Geometrical and statistical properties of systems of linear inequalities with applications in pattern recognition. IEEE Transactions on Electronic Computers, 14:326–334, 1965.
• V. Vapnik and A. Lerner. Pattern recognition using generalized portrait method. Automation and Remote Control, 24, 1963.
• V. Vapnik and A. Chervonenkis. A note on one class of perceptrons. Automation and Remote Control, 25, 1964.
• J.K. Anlauf and M. Biehl. The AdaTron: an adaptive perceptron algorithm. Europhysics Letters, 10:687–692, 1989.
• N. Aronszajn. Theory of reproducing kernels. Transactions of the American Mathematical Society, 68:337–404, 1950.
• M. Aizerman, E. Braverman, and L. Rozonoer. Theoretical foundations of the potential function method in pattern recognition learning. Automation and Remote Control, 25:821–837, 1964.
• O.L. Mangasarian. Linear and nonlinear separation of patterns by linear programming. Operations Research, 13:444–452, 1965.
• F.W. Smith. Pattern classifier design by linear programming. IEEE Transactions on Computers, C-17:367–372, 1968.
• C. Cortes and V. Vapnik. Support vector networks. Machine Learning, 20:273–297, 1995.
• V. Vapnik. The Nature of Statistical Learning Theory. Springer Verlag, 1995.
• V. Vapnik. Statistical Learning Theory. Wiley, 1998.
• A.N. Tikhonov and V.Y. Arsenin. Solutions of Ill-posed Problems. W.H. Winston, 1977.
• B. Schoelkopf, C.J.C. Burges, and A.J. Smola. Advances in Kernel Methods - Support Vector Learning. MIT Press, Cambridge, MA, 1999.
• A.J. Smola, P. Bartlett, B. Schoelkopf, and C. Schuurmans. Advances in Large Margin Classifiers. MIT Press, Cambridge, MA, 1999.
44
