Supervised Machine Learning Algorithm
2
Basic Concepts
3
An example application
An emergency room in a hospital measures 17 variables (e.g., blood pressure, age, etc.) of newly admitted patients.
A decision is needed: whether to put a new patient
in an intensive-care unit.
Due to the high cost of ICU, those patients who
may survive less than a month are given higher
priority.
Problem: to predict high-risk patients and
discriminate them from low-risk patients.
4
Another application
A credit card company receives thousands of
applications for new cards. Each application
contains information about an applicant,
age
Marital status
annual salary
outstanding debts
credit rating
etc.
Problem: to decide whether an application should be approved, or to classify applications into two categories, approved and not approved.
5
Machine learning and our focus
Like human learning from past experiences.
A computer does not have “experiences”.
A computer system learns from data, which
represent some “past experiences” of an
application domain.
Our focus: learn a target function that can be used to predict the values of a discrete class attribute, e.g., approved or not-approved, and high-risk or low-risk.
The task is commonly called: Supervised learning,
classification, or inductive learning.
6
The data and the goal
Data: A set of data records (also called
examples, instances or cases) described by
k attributes: A1, A2, … Ak.
a class: Each example is labelled with a pre-
defined class.
Goal: To learn a classification model from the
data that can be used to predict the classes
of new (future, or test) cases/instances.
7
An example: data (loan application)
[Table of loan application records omitted; the last column, "Approved or not", is the class attribute.]
8
An example: the learning task
Learn a classification model from the data
Use the model to classify future loan applications
into
Yes (approved) and
No (not approved)
What is the class of the following case/instance?
9
Supervised vs. Unsupervised Learning
Supervised learning: classification is seen as
supervised learning from examples.
Supervision: The data (observations, measurements, etc.) are labeled with pre-defined classes, as if a "teacher" had provided the class labels (the supervision).
Test data are classified into these classes too.
Unsupervised learning (clustering)
Class labels of the data are unknown
Given a set of data, the task is to establish the
existence of classes or clusters in the data
10
Supervised learning process: two steps
Learning (training): Learn a model using the
training data
Testing: Test the model using unseen test data
to assess the model accuracy
Accuracy = (Number of correct classifications) / (Total number of test cases)
11
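To make the two-step process and the accuracy measure concrete, here is a minimal Python sketch (not from the slides; the synthetic data and the choice of a decision-tree learner are illustrative, and scikit-learn is assumed to be available):

```python
# Minimal sketch of the two-step supervised learning process:
# learn a model on training data, then measure accuracy on unseen test data.
# The data here are made up for illustration.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X = np.random.rand(200, 4)                 # 200 examples, 4 attributes A1..A4
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)  # a synthetic two-class label

# Step 1 (learning/training): fit a model on the training split.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
model = DecisionTreeClassifier().fit(X_train, y_train)

# Step 2 (testing): accuracy = correct classifications / total test cases.
accuracy = (model.predict(X_test) == y_test).mean()
print(f"Test accuracy: {accuracy:.2f}")
```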
What do we mean by learning?
Given
a data set D,
a task T, and
a performance measure M,
a computer system is said to learn from D to
perform the task T if after learning the
system’s performance on T improves as
measured by M.
In other words, the learned model helps the
system to perform T better as compared to
no learning.
12
An example
Data: Loan application data
Task: Predict whether a loan should be
approved or not.
Performance measure: accuracy.
13
Fundamental assumption of learning
Assumption: The distribution of training
examples is identical to the distribution of test
examples (including future unseen
examples).
15
In machine learning, linear regression is a supervised machine learning algorithm.
It tries to find the best linear relationship between the independent variable(s) and the dependent variable.
16
Representing the Linear Regression Model
Types of Linear Regression
19
1. Simple Linear Regression
In simple linear regression, the dependent variable depends on only a single independent variable.
For simple linear regression, the form of the model is:
Y = β0 + β1X
Here, Y is the dependent variable and X is the independent variable.
21
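As a small illustration (not part of the original slides), β0 and β1 can be estimated by ordinary least squares in closed form; the data points below are invented:

```python
# Minimal sketch: fit Y = b0 + b1*X by ordinary least squares (closed form).
import numpy as np

X = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
Y = np.array([2.1, 4.3, 5.9, 8.2, 9.8])

b1 = np.sum((X - X.mean()) * (Y - Y.mean())) / np.sum((X - X.mean()) ** 2)
b0 = Y.mean() - b1 * X.mean()

print(f"Y = {b0:.2f} + {b1:.2f} * X")   # the fitted regression line
```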
Case-02: β1 = 0
It indicates that variable X has no impact on Y.
If X changes, there will be no change in Y.
22
Case-03: β1 > 0
It indicates that variable X has a positive impact on Y.
If X increases, Y will increase, and vice versa.
23
2. Multiple Linear Regression
In multiple linear regression, the dependent variable depends on more than one independent variable.
For multiple linear regression, the form of the model is:
Y = β0 + β1X1 + β2X2 + … + βnXn
25
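A similar sketch for the multiple-regression case (again with invented data) uses numpy's least-squares solver to estimate all coefficients at once:

```python
# Minimal sketch: fit Y = b0 + b1*X1 + b2*X2 with numpy's least-squares solver.
import numpy as np

X = np.array([[1.0, 2.0], [2.0, 1.0], [3.0, 4.0], [4.0, 3.0], [5.0, 6.0]])
Y = np.array([5.1, 4.9, 11.2, 10.8, 17.1])

A = np.hstack([np.ones((X.shape[0], 1)), X])     # prepend a column of 1s for b0
coef, *_ = np.linalg.lstsq(A, Y, rcond=None)     # [b0, b1, b2]
print("b0, b1, b2 =", np.round(coef, 2))
```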
Bayesian classification
Probabilistic view: Supervised learning can naturally
be studied from a probabilistic point of view.
Let A1 through Ak be attributes with discrete values.
The class is C.
Given a test example d with observed attribute values a1 through ak, classification basically computes the following posterior probability. The prediction is the class cj such that
Pr(C=cj | A1=a1, ..., Ak=ak)
is maximal.
26
Apply Bayes’ Rule
Pr(C=cj | A1=a1, ..., A|A|=a|A|)
= Pr(A1=a1, ..., A|A|=a|A| | C=cj) Pr(C=cj) / Pr(A1=a1, ..., A|A|=a|A|)
= Pr(A1=a1, ..., A|A|=a|A| | C=cj) Pr(C=cj) / Σr=1..|C| Pr(A1=a1, ..., A|A|=a|A| | C=cr) Pr(C=cr)
27
Computing probabilities
The denominator P(A1=a1,...,Ak=ak) is irrelevant
for decision making since it is the same for
every class.
We only need P(A1=a1,...,Ak=ak | C=ci), which
can be written as
Pr(A1=a1|A2=a2,...,Ak=ak, C=cj)* Pr(A2=a2,...,Ak=ak |C=cj)
Recursively, the second factor above can be
written in the same way, and so on.
Now an assumption is needed.
28
Conditional independence assumption
All attributes are conditionally independent
given the class C = cj.
Formally, we assume,
Pr(A1=a1 | A2=a2, ..., A|A|=a|A|, C=cj) = Pr(A1=a1 | C=cj)
29
Final naïve Bayesian classifier
Pr(C=cj | A1=a1, ..., A|A|=a|A|)
= Pr(C=cj) ∏i=1..|A| Pr(Ai=ai | C=cj) / Σr=1..|C| [ Pr(C=cr) ∏i=1..|A| Pr(Ai=ai | C=cr) ]
We are done!
How do we estimate Pr(Ai=ai | C=cj)? Easy: from counts in the training data.
30
Classify a test instance
If we only need a decision on the most
probable class for the test instance, we only
need the numerator as its denominator is the
same for every class.
Thus, given a test example, we compute the
following to decide the most probable class
for the test instance
c = argmax_cj Pr(C=cj) ∏i=1..|A| Pr(Ai=ai | C=cj)
31
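The following is a small Python sketch (not from the slides; the toy data set and function names are hypothetical) of this counting-based naïve Bayesian classifier for categorical attributes:

```python
# Minimal sketch of the naive Bayesian classifier for categorical attributes.
# It estimates Pr(C=cj) and Pr(Ai=ai | C=cj) from counts and picks the class
# maximizing Pr(C=cj) * prod_i Pr(Ai=ai | C=cj). Data are invented.
from collections import Counter, defaultdict

def train_nb(examples):
    """examples: list of (attribute_tuple, class_label)."""
    class_counts = Counter(c for _, c in examples)
    attr_counts = defaultdict(Counter)           # (attr_index, class) -> value counts
    for attrs, c in examples:
        for i, a in enumerate(attrs):
            attr_counts[(i, c)][a] += 1
    return class_counts, attr_counts, len(examples)

def classify_nb(x, class_counts, attr_counts, n):
    best_class, best_score = None, -1.0
    for c, nc in class_counts.items():
        score = nc / n                           # Pr(C=cj)
        for i, a in enumerate(x):
            score *= attr_counts[(i, c)][a] / nc # Pr(Ai=ai | C=cj), no smoothing
        if score > best_score:
            best_class, best_score = c, score
    return best_class

data = [(("m", "yes"), "t"), (("m", "no"), "t"), (("g", "yes"), "f"),
        (("g", "no"), "f"), (("m", "yes"), "t")]
print(classify_nb(("m", "yes"), *train_nb(data)))   # -> 't'
```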
An example
[Training data table omitted: two attributes A1 and A2, and a class C with values t and f.]
32
An Example (cont …)
For C = t, we have
Pr(C=t) ∏j=1..2 Pr(Aj=aj | C=t) = 1/2 × 2/5 × 2/5 = 2/25
For class C = f, we have
Pr(C=f) ∏j=1..2 Pr(Aj=aj | C=f) = 1/2 × 1/5 × 2/5 = 1/25
Since 2/25 > 1/25, class t is the predicted class.
33
Additional issues
Numeric attributes: Naïve Bayesian
learning assumes that all attributes are
categorical. Numeric attributes need to be
discretized.
Zero counts: a particular attribute value may never occur together with a class in the training set, making the estimated probability zero. We need smoothing:
Pr(Ai=ai | C=cj) = (nij + λ) / (nj + λ·ni)
where nij is the number of training examples with Ai=ai and class cj, nj is the number of training examples with class cj, ni is the number of possible values of Ai, and λ is a small smoothing constant.
35
Naive Bayes for Text Classification
36
Text classification/categorization
Due to the rapid growth of online documents in
organizations and on the Web, automated document
classification has become an important problem.
Techniques discussed previously can be applied to
text classification, but they are not as effective as
the next three methods.
We first study a naïve Bayesian method specifically
formulated for texts, which makes use of some text
specific features.
However, the ideas are similar to the preceding
method.
37
Probabilistic framework
Generative model: Each document is
generated by a parametric distribution
governed by a set of hidden parameters.
The generative model makes two
assumptions
The data (or the text documents) are generated by
a mixture model,
There is a one-to-one correspondence between mixture components and document classes.
38
Mixture model
A mixture model models the data with a
number of statistical distributions.
Intuitively, each distribution corresponds to a data
cluster and the parameters of the distribution
provide a description of the corresponding cluster.
Each distribution in a mixture model is also
called a mixture component.
The distribution/component can be of any
kind
39
An example
The figure shows a plot of the probability density function of a 1-dimensional data set (with two classes) generated by
a mixture of two Gaussian distributions,
one per class, whose parameters (denoted by θi) are the mean (μi) and the standard deviation (σi), i.e., θi = (μi, σi).
[Figure: the two class-conditional densities, labeled class 1 and class 2]
40
Mixture model (cont …)
Let the number of mixture components (or
distributions) in a mixture model be K.
Let the jth distribution have the parameters θj.
Let Θ be the set of parameters of all components, Θ = {φ1, φ2, …, φK, θ1, θ2, …, θK}, where φj is the mixture weight (or mixture probability) of mixture component j and θj is the set of parameters of component j.
How does the model generate documents?
41
Document generation
Due to the one-to-one correspondence, each class corresponds to a mixture component. The mixture weights are the class prior probabilities, i.e., φj = Pr(cj | Θ).
The mixture model generates each document di by:
first selecting a mixture component (or class) according to the class prior probabilities (i.e., mixture weights) φj = Pr(cj | Θ),
then having this selected mixture component (cj) generate a document di according to its parameters, with distribution Pr(di | cj; Θ), or more precisely Pr(di | cj; θj).
Pr(di | Θ) = Σj=1..|C| Pr(cj | Θ) Pr(di | cj; θj)    (23)
42
Model text documents
The naïve Bayesian classification treats each
document as a “bag of words”. The
generative model makes the following further
assumptions:
Words of a document are generated
independently of context given the class label.
The familiar naïve Bayes assumption used before.
44
Use the probability function of the multinomial distribution
Pr(di | cj; Θ) = Pr(|di|) |di|! ∏t=1..|V| [ Pr(wt | cj; Θ)^Nti / Nti! ]    (24)
where Nti is the number of times word wt occurs in document di, and
Σt=1..|V| Nti = |di|,    Σt=1..|V| Pr(wt | cj; Θ) = 1.    (25)
45
Parameter estimation
The parameters are estimated based on empirical
counts.
Pr(wt | cj; Θ̂) = Σi=1..|D| Nti Pr(cj | di) / Σs=1..|V| Σi=1..|D| Nsi Pr(cj | di)    (26)
In order to handle zero counts for infrequently occurring words that do not appear in the training set but may appear in the test set, we need to smooth the probability. With Lidstone smoothing, 0 ≤ λ ≤ 1:
Pr(wt | cj; Θ̂) = (λ + Σi=1..|D| Nti Pr(cj | di)) / (λ|V| + Σs=1..|V| Σi=1..|D| Nsi Pr(cj | di))    (27)
46
Parameter estimation (cont …)
Class prior probabilities, which are mixture
weights j, can be easily estimated using
training data
Pr(cj | Θ̂) = Σi=1..|D| Pr(cj | di) / |D|    (28)
47
Classification
Given a test document di, from Equations (23), (27), and (28):
Pr(cj | di; Θ̂) = Pr(cj | Θ̂) Pr(di | cj; Θ̂) / Pr(di | Θ̂)
= Pr(cj | Θ̂) ∏k=1..|di| Pr(w_{di,k} | cj; Θ̂) / Σr=1..|C| [ Pr(cr | Θ̂) ∏k=1..|di| Pr(w_{di,k} | cr; Θ̂) ]
48
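A compact Python sketch of this multinomial naïve Bayesian text classifier with Lidstone smoothing might look as follows (the documents, labels, and λ = 1 are invented for illustration; class priors here use hard labels, so Pr(cj | di) is 1 or 0):

```python
# Multinomial naive Bayes for text, with Lidstone smoothing (lambda_ in [0, 1]).
# Word probabilities follow Eq. (27); class priors follow Eq. (28).
import math
from collections import Counter

def train(docs, labels, lambda_=1.0):
    vocab = sorted({w for d in docs for w in d})
    classes = sorted(set(labels))
    priors = {c: labels.count(c) / len(labels) for c in classes}        # Eq. (28)
    word_probs = {}
    for c in classes:
        counts = Counter(w for d, y in zip(docs, labels) if y == c for w in d)
        total = sum(counts.values())
        word_probs[c] = {w: (lambda_ + counts[w]) / (lambda_ * len(vocab) + total)
                         for w in vocab}                                # Eq. (27)
    return priors, word_probs

def classify(doc, priors, word_probs):
    scores = {}
    for c in priors:
        score = math.log(priors[c])
        for w in doc:
            if w in word_probs[c]:                  # ignore words not in the vocabulary
                score += math.log(word_probs[c][w])
        scores[c] = score
    return max(scores, key=scores.get)

docs = [["cheap", "loan", "offer"], ["meeting", "agenda"], ["cheap", "offer", "now"]]
labels = ["spam", "ham", "spam"]
priors, word_probs = train(docs, labels)
print(classify(["cheap", "meeting"], priors, word_probs))
```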
Discussions
Most assumptions made by naïve Bayesian
learning are violated to some degree in
practice.
Despite such violations, researchers have
shown that naïve Bayesian learning produces
very accurate models.
The main problem is the mixture model
assumption. When this assumption is seriously
violated, the classification performance can be
poor.
Naïve Bayesian learning is extremely
efficient.
49
Support Vector Machines
50
Introduction
Support vector machines were invented by V. Vapnik and his co-workers in the 1970s in Russia and became known to the West in 1992.
SVMs are linear classifiers that find a hyperplane to separate two classes of data, positive and negative.
Kernel functions are used for nonlinear separation.
SVM not only has a rigorous theoretical foundation,
but also performs classification more accurately
than most other methods in applications, especially
for high dimensional data.
It is perhaps the best classifier for text classification.
51
Basic concepts
Let the set of training examples D be
{(x1, y1), (x2, y2), …, (xr, yr)},
where xi = (xi1, xi2, …, xin) is an input vector in a real-valued space X ⊆ Rⁿ and yi is its class label (output value), yi ∈ {1, -1}.
1: positive class and -1: negative class.
SVM finds a linear function of the form (w: weight vector)
f(x) = ⟨w · x⟩ + b
so that yi = 1 if ⟨w · xi⟩ + b ≥ 0, and yi = -1 if ⟨w · xi⟩ + b < 0.
52
The hyperplane
The hyperplane that separates positive and negative
training data is
w x + b = 0
It is also called the decision boundary (surface).
There are many possible hyperplanes. Which one should we choose?
53
Maximal margin hyperplane
SVM looks for the separating hyperplane with the largest
margin.
Machine learning theory says this hyperplane minimizes the
error bound
54
Linear SVM: separable case
Assume the data are linearly separable.
Consider a positive data point (x+, 1) and a negative
(x-, -1) that are closest to the hyperplane
<w x> + b = 0.
We define two parallel hyperplanes, H+ and H-, that
pass through x+ and x- respectively. H+ and H- are
also parallel to <w x> + b = 0.
55
Compute the margin
Now let us compute the distance between the two margin hyperplanes H+ and H−. Their distance is the margin (d+ + d− in the figure).
Recall from vector algebra that the (perpendicular) distance from a point xi to the hyperplane ⟨w · x⟩ + b = 0 is:
|⟨w · xi⟩ + b| / ||w||    (36)
where ||w|| is the norm of w,
||w|| = √⟨w · w⟩ = √(w1² + w2² + … + wn²)    (37)
56
Compute the margin (cont …)
Let us compute d+.
Instead of computing the distance from x+ to the separating hyperplane ⟨w · x⟩ + b = 0, we pick any point xs on ⟨w · x⟩ + b = 0 and compute the distance from xs to the hyperplane H+: ⟨w · x⟩ + b = 1, by applying Eq. (36) and noting that ⟨w · xs⟩ + b = 0:
d+ = |⟨w · xs⟩ + b − 1| / ||w|| = 1 / ||w||    (38)
margin = d+ + d− = 2 / ||w||    (39)
57
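As a quick numeric illustration (the values of w, b, and the point are invented), Eq. (36) and Eq. (39) can be evaluated directly:

```python
# Numeric illustration of Eq. (36) and Eq. (39): distance from a point to the
# hyperplane <w.x> + b = 0, and the margin 2/||w||. Values are made up.
import numpy as np

w = np.array([3.0, 4.0])
b = -2.0
x = np.array([1.0, 1.0])

distance = abs(np.dot(w, x) + b) / np.linalg.norm(w)   # Eq. (36): |3+4-2|/5 = 1.0
margin = 2.0 / np.linalg.norm(w)                       # Eq. (39): 2/5 = 0.4
print(distance, margin)
```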
An optimization problem!
Definition (Linear SVM: separable case): Given a set of linearly separable training examples,
D = {(x1, y1), (x2, y2), …, (xr, yr)},
learning is to solve the following constrained minimization problem:
Minimize: ⟨w · w⟩ / 2    (40)
Subject to: yi(⟨w · xi⟩ + b) ≥ 1,  i = 1, 2, ..., r
The constraint yi(⟨w · xi⟩ + b) ≥ 1 summarizes:
⟨w · xi⟩ + b ≥ 1 for yi = 1
⟨w · xi⟩ + b ≤ -1 for yi = -1.
58
Solve the constrained minimization
Standard Lagrangian method:
LP = ½⟨w · w⟩ − Σi=1..r αi [ yi(⟨w · xi⟩ + b) − 1 ]    (41)
where αi ≥ 0 are the Lagrange multipliers.
59
Kuhn-Tucker conditions
61
Dual formulation
From primal to a dual: Setting to zero the
partial derivatives of the Lagrangian (41) with
respect to the primal variables (i.e., w and
b), and substituting the resulting relations
back into the Lagrangian.
I.e., substitute (48) and (49), into the original
Lagrangian (41) to eliminate the primal variables
LD = Σi=1..r αi − ½ Σi,j=1..r yi yj αi αj ⟨xi · xj⟩    (55)
62
Dual optimization problem
64
Linear SVM: Non-separable case
The linearly separable case is the ideal situation.
Real-life data may have noise or errors.
Class label incorrect or randomness in the application
domain.
Recall in the separable case, the problem was
Minimize: ⟨w · w⟩ / 2
Subject to: yi(⟨w · xi⟩ + b) ≥ 1,  i = 1, 2, ..., r
With noisy data, the constraints may not be
satisfied. Then, no solution!
65
Relax the constraints
To allow errors in the data, we relax the margin constraints by introducing slack variables ξi (≥ 0) as follows:
⟨w · xi⟩ + b ≥ 1 − ξi for yi = 1
⟨w · xi⟩ + b ≤ -1 + ξi for yi = -1.
The new constraints are:
Subject to: yi(⟨w · xi⟩ + b) ≥ 1 − ξi,  i = 1, ..., r,
ξi ≥ 0,  i = 1, 2, ..., r.
66
Geometric interpretation
Two error data points xa and xb (circled) in wrong
regions
67
Penalize errors in objective function
We need to penalize the errors in the
objective function.
A natural way of doing it is to assign an extra
cost for errors to change the objective
function to
Minimize: ⟨w · w⟩ / 2 + C Σi=1..r (ξi)^k    (60)
k = 1 is commonly used, which has the advantage that neither ξi nor its Lagrange multipliers appear in the dual formulation.
68
New optimization problem
Minimize: ⟨w · w⟩ / 2 + C Σi=1..r ξi    (61)
Subject to: yi(⟨w · xi⟩ + b) ≥ 1 − ξi,  i = 1, 2, ..., r
ξi ≥ 0,  i = 1, 2, ..., r
69
Kuhn-Tucker conditions
70
From primal to dual
As in the linearly separable case, we transform the primal into a dual by setting to zero the partial derivatives of the Lagrangian (62) with respect to the primal variables (i.e., w, b, and ξi), and substituting the resulting relations back into the Lagrangian.
I.e., we substitute Equations (63), (64), and (65) into the primal Lagrangian (62).
From Equation (65), C − αi − μi = 0, we can deduce that αi ≤ C because μi ≥ 0.
71
Dual
The dual of (61) is
72
Find primal variable values
The dual problem (72) can be solved numerically.
The resulting αi values are then used to compute w and b. w is computed using Equation (63) and b is computed using the Kuhn-Tucker complementarity conditions (70) and (71).
Since the ξi values are not known, we need to get around them.
From Equations (65), (70), and (71), we observe that if 0 < αi < C then both ξi = 0 and yi(⟨w · xi⟩ + b) − 1 + ξi = 0. Thus, we can use any training data point for which 0 < αi < C and Equation (69) (with ξi = 0) to compute b:
b = 1/yi − Σj=1..r yj αj ⟨xj · xi⟩    (73)
73
(65), (70) and (71) in fact tell us more
75
How to deal with nonlinear separation?
The SVM formulations require linear separation.
76
Space transformation
The basic idea is to map the data in the input space X to a feature space F via a nonlinear mapping φ,
φ : X → F
x ↦ φ(x)    (76)
After the mapping, the original training data set {(x1, y1), (x2, y2), …, (xr, yr)} becomes:
{(φ(x1), y1), (φ(x2), y2), …, (φ(xr), yr)}    (77)
77
Geometric interpretation
79
An example space transformation
Suppose our input space is 2-dimensional, and we choose the following transformation (mapping) from 2-D to 3-D:
(x1, x2) ↦ (x1², x2², √2·x1x2)
The training example ((2, 3), -1) in the input space is transformed to the following in the feature space:
((4, 9, 6√2), -1) ≈ ((4, 9, 8.5), -1)
80
Problem with explicit transformation
The potential problem with this explicit data
transformation and then applying the linear SVM is
that it may suffer from the curse of dimensionality.
The number of dimensions in the feature space can
be huge with some useful transformations even with
reasonable numbers of attributes in the input space.
This makes it computationally infeasible to handle.
Fortunately, explicit transformation is not needed.
81
Kernel functions
We notice that in the dual formulation both
the construction of the optimal hyperplane (79) in F, and
the evaluation of the corresponding decision function (80)
only require dot products ⟨φ(x) · φ(z)⟩ and never the mapped vector φ(x) in its explicit form. This is a crucial point.
Thus, if we have a way to compute the dot product ⟨φ(x) · φ(z)⟩ using the input vectors x and z directly,
then we do not need to know the feature vector φ(x), or even the mapping φ itself.
In SVM, this is done through the use of kernel functions, denoted by K:
K(x, z) = ⟨φ(x) · φ(z)⟩    (82)
82
An example kernel function
Polynomial kernel: K(x, z) = ⟨x · z⟩^d    (83)
Let us compute the kernel with degree d = 2 in a 2-dimensional space: x = (x1, x2) and z = (z1, z2).
⟨x · z⟩² = (x1z1 + x2z2)²
        = x1²z1² + 2x1z1x2z2 + x2²z2²    (84)
        = ⟨(x1², x2², √2·x1x2) · (z1², z2², √2·z1z2)⟩
        = ⟨φ(x) · φ(z)⟩
This shows that the kernel ⟨x · z⟩² is a dot product in a transformed feature space.
83
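A quick numeric check (with invented vectors) confirms the identity in (84): the degree-2 polynomial kernel equals the dot product of the explicit feature maps:

```python
# Check that <x.z>^2 equals phi(x).phi(z) for the feature map in Eq. (84).
import numpy as np

def phi(v):
    x1, x2 = v
    return np.array([x1**2, x2**2, np.sqrt(2) * x1 * x2])

x = np.array([2.0, 3.0])
z = np.array([1.0, 4.0])

kernel_value = np.dot(x, z) ** 2            # <x.z>^2
explicit_value = np.dot(phi(x), phi(z))     # phi(x).phi(z)
print(kernel_value, explicit_value)         # both 196.0
```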
Kernel trick
The derivation in (84) is only for illustration
purposes.
We do not need to find the mapping function.
We can simply apply the kernel function directly by
replacing all the dot products ⟨φ(x) · φ(z)⟩ in (79) and (80) with the kernel function K(x, z) (e.g., the polynomial kernel ⟨x · z⟩^d in (83)).
This strategy is called the kernel trick.
84
Is it a kernel function?
The question is: how do we know whether a function is a kernel without performing a derivation such as that in (84)? I.e.,
how do we know that a kernel function is indeed a dot product in some feature space?
This question is answered by a theorem called Mercer's theorem, which we will not discuss here.
85
Commonly used kernels
It is clear that the idea of kernel generalizes the dot
product in the input space. This dot product is also
a kernel with the feature map being the identity
86
Some other issues in SVM
SVM works only in a real-valued space. For a
categorical attribute, we need to convert its
categorical values to numeric values.
SVM does only two-class classification. For multi-
class problems, some strategies can be applied,
e.g., one-against-rest, and error-correcting output
coding.
The hyperplane produced by SVM is hard for human users to understand. The matter is made worse by kernels. Thus, SVM is commonly used in applications that do not require human understanding of the model.
87
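As a practical illustration of these points (a sketch assuming scikit-learn is available; the toy data are invented), categorical attributes can be one-hot encoded into numeric values, and a multi-class problem can be handled with a one-against-rest wrapper around a binary SVM:

```python
# Sketch: encode categorical attributes numerically and wrap a binary SVM
# in a one-against-rest strategy for a 3-class problem. Data are invented.
import numpy as np
from sklearn.preprocessing import OneHotEncoder
from sklearn.svm import SVC
from sklearn.multiclass import OneVsRestClassifier

X_cat = np.array([["red", "small"], ["blue", "large"], ["green", "small"],
                  ["red", "large"], ["blue", "small"], ["green", "large"]])
y = np.array([0, 1, 2, 0, 1, 2])

X_num = OneHotEncoder().fit_transform(X_cat).toarray()   # categorical -> numeric
clf = OneVsRestClassifier(SVC(kernel="rbf", C=1.0)).fit(X_num, y)
print(clf.predict(X_num[:2]))
```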
K-Nearest Neighbor
Classification
88
k-Nearest Neighbor Classification (kNN)
Unlike all the previous learning methods, kNN does not build a model from the training data.
To classify a test instance d, define the k-neighborhood P as the k nearest neighbors of d.
Count the number n of training instances in P that belong to class cj.
Estimate Pr(cj | d) as n/k.
No training is needed. Classification time is linear in the training set size for each test case.
89
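A minimal Python sketch of this procedure (with invented training points and k = 3) is:

```python
# Minimal kNN sketch: estimate Pr(cj | d) as the fraction of the k nearest
# training instances that belong to class cj, and predict the majority class.
import numpy as np
from collections import Counter

def knn_classify(d, X_train, y_train, k=3):
    distances = np.linalg.norm(X_train - d, axis=1)   # Euclidean distances to d
    neighbors = np.argsort(distances)[:k]             # indices of k nearest neighbors
    votes = Counter(y_train[i] for i in neighbors)    # class counts within P
    return votes.most_common(1)[0][0]                 # majority class

X_train = np.array([[1.0, 1.0], [1.2, 0.8], [5.0, 5.0], [5.2, 4.9], [0.9, 1.1]])
y_train = np.array(["science", "science", "sports", "sports", "science"])
print(knn_classify(np.array([1.1, 1.0]), X_train, y_train, k=3))   # -> "science"
```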
kNN Algorithm
[Figure: training points from two classes and a new point; what is Pr(science | new point)?]
91
Discussions
kNN can deal with complex and arbitrary
decision boundaries.
Despite its simplicity, researchers have shown that the classification accuracy of kNN can be quite strong and, in many cases, as accurate as that of more elaborate methods.
kNN is slow at classification time.
kNN does not produce an understandable model.
92
Artificial Neural Network
93
Artificial Neural Network
Computational models inspired by the
human brain:
Algorithms that try to mimic the brain.
Massively parallel, distributed system, made up
of simple processing units (neurons)
Synaptic connection strengths among neurons
are used to store the acquired knowledge.
Knowledge is acquired by the network from its
environment through a learning process
94
Applications of ANNs
95
Properties:
Inputs are flexible
any real values
Highly correlated or independent
Target function may be discrete-valued, real-valued,
or vectors of discrete or real values
Outputs are real numbers between 0 and 1
Resistant to errors in the training data
Long training time
Fast evaluation
The function produced can be difficult for humans to
interpret
96
When to consider Neural Networks
Input is high-dimensional, discrete or real-valued
Output is discrete or real-valued
Output is a vector of values
Possibly noisy data
Form of target function is unknown
Human readability of the result is not important
Examples:
Speech phoneme recognition
Image classification
Financial prediction
97
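To illustrate the properties listed above (sigmoid units whose outputs are real numbers between 0 and 1, and fast evaluation), here is a minimal sketch of a forward pass through a tiny feed-forward network; the weights, biases, and input are invented:

```python
# Minimal sketch of a forward pass through a tiny feed-forward network with
# sigmoid units: each unit outputs a real number between 0 and 1.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.5, -1.2, 3.0])               # input vector (3 features)
W1 = np.array([[0.2, -0.4, 0.1],             # 2 hidden units x 3 inputs
               [0.7,  0.3, -0.5]])
b1 = np.array([0.1, -0.2])
W2 = np.array([[1.5, -1.0]])                 # 1 output unit x 2 hidden units
b2 = np.array([0.05])

hidden = sigmoid(W1 @ x + b1)                # hidden-layer activations
output = sigmoid(W2 @ hidden + b2)           # network output in (0, 1)
print(hidden, output)
```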
Summary
Applications of supervised learning are in almost
any field or domain.
We studied 5 classification techniques.
There are still many other methods, e.g.,
Bayesian networks
Genetic algorithms
Fuzzy classification
111