CS 229: Machine Learning
Andrew Ng

Handout #1: Course Information
Course Description
Course Materials
There is no required text for this course. Notes will be posted periodically on
the course web site. The following books are recommended as optional reading:
Christopher Bishop, Pattern Recognition and Machine Learning. Springer, 2006.
Richard Duda, Peter Hart and David Stork, Pattern Classification, 2nd ed. John Wiley &
Sons, 2001.
Tom Mitchell, Machine Learning. McGraw-Hill, 1997.
Richard Sutton and Andrew Barto, Reinforcement Learning: An Introduction. MIT Press,
1998.
Course grades will be based 40% on homeworks (10% each), 20% on the
midterm, and 40% on the major term project. Up to 3% extra credit may be
awarded for class participation.
Sections
To review material from the prerequisites or to supplement the lecture material,
there will occasionally be extra discussion sections held on Friday. An
announcement will be made whenever one of these sections is held. Attendance
at these sections is optional.
Supervised learning
Let's start by talking about a few examples of supervised learning problems.
Suppose we have a dataset giving the living areas and prices of 47 houses
from Portland, Oregon:
    Living area (ft²)    Price ($1000s)
    2104                 400
    1600                 330
    2400                 369
    1416                 232
    3000                 540
    ...                  ...
We can plot this data:
[Figure: scatter plot of the housing data, price (in $1000s) versus living area in square feet.]
Given data like this, how can we learn to predict the prices of other houses
in Portland, as a function of the size of their living areas?
CS229 Winter 2003
To establish notation for future use, we’ll use x(i) to denote the “input”
variables (living area in this example), also called input features, and y (i)
to denote the “output” or target variable that we are trying to predict
(price). A pair (x(i) , y (i) ) is called a training example, and the dataset
that we’ll be using to learn—a list of m training examples {(x(i) , y (i) ); i =
1, . . . , m}—is called a training set. Note that the superscript “(i)” in the
notation is simply an index into the training set, and has nothing to do with
exponentiation. We will also use X to denote the space of input values, and Y
the space of output values. In this example, X = Y = R.
To describe the supervised learning problem slightly more formally, our
goal is, given a training set, to learn a function h : X → Y so that h(x) is a
“good” predictor for the corresponding value of y. For historical reasons, this
function h is called a hypothesis. Seen pictorially, the process is therefore
like this:
[Diagram: the training set is fed to a learning algorithm, which outputs a hypothesis h; a new input x (the living area of a house) is fed to h, which outputs the predicted y (the predicted price of the house).]
When the target variable that we’re trying to predict is continuous, such
as in our housing example, we call the learning problem a regression prob-
lem. When y can take on only a small number of discrete values (such as
if, given the living area, we wanted to predict if a dwelling is a house or an
apartment, say), we call it a classification problem.
Part I
Linear Regression
To make our housing example more interesting, let's consider a slightly richer
dataset in which we also know the number of bedrooms in each house:
    Living area (ft²)    #bedrooms    Price ($1000s)
    2104                 3            400
    1600                 3            330
    2400                 3            369
    1416                 2            232
    3000                 4            540
    ...                  ...          ...
Here, the x's are two-dimensional vectors in R^2. For instance, x_1^(i) is the
living area of the i-th house in the training set, and x_2^(i) is its number of
bedrooms. (In general, when designing a learning problem, it will be up to
you to decide what features to choose, so if you are out in Portland gathering
housing data, you might also decide to include other features such as whether
each house has a fireplace, the number of bathrooms, and so on. We’ll say
more about feature selection later, but for now let's take the features as given.)
To perform supervised learning, we must decide how we're going to represent
functions/hypotheses h in a computer. As an initial choice, let's say
we decide to approximate y as a linear function of x:
hθ (x) = θ0 + θ1 x1 + θ2 x2
Here, the θi ’s are the parameters (also called weights) parameterizing the
space of linear functions mapping from X to Y. When there is no risk of
confusion, we will drop the θ subscript in hθ (x), and write it more simply as
h(x). To simplify our notation, we also introduce the convention of letting
x0 = 1 (this is the intercept term), so that
h(x) = Σ_{i=0}^{n} θ_i x_i = θ^T x,
where on the right-hand side above we are viewing θ and x both as vectors,
and here n is the number of input variables (not counting x0 ).
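As a quick sketch of this convention, here is the hypothesis evaluated on one house; the θ values and house below are made up purely for illustration:

```python
import numpy as np

# Made-up parameter values and house, purely for illustration.
theta = np.array([50.0, 0.1, 20.0])         # [theta_0, theta_1, theta_2]
living_area, bedrooms = 2104.0, 3.0

x = np.array([1.0, living_area, bedrooms])  # prepend the intercept term x0 = 1
h = theta @ x                               # h(x) = sum_i theta_i x_i = theta^T x
```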
Now, given a training set, how do we pick, or learn, the parameters θ?
One reasonable method seems to be to make h(x) close to y, at least for
the training examples we have. To formalize this, we define the cost function

J(θ) = (1/2) Σ_{i=1}^{m} (h_θ(x^(i)) − y^(i))^2.
If you’ve seen linear regression before, you may recognize this as the familiar
least-squares cost function that gives rise to the ordinary least squares
regression model. Whether or not you have seen it previously, let's keep
going, and we’ll eventually show this to be a special case of a much broader
family of algorithms.
1 LMS algorithm
We want to choose θ so as to minimize J(θ). To do so, lets use a search
algorithm that starts with some “initial guess” for θ, and that repeatedly
changes θ to make J(θ) smaller, until hopefully we converge to a value of
θ that minimizes J(θ). Specifically, lets consider the gradient descent
algorithm, which starts with some initial θ, and repeatedly performs the
update:
θ_j := θ_j − α ∂/∂θ_j J(θ).
(This update is simultaneously performed for all values of j = 0, . . . , n.)
Here, α is called the learning rate. This is a very natural algorithm that
repeatedly takes a step in the direction of steepest decrease of J.
In order to implement this algorithm, we have to work out what is the
partial derivative term on the right hand side. Let's first work it out for the
case where we have only one training example (x, y), so that we can neglect
the sum in the definition of J. We have:
∂/∂θ_j J(θ) = ∂/∂θ_j (1/2)(h_θ(x) − y)^2
            = 2 · (1/2) · (h_θ(x) − y) · ∂/∂θ_j (h_θ(x) − y)
            = (h_θ(x) − y) · ∂/∂θ_j ( Σ_{i=0}^{n} θ_i x_i − y )
            = (h_θ(x) − y) x_j

For a single training example, this gives the update rule:

θ_j := θ_j + α (y^(i) − h_θ(x^(i))) x_j^(i).
The rule is called the LMS update rule (LMS stands for “least mean squares”),
and is also known as the Widrow-Hoff learning rule. This rule has several
properties that seem natural and intuitive. For instance, the magnitude of
the update is proportional to the error term (y (i) − hθ (x(i) )); thus, for in-
stance, if we are encountering a training example on which our prediction
nearly matches the actual value of y (i) , then we find that there is little need
to change the parameters; in contrast, a larger change to the parameters will
be made if our prediction hθ (x(i) ) has a large error (i.e., if it is very far from
y (i) ).
We derived the LMS rule for the case when there was only a single training
example. There are two ways to modify this method for a training set of
more than one example. The first is to replace it with the following algorithm:

Repeat until convergence {
    θ_j := θ_j + α Σ_{i=1}^{m} (y^(i) − h_θ(x^(i))) x_j^(i)   (for every j)
}
The reader can easily verify that the quantity in the summation in the update
rule above is just ∂J(θ)/∂θj (for the original definition of J). So, this is
simply gradient descent on the original cost function J. This method looks
at every example in the entire training set on every step, and is called batch
gradient descent. Note that, while gradient descent can be susceptible
to local minima in general, the optimization problem we have posed here
for linear regression has only one global optimum, and no other local
optima; thus gradient descent always converges (assuming the learning
rate α is not too large) to the global minimum. Indeed, J is a convex
quadratic function.
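Batch gradient descent on the least-squares cost can be sketched in a few lines; the toy dataset, learning rate, and iteration count below are illustrative choices, not values from the notes:

```python
import numpy as np

# Toy dataset (made up): y = 1 + 2x exactly, with an intercept column x0 = 1.
X = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0], [1.0, 4.0]])
y = np.array([3.0, 5.0, 7.0, 9.0])
m = len(y)

theta = np.zeros(2)
alpha = 0.05                        # learning rate (illustrative value)
for _ in range(5000):
    grad = X.T @ (X @ theta - y)    # gradient of J(theta) = (1/2)||X theta - y||^2
    theta -= alpha * grad / m       # dividing by m keeps the step size scale-free in m
```

Since J is convex and quadratic, the iterates converge to the unique minimizer, here θ = (1, 2).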
Here is an example of gradient descent as it is run to minimize a quadratic
function.
¹We use the notation “a := b” to denote an operation (in a computer program) in
which we set the value of a variable a to be equal to the value of b. In other words, this
operation overwrites a with the value of b. In contrast, we will write “a = b” when we are
asserting a statement of fact, that the value of a is equal to the value of b.
[Figure: contours of a quadratic function, with the gradient-descent trajectory marked.]
The ellipses shown above are the contours of a quadratic function. Also
shown is the trajectory taken by gradient descent, which was initialized at
(48,30). The x's in the figure (joined by straight lines) mark the successive
values of θ that gradient descent went through.
When we run batch gradient descent to fit θ on our previous dataset,
to learn to predict housing price as a function of living area, we obtain
θ0 = 71.27, θ1 = 0.1345. If we plot hθ (x) as a function of x (area), along
with the training data, we obtain the following figure:
[Figure: the housing data with the fitted line h_θ(x) overlaid; price (in $1000s) versus square feet.]
If the number of bedrooms were included as one of the input features as well,
we get θ0 = 89.60, θ1 = 0.1392, θ2 = −8.738.
The above results were obtained with batch gradient descent. There is
an alternative to batch gradient descent that also works very well. Consider
the following algorithm:
Loop {
    for i = 1 to m, {
        θ_j := θ_j + α (y^(i) − h_θ(x^(i))) x_j^(i)   (for every j)
    }
}
In this algorithm, we repeatedly run through the training set, and each time
we encounter a training example, we update the parameters according to
the gradient of the error with respect to that single training example only.
This algorithm is called stochastic gradient descent (also incremental
gradient descent). Whereas batch gradient descent has to scan through
the entire training set before taking a single step—a costly operation if m is
large—stochastic gradient descent can start making progress right away, and
continues to make progress with each example it looks at. Often, stochastic
gradient descent gets θ “close” to the minimum much faster than batch gra-
dient descent. (Note however that it may never “converge” to the minimum,
and the parameters θ will keep oscillating around the minimum of J(θ); but
in practice most of the values near the minimum will be reasonably good
approximations to the true minimum.2 ) For these reasons, particularly when
the training set is large, stochastic gradient descent is often preferred over
batch gradient descent.
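The stochastic-gradient loop above translates directly into code. A sketch on a made-up noise-free dataset, with a randomly shuffled visiting order (a common refinement not spelled out in the text):

```python
import numpy as np

# Made-up dataset: y = 1 + 2x exactly, with an intercept column x0 = 1.
X = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0], [1.0, 4.0]])
y = np.array([3.0, 5.0, 7.0, 9.0])

rng = np.random.default_rng(0)      # random visiting order each pass
theta = np.zeros(2)
alpha = 0.02
for _ in range(4000):               # passes over the training set
    for i in rng.permutation(len(y)):
        err = y[i] - X[i] @ theta   # y^(i) - h_theta(x^(i))
        theta += alpha * err * X[i] # LMS update from a single example
```

Because this toy data is noise-free, every per-example residual is zero at the optimum, so here the iterates settle at θ = (1, 2) rather than oscillating.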
We now state without proof some facts of matrix derivatives (we won’t
need some of these until later this quarter). Equation (4) applies only to
non-singular square matrices A, where |A| denotes the determinant of A. We
have:
∇_A tr AB = B^T                                  (1)
∇_{A^T} f(A) = (∇_A f(A))^T                      (2)
∇_A tr ABA^T C = CAB + C^T AB^T                  (3)
∇_A |A| = |A|(A^{-1})^T.                         (4)
To make our matrix notation more concrete, let us now explain in detail the
meaning of the first of these equations. Suppose we have some fixed matrix
B ∈ R^{n×m}. We can then define a function f : R^{m×n} → R according to
f (A) = trAB. Note that this definition makes sense, because if A ∈ Rm×n ,
then AB is a square matrix, and we can apply the trace operator to it; thus,
f does indeed map from Rm×n to R. We can then apply our definition of
matrix derivatives to find ∇_A f(A), which will itself be an m-by-n matrix.
Equation (1) above states that the (i, j) entry of this matrix will be given by
the (i, j)-entry of B T , or equivalently, by Bji .
The proofs of Equations (1-3) are reasonably simple, and are left as an
exercise to the reader. Equation (4) can be derived using the adjoint
representation of the inverse of a matrix.³
Also, let ~y be the m-dimensional vector containing all the target values from
the training set:
~y = [ y^(1), y^(2), . . . , y^(m) ]^T.
Now, since hθ (x(i) ) = (x(i) )T θ, we can easily verify that
Xθ − ~y = [ (x^(1))^T θ, . . . , (x^(m))^T θ ]^T − [ y^(1), . . . , y^(m) ]^T
        = [ h_θ(x^(1)) − y^(1), . . . , h_θ(x^(m)) − y^(m) ]^T.
Thus, using the fact that for a vector z, we have z^T z = Σ_i z_i^2:

(1/2)(Xθ − ~y)^T (Xθ − ~y) = (1/2) Σ_{i=1}^{m} (h_θ(x^(i)) − y^(i))^2
                           = J(θ).
Hence,
∇_θ J(θ) = ∇_θ (1/2)(Xθ − ~y)^T (Xθ − ~y)
         = (1/2) ∇_θ ( θ^T X^T Xθ − θ^T X^T ~y − ~y^T Xθ + ~y^T ~y )
         = (1/2) ∇_θ tr( θ^T X^T Xθ − θ^T X^T ~y − ~y^T Xθ + ~y^T ~y )
         = (1/2) ∇_θ ( tr θ^T X^T Xθ − 2 tr ~y^T Xθ )
         = (1/2) ( X^T Xθ + X^T Xθ − 2X^T ~y )
         = X^T Xθ − X^T ~y.
In the third step, we used the fact that the trace of a real number is just the
real number; the fourth step used the fact that trA = trAT , and the fifth
step used Equation (5) with AT = θ, B = B T = X T X, and C = I, and
Equation (1). To minimize J, we set its derivatives to zero, and obtain the
normal equations:
X T Xθ = X T ~y
Thus, the value of θ that minimizes J(θ) is given in closed form by the
equation
θ = (X T X)−1 X T ~y .
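The closed-form solution can be computed directly; solving the linear system XᵀXθ = Xᵀ~y is generally preferable numerically to forming the explicit inverse. A sketch on illustrative data:

```python
import numpy as np

# Illustrative data: y = 1 + 2x exactly, with an intercept column x0 = 1.
X = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0], [1.0, 4.0]])
y = np.array([3.0, 5.0, 7.0, 9.0])

# Normal equations: X^T X theta = X^T y. Solving the system avoids explicitly
# computing (X^T X)^{-1}.
theta = np.linalg.solve(X.T @ X, X.T @ y)
```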
3 Probabilistic interpretation
When faced with a regression problem, why might linear regression, and
specifically why might the least-squares cost function J, be a reasonable
choice? In this section, we will give a set of probabilistic assumptions, under
which least-squares regression is derived as a very natural algorithm.
Let us assume that the target variables and the inputs are related via the
equation
y^(i) = θ^T x^(i) + ε^(i),

where ε^(i) is an error term that captures either unmodeled effects (such as
if there are some features very pertinent to predicting housing price, but
that we'd left out of the regression), or random noise. Let us further assume
that the ε^(i) are distributed IID (independently and identically distributed)
according to a Gaussian distribution (also called a Normal distribution) with
mean zero and some variance σ^2. We can write this assumption as "ε^(i) ∼
N(0, σ^2)." I.e., the density of ε^(i) is given by
p(ε^(i)) = (1/(√(2π) σ)) exp( −(ε^(i))^2 / (2σ^2) ).

This implies that

p(y^(i) | x^(i); θ) = (1/(√(2π) σ)) exp( −(y^(i) − θ^T x^(i))^2 / (2σ^2) ).
The notation “p(y (i) |x(i) ; θ)” indicates that this is the distribution of y (i)
given x(i) and parameterized by θ. Note that we should not condition on θ
("p(y^(i)|x^(i), θ)"), since θ is not a random variable. We can also write the
distribution of y^(i) as y^(i) | x^(i); θ ∼ N(θ^T x^(i), σ^2).
Given X (the design matrix, which contains all the x(i) ’s) and θ, what
is the distribution of the y (i) ’s? The probability of the data is given by
p(~y |X; θ). This quantity is typically viewed as a function of ~y (and perhaps X),
for a fixed value of θ. When we wish to explicitly view this as a function of
θ, we will instead call it the likelihood function:

L(θ) = L(θ; X, ~y) = p(~y |X; θ).
Note that by the independence assumption on the ǫ(i) ’s (and hence also the
y (i) ’s given the x(i) ’s), this can also be written
L(θ) = Π_{i=1}^{m} p(y^(i) | x^(i); θ)
     = Π_{i=1}^{m} (1/(√(2π) σ)) exp( −(y^(i) − θ^T x^(i))^2 / (2σ^2) ).
Now, given this probabilistic model relating the y (i) ’s and the x(i) ’s, what
is a reasonable way of choosing our best guess of the parameters θ? The
principle of maximum likelihood says that we should choose θ so
as to make the data as high probability as possible. I.e., we should choose θ
to maximize L(θ).
Instead of maximizing L(θ), we can also maximize any strictly increasing
function of L(θ). In particular, the derivations will be a bit simpler if we
[Figure: three plots of y versus x for the same dataset, showing fits of varying complexity.]
1. Fit θ to minimize Σ_i (y^(i) − θ^T x^(i))^2.

2. Output θ^T x.
In contrast, the locally weighted linear regression algorithm does the fol-
lowing:
1. Fit θ to minimize Σ_i w^(i) (y^(i) − θ^T x^(i))^2.
2. Output θT x.
Here, the w(i) ’s are non-negative valued weights. Intuitively, if w(i) is large
for a particular value of i, then in picking θ, we’ll try hard to make (y (i) −
θT x(i) )2 small. If w(i) is small, then the (y (i) − θT x(i) )2 error term will be
pretty much ignored in the fit.
A fairly standard choice for the weights is⁴

w^(i) = exp( −(x^(i) − x)^2 / (2τ^2) ).
Note that the weights depend on the particular point x at which we're trying
to make our prediction. Moreover, if |x^(i) − x| is small, then w^(i) is close to 1; and
if |x(i) − x| is large, then w(i) is small. Hence, θ is chosen giving a much
higher “weight” to the (errors on) training examples close to the query point
x. (Note also that while the formula for the weights takes a form that is
cosmetically similar to the density of a Gaussian distribution, the w(i) ’s do
not directly have anything to do with Gaussians, and in particular the w(i)
are not random variables, normally distributed or otherwise.) The parameter
τ controls how quickly the weight of a training example falls off with distance
of its x(i) from the query point x; τ is called the bandwidth parameter, and
is also something that you’ll get to experiment with in your homework.
Locally weighted linear regression is the first example we’re seeing of a
non-parametric algorithm. The (unweighted) linear regression algorithm
that we saw earlier is known as a parametric learning algorithm, because
it has a fixed, finite number of parameters (the θi ’s), which are fit to the
data. Once we’ve fit the θi ’s and stored them away, we no longer need to
keep the training data around to make future predictions. In contrast, to
make predictions using locally weighted linear regression, we need to keep
the entire training set around. The term “non-parametric” (roughly) refers
to the fact that the amount of stuff we need to keep in order to represent the
hypothesis h grows linearly with the size of the training set.
⁴If x is vector-valued, this is generalized to be w^(i) = exp(−(x^(i) − x)^T (x^(i) − x)/(2τ^2)),
or w^(i) = exp(−(x^(i) − x)^T Σ^{−1} (x^(i) − x)/2), for an appropriate choice of τ or Σ.
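The two LWR steps can be sketched as follows, solving the weighted normal equations (XᵀWX)θ = XᵀW~y at each query point; the dataset and bandwidth are made up for illustration:

```python
import numpy as np

def lwr_predict(x_query, X, y, tau):
    """Predict at one query point with locally weighted linear regression."""
    # w^(i) = exp(-(x^(i) - x)^2 / (2 tau^2)), using the non-intercept feature
    w = np.exp(-(X[:, 1] - x_query) ** 2 / (2.0 * tau ** 2))
    W = np.diag(w)
    # Step 1: fit theta to minimize sum_i w^(i) (y^(i) - theta^T x^(i))^2,
    # via the weighted normal equations (X^T W X) theta = X^T W y.
    theta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    # Step 2: output theta^T x.
    return np.array([1.0, x_query]) @ theta

# Illustrative data (made up): y is a non-linear function of x.
x_raw = np.arange(9, dtype=float)
X = np.column_stack([np.ones(9), x_raw])   # intercept column x0 = 1
y = x_raw ** 2

pred = lwr_predict(4.0, X, y, tau=0.5)     # a local linear fit near x = 4
```

Note that the whole training set is needed at prediction time, exactly as the non-parametric discussion above describes.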
Part II
Classification and logistic
regression
Let's now talk about the classification problem. This is just like the regression
problem, except that the values y we now want to predict take on only
a small number of discrete values. For now, we will focus on the binary
classification problem in which y can take on only two values, 0 and 1.
(Most of what we say here will also generalize to the multiple-class case.)
For instance, if we are trying to build a spam classifier for email, then x(i)
may be some features of a piece of email, and y may be 1 if it is a piece
of spam mail, and 0 otherwise. 0 is also called the negative class, and 1
the positive class, and they are sometimes also denoted by the symbols “-”
and “+.” Given x(i) , the corresponding y (i) is also called the label for the
training example.
5 Logistic regression
We could approach the classification problem ignoring the fact that y is
discrete-valued, and use our old linear regression algorithm to try to predict
y given x. However, it is easy to construct examples where this method
performs very poorly. Intuitively, it also doesn’t make sense for hθ (x) to take
values larger than 1 or smaller than 0 when we know that y ∈ {0, 1}.
To fix this, let's change the form for our hypotheses hθ (x). We will choose
h_θ(x) = g(θ^T x) = 1 / (1 + e^{−θ^T x}),

where

g(z) = 1 / (1 + e^{−z})
is called the logistic function or the sigmoid function. Here is a plot
showing g(z):
[Figure: plot of the sigmoid function g(z), which increases from 0 (as z → −∞) to 1 (as z → +∞), with g(0) = 0.5.]
So, given the logistic regression model, how do we fit θ for it? Follow-
ing how we saw least squares regression could be derived as the maximum
likelihood estimator under a set of assumptions, let's endow our classification
model with a set of probabilistic assumptions, and then fit the parameters
via maximum likelihood.
Let us assume that

P(y = 1 | x; θ) = h_θ(x)
P(y = 0 | x; θ) = 1 − h_θ(x).
Assuming that the m training examples were generated independently, the likelihood of the parameters is

L(θ) = p(~y | X; θ)
     = Π_{i=1}^{m} p(y^(i) | x^(i); θ)
     = Π_{i=1}^{m} ( h_θ(x^(i)) )^{y^(i)} ( 1 − h_θ(x^(i)) )^{1−y^(i)}.
Maximizing the log likelihood ℓ(θ) = log L(θ), and differentiating with respect to θ_j for a single training example (x, y), one finds

∂ℓ(θ)/∂θ_j = (y − h_θ(x)) x_j.
Above, we used the fact that g ′ (z) = g(z)(1 − g(z)). This therefore gives us
the stochastic gradient ascent rule
θ_j := θ_j + α (y^(i) − h_θ(x^(i))) x_j^(i).
If we compare this to the LMS update rule, we see that it looks identical; but
this is not the same algorithm, because hθ (x(i) ) is now defined as a non-linear
function of θT x(i) . Nonetheless, it’s a little surprising that we end up with
the same update rule for a rather different algorithm and learning problem.
Is this coincidence, or is there a deeper reason behind this? We'll answer this
when we get to GLMs. (See also the extra credit problem on Q3 of
problem set 1.)
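The gradient ascent rule above can be sketched on a tiny made-up dataset (using batch rather than stochastic updates, for brevity; all values are illustrative):

```python
import numpy as np

def g(z):
    return 1.0 / (1.0 + np.exp(-z))     # the logistic (sigmoid) function

# Tiny made-up dataset with an intercept column x0 = 1; labels in {0, 1}.
X = np.array([[1.0, -2.0], [1.0, -1.0], [1.0, 1.0], [1.0, 2.0]])
y = np.array([0.0, 0.0, 1.0, 1.0])

theta = np.zeros(2)
alpha = 0.1
for _ in range(2000):                   # batch gradient ascent on the log likelihood
    h = g(X @ theta)
    theta += alpha * X.T @ (y - h)      # theta_j += alpha * sum_i (y^(i) - h^(i)) x_j^(i)

probs = g(X @ theta)                    # fitted P(y = 1 | x; theta) for each example
```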
If we then let hθ (x) = g(θT x) as before but using this modified definition of
g, and if we use the update rule
θ_j := θ_j + α (y^(i) − h_θ(x^(i))) x_j^(i).
[Figure: three plots of a function f(x), showing successive iterations of Newton's method.]
In the leftmost figure, we see the function f plotted along with the line
y = 0. We’re trying to find θ so that f (θ) = 0; the value of θ that achieves this
is about 1.3. Suppose we initialized the algorithm with θ = 4.5. Newton’s
method then fits a straight line tangent to f at θ = 4.5, and solves for
where that line evaluates to 0. (Middle figure.) This gives us the next guess
for θ, which is about 2.8. The rightmost figure shows the result of running
one more iteration, which updates θ to about 1.8. After a few more
iterations, we rapidly approach θ = 1.3.
Newton’s method gives a way of getting to f (θ) = 0. What if we want to
use it to maximize some function ℓ? The maxima of ℓ correspond to points
where its first derivative ℓ′ (θ) is zero. So, by letting f (θ) = ℓ′ (θ), we can use
the same algorithm to maximize ℓ, and we obtain update rule:
θ := θ − ℓ′(θ) / ℓ″(θ).
(Something to think about: How would this change if we wanted to use
Newton’s method to minimize rather than maximize a function?)
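The root-finding iteration θ := θ − f(θ)/f′(θ) is a few lines of code; the function f here is an arbitrary illustrative choice (its positive root is √2), not one from the notes, though the initial guess matches the figure's θ = 4.5:

```python
import math

# Illustrative choice of f: f(theta) = theta^2 - 2, with root sqrt(2).
def f(theta):
    return theta ** 2 - 2.0

def f_prime(theta):
    return 2.0 * theta

theta = 4.5                                     # initial guess
for _ in range(20):
    theta = theta - f(theta) / f_prime(theta)   # Newton step
```

To maximize a function ℓ instead, apply the same iteration to f = ℓ′, giving θ := θ − ℓ′(θ)/ℓ″(θ) as in the text.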
Part III
Generalized Linear Models⁵
So far, we’ve seen a regression example, and a classification example. In the
regression example, we had y|x; θ ∼ N(µ, σ^2), and in the classification one,
y|x; θ ∼ Bernoulli(φ), for some appropriate definitions of µ and φ as
functions of x and θ. In this section, we will show that both of these methods
are special cases of a broader family of models, called Generalized Linear
Models (GLMs). We will also show how other models in the GLM family
can be derived and applied to other classification and regression problems.
We say that a class of distributions is in the exponential family if it can be written in the form

p(y; η) = b(y) exp( η^T T(y) − a(η) ).        (6)

Here, η is called the natural parameter (also called the canonical parameter)
of the distribution; T(y) is the sufficient statistic (for the distributions
we consider, it will often be the case that T(y) = y); and a(η) is the log
partition function. The quantity e−a(η) essentially plays the role of a nor-
malization constant, that makes sure the distribution p(y; η) sums/integrates
over y to 1.
A fixed choice of T , a and b defines a family (or set) of distributions that
is parameterized by η; as we vary η, we then get different distributions within
this family.
We now show that the Bernoulli and the Gaussian distributions are ex-
amples of exponential family distributions. The Bernoulli distribution with
mean φ, written Bernoulli(φ), specifies a distribution over y ∈ {0, 1}, so that
p(y = 1; φ) = φ; p(y = 0; φ) = 1 − φ. As we vary φ, we obtain Bernoulli
distributions with different means. We now show that this class of Bernoulli
distributions, ones obtained by varying φ, is in the exponential family; i.e.,
that there is a choice of T , a and b so that Equation (6) becomes exactly the
class of Bernoulli distributions.
⁵The presentation of the material in this section takes inspiration from Michael I.
Jordan, Learning in graphical models (unpublished book draft), and also McCullagh and
Nelder, Generalized Linear Models (2nd ed.).
p(y; φ) = φ^y (1 − φ)^{1−y}
        = exp( y log φ + (1 − y) log(1 − φ) )
        = exp( ( log(φ/(1 − φ)) ) y + log(1 − φ) ).

Thus, the natural parameter is given by η = log(φ/(1 − φ)); inverting this gives φ = 1/(1 + e^{−η}). Further, we have

T(y) = y
a(η) = −log(1 − φ)
     = log(1 + e^η)
b(y) = 1.
This shows that the Bernoulli distribution can be written in the form of
Equation (6), using an appropriate choice of T , a and b.
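These identities are easy to check numerically; a small sketch (the value φ = 0.3 is an arbitrary choice):

```python
import math

# Pick an arbitrary phi and form the exponential-family quantities derived above.
phi = 0.3
eta = math.log(phi / (1.0 - phi))       # natural parameter
a = math.log(1.0 + math.exp(eta))       # log partition function a(eta)

# b(y) exp(eta * T(y) - a(eta)) with b(y) = 1 and T(y) = y should recover
# the Bernoulli probabilities phi^y (1 - phi)^(1 - y).
p1 = math.exp(eta * 1 - a)              # should equal phi
p0 = math.exp(eta * 0 - a)              # should equal 1 - phi
```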
Let's now move on to consider the Gaussian distribution. Recall that,
when deriving linear regression, the value of σ^2 had no effect on our final
choice of θ and h_θ(x). Thus, we can choose an arbitrary value for σ^2 without
changing anything. To simplify the derivation below, let's set σ^2 = 1.⁶ We
then have:
p(y; µ) = (1/√(2π)) exp( −(1/2)(y − µ)^2 )
        = (1/√(2π)) exp( −(1/2) y^2 ) · exp( µy − (1/2) µ^2 ).
⁶If we leave σ^2 as a variable, the Gaussian distribution can also be shown to be in the
exponential family, where η ∈ R^2 is now a 2-dimensional vector that depends on both µ and
σ. For the purposes of GLMs, however, the σ^2 parameter can also be treated by considering
a more general definition of the exponential family: p(y; η, τ) = b(y, τ) exp((η^T T(y) −
a(η))/c(τ)). Here, τ is called the dispersion parameter, and for the Gaussian, c(τ) = σ^2;
but given our simplification above, we won't need the more general definition for the
examples we will consider here.
η = µ
T(y) = y
a(η) = µ^2/2
     = η^2/2
b(y) = (1/√(2π)) exp(−y^2/2).
9 Constructing GLMs
Suppose you would like to build a model to estimate the number y of cus-
tomers arriving in your store (or number of page-views on your website) in
any given hour, based on certain features x such as store promotions, recent
advertising, weather, day-of-week, etc. We know that the Poisson distribu-
tion usually gives a good model for numbers of visitors. Knowing this, how
can we come up with a model for our problem? Fortunately, the Poisson is an
exponential family distribution, so we can apply a Generalized Linear Model
(GLM). In this section, we will describe a method for constructing
GLM models for problems such as these.
More generally, consider a classification or regression problem where we
would like to predict the value of some random variable y as a function of
x. To derive a GLM for this problem, we will make the following three
assumptions about the conditional distribution of y given x and about our
model:
1. y | x; θ ∼ ExponentialFamily(η). I.e., given x and θ, the distribution of
y follows some exponential family distribution, with parameter η.
2. Given x, our goal is to predict the expected value of T(y) given x.
In most of our examples, we will have T(y) = y, so this means we
would like the prediction h(x) output by our learned hypothesis h to
satisfy h(x) = E[y|x].

3. The natural parameter η and the inputs x are related linearly:
η = θ^T x. (Or, if η is vector-valued, then η_i = θ_i^T x.)
The third of these assumptions might seem the least well justified of
the above, and it might be better thought of as a “design choice” in our
recipe for designing GLMs, rather than as an assumption per se. These
three assumptions/design choices will allow us to derive a very elegant class
of learning algorithms, namely GLMs, that have many desirable properties
such as ease of learning. Furthermore, the resulting models are often very
effective for modelling different types of distributions over y; for example, we
will shortly show that both logistic regression and ordinary least squares
can be derived as GLMs.
h_θ(x) = E[y|x; θ]
       = µ
       = η
       = θ^T x.
The first equality follows from Assumption 2, above; the second equality
follows from the fact that y|x; θ ∼ N (µ, σ 2 ), and so its expected value is given
by µ; the third equality follows from Assumption 1 (and our earlier derivation
showing that µ = η in the formulation of the Gaussian as an exponential
family distribution); and the last equality follows from Assumption 3.
h_θ(x) = E[y|x; θ]
       = φ
       = 1/(1 + e^{−η})
       = 1/(1 + e^{−θ^T x}).

So, this gives us hypothesis functions of the form h_θ(x) = 1/(1 + e^{−θ^T x}). If
you were previously wondering how we came up with the form of the logistic
function 1/(1 + e−z ), this gives one answer: Once we assume that y condi-
tioned on x is Bernoulli, it arises as a consequence of the definition of GLMs
and exponential family distributions.
To introduce a little more terminology, the function g giving the distri-
bution’s mean as a function of the natural parameter (g(η) = E[T (y); η])
is called the canonical response function. Its inverse, g −1 , is called the
canonical link function. Thus, the canonical response function for the
Gaussian family is just the identity function; and the canonical response
function for the Bernoulli is the logistic function.7
Unlike our previous examples, here we do not have T (y) = y; also, T (y) is
now a k − 1 dimensional vector, rather than a real number. We will write
(T (y))i to denote the i-th element of the vector T (y).
We introduce one more very useful piece of notation. An indicator func-
tion 1{·} takes on a value of 1 if its argument is true, and 0 otherwise
(1{True} = 1, 1{False} = 0). For example, 1{2 = 3} = 0, and 1{3 =
5 − 2} = 1. So, we can also write the relationship between T (y) and y as
(T (y))i = 1{y = i}. (Before you continue reading, please make sure you un-
derstand why this is true!) Further, we have that E[(T (y))i ] = P (y = i) = φi .
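The mapping from y to T(y) via indicator functions can be sketched as follows (k = 4 classes is an arbitrary illustrative choice):

```python
import numpy as np

# T(y) is a (k-1)-dimensional vector with (T(y))_i = 1{y = i}; class k maps
# to the all-zeros vector.
def T(y, k=4):
    t = np.zeros(k - 1)
    if y < k:           # for y = k, leave the all-zeros vector
        t[y - 1] = 1.0
    return t
```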
We are now ready to show that the multinomial is a member of the
exponential family.
This function mapping from the η’s to the φ’s is called the softmax function.
To complete our model, we use Assumption 3, given earlier, that the ηi 's
are linearly related to the x's. So, we have η_i = θ_i^T x (for i = 1, . . . , k − 1),
where θ_1, . . . , θ_{k−1} ∈ R^{n+1} are the parameters of our model. For notational
convenience, we can also define θ_k = 0, so that η_k = θ_k^T x = 0, as given
previously. Hence, our model assumes that the conditional distribution of y
given x is given by
p(y = i|x; θ) = φ_i
             = e^{η_i} / Σ_{j=1}^{k} e^{η_j}
             = e^{θ_i^T x} / Σ_{j=1}^{k} e^{θ_j^T x}.        (8)
In other words, our hypothesis will output the estimated probability
p(y = i|x; θ) for every value of i = 1, . . . , k. (Even though h_θ(x) as defined
above is only k − 1 dimensional, clearly p(y = k|x; θ) can be obtained as
1 − Σ_{i=1}^{k−1} φ_i.)
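The softmax function in code; the max-subtraction is a standard numerical-stability detail not discussed in the text, and the η values below are illustrative:

```python
import numpy as np

def softmax(eta):
    z = eta - np.max(eta)   # subtracting the max leaves the ratios unchanged
    e = np.exp(z)
    return e / e.sum()      # phi_i = e^{eta_i} / sum_j e^{eta_j}

# Illustrative natural parameters; by the convention above, eta_k = 0.
phi = softmax(np.array([1.0, 2.0, 3.0, 0.0]))
```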
To obtain the second line above, we used the definition for p(y|x; θ) given
in Equation (8). We can now obtain the maximum likelihood estimate of
the parameters by maximizing ℓ(θ) in terms of θ, using a method such as
gradient ascent or Newton’s method.
Part IV
Generative Learning algorithms
So far, we’ve mainly been talking about learning algorithms that model
p(y|x; θ), the conditional distribution of y given x. For instance, logistic
regression modeled p(y|x; θ) as hθ (x) = g(θ T x) where g is the sigmoid func-
tion. In these notes, we’ll talk about a different type of learning algorithm.
Consider a classification problem in which we want to learn to distinguish
between elephants (y = 1) and dogs (y = 0), based on some features of
an animal. Given a training set, an algorithm like logistic regression or
the perceptron algorithm (basically) tries to find a straight line—that is, a
decision boundary—that separates the elephants and dogs. Then, to classify
a new animal as either an elephant or a dog, it checks on which side of the
decision boundary it falls, and makes its prediction accordingly.
Here’s a different approach. First, looking at elephants, we can build a
model of what elephants look like. Then, looking at dogs, we can build a
separate model of what dogs look like. Finally, to classify a new animal, we
can match the new animal against the elephant model, and match it against
the dog model, to see whether the new animal looks more like the elephants
or more like the dogs we had seen in the training set.
Algorithms that try to learn p(y|x) directly (such as logistic regression),
or algorithms that try to learn mappings directly from the space of inputs X
to the labels {0, 1}, (such as the perceptron algorithm) are called discrim-
inative learning algorithms. Here, we’ll talk about algorithms that instead
try to model p(x|y) (and p(y)). These algorithms are called generative
learning algorithms. For instance, if y indicates whether an example is a dog
(0) or an elephant (1), then p(x|y = 0) models the distribution of dogs’
features, and p(x|y = 1) models the distribution of elephants’ features.
After modeling p(y) (called the class priors) and p(x|y), our algorithm
can then use Bayes rule to derive the posterior distribution on y given x:
p(y|x) = p(x|y) p(y) / p(x).
Here, the denominator is given by p(x) = p(x|y = 1)p(y = 1) + p(x|y =
0)p(y = 0) (you should be able to verify that this is true from the standard
properties of probabilities), and thus can also be expressed in terms of the
quantities p(x|y) and p(y) that we've learned. Actually, if we're calculating
p(y|x) in order to make a prediction, then we don't actually need to calculate
the denominator, since
p(x|y)p(y)
arg max p(y|x) = arg max
y y p(x)
= arg max p(x|y)p(y).
y
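Since p(x) does not depend on y, it drops out of the arg max, so a prediction needs only the learned quantities p(x|y) and p(y). A minimal Python sketch with made-up numbers illustrates this:

```python
# Hypothetical class-conditional likelihoods p(x|y) and priors p(y)
# for a single input x; the numbers are made up for illustration.
likelihood = {0: 0.05, 1: 0.20}   # p(x|y=0), p(x|y=1)
prior      = {0: 0.70, 1: 0.30}   # p(y=0),   p(y=1)

# Full posterior via Bayes rule: p(y|x) = p(x|y)p(y) / p(x)
evidence = sum(likelihood[y] * prior[y] for y in (0, 1))   # p(x)
posterior = {y: likelihood[y] * prior[y] / evidence for y in (0, 1)}

# Predicting with and without the denominator gives the same answer,
# since p(x) does not depend on y.
pred_full  = max((0, 1), key=lambda y: posterior[y])
pred_short = max((0, 1), key=lambda y: likelihood[y] * prior[y])
assert pred_full == pred_short
```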
    Cov(X) = Σ.

[Figure: surface plots of three zero-mean bivariate Gaussian densities.]
The left-most figure shows a Gaussian with mean zero (that is, the 2×1
zero-vector) and covariance matrix Σ = I (the 2×2 identity matrix). A Gaussian
with zero mean and identity covariance is also called the standard normal
distribution. The middle figure shows the density of a Gaussian with
zero mean and Σ = 0.6I, and the rightmost figure shows one with Σ = 2I.
We see that as Σ becomes larger, the Gaussian becomes more “spread-out,”
and as it becomes smaller, the distribution becomes more “compressed.”
Let's look at some more examples.
[Figure: surface plots of three more zero-mean Gaussian densities.]
The figures above show Gaussians with mean 0, and with covariance
matrices respectively

    Σ = [ 1    0  ]      Σ = [ 1   0.5 ]      Σ = [ 1   0.8 ]
        [ 0    1  ] ;        [ 0.5  1  ] ;        [ 0.8  1  ] .
The leftmost figure shows the familiar standard normal distribution, and we
see that as we increase the off-diagonal entry in Σ, the density becomes more
“compressed” towards the 45° line (given by x1 = x2 ). We can see this more
clearly when we look at the contours of the same three densities:
[Figure: contour plots of the same three densities, showing the density compressing toward the x1 = x2 line as the off-diagonal entry of Σ grows.]
    y ∼ Bernoulli(φ)
    x|y = 0 ∼ N (µ0 , Σ)
    x|y = 1 ∼ N (µ1 , Σ)

Writing out the distributions, this is:

    p(y) = φ^y (1 − φ)^(1−y)

    p(x|y = 0) = 1/((2π)^(n/2) |Σ|^(1/2)) exp( −(1/2)(x − µ0 )T Σ^(−1) (x − µ0 ) )

    p(x|y = 1) = 1/((2π)^(n/2) |Σ|^(1/2)) exp( −(1/2)(x − µ1 )T Σ^(−1) (x − µ1 ) )
Here, the parameters of our model are φ, Σ, µ0 and µ1 . (Note that while
there are two different mean vectors µ0 and µ1 , this model is usually applied
using only one covariance matrix Σ.) The log-likelihood of the data is given
by
    ℓ(φ, µ0 , µ1 , Σ) = log ∏_{i=1}^m p(x(i) , y(i) ; φ, µ0 , µ1 , Σ)
                      = log ∏_{i=1}^m p(x(i) |y(i) ; µ0 , µ1 , Σ) p(y(i) ; φ).
[Figure: GDA fit to a two-class training set.]
Shown in the figure are the training set, as well as the contours of the
two Gaussian distributions that have been fit to the data in each of the
two classes. Note that the two Gaussians have contours that are the same
shape and orientation, since they share a covariance matrix Σ, but they have
different means µ0 and µ1 . Also shown in the figure is the straight line
giving the decision boundary at which p(y = 1|x) = 0.5. On one side of
the boundary, we’ll predict y = 1 to be the most likely outcome, and on the
other side, we’ll predict y = 0.
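The maximum likelihood estimates for GDA have a simple closed form: φ is the fraction of positive examples, µ0 and µ1 are the per-class means, and Σ is the pooled covariance. A minimal NumPy sketch on made-up toy data; the estimator formulas follow the standard derivation rather than anything shown in this excerpt:

```python
import numpy as np

# Toy 2-D training data (made up): class 0 near (0,0), class 1 near (3,3).
X = np.array([[0., 0.], [0.5, 0.2], [0.2, 0.6],
              [3., 3.], [3.2, 2.8], [2.8, 3.3]])
y = np.array([0, 0, 0, 1, 1, 1])
m = len(y)

# Closed-form maximum likelihood estimates (shared covariance matrix).
phi = np.mean(y == 1)
mu0 = X[y == 0].mean(axis=0)
mu1 = X[y == 1].mean(axis=0)
mus = np.where((y == 1)[:, None], mu1, mu0)   # µ_{y(i)} for each example
D = X - mus
Sigma = D.T @ D / m

def predict(x):
    """Pick arg max_y p(x|y) p(y); the denominator p(x) is not needed."""
    Sinv = np.linalg.inv(Sigma)
    def log_joint(mu, prior):
        # log p(x|y) + log p(y), dropping terms that are the same for both classes
        return -0.5 * (x - mu) @ Sinv @ (x - mu) + np.log(prior)
    return int(log_joint(mu1, phi) > log_joint(mu0, 1 - phi))

assert predict(np.array([0.1, 0.1])) == 0
assert predict(np.array([3.0, 3.1])) == 1
```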
almost always do better than GDA. For this reason, in practice logistic re-
gression is used more often than GDA. (Some related considerations about
discriminative vs. generative models also apply for the Naive Bayes algo-
rithm that we discuss next, but the Naive Bayes algorithm is still considered
a very good, and is certainly also a very popular, classification algorithm.)
2 Naive Bayes
In GDA, the feature vectors x were continuous, real-valued vectors. Let's now
talk about a different learning algorithm in which the xi 's are discrete-valued.
For our motivating example, consider building an email spam filter using
machine learning. Here, we wish to classify messages according to whether
they are unsolicited commercial (spam) email, or non-spam email. After
learning to do this, we can then have our mail reader automatically filter
out the spam messages and perhaps place them in a separate mail folder.
Classifying emails is one example of a broader set of problems called text
classification.
Let's say we have a training set (a set of emails labeled as spam or non-
spam). We’ll begin our construction of our spam filter by specifying the
features xi used to represent an email.
We will represent an email via a feature vector whose length is equal to
the number of words in the dictionary. Specifically, if an email contains the
i-th word of the dictionary, then we will set xi = 1; otherwise, we let xi = 0.
For instance, the vector
        [ 1 ]   a
        [ 0 ]   aardvark
        [ 0 ]   aardwolf
        [ ⋮ ]   ⋮
    x = [ 1 ]   buy
        [ ⋮ ]   ⋮
        [ 0 ]   zygmurgy
is used to represent an email that contains the words “a” and “buy,” but not
“aardvark,” “aardwolf” or “zygmurgy.”2 The set of words encoded into the
2
Actually, rather than looking through an English dictionary for the list of all English
words, in practice it is more common to look through our training set and encode in our
feature vector only the words that occur at least once there. Apart from reducing the
number of words modeled and hence reducing our computational and space requirements,
The first equality simply follows from the usual properties of probabilities,
and the second equality used the NB assumption. We note that even though
the Naive Bayes assumption is an extremely strong assumptions, the resulting
algorithm works well on many problems.
Our model is parameterized by φi|y=1 = p(xi = 1|y = 1), φi|y=0 = p(xi =
1|y = 0), and φy = p(y = 1). As usual, given a training set {(x(i) , y (i) ); i =
this also has the advantage of allowing us to model/include as a feature many words
that may appear in your email (such as “cs229”) but that you won’t find in a dictionary.
Sometimes (as in the homework), we also exclude the very high frequency words (which
will be words like “the,” “of,” “and,”; these high frequency, “content free” words are called
stop words) since they occur in so many documents and do little to indicate whether an
email is spam or non-spam.
1, . . . , m}, we can write down the joint likelihood of the data:

    L(φy , φj|y=0 , φj|y=1 ) = ∏_{i=1}^m p(x(i) , y(i) ).
Maximizing this with respect to φy , φi|y=0 and φi|y=1 gives the maximum
likelihood estimates:
    φj|y=1 = ( ∑_{i=1}^m 1{xj(i) = 1 ∧ y(i) = 1} ) / ( ∑_{i=1}^m 1{y(i) = 1} )

    φj|y=0 = ( ∑_{i=1}^m 1{xj(i) = 1 ∧ y(i) = 0} ) / ( ∑_{i=1}^m 1{y(i) = 0} )

    φy = ( ∑_{i=1}^m 1{y(i) = 1} ) / m
In the equations above, the “∧” symbol means “and.” The parameters have
a very natural interpretation. For instance, φj|y=1 is just the fraction of the
spam (y = 1) emails in which word j does appear.
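These estimates are just counting. A small Python sketch on a made-up three-word vocabulary (the data and words are hypothetical):

```python
# Tiny toy corpus: each row is a binary feature vector over a 3-word
# vocabulary ["buy", "cheap", "hello"]; labels are 1 = spam, 0 = non-spam.
# (Data and vocabulary are made up for illustration.)
X = [[1, 1, 0],
     [1, 0, 0],
     [0, 1, 0],   # three spam examples
     [0, 0, 1],
     [1, 0, 1]]   # two non-spam examples
y = [1, 1, 1, 0, 0]
m, n = len(y), len(X[0])

# Maximum likelihood estimates from the formulas above:
# phi_{j|y=1} is the fraction of spam emails containing word j, etc.
spam = sum(y)
phi_y = spam / m
phi_j_spam = [sum(X[i][j] for i in range(m) if y[i] == 1) / spam
              for j in range(n)]
phi_j_ham  = [sum(X[i][j] for i in range(m) if y[i] == 0) / (m - spam)
              for j in range(n)]

assert phi_y == 3 / 5
assert phi_j_spam == [2/3, 2/3, 0.0]   # "buy", "cheap" in 2 of 3 spams
assert phi_j_ham  == [1/2, 0.0, 1.0]
```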
Having fit all these parameters, to make a prediction on a new example
with features x, we then simply calculate
    p(y = 1|x) = p(x|y = 1) p(y = 1) / p(x)

               = (∏_{i=1}^n p(xi |y = 1)) p(y = 1)
                 / [ (∏_{i=1}^n p(xi |y = 1)) p(y = 1) + (∏_{i=1}^n p(xi |y = 0)) p(y = 0) ],
(In practice, it usually doesn’t matter much whether we apply Laplace smooth-
ing to φy or not, since we will typically have a fair fraction each of spam and
non-spam messages, so φy will be a reasonable estimate of p(y = 1) and will
be quite far from 0 anyway.)
as we’ve presented it will work well for many classification problems, for text
classification, there is a related model that does even better.
In the specific context of text classification, Naive Bayes as presented uses
what's called the multi-variate Bernoulli event model. In this model,
we assumed that the way an email is generated is that first it is randomly
determined (according to the class priors p(y)) whether a spammer or non-
spammer will send you your next message. Then, the person sending the
email runs through the dictionary, deciding whether to include each word i
in that email independently and according to the probabilities p(xi = 1|y) =
φi|y . Thus, the probability of a message is given by p(y) ∏_{i=1}^n p(xi |y).
Here’s a different model, called the multinomial event model. To de-
scribe this model, we will use a different notation and set of features for
representing emails. We let xi denote the identity of the i-th word in the
email. Thus, xi is now an integer taking values in {1, . . . , |V |}, where |V |
is the size of our vocabulary (dictionary). An email of n words is now rep-
resented by a vector (x1 , x2 , . . . , xn ) of length n; note that n can vary for
different documents. For instance, if an email starts with “A NIPS . . . ,”
then x1 = 1 (“a” is the first word in the dictionary), and x2 = 35000 (if
“nips” is the 35000th word in the dictionary).
In the multinomial event model, we assume that the way an email is
generated is via a random process in which spam/non-spam is first deter-
mined (according to p(y)) as before. Then, the sender of the email writes the
email by first generating x1 from some multinomial distribution over words
(p(x1 |y)). Next, the second word x2 is chosen independently of x1 but from
the same multinomial distribution, and similarly for x3 , x4 , and so on, until
all n words of the email have been generated. Thus, the overall probability of
a message is given by p(y) ∏_{i=1}^n p(xi |y). Note that this formula looks like the
one we had earlier for the probability of a message under the multi-variate
Bernoulli event model, but that the terms in the formula now mean very dif-
ferent things. In particular xi |y is now a multinomial, rather than a Bernoulli
distribution.
The parameters for our new model are φy = p(y) as before, φi|y=1 =
p(xj = i|y = 1) (for any j) and φi|y=0 = p(xj = i|y = 0). Note that we have
assumed that p(xj |y) is the same for all values of j (i.e., that the distribution
according to which a word is generated does not depend on its position j
within the email).
If we are given a training set {(x(i) , y(i) ); i = 1, . . . , m} where x(i) =
(x1(i) , x2(i) , . . . , xni(i) ) (here, ni is the number of words in the i-th training example),
While not necessarily the very best classification algorithm, the Naive Bayes
classifier often works surprisingly well. It is often also a very good “first thing
to try,” given its simplicity and ease of implementation.
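A compact sketch of the multinomial event model in Python, on a made-up corpus; the (count + 1)/(total + |V|) estimates use the standard Laplace-smoothed form rather than the exact equations from the derivation omitted in this excerpt:

```python
from collections import Counter
from math import log

# Toy corpus in the multinomial event model: each email is a list of
# word indices into a vocabulary of size V (data made up for illustration).
V = 4
emails = [([0, 1, 0], 1),      # spam
          ([1, 1, 2], 1),      # spam
          ([3, 2, 3, 2], 0)]   # non-spam

def fit(emails, V):
    """MLE with Laplace smoothing: phi_{k|y} = (count_k + 1) / (total + V)."""
    counts = {0: Counter(), 1: Counter()}
    totals = {0: 0, 1: 0}
    labels = Counter()
    for words, y in emails:
        counts[y].update(words)
        totals[y] += len(words)
        labels[y] += 1
    phi_y = labels[1] / len(emails)
    phi = {y: [(counts[y][k] + 1) / (totals[y] + V) for k in range(V)]
           for y in (0, 1)}
    return phi_y, phi

def predict(words, phi_y, phi):
    """arg max_y p(y) * prod_i p(x_i | y), computed in log space."""
    scores = {}
    for y, prior in ((1, phi_y), (0, 1 - phi_y)):
        scores[y] = log(prior) + sum(log(phi[y][k]) for k in words)
    return max(scores, key=scores.get)

phi_y, phi = fit(emails, V)
assert predict([0, 1], phi_y, phi) == 1   # words 0, 1 look spammy here
assert predict([3, 3], phi_y, phi) == 0
```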
CS229 Lecture notes
Andrew Ng
Part V
Support Vector Machines
This set of notes presents the Support Vector Machine (SVM) learning al-
gorithm. SVMs are among the best (and many believe are indeed the best)
“off-the-shelf” supervised learning algorithm. To tell the SVM story, we’ll
need to first talk about margins and the idea of separating data with a large
“gap.” Next, we’ll talk about the optimal margin classifier, which will lead
us into a digression on Lagrange duality. We’ll also see kernels, which give
a way to apply SVMs efficiently in very high dimensional (such as infinite-
dimensional) feature spaces, and finally, we’ll close off the story with the
SMO algorithm, which gives an efficient implementation of SVMs.
1 Margins: Intuition
We’ll start our story on SVMs by talking about margins. This section will
give the intuitions about margins and about the “confidence” of our predic-
tions; these ideas will be made formal in Section 3.
Consider logistic regression, where the probability p(y = 1|x; θ) is mod-
eled by hθ (x) = g(θ T x). We would then predict “1” on an input x if and
only if hθ (x) ≥ 0.5, or equivalently, if and only if θ T x ≥ 0. Consider a
positive training example (y = 1). The larger θ T x is, the larger also is
hθ (x) = p(y = 1|x; θ), and thus also the higher our degree of “confidence”
that the label is 1. Thus, informally we can think of our prediction as being
a very confident one that y = 1 if θT x ≫ 0. Similarly, we think of logistic
regression as making a very confident prediction of y = 0, if θT x ≪ 0. Given
a training set, again informally it seems that we'd have found a good fit to
the training data if we can find θ so that θT x(i) ≫ 0 whenever y(i) = 1, and
θT x(i) ≪ 0 whenever y(i) = 0, since this would reflect a very confident (and
correct) set of classifications for all the training examples. This seems to be
a nice goal to aim for, and we’ll soon formalize this idea using the notion of
functional margins.
For a different type of intuition, consider the following figure, in which x’s
represent positive training examples, o’s denote negative training examples,
a decision boundary (this is the line given by the equation θ T x = 0, and
is also called the separating hyperplane) is also shown, and three points
have also been labeled A, B and C.
[Figure: positive (x's) and negative (o's) training examples, the separating hyperplane θT x = 0, and three labeled points A, B, and C.]
Notice that the point A is very far from the decision boundary. If we are
asked to make a prediction for the value of y at A, it seems we should be
quite confident that y = 1 there. Conversely, the point C is very close to
the decision boundary, and while it's on the side of the decision boundary
on which we would predict y = 1, it seems likely that just a small change to
the decision boundary could easily have caused our prediction to be y = 0.
Hence, we’re much more confident about our prediction at A than at C. The
point B lies in-between these two cases, and more broadly, we see that if
a point is far from the separating hyperplane, then we may be significantly
more confident in our predictions. Again, informally we think it’d be nice if,
given a training set, we manage to find a decision boundary that allows us
to make all correct and confident (meaning far from the decision boundary)
predictions on the training examples. We’ll formalize this later using the
notion of geometric margins.
2 Notation
To make our discussion of SVMs easier, we’ll first need to introduce a new
notation for talking about classification. We will be considering a linear
classifier for a binary classification problem with labels y and features x.
From now on, we'll use y ∈ {−1, 1} (instead of {0, 1}) to denote the class labels.
Also, rather than parameterizing our linear classifier with the vector θ, we
will use parameters w, b, and write our classifier as
Note that if y (i) = 1, then for the functional margin to be large (i.e., for our
prediction to be confident and correct), we need w T x + b to be a large
positive number. Conversely, if y (i) = −1, then for the functional margin to
be large, we need w T x + b to be a large negative number. Moreover,
if y (i) (wT x + b) > 0, then our prediction on this example is correct. (Check
this yourself.) Hence, a large functional margin represents a confident and a
correct prediction.
For a linear classifier with the choice of g given above (taking values in
{−1, 1}), there’s one property of the functional margin that makes it not a
very good measure of confidence, however. Given our choice of g, we note that
if we replace w with 2w and b with 2b, then since g(w T x + b) = g(2w T x + 2b),
this would not change hw,b (x) at all. I.e., g, and hence also hw,b (x), depends
only on the sign, but not on the magnitude, of w T x + b. However, replacing
(w, b) with (2w, 2b) also results in multiplying our functional margin by a
factor of 2. Thus, it seems that by exploiting our freedom to scale w and b,
we can make the functional margin arbitrarily large without really changing
anything meaningful. Intuitively, it might therefore make sense to impose
some sort of normalization condition such as that ||w||2 = 1; i.e., we might
replace (w, b) with (w/||w||2 , b/||w||2 ), and instead consider the functional
margin of (w/||w||2 , b/||w||2 ). We’ll come back to this later.
Given a training set S = {(x(i) , y (i) ); i = 1, . . . , m}, we also define the
functional margin of (w, b) with respect to S as the smallest of the functional
margins of the individual training examples. Denoted by γ̂, this can therefore
be written:
    γ̂ = min_{i=1,...,m} γ̂ (i) .
Next, let's talk about geometric margins. Consider the picture below:
[Figure: the separating hyperplane with normal vector w; a segment of length γ (i) connects the training point A to its projection B on the hyperplane.]
find that the point B is given by x(i) − γ (i) · w/||w||. But this point lies on
the decision boundary, and all points x on the decision boundary satisfy the
equation w T x + b = 0. Hence,

    w T ( x(i) − γ (i) · w/||w|| ) + b = 0.
Note that if ||w|| = 1, then the functional margin equals the geometric
margin—this thus gives us a way of relating these two different notions of
margin. Also, the geometric margin is invariant to rescaling of the parame-
ters; i.e., if we replace w with 2w and b with 2b, then the geometric margin
does not change. This will in fact come in handy later. Specifically, because
of this invariance to the scaling of the parameters, when trying to fit w and b
to training data, we can impose an arbitrary scaling constraint on w without
changing anything important; for instance, we can demand that ||w|| = 1, or
|w1 | = 5, or |w1 + b| + |w2 | = 2, and any of these can be satisfied simply by
rescaling w and b.
Finally, given a training set S = {(x(i) , y (i) ); i = 1, . . . , m}, we also define
the geometric margin of (w, b) with respect to S to be the smallest of the
geometric margins on the individual training examples:
    γ = min_{i=1,...,m} γ (i) .
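To make the scaling discussion concrete, here is a small Python check (toy data and a hypothetical (w, b) of my choosing) that doubling (w, b) doubles the functional margin while leaving the geometric margin unchanged:

```python
import math

# Toy linearly separable data and a candidate (w, b); values are made up.
X = [(2.0, 2.0), (3.0, 1.0), (-1.0, -2.0), (-2.0, -1.0)]
y = [1, 1, -1, -1]
w, b = (1.0, 1.0), 0.0

def margins(w, b):
    """Smallest functional margin and smallest geometric margin over the set."""
    norm = math.hypot(*w)
    func = [yi * (w[0] * x1 + w[1] * x2 + b) for (x1, x2), yi in zip(X, y)]
    geom = [f / norm for f in func]
    return min(func), min(geom)

f1, g1 = margins(w, b)
f2, g2 = margins((2 * w[0], 2 * w[1]), 2 * b)

# Scaling (w, b) -> (2w, 2b) doubles the functional margin but leaves
# the geometric margin invariant.
assert abs(f2 - 2 * f1) < 1e-12
assert abs(g2 - g1) < 1e-12
```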
on the training set and a good “fit” to the training data. Specifically, this
will result in a classifier that separates the positive and the negative training
examples with a “gap” (geometric margin).
For now, we will assume that we are given a training set that is linearly
separable; i.e., that it is possible to separate the positive and negative ex-
amples using some separating hyperplane. How do we find the one that
achieves the maximum geometric margin? We can pose the following opti-
mization problem:
    max_{γ,w,b}  γ
    s.t.  y (i) (w T x(i) + b) ≥ γ,  i = 1, . . . , m
          ||w|| = 1.
Here, we're going to maximize γ̂/||w||, subject to the functional margins all
being at least γ̂. Since the geometric and functional margins are related by
γ = γ̂/||w||, this will give us the answer we want. Moreover, we've gotten rid
of the constraint ||w|| = 1 that we didn't like. The downside is that we now
have a nasty (again, non-convex) objective function γ̂/||w||; and, we still don't
have any off-the-shelf software that can solve this form of an optimization
problem.
Let's keep going. Recall our earlier discussion that we can add an arbitrary
scaling constraint on w and b without changing anything. This is the key idea
we’ll use now. We will introduce the scaling constraint that the functional
margin of w, b with respect to the training set must be 1:
γ̂ = 1.
5 Lagrange duality
Let's temporarily put aside SVMs and maximum margin classifiers, and talk
about solving constrained optimization problems.
Consider a problem of the following form:
minw f (w)
s.t. hi (w) = 0, i = 1, . . . , l.
Some of you may recall how the method of Lagrange multipliers can be used
to solve it. (Don’t worry if you haven’t seen it before.) In this method, we
define the Lagrangian to be

    L(w, β) = f (w) + ∑_{i=1}^l βi hi (w)
1
You may be familiar with linear programming, which solves optimization problems
that have linear objectives and linear constraints. QP software is also widely available,
which allows convex quadratic objectives and linear constraints.
Here, the βi 's are called the Lagrange multipliers. We would then find
and set L's partial derivatives to zero:

    ∂L/∂wi = 0 ;   ∂L/∂βi = 0,
and solve for w and β.
In this section, we will generalize this to constrained optimization prob-
lems in which we may have inequality as well as equality constraints. Due to
time constraints, we won’t really be able to do the theory of Lagrange duality
justice in this class,2 but we will give the main ideas and results, which we
will then apply to our optimal margin classifier’s optimization problem.
Consider the following, which we’ll call the primal optimization problem:
minw f (w)
s.t. gi (w) ≤ 0, i = 1, . . . , k
hi (w) = 0, i = 1, . . . , l.
To solve it, we start by defining the generalized Lagrangian

    L(w, α, β) = f (w) + ∑_{i=1}^k αi gi (w) + ∑_{i=1}^l βi hi (w).
Here, the αi ’s and βi ’s are the Lagrange multipliers. Consider the quantity
    θP (w) = max_{α,β : αi ≥0} L(w, α, β).
Here, the “P” subscript stands for “primal.” Let some w be given. If w
violates any of the primal constraints (i.e., if either gi (w) > 0 or hi (w) ≠ 0
for some i), then you should be able to verify that
    θP (w) = max_{α,β : αi ≥0} [ f (w) + ∑_{i=1}^k αi gi (w) + ∑_{i=1}^l βi hi (w) ]    (1)
            = ∞.    (2)
Conversely, if the constraints are indeed satisfied for a particular value of w,
then θP (w) = f (w). Hence,
    θP (w) = { f (w)   if w satisfies the primal constraints
             { ∞       otherwise.
2
Readers interested in learning more about this topic are encouraged to read, e.g., R.
T. Rockafellar (1970), Convex Analysis, Princeton University Press.
Thus, θP takes the same value as the objective in our problem for all val-
ues of w that satisfy the primal constraints, and is positive infinity if the
constraints are violated. Hence, if we consider the minimization problem
    min_w θP (w) = min_w max_{α,β : αi ≥0} L(w, α, β),
we see that it is the same problem as (i.e., has the same solutions as) our
original, primal problem. For later use, we also define the optimal value of
the objective to be p∗ = minw θP (w); we call this the value of the primal
problem.
Now, let's look at a slightly different problem. We define

    θD (α, β) = min_w L(w, α, β).
Here, the “D” subscript stands for “dual.” Note also that whereas in the
definition of θP we were optimizing (maximizing) with respect to α, β, here
we are minimizing with respect to w.
We can now pose the dual optimization problem:
    max_{α,β : αi ≥0} θD (α, β) = max_{α,β : αi ≥0} min_w L(w, α, β).
This is exactly the same as our primal problem shown above, except that the
order of the “max” and the “min” are now exchanged. We also define the
optimal value of the dual problem's objective to be d∗ = max_{α,β : αi ≥0} θD (α, β).
How are the primal and the dual problems related? It can easily be shown
that
    d∗ = max_{α,β : αi ≥0} min_w L(w, α, β) ≤ min_w max_{α,β : αi ≥0} L(w, α, β) = p∗ .
(You should convince yourself of this; this follows from the “max min” of a
function always being less than or equal to the “min max.”) However, under
certain conditions, we will have
d∗ = p ∗ ,
so that we can solve the dual problem in lieu of the primal problem. Let's
see what these conditions are.
Suppose f and the gi ’s are convex,3 and the hi ’s are affine.4 Suppose
further that the constraints gi are (strictly) feasible; this means that there
exists some w so that gi (w) < 0 for all i.
3
When f has a Hessian, it is convex if and only if the Hessian is positive semi-
definite. For instance, f (w) = w T w is convex; similarly, all linear (and affine) functions
are also convex. (A function f can also be convex without being differentiable, but we
won’t need those more general definitions of convexity here.)
4
I.e., there exists ai , bi , so that hi (w) = aTi w + bi . “Affine” means the same thing as
linear, except that we also allow the extra intercept term bi .
We have one such constraint for each training example. Note that from the
KKT dual complementarity condition, we will have αi > 0 only for the train-
ing examples that have functional margin exactly equal to one (i.e., the ones
The points with the smallest margins are exactly the ones closest to the
decision boundary; here, these are the three points (one negative and two pos-
itive examples) that lie on the dashed lines parallel to the decision boundary.
Thus, only three of the αi ’s—namely, the ones corresponding to these three
training examples—will be non-zero at the optimal solution to our optimiza-
tion problem. These three points are called the support vectors in this
problem. The fact that the number of support vectors can be much smaller
than the size of the training set will be useful later.
Let's move on. Looking ahead, as we develop the dual form of the problem,
one key idea to watch out for is that we'll try to write our algorithm in terms
of only the inner product ⟨x(i) , x(j) ⟩ (think of this as (x(i) )T x(j) ) between
points in the input feature space. The fact that we can express our algorithm
in terms of these inner products will be key when we apply the kernel trick.
When we construct the Lagrangian for our optimization problem we have:

    L(w, b, α) = (1/2) ||w||2 − ∑_{i=1}^m αi [ y (i) (w T x(i) + b) − 1 ].    (8)
Note that there are only “αi ” but no “βi ” Lagrange multipliers, since the
problem has only inequality constraints.
Let's find the dual form of the problem. To do so, we need to first minimize
L(w, b, α) with respect to w and b (for fixed α), to get θD , which we'll do by
If we take the definition of w in Equation (9) and plug that back into the
Lagrangian (Equation 8), and simplify, we get
    L(w, b, α) = ∑_{i=1}^m αi − (1/2) ∑_{i,j=1}^m y (i) y (j) αi αj (x(i) )T x(j) − b ∑_{i=1}^m αi y (i) .
But from Equation (10), the last term must be zero, so we obtain
But from Equation (10), the last term must be zero, so we obtain

    L(w, b, α) = ∑_{i=1}^m αi − (1/2) ∑_{i,j=1}^m y (i) y (j) αi αj (x(i) )T x(j) .
You should also be able to verify that the conditions required for p∗ =
d∗ and the KKT conditions (Equations 3–7) to hold are indeed satisfied in
our optimization problem. Hence, we can solve the dual in lieu of solving
the primal problem. Specifically, in the dual problem above, we have a
maximization problem in which the parameters are the αi ’s. We’ll talk later
about the specific algorithm that we’re going to use to solve the dual problem,
but if we are indeed able to solve it (i.e., find the α’s that maximize W (α)
subject to the constraints), then we can use Equation (9) to go back and find
the optimal w’s as a function of the α’s. Having found w ∗ , by considering
the primal problem, it is also straightforward to find the optimal value for
the intercept term b as
    b∗ = − ( max_{i:y(i) =−1} w∗T x(i) + min_{i:y(i) =1} w∗T x(i) ) / 2.    (11)
(Check for yourself that this is correct.)
Before moving on, let's also take a more careful look at Equation (9), which
gives the optimal value of w in terms of (the optimal value of) α. Suppose
we've fit our model's parameters to a training set, and now wish to make a
prediction at a new input point x. We would then calculate w T x + b, and
predict y = 1 if and only if this quantity is bigger than zero. But using (9),
this quantity can also be written:
    w T x + b = ( ∑_{i=1}^m αi y (i) x(i) )T x + b    (12)

              = ∑_{i=1}^m αi y (i) ⟨x(i) , x⟩ + b.    (13)
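A quick numeric check of the equivalence between the primal prediction w T x + b and the inner-product form, with hypothetical (not actually optimized) αi 's chosen just for illustration:

```python
# Toy "optimum" (made-up alphas, not from an actual SVM solve) showing
# that the primal prediction w^T x + b equals the inner-product form (13).
X = [(1.0, 2.0), (2.0, 0.0), (-1.0, -1.0)]
y = [1, 1, -1]
alpha = [0.5, 0.0, 0.5]   # hypothetical dual variables
b = -0.5

# w = sum_i alpha_i y(i) x(i)   (Equation 9)
w = [sum(a * yi * xi[k] for a, yi, xi in zip(alpha, y, X))
     for k in range(2)]

x_new = (3.0, 1.0)
primal = sum(wk * xk for wk, xk in zip(w, x_new)) + b
dual = sum(a * yi * sum(xi[k] * x_new[k] for k in range(2))
           for a, yi, xi in zip(alpha, y, X)) + b

assert abs(primal - dual) < 1e-12
```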
7 Kernels
Back in our discussion of linear regression, we had a problem in which the
input x was the living area of a house, and we considered performing regres-
sion using polynomial features of x (e.g., x, x2 and x3 ).
Rather than applying SVMs using the original input attributes x, we may
instead want to learn using some features φ(x). To do so, we simply need to
go over our previous algorithm, and replace x everywhere in it with φ(x).
Since the algorithm can be written entirely in terms of the inner prod-
ucts ⟨x, z⟩, this means that we would replace all those inner products with
⟨φ(x), φ(z)⟩. Specifically, given a feature mapping φ, we define the corre-
sponding Kernel to be

    K(x, z) = φ(x)T φ(z).
Thus, we see that K(x, z) = φ(x)T φ(z), where the feature mapping φ is given
(shown here for the case of n = 3) by
           [ x1 x1 ]
           [ x1 x2 ]
           [ x1 x3 ]
           [ x2 x1 ]
    φ(x) = [ x2 x2 ]
           [ x2 x3 ]
           [ x3 x1 ]
           [ x3 x2 ]
           [ x3 x3 ]
Note that whereas calculating the high-dimensional φ(x) requires O(n2 ) time,
finding K(x, z) takes only O(n) time—linear in the dimension of the input
attributes.
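A tiny Python check of this claim for the kernel K(x, z) = (x T z)2 with n = 3: the O(n) kernel evaluation agrees with the O(n2 ) explicit feature map.

```python
# Verify that K(x, z) = (x^T z)^2 equals phi(x)^T phi(z) for the
# n^2-dimensional map phi(x) = (x_i x_j)_{i,j}, here with n = 3.
x = (1.0, 2.0, 3.0)
z = (0.5, -1.0, 2.0)

def phi(v):                      # O(n^2) explicit features
    return [vi * vj for vi in v for vj in v]

dot = sum(a * b for a, b in zip(x, z))
K = dot ** 2                     # O(n) kernel evaluation
explicit = sum(a * b for a, b in zip(phi(x), phi(z)))

assert abs(K - explicit) < 1e-9
```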
For a related kernel, also consider

    K(x, z) = (xT z + c)2 .
(Check this yourself.) This corresponds to the feature mapping (again shown
for n = 3)

           [ x1 x1 ]
           [ x1 x2 ]
           [ x1 x3 ]
           [ x2 x1 ]
           [ x2 x2 ]
           [ x2 x3 ]
    φ(x) = [ x3 x1 ]
           [ x3 x2 ]
           [ x3 x3 ]
           [ √(2c) x1 ]
           [ √(2c) x2 ]
           [ √(2c) x3 ]
           [ c ] ,
and the parameter c controls the relative weighting between the xi (first
order) and the xi xj (second order) terms.
More broadly, the kernel K(x, z) = (xT z + c)d corresponds to a feature
mapping to an (n+d choose d)-dimensional feature space, consisting of all
monomials of the form xi1 xi2 . . . xik that are up to order d. However, despite working in this
O(nd )-dimensional space, computing K(x, z) still takes only O(n) time, and
hence we never need to explicitly represent feature vectors in this very high
dimensional feature space.
Now, let's talk about a slightly different view of kernels. Intuitively, (and
there are things wrong with this intuition, but never mind), if φ(x) and φ(z)
are close together, then we might expect K(x, z) = φ(x)T φ(z) to be large.
Conversely, if φ(x) and φ(z) are far apart—say nearly orthogonal to each
other—then K(x, z) = φ(x)T φ(z) will be small. So, we can think of K(x, z)
as some measurement of how similar are φ(x) and φ(z), or of how similar are
x and z.
Given this intuition, suppose that for some learning problem that you’re
working on, you’ve come up with some function K(x, z) that you think might
be a reasonable measure of how similar x and z are. For instance, perhaps
you chose

    K(x, z) = exp( −||x − z||2 / (2σ 2 ) ).
This is a reasonable measure of x and z's similarity, and is close to 1 when
x and z are close, and near 0 when x and z are far apart. Can we use this
definition of K as the kernel in an SVM? In this particular example, the
answer is yes. (This kernel is called the Gaussian kernel, and corresponds
to an infinite dimensional feature mapping φ.) But more broadly, given some
function K, how can we tell if it’s a valid kernel; i.e., can we tell if there is
some feature mapping φ so that K(x, z) = φ(x)T φ(z) for all x, z?
Suppose for now that K is indeed a valid kernel corresponding to some
feature mapping φ. Now, consider some finite set of m points (not necessarily
the training set) {x(1) , . . . , x(m) }, and let a square, m-by-m matrix K be
defined so that its (i, j)-entry is given by Kij = K(x(i) , x(j) ). This matrix
is called the Kernel matrix. Note that we’ve overloaded the notation and
used K to denote both the kernel function K(x, z) and the kernel matrix K,
due to their obvious close relationship.
Now, if K is a valid Kernel, then Kij = K(x(i) , x(j) ) = φ(x(i) )T φ(x(j) ) =
φ(x(j) )T φ(x(i) ) = K(x(j) , x(i) ) = Kji , and hence K must be symmetric. More-
over, letting φk (x) denote the k-th coordinate of the vector φ(x), we find that
for any vector z, we have

    z T Kz = ∑_i ∑_j zi Kij zj
           = ∑_i ∑_j zi φ(x(i) )T φ(x(j) ) zj
           = ∑_i ∑_j zi ∑_k φk (x(i) ) φk (x(j) ) zj
           = ∑_k ∑_i ∑_j zi φk (x(i) ) φk (x(j) ) zj
           = ∑_k ( ∑_i zi φk (x(i) ) )2
           ≥ 0.
The second-to-last step above used the same trick as you saw in Problem
set 1 Q1. Since z was arbitrary, this shows that K is positive semi-definite
(K ≥ 0).
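An empirical spot-check (not a proof) of these two properties, building a Gaussian-kernel matrix from random points; assumes NumPy:

```python
import numpy as np

# Check that a Gaussian-kernel matrix built from arbitrary points is
# symmetric positive semi-definite, as the derivation above predicts.
rng = np.random.default_rng(0)
pts = rng.normal(size=(8, 3))
sigma = 1.0

diff = pts[:, None, :] - pts[None, :, :]               # pairwise differences
K = np.exp(-np.sum(diff**2, axis=2) / (2 * sigma**2))  # K_ij = K(x(i), x(j))

assert np.allclose(K, K.T)                   # symmetric
assert np.linalg.eigvalsh(K).min() > -1e-10  # PSD up to round-off
```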
Hence, we’ve shown that if K is a valid kernel (i.e., if it corresponds to
some feature mapping φ), then the corresponding Kernel matrix K ∈ Rm×m
is symmetric positive semidefinite. More generally, this turns out to be not
only a necessary, but also a sufficient, condition for K to be a valid kernel
(also called a Mercer kernel). The following result is due to Mercer.5
5
Many texts present Mercer's theorem in a slightly more complicated form involving
L2 functions, but when the input attributes take values in Rn , the version given here is
equivalent.
algorithms that we’ll see later in this class will also be amenable to this
method, which has come to be known as the “kernel trick.”
Thus, examples are now permitted to have (functional) margin less than 1,
and if an example has functional margin 1 − ξi (with ξi > 0), we pay a cost of
the objective function being increased by Cξi . The parameter C controls the
relative weighting between the twin goals of making ||w||2 small (which
we saw earlier makes the margin large) and of ensuring that most examples
have functional margin at least 1.
Now, all that remains is to give an algorithm for actually solving the dual
problem, which we will do in the next section.
of the SVM. Partly to motivate the SMO algorithm, and partly because it’s
interesting in its own right, let's first take another digression to talk about
the coordinate ascent algorithm.
    max_α W (α1 , α2 , . . . , αm ).
Here, we think of W as just some function of the parameters αi ’s, and for now
ignore any relationship between this problem and SVMs. We’ve already seen
two optimization algorithms, gradient ascent and Newton’s method. The
new algorithm we’re going to consider here is called coordinate ascent:
Loop until convergence: {
    For i = 1, . . . , m, {
        αi := arg max_α̂i W (α1 , . . . , αi−1 , α̂i , αi+1 , . . . , αm ).
    }
}
Thus, in the innermost loop of this algorithm, we will hold all the vari-
ables except for some αi fixed, and reoptimize W with respect to just the
parameter αi . In the version of this method presented here, the inner-loop
reoptimizes the variables in order α1 , α2 , . . . , αm , α1 , α2 , . . .. (A more sophis-
ticated version might choose other orderings; for instance, we may choose
the next variable to update according to which one we expect to allow us to
make the largest increase in W (α).)
When the function W happens to be of such a form that the “arg max”
in the inner loop can be performed efficiently, then coordinate ascent can be
a fairly efficient algorithm. Here’s a picture of coordinate ascent in action:
[Figure: contours of a quadratic function, with the path taken by coordinate ascent.]
The ellipses in the figure are the contours of a quadratic function that
we want to optimize. Coordinate ascent was initialized at (2, −2), and also
plotted in the figure is the path that it took on its way to the global maximum.
Notice that on each step, coordinate ascent takes a step that’s parallel to one
of the axes, since only one variable is being optimized at a time.
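To make this concrete, here is a small numerical sketch of coordinate ascent on a concave quadratic W; the matrix P, vector q, and the starting point (2, −2) are illustrative choices, and for a quadratic the inner-loop arg max has a closed form.

```python
import numpy as np

# Coordinate ascent on the concave quadratic W(a) = -a'Pa + q'a.
P = np.array([[2.0, 0.6],
              [0.6, 1.0]])   # positive definite, so W is strictly concave
q = np.array([1.0, 1.0])

def W(a):
    return -a @ P @ a + q @ a

a = np.array([2.0, -2.0])        # initialization, as in the figure
for _ in range(100):             # "loop until convergence"
    for i in range(len(a)):      # inner loop: reoptimize one coordinate
        # Setting dW/da_i = -2*sum_j P[i,j]*a[j] + q[i] = 0 for a_i,
        # holding the other coordinates fixed:
        others = P[i] @ a - P[i, i] * a[i]
        a[i] = (q[i] / 2.0 - others) / P[i, i]

a_star = np.linalg.solve(2 * P, q)   # where the full gradient -2Pa + q vanishes
```

Each update moves parallel to one coordinate axis, which is exactly the staircase path described above.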
9.2 SMO
We close off the discussion of SVMs by sketching the derivation of the SMO
algorithm. Some details will be left to the homework, and for others you
may refer to the paper excerpt handed out in class.
Here’s the (dual) optimization problem that we want to solve:
max_α W (α) = Σ_{i=1}^m αi − (1/2) Σ_{i,j=1}^m y (i) y (j) αi αj ⟨x(i) , x(j) ⟩.   (17)
s.t.  0 ≤ αi ≤ C,  i = 1, . . . , m,   (18)
      Σ_{i=1}^m αi y (i) = 0.   (19)
Let’s say we have a set of αi ’s that satisfy the constraints (18-19). Now,
suppose we want to hold α2 , . . . , αm fixed, and take a coordinate ascent step
and reoptimize the objective with respect to α1 . Can we make any progress?
The answer is no, because the constraint (19) ensures that

α1 y (1) = − Σ_{i=2}^m αi y (i) .

Or, by multiplying both sides by y (1) , we equivalently have

α1 = −y (1) Σ_{i=2}^m αi y (i) .
(This step used the fact that y (1) ∈ {−1, 1}, and hence (y (1) )2 = 1.) Hence,
α1 is exactly determined by the other αi ’s, and if we were to hold α2 , . . . , αm
fixed, then we can’t make any change to α1 without violating the con-
straint (19) in the optimization problem.
Thus, if we want to update some subset of the αi ’s, we must update at
least two of them simultaneously in order to keep satisfying the constraints.
This motivates the SMO algorithm, which simply does the following:
Repeat till convergence {
1. Select some pair αi and αj to update next (using a heuristic that
tries to pick the two that will allow us to make the biggest progress
towards the global maximum).
2. Reoptimize W (α) with respect to αi and αj , while holding all the
other αk ’s (k ≠ i, j) fixed.
}
To test for convergence of this algorithm, we can check whether the KKT
conditions (Equations 14-16) are satisfied to within some tol. Here, tol is
the convergence tolerance parameter, and is typically set to around 0.01 to
0.001. (See the paper and pseudocode for details.)
The key reason that SMO is an efficient algorithm is that the update to
αi , αj can be computed very efficiently. Let’s now briefly sketch the main
ideas for deriving the efficient update.
Let’s say we currently have some setting of the αi ’s that satisfy the con-
straints (18-19), and suppose we’ve decided to hold α3 , . . . , αm fixed, and
want to reoptimize W (α1 , α2 , . . . , αm ) with respect to α1 and α2 (subject to
the constraints). From (19), we require that
α1 y (1) + α2 y (2) = − Σ_{i=3}^m αi y (i) .
Since the right hand side is fixed (as we’ve fixed α3 , . . . αm ), we can just let
it be denoted by some constant ζ:
α1 y (1) + α2 y (2) = ζ. (20)
We can thus picture the constraints on α1 and α2 as follows:
[Figure: the box [0, C] × [0, C] in the (α1 , α2 ) plane, together with the line α1 y (1) + α2 y (2) = ζ crossing it; the feasible values of α2 run from L to H.]
From the constraints (18), we know that α1 and α2 must lie within the box
[0, C] × [0, C] shown. Also plotted is the line α1 y (1) + α2 y (2) = ζ, on which we
know α1 and α2 must lie. Note also that, from these constraints, we know
L ≤ α2 ≤ H; otherwise, (α1 , α2 ) can’t simultaneously satisfy both the box
and the straight line constraint. In this example, L = 0. But depending on
what the line α1 y (1) + α2 y (2) = ζ looks like, this won’t always necessarily be
the case; but more generally, there will be some lower-bound L and some
upper-bound H on the permissible values for α2 that will ensure that α1 , α2
lie within the box [0, C] × [0, C].
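These notes leave L and H unspecified; the standard case analysis appears in Platt’s paper (so treat this sketch as imported from there rather than derived here), and depends only on whether y (1) = y (2) , since that fixes the slope of the line (20).

```python
# SMO clipping bounds for alpha2, following the case analysis in Platt's
# paper (an assumption here, not derived in these notes).
def clip_bounds(alpha1, alpha2, y1, y2, C):
    if y1 != y2:
        # the line (20) has slope +1: alpha2 - alpha1 is constant
        L = max(0.0, alpha2 - alpha1)
        H = min(C, C + alpha2 - alpha1)
    else:
        # slope -1: alpha1 + alpha2 is constant
        L = max(0.0, alpha1 + alpha2 - C)
        H = min(C, alpha1 + alpha2)
    return L, H
```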
Using Equation (20), we can also write α1 as a function of α2 :
α1 = (ζ − α2 y (2) )y (1) .
(Check this derivation yourself; we again used the fact that y (1) ∈ {−1, 1} so
that (y (1) )2 = 1.) Hence, the objective W (α) can be written
Finally, having found the α2new , we can use Equation (20) to go back and find
the optimal value of α1new .
There are a couple more details that are quite easy but that we’ll leave you
to read about yourself in Platt’s paper: One is the choice of the heuristics
used to select the next αi , αj to update; the other is how to update b as the
SMO algorithm is run.
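The last step mentioned above, recovering the new α1 from the new (clipped) α2 via Equation (20), can be sketched directly: since α1 y (1) + α2 y (2) must still equal ζ after the update, α1 is determined. The numeric values below are illustrative.

```python
# Recovering alpha1 from a clipped alpha2 update, using the fact that
# alpha1*y1 + alpha2*y2 = zeta is preserved (Equation (20)).
def recover_alpha1(alpha1, alpha2, alpha2_new, y1, y2):
    alpha1_new = alpha1 + y1 * y2 * (alpha2 - alpha2_new)
    # sanity check: the value of the linear constraint is unchanged
    assert abs((alpha1_new * y1 + alpha2_new * y2)
               - (alpha1 * y1 + alpha2 * y2)) < 1e-12
    return alpha1_new

a1_new = recover_alpha1(0.2, 0.5, 0.35, 1, -1)
```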
CS229 Lecture notes
Andrew Ng
Part VI
Learning Theory
1 Bias/variance tradeoff
When talking about linear regression, we discussed the problem of whether
to fit a “simple” model such as the linear “y = θ0 +θ1 x,” or a more “complex”
model such as the polynomial “y = θ0 + θ1 x + · · · θ5 x5 .” We saw the following
example:
[Figure: three fits to the same dataset, with y plotted against x: a linear fit (left), a quadratic fit (middle), and a 5th order polynomial fit (right).]
Fitting a 5th order polynomial to the data (rightmost figure) did not
result in a good model. Specifically, even though the 5th order polynomial
did a very good job predicting y (say, prices of houses) from x (say, living
area) for the examples in the training set, we do not expect the model shown
to be a good one for predicting the prices of houses not in the training set. In
other words, what has been learned from the training set does not generalize
well to other houses. The generalization error (which will be made formal
shortly) of a hypothesis is its expected error on examples not necessarily in
the training set.
Both the models in the leftmost and the rightmost figures above have
large generalization error. However, the problems that the two models suffer
from are very different. If the relationship between y and x is not linear,
then even if we were fitting a linear model to a very large amount of training
data, the linear model would still fail to accurately capture the structure
in the data. Informally, we define the bias of a model to be the expected
generalization error even if we were to fit it to a very (say, infinitely) large
training set. Thus, for the problem above, the linear model suffers from large
bias, and may underfit (i.e., fail to capture structure exhibited by) the data.
Apart from bias, there’s a second component to the generalization error,
consisting of the variance of a model fitting procedure. Specifically, when
fitting a 5th order polynomial as in the rightmost figure, there is a large risk
that we’re fitting patterns in the data that happened to be present in our
small, finite training set, but that do not reflect the wider pattern of the
relationship between x and y. This could be, say, because in the training set
we just happened by chance to get a slightly more-expensive-than-average
house here, and a slightly less-expensive-than-average house there, and so
on. By fitting these “spurious” patterns in the training set, we might again
obtain a model with large generalization error. In this case, we say the model
has large variance.1
Often, there is a tradeoff between bias and variance. If our model is too
“simple” and has very few parameters, then it may have large bias (but small
variance); if it is too “complex” and has very many parameters, then it may
suffer from large variance (but have smaller bias). In the example above,
fitting a quadratic function does better than either of the extremes of a first
or a fifth order polynomial.
2 Preliminaries
In this set of notes, we begin our foray into learning theory. Apart from
being interesting and enlightening in its own right, this discussion will also
help us hone our intuitions and derive rules of thumb about how to best
apply learning algorithms in different settings. We will also seek to answer
a few questions: First, can we make formal the bias/variance tradeoff that
was just discussed? This will also eventually lead us to talk about model
selection methods, which can, for instance, automatically decide what order
polynomial to fit to a training set. Second, in machine learning it’s really
1. In these notes, we will not try to formalize the definitions of bias and variance beyond
this discussion. While bias and variance are straightforward to define formally for, e.g.,
linear regression, there have been several proposals for the definitions of bias and variance
for classification, and there is as yet no agreement on what is the “right” and/or the most
useful formalism.
generalization error that we care about, but most learning algorithms fit their
models to the training set. Why should doing well on the training set tell us
anything about generalization error? Specifically, can we relate error on the
training set to generalization error? Third and finally, are there conditions
under which we can actually prove that learning algorithms will work well?
We start with two simple but very useful lemmas.
Lemma. (The union bound). Let A1 , A2 , . . . , Ak be k different events (that
may not be independent). Then
P (A1 ∪ · · · ∪ Ak ) ≤ P (A1 ) + . . . + P (Ak ).
In probability theory, the union bound is usually stated as an axiom
(and thus we won’t try to prove it), but it also makes intuitive sense: The
probability of any one of k events happening is at most the sums of the
probabilities of the k different events.
Lemma. (Hoeffding inequality) Let Z1 , . . . , Zm be m independent and identically
distributed (iid) random variables drawn from a Bernoulli(φ) distribution.
I.e., P (Zi = 1) = φ, and P (Zi = 0) = 1 − φ. Let φ̂ = (1/m) Σ_{i=1}^m Zi
be the mean of these random variables, and let any γ > 0 be fixed. Then

P (|φ − φ̂| > γ) ≤ 2 exp(−2γ² m)
This lemma (which in learning theory is also called the Chernoff bound)
says that if we take φ̂—the average of m Bernoulli(φ) random variables—to
be our estimate of φ, then the probability of our being far from the true value
is small, so long as m is large. Another way of saying this is that if you have
a biased coin whose chance of landing on heads is φ, then if you toss it m
times and calculate the fraction of times that it came up heads, that will be
a good estimate of φ with high probability (if m is large).
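A quick simulation makes the lemma tangible; the values of φ, γ, m, and the number of repeated experiments below are illustrative choices.

```python
import numpy as np

# Empirically checking P(|phi - phi_hat| > gamma) <= 2*exp(-2*gamma^2*m)
# for a Bernoulli(phi) coin.
rng = np.random.default_rng(0)
phi, gamma, m, trials = 0.3, 0.1, 200, 10_000

flips = rng.random((trials, m)) < phi          # each row: m tosses of the coin
phi_hat = flips.mean(axis=1)                   # one estimate of phi per row
freq = np.mean(np.abs(phi_hat - phi) > gamma)  # how often the estimate is far off

bound = 2 * np.exp(-2 * gamma**2 * m)          # Hoeffding's guarantee
```

In this run the observed deviation frequency comes out far below the bound, which is expected: Hoeffding is a worst-case guarantee that holds for every φ.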
Using just these two lemmas, we will be able to prove some of the deepest
and most important results in learning theory.
To simplify our exposition, let’s restrict our attention to binary classifica-
tion in which the labels are y ∈ {0, 1}. Everything we’ll say here generalizes
to other problems, including regression and multi-class classification.
We assume we are given a training set S = {(x(i) , y (i) ); i = 1, . . . , m}
of size m, where the training examples (x(i) , y (i) ) are drawn iid from some
probability distribution D. For a hypothesis h, we define the training error
(also called the empirical risk or empirical error in learning theory) to
be

ε̂(h) = (1/m) Σ_{i=1}^m 1{h(x(i) ) ≠ y (i) }.

This is just the fraction of training examples that h misclassifies. We also
define the generalization error to be

ε(h) = P(x,y)∼D (h(x) ≠ y).
I.e. this is the probability that, if we now draw a new example (x, y) from
the distribution D, h will misclassify it.
Note that we have assumed that the training data was drawn from the
same distribution D with which we’re going to evaluate our hypotheses (in
the definition of generalization error). This is sometimes also referred to as
one of the PAC assumptions.2
Consider the setting of linear classification, and let hθ (x) = 1{θ T x ≥ 0}.
What’s a reasonable way of fitting the parameters θ? One approach is to try
to minimize the training error, and pick

θ̂ = arg min_θ ε̂(hθ ).
We call this process empirical risk minimization (ERM), and the resulting
hypothesis output by the learning algorithm is ĥ = hθ̂ . We think of ERM
as the most “basic” learning algorithm, and it will be this algorithm that we
focus on in these notes. (Algorithms such as logistic regression can also be
viewed as approximations to empirical risk minimization.)
In our study of learning theory, it will be useful to abstract away from
the specific parameterization of hypotheses and from issues such as whether
we’re using a linear classifier. We define the hypothesis class H used by a
learning algorithm to be the set of all classifiers considered by it. For linear
classification, H = {hθ : hθ (x) = 1{θ T x ≥ 0}, θ ∈ Rn+1 } is thus the set of
all classifiers over X (the domain of the inputs) where the decision boundary
is linear. More broadly, if we were studying, say, neural networks, then we
could let H be the set of all classifiers representable by some neural network
architecture.
Empirical risk minimization can now be thought of as a minimization over
the class of functions H, in which the learning algorithm picks the hypothesis:

ĥ = arg min_{h∈H} ε̂(h).
2. PAC stands for “probably approximately correct,” which is a framework and set of
assumptions under which numerous results on learning theory were proved. Of these, the
assumption of training and testing on the same distribution, and the assumption of the
independently drawn training examples, were the most important.
Thus, ε̂(hi ) is exactly the mean of the m random variables Zj that are drawn
iid from a Bernoulli distribution with mean ε(hi ). Hence, we can apply the
Hoeffding inequality, and obtain

P (|ε(hi ) − ε̂(hi )| > γ) ≤ 2 exp(−2γ² m).
This shows that, for our particular hi , training error will be close to
generalization error with high probability, assuming m is large. But we
don’t just want to guarantee that ε(hi ) will be close to ε̂(hi ) (with high
probability) for just one particular hi . We want to prove that this will
be true simultaneously for all h ∈ H. To do so, let Ai denote the event
that |ε(hi ) − ε̂(hi )| > γ. We’ve already shown that, for any particular Ai , it
holds true that P (Ai ) ≤ 2 exp(−2γ 2 m). Thus, using the union bound, we
have that

P (∃ h ∈ H. |ε(h) − ε̂(h)| > γ) ≤ Σ_{i=1}^k P (Ai ) ≤ 2k exp(−2γ² m).

Subtracting both sides from 1, we find that

P (¬∃ h ∈ H. |ε(h) − ε̂(h)| > γ) = P (∀ h ∈ H. |ε(h) − ε̂(h)| ≤ γ) ≥ 1 − 2k exp(−2γ² m).

(The “¬” symbol means “not.”) So, with probability at least 1−2k exp(−2γ² m),
we have that ε(h) will be within γ of ε̂(h) for all h ∈ H. This is called a uni-
form convergence result, because this is a bound that holds simultaneously
for all (as opposed to just one) h ∈ H.
In the discussion above, what we did was, for particular values of m and
γ, give a bound on the probability that, for some h ∈ H, |ε(h) − ε̂(h)| > γ.
There are three quantities of interest here: m, γ, and the probability of error;
we can bound either one in terms of the other two.
For instance, we can ask the following question: Given γ and some δ > 0,
how large must m be before we can guarantee that with probability at least
1 − δ, training error will be within γ of generalization error? By setting
δ = 2k exp(−2γ 2 m) and solving for m, [you should convince yourself this is
the right thing to do!], we find that if
m ≥ (1/(2γ²)) log(2k/δ),
then with probability at least 1 − δ, we have that |ε(h) − ε̂(h)| ≤ γ for all
h ∈ H. (Equivalently, this shows that the probability that |ε(h) − ε̂(h)| > γ
for some h ∈ H is at most δ.) This bound tells us how many training
examples we need in order to make a guarantee. The training set size m that
a certain method or algorithm requires in order to achieve a certain level of
performance is also called the algorithm’s sample complexity.
The key property of the bound above is that the number of training
examples needed to make this guarantee is only logarithmic in k, the number
of hypotheses in H. This will be important later.
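Plugging illustrative numbers into the bound shows just how mild the dependence on k is; the choices γ = 0.1 and δ = 0.05 below are arbitrary.

```python
import math

# m >= (1/(2*gamma^2)) * log(2k/delta), rounded up to an integer.
def sample_complexity(k, gamma=0.1, delta=0.05):
    return math.ceil(math.log(2 * k / delta) / (2 * gamma**2))

m_small = sample_complexity(k=100)            # |H| = 100
m_large = sample_complexity(k=100_000_000)    # |H| a million times larger
```

Multiplying k by a factor of 10^6 grows the required m from 415 to 1106 examples, i.e., by well under a factor of three.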
Similarly, we can also hold m and δ fixed and solve for γ in the previous
equation, and show [again, convince yourself that this is right!] that with
probability 1 − δ, we have that for all h ∈ H,
|ε̂(h) − ε(h)| ≤ √( (1/(2m)) log(2k/δ) ).
Now, let’s assume that uniform convergence holds, i.e., that |ε(h)− ε̂(h)| ≤
γ for all h ∈ H. What can we prove about the generalization of our learning
algorithm that picked ĥ = arg minh∈H ε̂(h)?
Define h∗ = arg minh∈H ε(h) to be the best possible hypothesis in H. Note
that h∗ is the best that we could possibly do given that we are using H, so
it makes sense to compare our performance to that of h∗ . We have:
ε(ĥ) ≤ ε̂(ĥ) + γ
≤ ε̂(h∗ ) + γ
≤ ε(h∗ ) + 2γ
The first line used the fact that |ε(ĥ)− ε̂(ĥ)| ≤ γ (by our uniform convergence
assumption). The second used the fact that ĥ was chosen to minimize ε̂(h),
and hence ε̂(ĥ) ≤ ε̂(h) for all h, and in particular ε̂(ĥ) ≤ ε̂(h∗ ). The third
line used the uniform convergence assumption again, to show that ε̂(h∗ ) ≤
ε(h∗ ) + γ. So, what we’ve shown is the following: If uniform convergence
occurs, then the generalization error of ĥ is at most 2γ worse than the best
possible hypothesis in H!
Let’s put all this together into a theorem.
Theorem. Let |H| = k, and let any m, δ be fixed. Then with probability at
least 1 − δ, we have that
ε(ĥ) ≤ min_{h∈H} ε(h) + 2 √( (1/(2m)) log(2k/δ) ).
This is proved by letting γ equal the square-root term above, using our previous
argument that uniform convergence occurs with probability at least 1 − δ, and
then noting that uniform convergence implies ε(h) is at most 2γ higher than
ε(h∗ ) = minh∈H ε(h) (as we showed previously).
This also quantifies what we said previously about the
bias/variance tradeoff in model selection. Specifically, suppose we have some
hypothesis class H, and are considering switching to some much larger hy-
pothesis class H0 ⊇ H. If we switch to H0 , then the first term minh ε(h)
can only decrease (since we’d then be taking a min over a larger set of func-
tions). Hence, by learning using a larger hypothesis class, our “bias” can
only decrease. However, if k increases, then the second term,
2√((1/(2m)) log(2k/δ)), would also
increase. This increase corresponds to our “variance” increasing when we use
a larger hypothesis class.
By holding γ and δ fixed and solving for m like we did before, we can
also obtain the following sample complexity bound:
Corollary. Let |H| = k, and let any δ, γ be fixed. Then for ε(ĥ) ≤
minh∈H ε(h) + 2γ to hold with probability at least 1 − δ, it suffices that
m ≥ (1/(2γ²)) log(2k/δ)
  = O( (1/γ²) log(k/δ) ),
“well” using a hypothesis class that has d parameters, generally we’re going
to need on the order of a linear number of training examples in d.
(At this point, it’s worth noting that these results were proved for an al-
gorithm that uses empirical risk minimization. Thus, while the linear depen-
dence of sample complexity on d does generally hold for most discriminative
learning algorithms that try to minimize training error or some approxima-
tion to training error, these conclusions do not always apply as readily to
non-ERM learning algorithms. Giving good theoretical guarantees on
many non-ERM learning algorithms is still an area of active research.)
The other part of our previous argument that’s slightly unsatisfying is
that it relies on the parameterization of H. Intuitively, this doesn’t seem like
it should matter: We had written the class of linear classifiers as hθ (x) =
1{θ0 + θ1 x1 + · · · + θn xn ≥ 0}, with n + 1 parameters θ0 , . . . , θn . But it could
also be written hu,v (x) = 1{(u0² − v0²) + (u1² − v1²)x1 + · · · + (un² − vn²)xn ≥ 0}
with 2n + 2 parameters ui , vi . Yet, both of these are just defining the same
H: The set of linear classifiers in n dimensions.
To derive a more satisfying argument, let’s define a few more things.
Given a set S = {x(1) , . . . , x(d) } (no relation to the training set) of points
x(i) ∈ X , we say that H shatters S if H can realize any labeling on S.
I.e., if for any set of labels {y (1) , . . . , y (d) }, there exists some h ∈ H so that
h(x(i) ) = y (i) for all i = 1, . . . d.
Given a hypothesis class H, we then define its Vapnik-Chervonenkis
dimension, written VC(H), to be the size of the largest set that is shattered
by H. (If H can shatter arbitrarily large sets, then VC(H) = ∞.)
For instance, consider the following set of three points in the (x1 , x2 ) plane:

[Figure: three points in the plane.]
Can the set H of linear classifiers in two dimensions (h(x) = 1{θ0 +θ1 x1 +
θ2 x2 ≥ 0}) shatter the set above? The answer is yes. Specifically, we
see that, for any of the eight possible labelings of these points, we can find a
linear classifier that obtains “zero training error” on them:
[Figure: the eight possible labelings of the three points, each shown with a linear decision boundary that separates the positive from the negative examples.]
In other words, under the definition of the VC dimension, in order to
prove that VC(H) is at least d, we need to show only that there’s at least
one set of size d that H can shatter.
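This “exhibit one shattered set” style of argument is easy to check by brute force; the three points and the small integer grid of candidate parameters below are illustrative choices.

```python
import itertools

# Verify that h(x) = 1{t0 + t1*x1 + t2*x2 >= 0} can realize all eight
# labelings of three non-collinear points, i.e. that the set is shattered.
points = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
grid = range(-2, 3)   # candidate integer values for each parameter

def realizable(labels):
    for t0, t1, t2 in itertools.product(grid, repeat=3):
        preds = tuple(1 if t0 + t1 * x1 + t2 * x2 >= 0 else 0
                      for x1, x2 in points)
        if preds == labels:
            return True   # found a linear classifier realizing these labels
    return False

shattered = all(realizable(l) for l in itertools.product((0, 1), repeat=3))
```

The search certifies the lower bound VC(H) ≥ 3; showing VC(H) is exactly 3 would additionally require checking that no set of four points can be shattered.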
The following theorem, due to Vapnik, can then be shown. (This is, many
would argue, the most important theorem in all of learning theory.)
Part VII
Regularization and model
selection
Suppose we are trying to select among several different models for a learning
problem. For instance, we might be using a polynomial regression model
hθ (x) = g(θ0 + θ1 x + θ2 x2 + · · · + θk xk ), and wish to decide if k should be
0, 1, . . . , or 10. How can we automatically select a model that represents
a good tradeoff between the twin evils of bias and variance1 ? Alternatively,
suppose we want to automatically choose the bandwidth parameter τ for
locally weighted regression, or the parameter C for our ℓ1 -regularized SVM.
How can we do that?
For the sake of concreteness, in these notes we assume we have some
finite set of models M = {M1 , . . . , Md } that we’re trying to select among.
For instance, in our first example above, the model Mi would be an i-th
order polynomial regression model. (The generalization to infinite M is not
hard.2 ) Alternatively, if we are trying to decide between using an SVM, a
neural network or logistic regression, then M may contain these models.
1. Given that we said in the previous set of notes that bias and variance are two very
different beasts, some readers may be wondering if we should be calling them “twin” evils
here. Perhaps it’d be better to think of them as non-identical twins. The phrase “the
fraternal twin evils of bias and variance” doesn’t have the same ring to it, though.
2. If we are trying to choose from an infinite set of models, say corresponding to the
possible values of the bandwidth τ ∈ R+ , we may discretize τ and consider only a finite
number of possible values for it. More generally, most of the algorithms described here
can all be viewed as performing optimization search in the space of models, and we can
perform this search over infinite model classes as well.
1 Cross validation
Let’s suppose we are, as usual, given a training set S. Given what we know
about empirical risk minimization, here’s what might initially seem like an
algorithm, resulting from using empirical risk minimization for model selec-
tion:

1. Train each model Mi on S, to get some hypothesis hi .

2. Pick the hypothesis with the smallest training error.
This algorithm does not work. Consider choosing the order of a poly-
nomial. The higher the order of the polynomial, the better it will fit the
training set S, and thus the lower the training error. Hence, this method will
always select a high-variance, high-degree polynomial model, which we saw
previously is often a poor choice.
Here’s an algorithm that works better. In hold-out cross validation
(also called simple cross validation), we do the following:
1. Randomly split S into Strain (say, 70% of the data) and Scv (the remain-
ing 30%). Here, Scv is called the hold-out cross validation set.

2. Train each model Mi on Strain only, to get some hypothesis hi .
3. Select and output the hypothesis hi that had the smallest error ε̂Scv (hi )
on the hold out cross validation set. (Recall, ε̂Scv (h) denotes the empir-
ical error of h on the set of examples in Scv .)
By testing on a set of examples Scv that the models were not trained on,
we obtain a better estimate of each hypothesis hi ’s true generalization error,
and can then pick the one with the smallest estimated generalization error.
Usually, somewhere between 1/4 and 1/3 of the data is used in the hold out
cross validation set, and 30% is a typical choice.
Optionally, step 3 in the algorithm may also be replaced with selecting
the model Mi according to arg mini ε̂Scv (hi ), and then retraining Mi on the
entire training set S. (This is often a good idea, with one exception being
learning algorithms that are very sensitive to perturbations of the initial
conditions and/or data. For these methods, Mi doing well on Strain does not
necessarily mean it will also do well on Scv , and it might be better to forgo
this retraining step.)
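Here is a minimal sketch of hold-out cross validation for choosing a polynomial order; the data-generating process, split sizes, and candidate orders are illustrative choices.

```python
import numpy as np

# Hold-out cross validation for selecting the order of a polynomial.
rng = np.random.default_rng(0)
m = 100
x = rng.uniform(0, 5, m)
y = 0.5 * x**2 - x + 2 + rng.normal(0, 0.1, m)   # quadratic signal + noise

perm = rng.permutation(m)
train, cv = perm[:70], perm[70:]    # S_train (70%) and S_cv (30%)

errors = {}
for k in range(9):                  # model M_k: polynomial of order k
    coeffs = np.polyfit(x[train], y[train], deg=k)   # train on S_train only
    resid = np.polyval(coeffs, x[cv]) - y[cv]
    errors[k] = np.mean(resid**2)   # empirical error on the hold-out set

best_k = min(errors, key=errors.get)
```

The hold-out error rules out the underfitting low-order models, which is exactly the failure mode that training error alone cannot detect.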
The disadvantage of using hold out cross validation is that it “wastes”
about 30% of the data. Even if we were to take the optional step of retraining
the model on the entire training set, it’s still as if we’re trying to find a good
model for a learning problem in which we had 0.7m training examples, rather
than m training examples, since we’re testing models that were trained on
only 0.7m examples each time. While this is fine if data is abundant and/or
cheap, in learning problems in which data is scarce (consider a problem with
m = 20, say), we’d like to do something better.
Here is a method, called k-fold cross validation, that holds out less
data each time:
1. Randomly split S into k disjoint subsets of m/k training examples each.
Let’s call these subsets S1 , . . . , Sk .
2. For each model Mi , we evaluate it as follows:

For j = 1, . . . , k
Train the model Mi on S1 ∪ · · · ∪ Sj−1 ∪ Sj+1 ∪ · · · ∪ Sk (i.e., train
on all the data except Sj ) to get some hypothesis hij .
Test the hypothesis hij on Sj , to get ε̂Sj (hij ).
The estimated generalization error of model Mi is then calculated
as the average of the ε̂Sj (hij )’s (averaged over j).
3. Pick the model Mi with the lowest estimated generalization error, and
retrain that model on the entire training set S. The resulting hypothesis
is then output as our final answer.
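The evaluation loop above can be sketched for a single model (here, an order-2 polynomial fit); the data and the choice k = 10 are illustrative.

```python
import numpy as np

# k-fold cross validation estimate of generalization error for one model.
rng = np.random.default_rng(1)
m, k = 100, 10
x = rng.uniform(0, 5, m)
y = 0.5 * x**2 - x + 2 + rng.normal(0, 0.1, m)

folds = np.array_split(rng.permutation(m), k)   # disjoint subsets S_1..S_k

fold_errors = []
for j in range(k):
    test_idx = folds[j]
    train_idx = np.concatenate([folds[i] for i in range(k) if i != j])
    coeffs = np.polyfit(x[train_idx], y[train_idx], deg=2)  # train on all but S_j
    resid = np.polyval(coeffs, x[test_idx]) - y[test_idx]
    fold_errors.append(np.mean(resid**2))                   # test on S_j

est_error = np.mean(fold_errors)   # averaged over the k folds
```

Setting k = m turns this loop into leave-one-out cross validation with no other changes.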
A typical choice for the number of folds to use here would be k = 10.
While the fraction of data held out each time is now 1/k—much smaller
than before—this procedure may also be more computationally expensive
than hold-out cross validation, since we now need to train each model k
times.
While k = 10 is a commonly used choice, in problems in which data is
really scarce, sometimes we will use the extreme choice of k = m in order
to leave out as little data as possible each time. In this setting, we would
repeatedly train on all but one of the training examples in S, and test on that
held-out example. The resulting m = k errors are then averaged together to
obtain our estimate of the generalization error of a model. This method has
its own name; since we’re holding out one training example at a time, this
method is called leave-one-out cross validation.
Finally, even though we have described the different versions of cross vali-
dation as methods for selecting a model, they can also be used more simply to
evaluate a single model or algorithm. For example, if you have implemented
some learning algorithm and want to estimate how well it performs for your
application (or if you have invented a novel learning algorithm and want to
report in a technical paper how well it performs on various test sets), cross
validation would give a reasonable way of doing so.
2 Feature Selection
One special and important case of model selection is called feature selection.
To motivate this, imagine that you have a supervised learning problem where
the number of features n is very large (perhaps n ≫ m), but you suspect that
there is only a small number of features that are “relevant” to the learning
task. Even if you use a simple linear classifier (such as the perceptron)
over the n input features, the VC dimension of your hypothesis class would
still be O(n), and thus overfitting would be a potential problem unless the
training set is fairly large.
In such a setting, you can apply a feature selection algorithm to reduce the
number of features. Given n features, there are 2^n possible feature subsets
(since each of the n features can either be included or excluded from the
subset), and thus feature selection can be posed as a model selection problem
over 2^n possible models. For large values of n, it’s usually too expensive to
explicitly enumerate over and compare all 2^n models, and so typically some
heuristic search procedure is used to find a good feature subset. The following
search procedure is called forward search:
1. Initialize F = ∅.

2. Repeat {

   (a) For i = 1, . . . , n: if i ∉ F, let Fi = F ∪ {i}, and use some version
       of cross validation to evaluate features Fi . (I.e., train your learning
       algorithm using only the features in Fi , and estimate its generalization
       error.)

   (b) Set F to be the best feature subset found on step (a).

}

3. Select and output the best feature subset that was evaluated during the
entire search procedure.
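Sketched in code, with a synthetic evaluate(F) standing in for “train with feature subset F and estimate its generalization error by cross validation”; the scoring function and the choice of which features count as useful are assumptions for illustration only.

```python
# Greedy forward search over feature subsets.
n = 6
useful = {0, 3}   # in this toy setup, only features 0 and 3 reduce error

def evaluate(F):
    # stand-in for cross-validated error: lower is better; each useless
    # feature adds a small penalty, mimicking overfitting
    return 1.0 - 0.3 * len(F & useful) + 0.02 * len(F - useful)

F = set()
evaluated = [(evaluate(F), frozenset(F))]
for _ in range(n):
    # (a) try adding each feature not already in F, keep the best score
    score, i = min((evaluate(F | {i}), i) for i in range(n) if i not in F)
    F = F | {i}                         # (b) commit the best single addition
    evaluated.append((score, frozenset(F)))

best_score, best_F = min(evaluated)     # best subset seen during the search
```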
One common choice is to define the score S(i) for feature i as the mutual
information MI(xi , y) between xi and y:

S(i) = MI(xi , y) = Σ_{xi ∈{0,1}} Σ_{y∈{0,1}} p(xi , y) log( p(xi , y) / (p(xi ) p(y)) ).
(The equation above assumes that xi and y are binary-valued; more generally
the summations would be over the domains of the variables.) The probabil-
ities above p(xi , y), p(xi ) and p(y) can all be estimated according to their
empirical distributions on the training set.
To gain intuition about what this score does, note that the mutual infor-
mation can also be expressed as a Kullback-Leibler (KL) divergence:
MI(xi , y) = KL (p(xi , y)||p(xi )p(y))
You’ll get to play more with KL-divergence in Problem set #3, but infor-
mally, this gives a measure of how different the probability distributions
p(xi , y) and p(xi )p(y) are. If xi and y are independent random variables,
then we would have p(xi , y) = p(xi )p(y), and the KL-divergence between the
two distributions will be zero. This is consistent with the idea that if xi and y
are independent, then xi is clearly very “non-informative” about y, and thus
the score S(i) should be small. Conversely, if xi is very “informative” about
y, then their mutual information MI(xi , y) would be large.
One final detail: Now that you’ve ranked the features according to their
scores S(i), how do you decide how many features k to choose? Well, one
standard way to do so is to use cross validation to select among the possible
values of k. For example, when applying naive Bayes to text classification—
a problem where n, the vocabulary size, is usually very large—using this
method to select a feature subset often results in increased classifier accuracy.
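The score itself is easy to compute from empirical counts; the two toy feature/label sequences below are illustrative (x tracks y imperfectly, while z is independent of y).

```python
import math
from collections import Counter

# Empirical mutual information MI(x, y) for binary-valued pairs.
def mutual_information(pairs):
    m = len(pairs)
    pxy = Counter(pairs)                    # empirical joint p(x, y)
    px = Counter(a for a, _ in pairs)       # empirical marginal p(x)
    py = Counter(b for _, b in pairs)       # empirical marginal p(y)
    mi = 0.0
    for (a, b), c in pxy.items():
        p_ab = c / m
        # each term compares the joint with the product of the marginals
        mi += p_ab * math.log(p_ab / ((px[a] / m) * (py[b] / m)))
    return mi

score_x = mutual_information(
    [(0, 0), (0, 0), (1, 1), (1, 1), (0, 0), (1, 1), (0, 1), (1, 0)])
score_z = mutual_information(
    [(0, 0), (1, 0), (0, 1), (1, 1), (0, 0), (1, 0), (0, 1), (1, 1)])
```

As expected, the informative feature scores well above zero while the independent one scores exactly zero on this sample.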
p(θ|S) = p(S|θ) p(θ) / p(S)
       = ( ∏_{i=1}^m p(y (i) |x(i) , θ) ) p(θ) / ( ∫_θ ∏_{i=1}^m p(y (i) |x(i) , θ) p(θ) dθ )   (1)
In the equation above, p(y (i) |x(i) , θ) comes from whatever model you’re using
for your learning problem. For example, if you are using Bayesian logistic
regression, then you might choose p(y (i) |x(i) , θ) = hθ (x(i) )^{y (i)} (1 − hθ (x(i) ))^{(1−y (i))} ,
where hθ (x(i) ) = 1/(1 + exp(−θ T x(i) )).3
When we are given a new test example x and asked to make a prediction
on it, we can compute our posterior distribution on the class label using the
posterior distribution on θ:
p(y|x, S) = ∫_θ p(y|x, θ) p(θ|S) dθ   (2)
In the equation above, p(θ|S) comes from Equation (1). Thus, for example,
if the goal is to predict the expected value of y given x, then we would
output4

E[y|x, S] = ∫_y y p(y|x, S) dy
The procedure that we’ve outlined here can be thought of as doing “fully
Bayesian” prediction, where our prediction is computed by taking an average
with respect to the posterior p(θ|S) over θ. Unfortunately, in general it is
computationally very difficult to compute this posterior distribution. This is
because it requires taking integrals over the (usually high-dimensional) θ as
in Equation (1), and this typically cannot be done in closed-form.
Thus, in practice we will instead approximate the posterior distribution
for θ. One common approximation is to replace our posterior distribution for
θ (as in Equation 2) with a single point estimate. The MAP (maximum
a posteriori) estimate for θ is given by
θMAP = arg max_θ ∏_{i=1}^m p(y (i) |x(i) , θ) p(θ).   (3)
3. Since we are now viewing θ as a random variable, it is okay to condition on its value,
and write “p(y|x, θ)” instead of “p(y|x; θ).”
4. The integral below would be replaced by a summation if y is discrete-valued.
Note that this is the same formula as for the ML (maximum likelihood)
estimate for θ, except for the prior p(θ) term at the end.
In practical applications, a common choice for the prior p(θ) is to assume
that θ ∼ N (0, τ 2 I). Using this choice of prior, the fitted parameters θMAP
will have smaller norm than that selected by maximum likelihood. (See
Problem Set #3.) In practice, this causes the Bayesian MAP estimate to be
less susceptible to overfitting than the ML estimate of the parameters. For
example, Bayesian logistic regression turns out to be an effective algorithm for
text classification, even though in text classification we usually have n ≫ m.
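A minimal sketch of the ML-vs-MAP comparison for logistic regression, fit by gradient ascent; the synthetic data, the value of τ², the learning rate, and the iteration count are all illustrative choices.

```python
import numpy as np

# ML and MAP fitting for logistic regression; the prior theta ~ N(0, tau^2 I)
# contributes a -theta/tau^2 term to the gradient of the log posterior.
rng = np.random.default_rng(0)
m, n = 60, 3
X = rng.normal(size=(m, n))
true_theta = np.array([2.0, -1.0, 0.5])
p = 1 / (1 + np.exp(-X @ true_theta))
y = (rng.random(m) < p).astype(float)      # labels drawn from the logistic model

def fit(tau2=None, steps=5000, lr=0.1):
    theta = np.zeros(n)
    for _ in range(steps):
        h = 1 / (1 + np.exp(-X @ theta))
        grad = X.T @ (y - h)               # gradient of the log-likelihood
        if tau2 is not None:
            grad -= theta / tau2           # gradient of the Gaussian log-prior
        theta += lr * grad / m
    return theta

theta_ml = fit()                           # maximum likelihood
theta_map = fit(tau2=0.5)                  # MAP with a fairly tight prior
```

The MAP parameters come out with smaller norm, which is exactly the shrinkage effect described above.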
CS229 Lecture notes
Andrew Ng
where

g(z) = 1 if z ≥ 0, and g(z) = −1 if z < 0.
Also, given a training example (x, y), the perceptron learning rule updates
the parameters as follows. If hθ(x) = y, then it makes no change to the
parameters. Otherwise, it performs the update

    θ := θ + yx.
The following theorem gives a bound on the online learning error of the
perceptron algorithm, when it is run as an online algorithm that performs
an update each time it gets an example wrong. Note that the bound below
on the number of errors does not have an explicit dependence on the number
of examples m in the sequence, or on the dimension n of the inputs (!).
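The update rule above can be sketched as an online algorithm. In the sketch below the data stream is synthetic, with a margin deliberately built in so that the mistake bound applies; the stream, the margin, and the labeling rule are all made up for illustration:

```python
import numpy as np

# Online perceptron with h_theta(x) = g(theta^T x), g(z) = 1 if z >= 0 else -1.
# On each example: no change if the prediction is correct, otherwise
# theta := theta + y * x.

def perceptron(stream, n):
    theta = np.zeros(n)
    mistakes = 0
    for x, y in stream:
        pred = 1.0 if theta @ x >= 0 else -1.0
        if pred != y:
            theta += y * x   # update rule, applied only on a mistake
            mistakes += 1
    return theta, mistakes

# Made-up linearly separable stream: the label is the sign of the first
# coordinate, with a margin of at least 2 built in.
rng = np.random.default_rng(1)
data = []
for _ in range(200):
    x = rng.normal(size=2)
    x[0] += 2.0 if x[0] >= 0 else -2.0
    data.append((x, 1.0 if x[0] >= 0 else -1.0))

theta, mistakes = perceptron(data, n=2)
# The bound (D / gamma)^2 guarantees only finitely many updates occur.
```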
The third step above used Equation (2). Moreover, again by applying a
straightforward inductive argument, we see that (4) implies
The second inequality above follows from the fact that u is a unit-length
vector (and z^T u = ||z|| · ||u|| cos φ ≤ ||z|| · ||u||, where φ is the angle between
z and u). Our result implies that if the perceptron made its k-th mistake,
then k ≤ (D/γ)².
Thus, J measures the sum of squared distances between each training exam-
ple x(i) and the cluster centroid µc(i) to which it has been assigned. It can
be shown that k-means is exactly coordinate descent on J. Specifically, the
inner-loop of k-means repeatedly minimizes J with respect to c while holding
µ fixed, and then minimizes J with respect to µ while holding c fixed. Thus,
J must monotonically decrease, and the value of J must converge. (Usu-
ally, this implies that c and µ will converge too. In theory, it is possible for
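The coordinate-descent view of k-means can be checked numerically. In the sketch below (toy two-cluster data and a simple random initialization, both made up for illustration), the recorded distortion J never increases across iterations:

```python
import numpy as np

# k-means as coordinate descent on J(c, mu) = sum_i ||x^(i) - mu_{c^(i)}||^2:
# the assignment step minimizes J over c with mu fixed, and the centroid
# step minimizes J over mu with c fixed, so J never increases.

def kmeans(X, k, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    mu = X[rng.choice(len(X), size=k, replace=False)].copy()
    history = []
    for _ in range(iters):
        d = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(axis=2)
        c = d.argmin(axis=1)                        # minimize J over c
        history.append(d[np.arange(len(X)), c].sum())
        for j in range(k):                          # minimize J over mu
            if (c == j).any():
                mu[j] = X[c == j].mean(axis=0)
    return mu, c, history

# Made-up two-cluster data.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal([0.0, 0.0], 0.3, size=(50, 2)),
               rng.normal([3.0, 3.0], 0.3, size=(50, 2))])
mu, c, J = kmeans(X, k=2)
# J is monotonically non-increasing from one iteration to the next.
```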
and the parameter φ_j gives p(z^(i) = j), and x^(i)|z^(i) = j ∼ N(µ_j, Σ_j). We
let k denote the number of values that the z^(i)'s can take on. Thus, our
model posits that each x^(i) was generated by randomly choosing z^(i) from
{1, . . . , k}, and then x^(i) was drawn from one of k Gaussians depending on
z^(i). This is called the mixture of Gaussians model. Also, note that the
z^(i)'s are latent random variables, meaning that they're hidden/unobserved.
This is what will make our estimation problem difficult.
The parameters of our model are thus φ, µ and Σ. To estimate them, we
can write down the likelihood of our data:
    ℓ(φ, µ, Σ) = Σ_{i=1}^m log p(x^(i); φ, µ, Σ)
               = Σ_{i=1}^m log Σ_{z^(i)=1}^k p(x^(i)|z^(i); µ, Σ) p(z^(i); φ).
likelihood problem would have been easy. Specifically, we could then write
down the likelihood as
    ℓ(φ, µ, Σ) = Σ_{i=1}^m [ log p(x^(i)|z^(i); µ, Σ) + log p(z^(i); φ) ].
Maximizing this with respect to the parameters gives:

    φ_j = (1/m) Σ_{i=1}^m 1{z^(i) = j},

    µ_j = Σ_{i=1}^m 1{z^(i) = j} x^(i) / Σ_{i=1}^m 1{z^(i) = j},

    Σ_j = Σ_{i=1}^m 1{z^(i) = j} (x^(i) − µ_j)(x^(i) − µ_j)^T / Σ_{i=1}^m 1{z^(i) = j}.
Indeed, we see that if the z^(i)'s were known, then maximum likelihood
estimation becomes nearly identical to what we had when estimating the
parameters of the Gaussian discriminant analysis model, except that here
the z^(i)'s play the role of the class labels.
However, in our density estimation problem, the z (i) ’s are not known.
What can we do?
The EM algorithm is an iterative algorithm that has two main steps.
Applied to our problem, in the E-step, it tries to “guess” the values of the
z (i) ’s. In the M-step, it updates the parameters of our model based on our
guesses. Since in the M-step we are pretending that the guesses in the first
part were correct, the maximization becomes easy. Here’s the algorithm:
    φ_j := (1/m) Σ_{i=1}^m w_j^(i),

    µ_j := Σ_{i=1}^m w_j^(i) x^(i) / Σ_{i=1}^m w_j^(i),

    Σ_j := Σ_{i=1}^m w_j^(i) (x^(i) − µ_j)(x^(i) − µ_j)^T / Σ_{i=1}^m w_j^(i)
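Putting the E- and M-steps together gives the following one-dimensional sketch (the two-component data and the deterministic initialization are made up for illustration):

```python
import numpy as np

# EM for a 1-D mixture of Gaussians. The E-step computes the "soft guesses"
# w[i, j] = p(z^(i) = j | x^(i); phi, mu, var); the M-step reruns the
# maximum-likelihood updates with 1{z^(i) = j} replaced by w[i, j].

def gaussian(x, mu, var):
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

def em_gmm(x, k=2, iters=50):
    phi = np.full(k, 1.0 / k)
    mu = np.linspace(x.min(), x.max(), k)   # simple deterministic init
    var = np.full(k, x.var())
    for _ in range(iters):
        # E-step: responsibilities w[i, j].
        w = np.stack([phi[j] * gaussian(x, mu[j], var[j]) for j in range(k)],
                     axis=1)
        w /= w.sum(axis=1, keepdims=True)
        # M-step: weighted versions of the closed-form ML updates.
        nj = w.sum(axis=0)
        phi = nj / len(x)
        mu = (w * x[:, None]).sum(axis=0) / nj
        var = (w * (x[:, None] - mu) ** 2).sum(axis=0) / nj
    return phi, mu, var

rng = np.random.default_rng(3)
x = np.concatenate([rng.normal(-4, 1, 300), rng.normal(4, 1, 300)])
phi, mu, var = em_gmm(x)
# mu should come out near (-4, 4) and phi near (0.5, 0.5).
```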
Part IX
The EM algorithm
In the previous set of notes, we talked about the EM algorithm as applied to
fitting a mixture of Gaussians. In this set of notes, we give a broader view
of the EM algorithm, and show how it can be applied to a large family of
estimation problems with latent variables. We begin our discussion with a
very useful result called Jensen's inequality.
1 Jensen’s inequality
Let f be a function whose domain is the set of real numbers. Recall that
f is a convex function if f″(x) ≥ 0 (for all x ∈ R). In the case of f taking
vector-valued inputs, this is generalized to the condition that its Hessian H
is positive semi-definite (H ≥ 0). If f″(x) > 0 for all x, then we say f is
strictly convex (in the vector-valued case, the corresponding statement is
that H must be positive definite, written H > 0). Jensen's
inequality can then be stated as follows:
Theorem. Let f be a convex function, and let X be a random variable.
Then:
E[f (X)] ≥ f (EX).
Moreover, if f is strictly convex, then E[f (X)] = f (EX) holds true if and
only if X = E[X] with probability 1 (i.e., if X is a constant).
Recall our convention of occasionally dropping the parentheses when writ-
ing expectations, so in the theorem above, f (EX) = f (E[X]).
For an interpretation of the theorem, consider the figure below.
[Figure: a convex function f and a random variable X taking the values a
and b. The point E[f(X)] lies on the chord between (a, f(a)) and (b, f(b)),
above the point f(EX) on the curve at E[X].]
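The two-point picture can be checked numerically; a quick sketch with the convex f(x) = x², and with the concave log (the direction of the inequality used later in the EM derivation):

```python
import numpy as np

# Jensen's inequality for the convex f(x) = x^2, with X taking the two
# values a and b with probability 1/2 each: E[f(X)] >= f(E[X]).
a, b = 1.0, 5.0
f = lambda x: x ** 2
E_fX = 0.5 * f(a) + 0.5 * f(b)    # = 13.0
f_EX = f(0.5 * a + 0.5 * b)       # = 9.0
assert E_fX >= f_EX

# For the concave f(x) = log x the inequality reverses: E[log X] <= log E[X].
samples = np.random.default_rng(0).uniform(1.0, 10.0, size=10000)
assert np.mean(np.log(samples)) <= np.log(np.mean(samples))
```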
2 The EM algorithm
Suppose we have an estimation problem in which we have a training set
{x(1) , . . . , x(m) } consisting of m independent examples. We wish to fit the
parameters of a model p(x, z) to the data, where the likelihood is given by
    ℓ(θ) = Σ_{i=1}^m log p(x^(i); θ)
         = Σ_{i=1}^m log Σ_{z^(i)} p(x^(i), z^(i); θ).
The last step of this derivation used Jensen's inequality. Specifically, f(x) =
log x is a concave function, since f″(x) = −1/x² < 0 over its domain x ∈ R⁺.
Also, the term

    Σ_{z^(i)} Q_i(z^(i)) [ p(x^(i), z^(i); θ) / Q_i(z^(i)) ]

in the summation is just an expectation of the quantity p(x^(i), z^(i); θ)/Q_i(z^(i))
with respect to z^(i) drawn from Q_i (the "z^(i) ∼ Q_i" subscripts above indicate
this). This allowed us to go from Equation (2) to Equation (3).
Now, for any set of distributions Q_i, the formula (3) gives a lower-bound
on ℓ(θ). There are many possible choices for the Q_i's. Which should we
choose? Well, if we have some current guess θ of the parameters, it seems
¹ If z were continuous, then Q_i would be a density, and the summations over z in our
discussion are replaced with integrals over z.
natural to try to make the lower-bound tight at that value of θ. I.e., we'll
make the inequality above hold with equality at our particular value of θ.
(We'll see later how this enables us to prove that ℓ(θ) increases monotonically
with successive iterations of EM.)
To make the bound tight for a particular value of θ, we need the step
involving Jensen's inequality in our derivation above to hold with equality.
For this to be true, we know it is sufficient that the expectation be taken
over a "constant"-valued random variable. I.e., we require that

    p(x^(i), z^(i); θ) / Q_i(z^(i)) = c

for some constant c that does not depend on z^(i). This is easily accomplished
by choosing

    Q_i(z^(i)) ∝ p(x^(i), z^(i); θ).
Actually, since we know Σ_{z^(i)} Q_i(z^(i)) = 1 (because it is a distribution), this
further tells us that

    Q_i(z^(i)) = p(x^(i), z^(i); θ) / Σ_z p(x^(i), z; θ)
               = p(x^(i), z^(i); θ) / p(x^(i); θ)
               = p(z^(i) | x^(i); θ)
Thus, we simply set the Qi ’s to be the posterior distribution of the z (i) ’s
given x(i) and the setting of the parameters θ.
Now, for this choice of the Q_i's, Equation (3) gives a lower-bound on the
log-likelihood ℓ that we're trying to maximize. This is the E-step. In the
M-step of the algorithm, we then maximize our formula in Equation (3) with
respect to the parameters to obtain a new setting of the θ's. Repeatedly
carrying out these two steps gives us the EM algorithm, which is as follows:
Repeat until convergence {

    (E-step) For each i, set

        Q_i(z^(i)) := p(z^(i)|x^(i); θ).

    (M-step) Set

        θ := arg max_θ Σ_i Σ_{z^(i)} Q_i(z^(i)) log [ p(x^(i), z^(i); θ) / Q_i(z^(i)) ].

}
The parameters θ^(t+1) are then obtained by maximizing the right hand side
of the equation above. Thus,

    ℓ(θ^(t+1)) ≥ Σ_i Σ_{z^(i)} Q_i^(t)(z^(i)) log [ p(x^(i), z^(i); θ^(t+1)) / Q_i^(t)(z^(i)) ]    (4)
               ≥ Σ_i Σ_{z^(i)} Q_i^(t)(z^(i)) log [ p(x^(i), z^(i); θ^(t)) / Q_i^(t)(z^(i)) ]      (5)
               = ℓ(θ^(t))                                                                          (6)
To get Equation (4), we used the fact that the lower-bound
ℓ(θ) ≥ Σ_i Σ_{z^(i)} Q_i(z^(i)) log [ p(x^(i), z^(i); θ) / Q_i(z^(i)) ]
holds for any values of Q_i and θ, and in particular holds for Q_i = Q_i^(t),
θ = θ^(t+1). To get Equation (5), we used the fact that θ^(t+1) is chosen
explicitly to be
    arg max_θ Σ_i Σ_{z^(i)} Q_i(z^(i)) log [ p(x^(i), z^(i); θ) / Q_i(z^(i)) ],
and thus this formula evaluated at θ^(t+1) must be equal to or larger than the
same formula evaluated at θ^(t). Finally, the step used to get (6) was shown
earlier, and follows from Q_i^(t) having been chosen to make Jensen's inequality
hold with equality at θ^(t).
Remark. If we define

    J(Q, θ) = Σ_i Σ_{z^(i)} Q_i(z^(i)) log [ p(x^(i), z^(i); θ) / Q_i(z^(i)) ],

then we know ℓ(θ) ≥ J(Q, θ) from our previous derivation. EM can also
be viewed as coordinate ascent on J, in which the E-step maximizes it with
respect to Q (check this yourself), and the M-step maximizes it with respect
to θ.
Let's maximize this with respect to µ_l. If we take the derivative with respect
to µ_l, we find

    ∇_{µ_l} Σ_{i=1}^m Σ_{j=1}^k w_j^(i) log [ (1/((2π)^{n/2} |Σ_j|^{1/2})) exp( −(1/2)(x^(i) − µ_j)^T Σ_j^{−1} (x^(i) − µ_j) ) · φ_j / w_j^(i) ]

    = −∇_{µ_l} Σ_{i=1}^m Σ_{j=1}^k w_j^(i) (1/2) (x^(i) − µ_j)^T Σ_j^{−1} (x^(i) − µ_j)

    = (1/2) Σ_{i=1}^m w_l^(i) ∇_{µ_l} [ 2 µ_l^T Σ_l^{−1} x^(i) − µ_l^T Σ_l^{−1} µ_l ]

    = Σ_{i=1}^m w_l^(i) ( Σ_l^{−1} x^(i) − Σ_l^{−1} µ_l )

Setting this to zero and solving for µ_l therefore yields the update rule

    µ_l := Σ_{i=1}^m w_l^(i) x^(i) / Σ_{i=1}^m w_l^(i),
    L(φ) = Σ_{i=1}^m Σ_{j=1}^k w_j^(i) log φ_j + β ( Σ_{j=1}^k φ_j − 1 ),

² We don't need to worry about the constraint that φ_j ≥ 0, because as we'll shortly see,
the solution we'll find from this derivation will automatically satisfy that anyway.
The derivation for the M-step updates to Σ_j is also entirely straightforward.
Part X
Factor analysis
When we have data x^(i) ∈ R^n that comes from a mixture of several Gaussians,
the EM algorithm can be applied to fit a mixture model. In this setting,
we usually imagine problems where we have sufficient data to be able
to discern the multiple-Gaussian structure in the data. For instance, this
would be the case if our training set size m was significantly larger than the
dimension n of the data.
Now, consider a setting in which n ≫ m. In such a problem, it might be
difficult to model the data even with a single Gaussian, much less a mixture of
Gaussians. Specifically, since the m data points span only a low-dimensional
subspace of R^n, if we model the data as Gaussian, and estimate the mean
and covariance using the usual maximum likelihood estimators,
and covariance using the usual maximum likelihood estimators,
    µ = (1/m) Σ_{i=1}^m x^(i)

    Σ = (1/m) Σ_{i=1}^m (x^(i) − µ)(x^(i) − µ)^T,
we would find that the matrix Σ is singular. This means that Σ−1 does not
exist, and 1/|Σ|1/2 = 1/0. But both of these terms are needed in computing
the usual density of a multivariate Gaussian distribution. Another way of
stating this difficulty is that maximum likelihood estimates of the parameters
result in a Gaussian that places all of its probability in the affine space
spanned by the data,1 and this corresponds to a singular covariance matrix.
¹ This is the set of points x satisfying x = Σ_{i=1}^m α_i x^(i), for some α_i's so that
Σ_{i=1}^m α_i = 1.
1 Restrictions of Σ
If we do not have sufficient data to fit a full covariance matrix, we may
place some restrictions on the space of matrices Σ that we will consider. For
instance, we may choose to fit a covariance matrix Σ that is diagonal. In this
setting, the reader may easily verify that the maximum likelihood estimate
of the covariance matrix is given by the diagonal matrix Σ satisfying
    Σ_jj = (1/m) Σ_{i=1}^m (x_j^(i) − µ_j)².
Thus, Σjj is just the empirical estimate of the variance of the j-th coordinate
of the data.
Recall that the contours of a Gaussian density are ellipses. A diagonal
Σ corresponds to a Gaussian where the major axes of these ellipses are axis-
aligned.
Sometimes, we may place a further restriction on the covariance matrix
that not only must it be diagonal, but its diagonal entries must all be equal.
In this setting, we have Σ = σ 2 I, where σ 2 is the parameter under our control.
The maximum likelihood estimate of σ 2 can be found to be:
    σ² = (1/(mn)) Σ_{j=1}^n Σ_{i=1}^m (x_j^(i) − µ_j)².
Here, µ1 ∈ Rr , µ2 ∈ Rs , Σ11 ∈ Rr×r , Σ12 ∈ Rr×s , and so on. Note that since
covariance matrices are symmetric, Σ12 = ΣT21 .
Under our assumptions, x1 and x2 are jointly multivariate Gaussian.
What is the marginal distribution of x_1? It is not hard to see that E[x_1] = µ_1,
and that Cov(x_1) = E[(x_1 − µ_1)(x_1 − µ_1)^T] = Σ_11. To see that the latter is
true, note that by definition of the joint covariance of x_1 and x_2, we have
that

    Cov(x) = Σ
           = [ Σ_11  Σ_12
               Σ_21  Σ_22 ]
           = E[(x − µ)(x − µ)^T]
           = E[ [ x_1 − µ_1   [ x_1 − µ_1  ^T
                  x_2 − µ_2 ]   x_2 − µ_2 ]    ]
           = E[ (x_1 − µ_1)(x_1 − µ_1)^T   (x_1 − µ_1)(x_2 − µ_2)^T
                (x_2 − µ_2)(x_1 − µ_1)^T   (x_2 − µ_2)(x_2 − µ_2)^T ].
Matching the upper-left subblocks in the matrices in the second and the last
lines above gives the result.
Since marginal distributions of Gaussians are themselves Gaussian, we
therefore have that the marginal distribution of x1 is given by x1 ∼ N (µ1 , Σ11 ).
Also, we can ask, what is the conditional distribution of x_1 given x_2? By
referring to the definition of the multivariate Gaussian distribution, it can
be shown that x_1|x_2 ∼ N(µ_{1|2}, Σ_{1|2}), where

    µ_{1|2} = µ_1 + Σ_12 Σ_22^{−1} (x_2 − µ_2)
    Σ_{1|2} = Σ_11 − Σ_12 Σ_22^{−1} Σ_21.
When working with the factor analysis model in the next section, these
formulas for finding conditional and marginal distributions of Gaussians will
be very useful.
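These partitioned-Gaussian identities are easy to exercise numerically. Here is a small sketch; the particular µ and Σ below are made up for illustration:

```python
import numpy as np

# For x = [x1; x2] ~ N(mu, Sigma) with a block-partitioned mean and covariance:
#   marginal:    x1 ~ N(mu1, Sigma11)
#   conditional: x1 | x2 ~ N(mu1 + Sigma12 Sigma22^{-1} (x2 - mu2),
#                            Sigma11 - Sigma12 Sigma22^{-1} Sigma21)

def condition_gaussian(mu, Sigma, r, x2):
    """Condition the first r coordinates on observing the rest as x2."""
    mu1, mu2 = mu[:r], mu[r:]
    S11, S12 = Sigma[:r, :r], Sigma[:r, r:]
    S21, S22 = Sigma[r:, :r], Sigma[r:, r:]
    K = S12 @ np.linalg.inv(S22)
    return mu1 + K @ (x2 - mu2), S11 - K @ S21

# Made-up 2-D example with r = 1, s = 1.
mu = np.array([0.0, 1.0])
Sigma = np.array([[2.0, 0.8],
                  [0.8, 1.0]])
m, V = condition_gaussian(mu, Sigma, r=1, x2=np.array([2.0]))
# m = 0 + (0.8 / 1.0) * (2 - 1) = 0.8;  V = 2 - 0.8 * 0.8 = 1.36
```

Observing x_2 above its mean pulls the conditional mean of x_1 up (positive Σ_12) and shrinks its variance, exactly as the formulas predict.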
z ∼ N (0, I)
x|z ∼ N (µ + Λz, Ψ).
Here, the parameters of our model are the vector µ ∈ Rn , the matrix
Λ ∈ Rn×k , and the diagonal matrix Ψ ∈ Rn×n . The value of k is usually
chosen to be smaller than n.
    z ∼ N(0, I)
    ε ∼ N(0, Ψ)
    x = µ + Λz + ε.

    E[x] = E[µ + Λz + ε]
         = µ + Λ E[z] + E[ε]
         = µ.

In the last step, we used the fact that E[zz^T] = Cov(z) (since z has zero
mean), and E[zε^T] = E[z] E[ε^T] = 0 (since z and ε are independent, and
So, using these definitions for µz(i) |x(i) and Σz(i) |x(i) , we have
    Q_i(z^(i)) = (1/((2π)^{k/2} |Σ_{z^(i)|x^(i)}|^{1/2})) exp( −(1/2) (z^(i) − µ_{z^(i)|x^(i)})^T Σ_{z^(i)|x^(i)}^{−1} (z^(i) − µ_{z^(i)|x^(i)}) ).
Here, the “z (i) ∼ Qi ” subscript indicates that the expectation is with respect
to z (i) drawn from Qi . In the subsequent development, we will omit this
subscript when there is no risk of ambiguity. Dropping terms that do not
depend on the parameters, we find that we need to maximize:
    Σ_{i=1}^m E[ log p(x^(i)|z^(i); µ, Λ, Ψ) ]

    = Σ_{i=1}^m E[ log (1/((2π)^{n/2} |Ψ|^{1/2})) exp( −(1/2)(x^(i) − µ − Λz^(i))^T Ψ^{−1} (x^(i) − µ − Λz^(i)) ) ]

    = Σ_{i=1}^m E[ −(1/2) log |Ψ| − (n/2) log(2π) − (1/2)(x^(i) − µ − Λz^(i))^T Ψ^{−1} (x^(i) − µ − Λz^(i)) ]
Let's maximize this with respect to Λ. Only the last term above depends
on Λ. Taking derivatives, and using the facts that tr a = a (for a ∈ R),
tr AB = tr BA, and ∇_A tr ABA^T C = CAB + C^T AB, we get:

    ∇_Λ Σ_{i=1}^m −E[ (1/2)(x^(i) − µ − Λz^(i))^T Ψ^{−1} (x^(i) − µ − Λz^(i)) ]

    = Σ_{i=1}^m ∇_Λ E[ −tr( (1/2) z^(i)T Λ^T Ψ^{−1} Λ z^(i) ) + tr( z^(i)T Λ^T Ψ^{−1} (x^(i) − µ) ) ]

    = Σ_{i=1}^m ∇_Λ E[ −tr( (1/2) Λ^T Ψ^{−1} Λ z^(i) z^(i)T ) + tr( Λ^T Ψ^{−1} (x^(i) − µ) z^(i)T ) ]

    = Σ_{i=1}^m E[ −Ψ^{−1} Λ z^(i) z^(i)T + Ψ^{−1} (x^(i) − µ) z^(i)T ]
It is interesting to note the close relationship between this equation and the
normal equation that we’d derived for least squares regression,
The analogy is that here, the x’s are a linear function of the z’s (plus noise).
Given the “guesses” for z that the E-step has found, we will now try to
estimate the unknown linearity Λ relating the x’s and z’s. It is therefore
no surprise that we obtain something similar to the normal equation. There
is, however, one important difference between this and an algorithm that
performs least squares using just the “best guesses” of the z’s; we will see
this difference shortly.
To complete our M-step update, let's work out the values of the expectations
in Equation (7). From our definition of Q_i being Gaussian with mean
µ_{z^(i)|x^(i)} and covariance Σ_{z^(i)|x^(i)}, we easily find
    E_{z^(i)∼Q_i}[ z^(i)T ] = µ_{z^(i)|x^(i)}^T

    E_{z^(i)∼Q_i}[ z^(i) z^(i)T ] = µ_{z^(i)|x^(i)} µ_{z^(i)|x^(i)}^T + Σ_{z^(i)|x^(i)}.
The latter comes from the fact that, for a random variable Y , Cov(Y ) =
E[Y Y T ] − E[Y ]E[Y ]T , and hence E[Y Y T ] = E[Y ]E[Y ]T + Cov(Y ). Substitut-
ing this back into Equation (7), we get the M-step update for Λ:
    Λ = ( Σ_{i=1}^m (x^(i) − µ) µ_{z^(i)|x^(i)}^T ) ( Σ_{i=1}^m µ_{z^(i)|x^(i)} µ_{z^(i)|x^(i)}^T + Σ_{z^(i)|x^(i)} )^{−1}.    (8)
It is important to note the presence of the Σ_{z^(i)|x^(i)} on the right hand side of
this equation. This is the covariance in the posterior distribution p(z^(i)|x^(i))
of z^(i) given x^(i), and the M-step must take into account this uncertainty
Since this doesn’t change as the parameters are varied (i.e., unlike the update
for Λ, the right hand side does not depend on Qi (z (i) ) = p(z (i) |x(i) ; µ, Λ, Ψ),
which in turn depends on the parameters), this can be calculated just once
and need not be further updated as the algorithm is run. Similarly, the
diagonal Ψ can be found by calculating
    Φ = (1/m) Σ_{i=1}^m [ x^(i) x^(i)T − x^(i) µ_{z^(i)|x^(i)}^T Λ^T − Λ µ_{z^(i)|x^(i)} x^(i)T + Λ ( µ_{z^(i)|x^(i)} µ_{z^(i)|x^(i)}^T + Σ_{z^(i)|x^(i)} ) Λ^T ],
and setting Ψii = Φii (i.e., letting Ψ be the diagonal matrix containing only
the diagonal entries of Φ).
Part XI
Principal components analysis
In our discussion of factor analysis, we gave a way to model data x ∈ R^n as
"approximately" lying in some k-dimension subspace, where k ≪ n. Specifically,
we imagined that each point x^(i) was created by first generating some
z^(i) lying in the k-dimension affine space {Λz + µ; z ∈ R^k}, and then adding
Ψ-covariance noise. Factor analysis is based on a probabilistic model, and
parameter estimation used the iterative EM algorithm.
In this set of notes, we will develop a method, Principal Components
Analysis (PCA), that also tries to identify the subspace in which the data
approximately lies. However, PCA will do so more directly, and will require
only an eigenvector calculation (easily done with the eig function in Matlab),
and does not need to resort to EM.
Suppose we are given a dataset {x^(i); i = 1, . . . , m} of attributes of m different
types of automobiles, such as their maximum speed, turn radius, and
so on. Let x^(i) ∈ R^n for each i (n ≪ m). But unknown to us, two different
attributes, some x_i and x_j, respectively give a car's maximum speed measured
in miles per hour, and the maximum speed measured in kilometers per
hour. These two attributes are therefore almost linearly dependent, up to
only small differences introduced by rounding off to the nearest mph or kph.
Thus, the data really lies approximately on an n − 1 dimensional subspace.
How can we automatically detect, and perhaps remove, this redundancy?
For a less contrived example, consider a dataset resulting from a survey of
pilots for radio-controlled helicopters, where x_1^(i) is a measure of the piloting
skill of pilot i, and x_2^(i) captures how much he/she enjoys flying. Because
RC helicopters are very difficult to fly, only the most committed students,
ones that truly enjoy flying, become good pilots. So, the two attributes
x_1 and x_2 are strongly correlated. Indeed, we might posit that the
data actually lies along some diagonal axis (the u_1 direction) capturing the
intrinsic piloting "karma" of a person, with only a small amount of noise
lying off this axis. (See figure.) How can we automatically compute this u_1
direction?
[Figure: data scattered along a diagonal direction u_1, with axes x_1 (skill)
and x_2 (enjoyment); u_2 is the orthogonal direction.]
We will shortly develop the PCA algorithm. But prior to running PCA
per se, typically we first pre-process the data to normalize its mean and
variance, as follows:
1. Let µ = (1/m) Σ_{i=1}^m x^(i).
2. Replace each x^(i) with x^(i) − µ.
3. Let σ_j² = (1/m) Σ_{i=1}^m (x_j^(i))².
4. Replace each x_j^(i) with x_j^(i)/σ_j.
Steps (1-2) zero out the mean of the data, and may be omitted for data
known to have zero mean (for instance, time series corresponding to speech
or other acoustic signals). Steps (3-4) rescale each coordinate to have unit
variance, which ensures that different attributes are all treated on the same
"scale." For instance, if x_1 was cars' maximum speed in mph (taking values
in the high tens or low hundreds) and x_2 were the number of seats (taking
values around 2-4), then this renormalization rescales the different attributes
to make them more comparable. Steps (3-4) may be omitted if we had
a priori knowledge that the different attributes are all on the same scale. One
example of this is if each data point represented a grayscale image, and each
x_j^(i) took a value in {0, 1, . . . , 255} corresponding to the intensity value of
pixel j in image i.
Now, having carried out the normalization, how do we compute the “ma-
jor axis of variation” u—that is, the direction on which the data approxi-
mately lies? One way to pose this problem is as finding the unit vector u so
that when the data is projected onto the direction corresponding to u, the
variance of the projected data is maximized. Intuitively, the data starts off
with some amount of variance/information in it. We would like to choose a
direction u so that if we were to approximate the data as lying in the direc-
tion/subspace corresponding to u, as much as possible of this variance is still
retained.
Consider the following dataset, on which we have already carried out the
normalization steps:
We see that the projected data still has a fairly large variance, and the
points tend to be far from zero. In contrast, suppose we had instead picked the
following direction:
[Figure: the same dataset projected onto a different direction u; the
projections (circles) lie close to the origin.]
Here, the projections have a significantly smaller variance, and are much
closer to the origin.
We would like to automatically select the direction u corresponding to
the first of the two figures shown above. To formalize this, note that given a
unit vector u and a point x, the length of the projection of x onto u is given
by xT u. I.e., if x(i) is a point in our dataset (one of the crosses in the plot),
then its projection onto u (the corresponding circle in the figure) is distance
xT u from the origin. Hence, to maximize the variance of the projections, we
would like to choose a unit-length u so as to maximize:

    (1/m) Σ_{i=1}^m (x^(i)T u)² = (1/m) Σ_{i=1}^m u^T x^(i) x^(i)T u
                                = u^T ( (1/m) Σ_{i=1}^m x^(i) x^(i)T ) u.

We easily recognize that maximizing this subject to ||u||₂ = 1 gives the
principal eigenvector of Σ = (1/m) Σ_{i=1}^m x^(i) x^(i)T, which is just the empirical
covariance matrix of the data (assuming it has zero mean).¹
To summarize, we have found that if we wish to find a 1-dimensional
subspace with which to approximate the data, we should choose u to be the
principal eigenvector of Σ. More generally, if we wish to project our data
into a k-dimensional subspace (k < n), we should choose u_1, . . . , u_k to be the
top k eigenvectors of Σ. The u_i's now form a new, orthogonal basis for the
data.²
Then, to represent x^(i) in this basis, we need only compute the corresponding
vector

    y^(i) = ( u_1^T x^(i), u_2^T x^(i), . . . , u_k^T x^(i) )^T ∈ R^k.
Thus, whereas x(i) ∈ Rn , the vector y (i) now gives a lower, k-dimensional,
approximation/representation for x(i) . PCA is therefore also referred to as
a dimensionality reduction algorithm. The vectors u1 , . . . , uk are called
the first k principal components of the data.
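The whole pipeline (normalization, empirical covariance, top-k eigenvectors) takes only a few lines. The sketch below uses made-up data lying near a diagonal, as in the pilot example:

```python
import numpy as np

# PCA: normalize the data, form the empirical covariance
# Sigma = (1/m) sum_i x^(i) x^(i)T, and take its top-k eigenvectors.

def pca(X, k):
    X = X - X.mean(axis=0)        # steps 1-2: zero out the mean
    X = X / X.std(axis=0)         # steps 3-4: unit variance per coordinate
    Sigma = X.T @ X / len(X)      # empirical covariance matrix
    vals, vecs = np.linalg.eigh(Sigma)        # eigh: Sigma is symmetric
    U = vecs[:, np.argsort(vals)[::-1][:k]]   # top-k principal components
    return X @ U, U               # y^(i) = (u_1^T x^(i), ..., u_k^T x^(i))

# Made-up data lying near a diagonal axis, as in the pilot example.
rng = np.random.default_rng(0)
t = rng.normal(size=(200, 1))
X = np.hstack([t, t]) + 0.1 * rng.normal(size=(200, 2))
Y, U = pca(X, k=1)
# After per-coordinate normalization, u_1 is the diagonal (1, 1)/sqrt(2)
# up to sign.
```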
Remark. Although we have shown it formally only for the case of k = 1,
using well-known properties of eigenvectors it is straightforward to show that

¹ If you haven't seen this before, try using the method of Lagrange multipliers to
maximize u^T Σ u subject to u^T u = 1. You should be able to show that Σu = λu, for some
λ, which implies u is an eigenvector of Σ, with eigenvalue λ.
² Because Σ is symmetric, the u_i's will (or always can be chosen to be) orthogonal to
each other.
of all possible orthogonal bases u_1, . . . , u_k, the one that we have chosen
maximizes Σ_i ||y^(i)||₂². Thus, our choice of a basis preserves as much variability
as possible in the original data.
In problem set 4, you will see that PCA can also be derived by picking
the basis that minimizes the approximation error arising from projecting the
data onto the k-dimensional subspace spanned by them.
PCA has many applications; we will close our discussion with a small number
of examples. First, compression, representing x^(i)'s with lower dimension
y^(i)'s, is an obvious application. If we reduce high dimensional data to k = 2
or 3 dimensions, then we can also plot the y^(i)'s to visualize the data. For instance,
if we were to reduce our automobiles data to 2 dimensions, then we can plot
it (one point in our plot would correspond to one car type, say) to see what
cars are similar to each other and what groups of cars may cluster together.
Another standard application is to preprocess a dataset to reduce its
dimension before running a supervised learning algorithm with the
x(i) ’s as inputs. Apart from computational benefits, reducing the data’s
dimension can also reduce the complexity of the hypothesis class considered
and help avoid overfitting (e.g., linear classifiers over lower dimensional input
spaces will have smaller VC dimension).
Lastly, as in our RC pilot example, we can also view PCA as a noise
reduction algorithm. In our example, it estimates the intrinsic "piloting
karma" from the noisy measures of piloting skill and enjoyment. In class, we
also saw the application of this idea to face images, resulting in the eigenfaces
method. Here, each point x(i) ∈ R100×100 was a 10000 dimensional vector,
with each coordinate corresponding to a pixel intensity value in a 100x100
image of a face. Using PCA, we represent each image x(i) with a much lower-
dimensional y (i) . In doing so, we hope that the principal components we
found retain the interesting, systematic variations between faces that capture
what a person really looks like, but not the “noise” in the images introduced
by minor lighting variations, slightly different imaging conditions, and so on.
We then measure distances between faces i and j by working in the reduced
dimension, and computing ||y (i) − y (j) ||2 . This resulted in a surprisingly good
face-matching and retrieval algorithm.
Part XII
Independent Components
Analysis
Our next topic is Independent Components Analysis (ICA). Similar to PCA,
this will find a new basis in which to represent our data. However, the goal
is very different.
As a motivating example, consider the "cocktail party problem." Here, n
speakers are speaking simultaneously at a party, and any microphone placed
in the room records only an overlapping combination of the n speakers' voices.
But let's say we have n different microphones placed in the room, and because
each microphone is a different distance from each of the speakers, it records a
different combination of the speakers' voices. Using these microphone recordings,
can we separate out the original n speakers' speech signals?
To formalize this problem, we imagine that there is some data s ∈ Rn
that is generated via n independent sources. What we observe is
x = As,
1 ICA ambiguities
To what degree can W = A−1 be recovered? If we have no prior knowledge
about the sources and the mixing matrix, it is not hard to see that there are
some inherent ambiguities in A that are impossible to recover, given only the
x(i) ’s.
Specifically, let P be any n-by-n permutation matrix. This means that
each row and each column of P has exactly one "1." Here are some examples
of permutation matrices:

    P = [ 0 1 0        P = [ 0 1        P = [ 1 0
          1 0 0 ;            1 0 ] ;          0 1 ].
          0 0 1 ]
3 ICA algorithm
We are now ready to derive an ICA algorithm. The algorithm we describe
is due to Bell and Sejnowski, and the interpretation we give will be of their
algorithm as a method for maximum likelihood estimation. (This is different
from their original interpretation, which involved a complicated idea called
the infomax principal, that is no longer necessary in the derivation given the
modern understanding of ICA.)
We suppose that the distribution of each source s_i is given by a density
p_s, and that the joint distribution of the sources s is given by

    p(s) = ∏_{i=1}^n p_s(s_i).
formulas from the previous section, this implies the following density on
x = As = W^{−1}s:

    p(x) = ( ∏_{i=1}^n p_s(w_i^T x) ) · |W|.
Part XIII
Reinforcement Learning and
Control
We now begin our study of reinforcement learning and adaptive control.
In supervised learning, we saw algorithms that tried to make their outputs
mimic the labels y given in the training set. In that setting, the labels gave
an unambiguous “right answer” for each of the inputs x. In contrast, for
many sequential decision making and control problems, it is very difficult to
provide this type of explicit supervision to a learning algorithm. For example,
if we have just built a four-legged robot and are trying to program it to walk,
then initially we have no idea what the “correct” actions to take are to make
it walk, and so do not know how to provide explicit supervision for a learning
algorithm to try to mimic.
In the reinforcement learning framework, we will instead provide our al-
gorithms only a reward function, which indicates to the learning agent when
it is doing well, and when it is doing poorly. In the four-legged walking ex-
ample, the reward function might give the robot positive rewards for moving
forwards, and negative rewards for either moving backwards or falling over.
It will then be the learning algorithm’s job to figure out how to choose actions
over time so as to obtain large rewards.
Reinforcement learning has been successful in applications as diverse as
autonomous helicopter flight, robot legged locomotion, cell-phone network
routing, marketing strategy selection, factory control, and efficient web-page
indexing. Our study of reinforcement learning will begin with a definition of
Markov decision processes (MDPs), which provide the formalism in
which RL problems are usually posed.
• Psa are the state transition probabilities. For each state s ∈ S and
action a ∈ A, Psa is a distribution over the state space. We’ll say more
about this later, but briefly, Psa gives the distribution over what states
we will transition to if we take action a in state s.
Or, when we are writing rewards as a function of the states only, this becomes
For most of our development, we will use the simpler state-rewards R(s),
though the generalization to state-action rewards R(s, a) offers no special
difficulties.
This says that the expected sum of discounted rewards V^π(s) for starting
in s consists of two terms: First, the immediate reward R(s) that we get
right away simply for starting in state s, and second, the expected sum of
future discounted rewards. Examining the second term in more detail, we
see that the summation term above can be rewritten E_{s′∼P_{sπ(s)}}[V^π(s′)]. This
is the expected sum of discounted rewards for starting in state s′, where s′
is distributed according to P_{sπ(s)}, which is the distribution over where we will
end up after taking the first action π(s) in the MDP from state s. Thus, the
second term above gives the expected sum of discounted rewards obtained
after the first step in the MDP.
Bellman’s equations can be used to efficiently solve for V π . Specifically,
in a finite-state MDP (|S| < ∞), we can write down one such equation for
V π (s) for every state s. This gives us a set of |S| linear equations in |S|
variables (the unknown V π (s)’s, one for each state), which can be efficiently
solved for the V π (s)’s.
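Concretely, for a small MDP this linear system can be solved in one call. The sketch below uses an invented 2-state chain (the transition matrix, rewards, and discount factor are illustrative assumptions, not from the notes) and solves (I − γP_π)V = R:

```python
import numpy as np

# Invented 2-state chain under a fixed policy pi:
# P[s, s2] = P_{s,pi(s)}(s2), R[s] = R(s), discount gamma.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
R = np.array([1.0, 0.0])
gamma = 0.9

# Bellman: V = R + gamma P V  <=>  (I - gamma P) V = R.
V = np.linalg.solve(np.eye(2) - gamma * P, R)

# The solution satisfies Bellman's equations exactly.
assert np.allclose(V, R + gamma * P @ V)
```

Since state 0 carries all the reward and mostly transitions to itself, its value exceeds that of state 1, as expected.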
¹ This notation in which we condition on π isn't technically correct because π isn't a
random variable, but this is quite standard in the literature.
We define the optimal value function as

    V*(s) = max_π V^π(s).     (1)

In other words, this is the best possible expected sum of discounted rewards
that can be attained using any policy. There is also a version of Bellman's
equations for the optimal value function:
    V*(s) = R(s) + max_{a∈A} γ Σ_{s'∈S} P_{sa}(s') V*(s').     (2)
The first term above is the immediate reward as before. The second term
is the maximum over all actions a of the expected future sum of discounted
rewards we'll get after taking action a. You should make sure you understand
this equation and see why it makes sense.
We also define a policy π* : S → A as follows:

    π*(s) = arg max_{a∈A} Σ_{s'∈S} P_{sa}(s') V*(s').     (3)
Note that π ∗ (s) gives the action a that attains the maximum in the “max”
in Equation (2).
It is a fact that for every state s and every policy π, we have

    V*(s) = V^{π*}(s) ≥ V^π(s).

The first equality says that V^{π*}, the value function for π*, is equal to the
optimal value function V* for every state s. Further, the inequality above
says that π*'s value is at least as large as the value of any other policy.
In other words, π ∗ as defined in Equation (3) is the optimal policy.
Note that π* has the interesting property that it is the optimal policy
for all states s. Specifically, it is not the case that if we were starting in
some state s then there'd be some optimal policy for that state, and if we
were starting in some other state s' then there'd be some other policy that's
optimal for s'. Rather, the same policy π* attains the maximum
in Equation (1) for all states s. This means that we can use the same policy
π* no matter what the initial state of our MDP is.
The value iteration algorithm proceeds as follows:

1. For each state s, initialize V(s) := 0.

2. Repeat until convergence {

       For every state, update V(s) := R(s) + max_{a∈A} γ Σ_{s'∈S} P_{sa}(s') V(s').

}
This algorithm can be thought of as repeatedly trying to update the estimated
value function using Bellman's equation (2).
There are two possible ways of performing the updates in the inner loop of
the algorithm. In the first, we can first compute the new values for V (s) for
every state s, and then overwrite all the old values with the new values. This
is called a synchronous update. In this case, the algorithm can be viewed as
implementing a “Bellman backup operator” that takes a current estimate of
the value function, and maps it to a new estimate. (See homework problem
for details.) Alternatively, we can also perform asynchronous updates.
Here, we would loop over the states (in some order), updating the values one
at a time.
Under either synchronous or asynchronous updates, it can be shown that
value iteration will cause V to converge to V ∗ . Having found V ∗ , we can
then use Equation (3) to find the optimal policy.
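A minimal sketch of synchronous value iteration. The array encoding P[a, s, s'] = P_sa(s'), the convergence tolerance, and the small example MDP are illustrative assumptions, not from the notes:

```python
import numpy as np

def value_iteration(P, R, gamma, tol=1e-8):
    """Synchronous value iteration.
    P[a, s, s2] = P_sa(s2), R[s] = R(s)."""
    n_actions, n_states, _ = P.shape
    V = np.zeros(n_states)                        # initialize V(s) := 0
    while True:
        # Bellman backup: V(s) := R(s) + max_a gamma * sum_s' P_sa(s') V(s')
        V_new = R + gamma * (P @ V).max(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new
        V = V_new

# Invented 2-state, 2-action MDP for illustration.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],   # action 0
              [[0.5, 0.5], [0.6, 0.4]]])  # action 1
R = np.array([1.0, 0.0])
V_star = value_iteration(P, R, gamma=0.9)

# At convergence, V_star satisfies the optimal Bellman equation (2).
assert np.allclose(V_star, R + 0.9 * (P @ V_star).max(axis=0), atol=1e-6)
```

The greedy policy of Equation (3) then falls out as `(P @ V_star).argmax(axis=0)` (γ can be dropped from the argmax since it is a positive constant).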
Apart from value iteration, there is a second standard algorithm for find-
ing an optimal policy for an MDP. The policy iteration algorithm proceeds
as follows:
1. Initialize π randomly.

2. Repeat until convergence {

       (a) Let V := V^π.

       (b) For each state s, let π(s) := arg max_{a∈A} Σ_{s'∈S} P_{sa}(s') V(s').

}
Thus, the inner loop repeatedly computes the value function for the current
policy, and then updates the policy using the current value function. (The
policy π found in step (b) is also called the policy that is greedy with re-
spect to V.) Note that step (a) can be done by solving Bellman's equations
as described earlier, which, in the case of a fixed policy, is just a set of |S|
linear equations in |S| variables.
After at most a finite number of iterations of this algorithm, V will con-
verge to V ∗ , and π will converge to π ∗ .
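The same toy encoding gives a short policy iteration sketch: step (a) solves the |S| linear Bellman equations exactly, and step (b) takes the greedy policy. The example MDP below is invented for illustration:

```python
import numpy as np

def policy_iteration(P, R, gamma):
    """P[a, s, s2] = P_sa(s2), R[s] = R(s)."""
    n_actions, n_states, _ = P.shape
    pi = np.zeros(n_states, dtype=int)            # 1. arbitrary initial policy
    while True:
        # (a) Policy evaluation: solve (I - gamma P_pi) V = R for V^pi.
        P_pi = P[pi, np.arange(n_states)]         # P_pi[s, s2] = P_{s,pi(s)}(s2)
        V = np.linalg.solve(np.eye(n_states) - gamma * P_pi, R)
        # (b) Greedy improvement with respect to V.
        pi_new = (P @ V).argmax(axis=0)
        if np.array_equal(pi_new, pi):
            return pi, V
        pi = pi_new

# Same invented 2-state, 2-action MDP as before.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.6, 0.4]]])
R = np.array([1.0, 0.0])
pi_star, V_pi = policy_iteration(P, R, gamma=0.9)

# The converged V equals the optimal value function (Bellman optimality).
assert np.allclose(V_pi, R + 0.9 * (P @ V_pi).max(axis=0))
```

Because the evaluation step is exact here, the loop terminates after a finite number of policy changes, as the text asserts.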
Both value iteration and policy iteration are standard algorithms for solv-
ing MDPs, and there isn’t currently universal agreement over which algo-
rithm is better. For small MDPs, policy iteration is often very fast and
converges with very few iterations. However, for MDPs with large state
spaces, solving for V π explicitly would involve solving a large system of lin-
ear equations, and could be difficult. In these problems, value iteration may
be preferred. For this reason, in practice value iteration seems to be used
more often than policy iteration.
using the new experience. Specifically, if we keep around the counts for both
the numerator and denominator terms of (4), then as we observe more trials,
we can simply keep accumulating those counts. Computing the ratio of these
counts then gives our estimate of P_sa.
Using a similar procedure, if R is unknown, we can also pick our estimate
of the expected immediate reward R(s) in state s to be the average reward
observed in state s.
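A sketch of these count-based estimates, assuming a hypothetical stream of observed (s, a, r, s') transitions; the helper names `record` and `estimates`, the fallback to a uniform distribution for never-tried (s, a) pairs, and the tiny state space are all invented for illustration:

```python
import numpy as np

n_states, n_actions = 3, 2
# counts[s, a, s2] = number of times action a taken in state s led to s2.
counts = np.zeros((n_states, n_actions, n_states))
reward_sum = np.zeros(n_states)
visits = np.zeros(n_states)

def record(s, a, r, s_next):
    """Accumulate one observed transition (s, a, r, s')."""
    counts[s, a, s_next] += 1
    reward_sum[s] += r
    visits[s] += 1

def estimates():
    """Maximum-likelihood estimates of P_sa and R(s) from the counts."""
    totals = counts.sum(axis=2, keepdims=True)
    # Uniform fallback for (s, a) pairs we have never tried.
    P_hat = np.where(totals > 0, counts / np.maximum(totals, 1), 1.0 / n_states)
    R_hat = reward_sum / np.maximum(visits, 1)
    return P_hat, R_hat

# Three observed transitions from state 0 under action 0.
record(0, 0, 1.0, 1)
record(0, 0, 1.0, 1)
record(0, 0, 1.0, 2)
P_hat, R_hat = estimates()
assert np.allclose(P_hat[0, 0], [0.0, 2 / 3, 1 / 3])
```

As more trials are observed, `record` simply keeps accumulating the numerator and denominator counts, exactly as the text describes.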
Having learned a model for the MDP, we can then use either value it-
eration or policy iteration to solve the MDP using the estimated transition
probabilities and rewards. For example, putting together model learning and
value iteration, here is one possible algorithm for learning in an MDP with
unknown state transition probabilities:
1. Initialize π randomly.

2. Repeat {

       (a) Execute π in the MDP for some number of trials.

       (b) Using the accumulated experience in the MDP, update our estimates
           for P_sa (and R, if applicable).

       (c) Apply value iteration with the estimated state transition probabilities
           and rewards to get a new estimated value function V.

       (d) Update π to be the greedy policy with respect to V.

}
We note that, for this particular algorithm, there is one simple optimiza-
tion that can make it run much more quickly. Specifically, in the inner loop
of the algorithm where we apply value iteration, if instead of initializing value
iteration with V = 0, we initialize it with the solution found during the pre-
vious iteration of our algorithm, then that will provide value iteration with
a much better initial starting point and make it converge more quickly.
Linear Algebra Review and Reference
Zico Kolter
October 16, 2007
This is two equations and two variables, so as you know from high school algebra, you
can find a unique solution for x1 and x2 (unless the equations are somehow degenerate, for
example if the second equation is simply a multiple of the first, but in the case above there
is in fact a unique solution). In matrix notation, we can write the system more compactly
as:
    Ax = b

with

    A = [  4  −5 ]        b = [  13 ]
        [ −2   3 ] ,          [  −9 ] .
As we will see shortly, there are many advantages (including the obvious space savings)
to analyzing linear equations in this form.
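As a quick numerical check, this particular system can be solved directly; `numpy.linalg.solve` is one standard way (the assertion values follow from the system itself):

```python
import numpy as np

A = np.array([[4.0, -5.0],
              [-2.0, 3.0]])
b = np.array([13.0, -9.0])

# The unique solution exists since det(A) = 4*3 - (-5)*(-2) = 2 != 0.
x = np.linalg.solve(A, b)

assert np.allclose(A @ x, b)
assert np.allclose(x, [-3.0, -5.0])   # x1 = -3, x2 = -5
```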
• By A ∈ Rm×n we denote a matrix with m rows and n columns, where the entries of A
are real numbers.
• The ith element of a vector x is denoted xi:

        [ x1 ]
    x = [ x2 ]
        [ ⋮  ]
        [ xn ] .
• We use the notation aij (or Aij, Ai,j, etc.) to denote the entry of A in the ith row and
jth column:

        [ a11  a12  ···  a1n ]
    A = [ a21  a22  ···  a2n ]
        [  ⋮    ⋮    ⋱    ⋮  ]
        [ am1  am2  ···  amn ] .
• Note that these definitions are ambiguous (for example, the a1 and aT1 in the previous
two definitions are not the same vector). Usually the meaning of the notation should
be obvious from its use.
2 Matrix Multiplication
The product of two matrices A ∈ R^{m×n} and B ∈ R^{n×p} is the matrix

    C = AB ∈ R^{m×p},

where

    Cij = Σ_{k=1}^{n} Aik Bkj.
Note that in order for the matrix product to exist, the number of columns in A must equal
the number of rows in B. There are many ways of looking at matrix multiplication, and
we’ll start by examining a few special cases.
2.1 Vector-Vector Products
Given two vectors x, y ∈ Rn , the quantity xT y, sometimes called the inner product or dot
product of the vectors, is a real number given by
    x^T y ∈ R = Σ_{i=1}^{n} xi yi.
In other words, the ith entry of y is equal to the inner product of the ith row of A and x,
yi = aTi x.
Alternatively, let's write A in column form. In this case we see that

    y = [ a1  a2  ···  an ] x = a1 x1 + a2 x2 + · · · + an xn.
express A in terms of its rows or columns. In the first case we express A in terms of its
columns, which gives

    y^T = x^T [ a1  a2  ···  an ] = [ x^T a1   x^T a2   ···   x^T an ] ,

which demonstrates that the ith entry of y^T is equal to the inner product of x and the ith
column of A.
Finally, expressing A in terms of rows we get the final representation of the vector-matrix
product,

    y^T = [ x1  x2  ···  xm ] [ — a1^T — ]
                              [ — a2^T — ]
                              [     ⋮    ]
                              [ — am^T — ]

        = x1 a1^T + x2 a2^T + · · · + xm am^T ,

so we see that y^T is a linear combination of the rows of A, where the coefficients for the
linear combination are given by the entries of x.
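Both viewpoints of the matrix-vector product are easy to verify numerically; this sketch (with randomly generated A and x) checks the row view and the column view against `A @ x`:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))
x = rng.standard_normal(4)

y = A @ x

# Row view: y_i is the inner product of the ith row of A with x.
y_rows = np.array([A[i] @ x for i in range(A.shape[0])])

# Column view: y is a linear combination of the columns of A,
# with coefficients given by the entries of x.
y_cols = sum(x[j] * A[:, j] for j in range(A.shape[1]))

assert np.allclose(y, y_rows) and np.allclose(y, y_cols)
```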
Put another way, AB is equal to the sum, over all i, of the outer product of the ith column
of A and the ith row of B. Since, in this case, ai ∈ Rm and bi ∈ Rp , the dimension of the
outer product ai bTi is m × p, which coincides with the dimension of C.
Second, we can also view matrix-matrix multiplication as a set of matrix-vector products.
Specifically, if we represent B by columns, we can view the columns of C as matrix-vector
products between A and the columns of B. Symbolically,
    C = AB = A [ b1  b2  ···  bp ] = [ Ab1  Ab2  ···  Abp ] .
Here the ith column of C is given by the matrix-vector product with the vector on the right,
ci = Abi . These matrix-vector products can in turn be interpreted using both viewpoints
given in the previous subsection. Finally, we have the analogous viewpoint, where we repre-
sent A by rows, and view the rows of C as the matrix-vector products between the rows of A
and B. Symbolically,
    C = AB = [ — a1^T — ] B = [ — a1^T B — ]
             [ — a2^T — ]     [ — a2^T B — ]
             [     ⋮    ]     [      ⋮     ]
             [ — am^T — ]     [ — am^T B — ] .
Here the ith row of C is given by the matrix-vector product with the vector on the left,
cTi = aTi B.
It may seem like overkill to dissect matrix multiplication to such a large degree, especially
when all these viewpoints follow immediately from the initial definition we gave (in about a
line of math) at the beginning of this section. However, virtually all of linear algebra deals
with matrix multiplications of some kind, and it is worthwhile to spend some time trying to
develop an intuitive understanding of the viewpoints presented here.
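All three viewpoints can be checked on random matrices; this sketch verifies the outer-product form, the column form, and the row form of C = AB:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 4))
B = rng.standard_normal((4, 2))
C = A @ B

# Sum of outer products of the columns of A with the rows of B.
C_outer = sum(np.outer(A[:, i], B[i, :]) for i in range(A.shape[1]))

# Columns of C are matrix-vector products A b_j.
C_cols = np.column_stack([A @ B[:, j] for j in range(B.shape[1])])

# Rows of C are vector-matrix products a_i^T B.
C_rows = np.vstack([A[i, :] @ B for i in range(A.shape[0])])

assert np.allclose(C, C_outer)
assert np.allclose(C, C_cols)
assert np.allclose(C, C_rows)
```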
In addition to this, it is useful to know a few basic properties of matrix multiplication at
a higher level:
• Matrix multiplication is associative: (AB)C = A(BC).
• Matrix multiplication is, in general, not commutative; that is, it can be the case that
AB ≠ BA.
3.1 The Identity Matrix and Diagonal Matrices
The identity matrix , denoted I ∈ Rn×n , is a square matrix with ones on the diagonal and
zeros everywhere else. That is,
    Iij = 1 if i = j,   0 if i ≠ j.
It has the property that for all A ∈ Rm×n ,
AI = A = IA
We have in fact already been using the transpose when describing row vectors, since the
transpose of a column vector is naturally a row vector.
The following properties of transposes are easily verified:
• (AT )T = A
• (AB)T = B T AT
• (A + B)T = AT + B T
and the first matrix on the right is symmetric, while the second is anti-symmetric. It turns out
that symmetric matrices occur a great deal in practice, and they have many nice properties
which we will look at shortly. It is common to denote the set of all symmetric matrices of
size n as S^n, so that A ∈ S^n means that A is a symmetric n × n matrix.
As described in the CS229 lecture notes, the trace has the following properties (included
here for the sake of completeness):
• For A, B, C such that ABC is square, trABC = trBCA = trCAB, and so on for the
product of more matrices.
3.5 Norms
A norm of a vector ‖x‖ is informally a measure of the "length" of the vector. For example,
we have the commonly-used Euclidean or ℓ2 norm,

    ‖x‖2 = √( Σ_{i=1}^{n} xi² ).
Other examples of norms are the ℓ1 norm,

    ‖x‖1 = Σ_{i=1}^{n} |xi|
Norms can also be defined for matrices, such as the Frobenius norm,
    ‖A‖F = √( Σ_{i=1}^{m} Σ_{j=1}^{n} Aij² ) = √( tr(A^T A) ).
Many other norms exist, but they are beyond the scope of this review.
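The norms above can be computed directly, for instance via `numpy.linalg.norm`; the small vectors and matrices below are chosen purely for illustration:

```python
import numpy as np

x = np.array([3.0, -4.0])
assert np.isclose(np.linalg.norm(x, 2), 5.0)       # l2: sqrt(9 + 16)
assert np.isclose(np.linalg.norm(x, 1), 7.0)       # l1: |3| + |-4|

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
# Frobenius norm equals sqrt(tr(A^T A)) = sqrt(30).
fro = np.linalg.norm(A, 'fro')
assert np.isclose(fro, np.sqrt(np.trace(A.T @ A)))
```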
for some {α1 , . . . , αn−1 } then xn is dependent on {x1 , . . . , xn−1 }; otherwise, it is independent
of {x1 , . . . , xn−1 }.
The column rank of a matrix A is the largest number of columns of A that constitute a
linearly independent set. This is often referred to simply as the number of linearly independent
columns, but this terminology is a little sloppy, since it is possible that any vector in
some set {x1 , . . . xn } can be expressed as a linear combination of the remaining vectors, even
though some subset of the vectors might be independent. In the same way, the row rank
is the largest number of rows of A that constitute a linearly independent set.
It is a basic fact of linear algebra that for any matrix A, columnrank(A) = rowrank(A),
and so this quantity is simply referred to as the rank of A, denoted as rank(A). The
following are some basic properties of the rank:
• For A ∈ Rm×n , rank(A) ≤ min(m, n). If rank(A) = min(m, n), then A is said to be
full rank .
• For A ∈ Rm×n , rank(A) = rank(AT ).
• For A ∈ Rm×n , B ∈ Rn×p , rank(AB) ≤ min(rank(A), rank(B)).
• For A, B ∈ Rm×n , rank(A + B) ≤ rank(A) + rank(B).
3.9 Range and Nullspace of a Matrix
The span of a set of vectors {x1 , x2 , . . . xn } is the set of all vectors that can be expressed as
a linear combination of {x1 , . . . , xn }. That is,
    span({x1, . . . , xn}) = { v : v = Σ_{i=1}^{n} αi xi,  αi ∈ R } .
It can be shown that if {x1 , . . . , xn } is a set of n linearly independent vectors, where each
xi ∈ Rn , then span({x1 , . . . xn }) = Rn . In other words, any vector v ∈ Rn can be written as
a linear combination of x1 through xn. The projection of a vector y ∈ R^m onto the span
of {x1, . . . , xn} (here we assume xi ∈ R^m) is the vector v ∈ span({x1, . . . , xn}) that is
as close as possible to y, as measured by the Euclidean norm ‖v − y‖2. We denote the
projection as Proj(y; {x1, . . . , xn}) and can define it formally as

    Proj(y; {x1, . . . , xn}) = argmin_{v∈span({x1,...,xn})} ‖y − v‖2 .
The range (sometimes also called the columnspace) of a matrix A ∈ R^{m×n}, denoted
R(A), is the span of the columns of A. In other words,
R(A) = {v ∈ Rm : v = Ax, x ∈ Rn }.
Making a few technical assumptions (namely that A is full rank and that n < m), the
projection of a vector y ∈ Rm onto the range of A is given by,
Proj(y; A) = argminv∈R(A) kv − yk2 = A(AT A)−1 AT y .
This last equation should look extremely familiar, since it is almost the same formula we
derived in class (and which we will soon derive again) for the least squares estimation of
parameters. Looking at the definition for the projection, it should not be too hard to
convince yourself that this is in fact the same objective that we minimized in our least
squares problem (except for a squaring of the norm, which doesn’t affect the optimal point)
and so these problems are naturally very connected. When A contains only a single column,
a ∈ R^m, this gives the special case for a projection of a vector onto a line:

    Proj(y; a) = (a a^T / a^T a) y .
The nullspace of a matrix A ∈ R^{m×n}, denoted N(A), is the set of all vectors that equal
0 when multiplied by A, i.e.,

    N(A) = {x ∈ R^n : Ax = 0}.
Note that vectors in R(A) are of size m, while vectors in N(A) are of size n, so vectors
in R(A^T) and N(A) are both in R^n. In fact, we can say much more. It turns out that

    { w : w = u + v, u ∈ R(A^T), v ∈ N(A) } = R^n   and   R(A^T) ∩ N(A) = {0} .

In other words, R(A^T) and N(A) together span the entire space of R^n and intersect only
at the origin. Sets of this type are called orthogonal complements, and we denote this
R(A^T) = N(A)^⊥.
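A small numerical sketch of the projection formula and the orthogonality it implies; the matrix is random (and therefore full rank with probability 1, with n < m as the text assumes):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((5, 3))      # m = 5 > n = 3, full rank a.s.
y = rng.standard_normal(5)

# Projection of y onto R(A): A (A^T A)^{-1} A^T y.
proj = A @ np.linalg.solve(A.T @ A, A.T @ y)

# The residual y - proj is orthogonal to every column of A,
# i.e. it lies in N(A^T).
assert np.allclose(A.T @ (y - proj), 0)

# Projecting a second time changes nothing: proj is already in R(A).
proj2 = A @ np.linalg.solve(A.T @ A, A.T @ proj)
assert np.allclose(proj, proj2)
```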
3.10 The Determinant
The determinant of a square matrix A ∈ R^{n×n} is a function det : R^{n×n} → R, and is
denoted |A| or det A (like the trace operator, we usually omit parentheses). The full formula
for the determinant gives little intuition about its meaning, so we instead first give three
defining properties of the determinant, from which all the rest follow (including the general
formula):
1. The determinant of the identity is 1, |I| = 1.
3. If we exchange any two rows ai^T and aj^T of A, then the determinant of the new matrix
is −|A|. For example,

    | — a2^T — |
    | — a1^T — |
    |     ⋮    |  = −|A| .
    | — am^T — |
These properties, however, also give very little intuition about the nature of the deter-
minant, so we now list several properties that follow from the three properties above:
• For A ∈ Rn×n , |A| = |AT |.
with the initial case that |A| = a11 for A ∈ R^{1×1}. If we were to expand this formula
completely for A ∈ R^{n×n}, there would be a total of n! (n factorial) different terms. For this
reason, we hardly ever explicitly write the complete equation of the determinant for matrices
bigger than 3 × 3. However, the equations for determinants of matrices up to size 3 × 3 are
fairly common, and it is good to know them:
    |[a11]| = a11

    | a11  a12 |
    | a21  a22 |  =  a11 a22 − a12 a21

    | a11  a12  a13 |
    | a21  a22  a23 |  =  a11 a22 a33 + a12 a23 a31 + a13 a21 a32
    | a31  a32  a33 |       − a11 a23 a32 − a12 a21 a33 − a13 a22 a31
The classical adjoint (often just called the adjoint) of a matrix A ∈ R^{n×n} is denoted
adj(A), and defined as

    adj(A) ∈ R^{n×n},   (adj(A))ij = (−1)^{i+j} |A_{\j,\i}|

(note the switch in the indices A_{\j,\i}). It can be shown that for any nonsingular A ∈ R^{n×n},

    A^{−1} = (1/|A|) adj(A) .
While this is a nice "explicit" formula for the inverse of a matrix, we should note that, numer-
ically, there are in fact much more efficient ways of computing the inverse.
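For a 2 × 2 example the adjoint formula can be checked against a library inverse; the particular matrix is invented for illustration:

```python
import numpy as np

A = np.array([[4.0, 7.0],
              [2.0, 6.0]])             # det = 4*6 - 7*2 = 10

# For a 2x2 matrix, adj(A) = [[a22, -a12], [-a21, a11]].
adjA = np.array([[A[1, 1], -A[0, 1]],
                 [-A[1, 0], A[0, 0]]])

A_inv = adjA / np.linalg.det(A)        # A^{-1} = adj(A) / |A|
assert np.allclose(A_inv, np.linalg.inv(A))
assert np.allclose(A @ A_inv, np.eye(2))
```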
Note that

    x^T A x = (x^T A x)^T = x^T A^T x = x^T ( ½A + ½A^T ) x ,
i.e., only the symmetric part of A contributes to the quadratic form. For this reason, we
often implicitly assume that the matrices appearing in a quadratic form are symmetric.
We give the following definitions:
• A symmetric matrix A ∈ S^n is positive semidefinite (PSD) if for all vectors x, x^T Ax ≥
0. This is written A ⪰ 0 (or just A ≥ 0), and the set of all positive semidefinite matrices
is often denoted S^n_+.
It should be obvious that if A is positive definite, then −A is negative definite and vice
versa. Likewise, if A is positive semidefinite then −A is negative semidefinite and vice versa.
If A is indefinite, then so is −A. It can also be shown that positive definite and negative
definite matrices are always invertible.
Finally, there is one type of positive definite matrix that comes up frequently, and so
deserves some special mention. Given any matrix A ∈ Rm×n (not necessarily symmetric or
even square), the matrix G = AT A (sometimes called a Gram matrix ) is always positive
semidefinite. Further, if m ≥ n (and we assume for convenience that A is full rank), then
G = AT A is positive definite.
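The Gram-matrix claim can be sanity-checked numerically: for a random tall, full-rank A, all eigenvalues of G = A^T A are strictly positive (the random dimensions are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((6, 3))       # m = 6 >= n = 3, full rank a.s.
G = A.T @ A                           # Gram matrix, symmetric by construction

assert np.allclose(G, G.T)
# Positive definite: every eigenvalue of G is > 0
# (equivalently, x^T G x = ||Ax||^2 > 0 for all x != 0).
eigvals = np.linalg.eigvalsh(G)
assert np.all(eigvals > 0)
```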
Given a square matrix A ∈ R^{n×n}, we say that λ ∈ C is an eigenvalue of A and x ∈ C^n
is the corresponding eigenvector¹ if

    Ax = λx,  x ≠ 0 .
Intuitively, this definition means that multiplying A by the vector x results in a new vector
that points in the same direction as x, but scaled by a factor λ. Also note that for any
eigenvector x ∈ C^n and scalar c ∈ C, A(cx) = cAx = cλx = λ(cx), so cx is also an
eigenvector. For this reason when we talk about "the" eigenvector associated with λ, we
usually assume that the eigenvector is normalized to have length 1 (this still creates some
ambiguity, since x and −x will both be eigenvectors, but we will have to live with this).
We can rewrite the equation above to state that (λ, x) is an eigenvalue-eigenvector pair
of A if

    (λI − A)x = 0,  x ≠ 0 .
¹ Note that λ and the entries of x are actually in C, the set of complex numbers, not just the reals; we
will see shortly why this is necessary. Don't worry about this technicality for now; you can think of complex
vectors in the same way as real vectors.
But (λI − A)x = 0 has a non-zero solution x if and only if (λI − A) has a non-trivial
nullspace, which is only the case if (λI − A) is singular, i.e.,
|(λI − A)| = 0 .
We can now use the previous definition of the determinant to expand this expression
into a (very large) polynomial in λ, where λ will have maximum degree n. We then find
the n (possibly complex) roots of this polynomial to find the n eigenvalues λ1 , . . . , λn . To
find the eigenvector corresponding to the eigenvalue λi , we simply solve the linear equation
(λi I − A)x = 0. It should be noted that this is not the method which is actually used
in practice to numerically compute the eigenvalues and eigenvectors (remember that the
complete expansion of the determinant has n! terms); it is rather a mathematical argument.
The following are properties of eigenvalues and eigenvectors (in all cases assume A ∈ R^{n×n}
has eigenvalues λ1, . . . , λn and associated eigenvectors x1, . . . , xn):
• The trace of A is equal to the sum of its eigenvalues,

    trA = Σ_{i=1}^{n} λi .
AX = XΛ
where the columns of X ∈ Rn×n are the eigenvectors of A and Λ is a diagonal matrix whose
entries are the eigenvalues of A, i.e.,
    X ∈ R^{n×n} = [ x1  x2  ···  xn ] ,    Λ = diag(λ1, . . . , λn) .
If the eigenvectors of A are linearly independent, then the matrix X will be invertible, so
A = XΛX −1 . A matrix that can be written in this form is called diagonalizable.
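A quick numerical check of AX = XΛ and A = XΛX⁻¹ on a small symmetric example (chosen for illustration; its eigenvalues are distinct, so X is invertible):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

lam, X = np.linalg.eig(A)             # columns of X are eigenvectors
Lam = np.diag(lam)

assert np.allclose(A @ X, X @ Lam)                  # AX = X Lambda
assert np.allclose(A, X @ Lam @ np.linalg.inv(X))   # A = X Lambda X^{-1}
```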
3.13 Eigenvalues and Eigenvectors of Symmetric Matrices
Two remarkable properties come about when we look at the eigenvalues and eigenvectors
of a symmetric matrix A ∈ Sn . First, it can be shown that all the eigenvalues of A are
real. Secondly, the eigenvectors of A are orthonormal, i.e., the matrix X defined above is an
orthogonal matrix (for this reason, we denote the matrix of eigenvectors as U in this case).
We can therefore represent A as A = U ΛU T , remembering from above that the inverse of
an orthogonal matrix is just its transpose.
Using this, we can show that the definiteness of a matrix depends entirely on the sign of
its eigenvalues. Suppose A ∈ S^n = UΛU^T. Then

    x^T A x = x^T UΛU^T x = y^T Λ y = Σ_{i=1}^{n} λi yi²
where y = U T x (and since U is full rank, any vector y ∈ Rn can be represented in this form).
Because yi² is always nonnegative, the sign of this expression depends entirely on the λi's. If all
λi > 0, then the matrix is positive definite; if all λi ≥ 0, it is positive semidefinite. Likewise,
if all λi < 0 or λi ≤ 0, then A is negative definite or negative semidefinite respectively.
Finally, if A has both positive and negative eigenvalues, it is indefinite.
An application where eigenvalues and eigenvectors come up frequently is in maximizing
some function of a matrix. In particular, for a matrix A ∈ S^n, consider the following
maximization problem,

    max_{x∈R^n} x^T A x   subject to ‖x‖2² = 1 ,

i.e., we want to find the vector (of norm 1) which maximizes the quadratic form. Assuming
the eigenvalues are ordered as λ1 ≥ λ2 ≥ . . . ≥ λn, the optimal x for this optimization
problem is x1, the eigenvector corresponding to λ1. In this case the maximal value of the
quadratic form is λ1. Similarly, the optimal solution to the minimization problem,

    min_{x∈R^n} x^T A x   subject to ‖x‖2² = 1 ,
is xn , the eigenvector corresponding to λn , and the minimal value is λn . This can be proved by
appealing to the eigenvector-eigenvalue form of A and the properties of orthogonal matrices.
However, in the next section we will see a way of showing it directly using matrix calculus.
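The claim is easy to sanity-check numerically: for random unit vectors the quadratic form never exceeds λ1 or falls below λn, and the corresponding eigenvectors attain the extremes (the 2 × 2 symmetric matrix is invented for illustration):

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])             # symmetric

lam, U = np.linalg.eigh(A)             # eigh: ascending eigenvalues, orthonormal U
lam_min, lam_max = lam[0], lam[-1]

rng = np.random.default_rng(3)
for _ in range(100):
    x = rng.standard_normal(2)
    x /= np.linalg.norm(x)             # restrict to unit-norm vectors
    q = x @ A @ x
    assert lam_min - 1e-9 <= q <= lam_max + 1e-9

# The extremes are attained at the corresponding eigenvectors.
assert np.isclose(U[:, -1] @ A @ U[:, -1], lam_max)
assert np.isclose(U[:, 0] @ A @ U[:, 0], lam_min)
```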
4 Matrix Calculus
While the topics in the previous sections are typically covered in a standard course on linear
algebra, one topic that does not seem to be covered very often (and which we will use
extensively) is the extension of calculus to the vector setting. Despite the fact that all the
actual calculus we use is relatively trivial, the notation can often make things look much
more difficult than they are. In this section we present some basic definitions of matrix
calculus and provide a few examples.
4.1 The Gradient
Suppose that f : Rm×n → R is a function that takes as input a matrix A of size m × n and
returns a real value. Then the gradient of f (with respect to A ∈ Rm×n ) is the matrix of
partial derivatives, defined as:
    ∇_A f(A) ∈ R^{m×n} = [ ∂f(A)/∂A11   ∂f(A)/∂A12   ···   ∂f(A)/∂A1n ]
                         [ ∂f(A)/∂A21   ∂f(A)/∂A22   ···   ∂f(A)/∂A2n ]
                         [      ⋮            ⋮        ⋱         ⋮     ]
                         [ ∂f(A)/∂Am1   ∂f(A)/∂Am2   ···   ∂f(A)/∂Amn ]
It is very important to remember that the gradient of a function is only defined if the function
is real-valued, that is, if it returns a scalar value. We cannot, for example, take the gradient
of Ax, A ∈ R^{n×n}, with respect to x, since this quantity is vector-valued.
It follows directly from the equivalent properties of partial derivatives that:
• ∇x (f (x) + g(x)) = ∇x f (x) + ∇x g(x).
• For t ∈ R, ∇x (t f (x)) = t∇x f (x).
It is a little bit trickier to determine what the proper expression is for ∇x f(Ax), A ∈ R^{n×n},
but this is doable as well (in fact, you'll have to work this out for a homework problem).
In other words, ∇²_x f(x) ∈ R^{n×n}, with

    (∇²_x f(x))ij = ∂²f(x) / (∂xi ∂xj) .
Note that the Hessian is always symmetric, since

    ∂²f(x)/(∂xi ∂xj) = ∂²f(x)/(∂xj ∂xi) .
Similar to the gradient, the Hessian is defined only when f (x) is real-valued.
It is natural to think of the gradient as the analogue of the first derivative for functions
of vectors, and the Hessian as the analogue of the second derivative (and the symbols we
use also suggest this relation). This intuition is generally correct, but there are a few caveats to
keep in mind.
First, for real-valued functions of one variable f : R → R, it is a basic definition that the
second derivative is the derivative of the first derivative, i.e.,
    ∂²f(x)/∂x² = (∂/∂x) (∂/∂x) f(x) .
However, for functions of a vector, the gradient of the function is a vector, and we cannot
take the gradient of a vector — i.e.,
    ∇x ∇x f(x) = ∇x [ ∂f(x)/∂x1 ]
                    [ ∂f(x)/∂x2 ]
                    [     ⋮     ]
                    [ ∂f(x)/∂xn ]
and this expression is not defined. Therefore, it is not the case that the Hessian is the
gradient of the gradient. However, this is almost true, in the following sense: If we look at
the ith entry of the gradient (∇x f (x))i = ∂f (x)/∂xi , and take the gradient with respect to
x we get
    ∇x ( ∂f(x)/∂xi ) = [ ∂²f(x)/(∂xi ∂x1) ]
                       [ ∂²f(x)/(∂xi ∂x2) ]
                       [         ⋮        ]
                       [ ∂²f(x)/(∂xi ∂xn) ]
If we don’t mind being a little bit sloppy we can say that (essentially) ∇2x f (x) = ∇x (∇x f (x))T ,
so long as we understand that this really means taking the gradient of each entry of (∇x f (x))T ,
not the gradient of the whole vector.
Finally, note that while we can take the gradient with respect to a matrix A ∈ R^{n×n}, for
the purposes of this class we will only consider taking the Hessian with respect to a vector
x ∈ Rn . This is simply a matter of convenience (and the fact that none of the calculations
we do require us to find the Hessian with respect to a matrix), since the Hessian with respect
to a matrix would have to represent all the partial derivatives ∂ 2 f (A)/(∂Aij ∂Akℓ ), and it is
rather cumbersome to represent this as a matrix.
where the last equality follows since A is symmetric (which we can safely assume, since it is
appearing in a quadratic form). Note that the kth entry of ∇x f (x) is just the inner product
of the kth row of A and x. Therefore, ∇x xT Ax = 2Ax. Again, this should remind you of
the analogous fact in single-variable calculus, that ∂/(∂x) ax2 = 2ax.
Finally, let's look at the Hessian of the quadratic function f(x) = x^T Ax (it should be
obvious that the Hessian of a linear function b^T x is zero). This is even easier than determining
the gradient of the function, since
    ∂²f(x)/(∂xk ∂xℓ) = ∂²/(∂xk ∂xℓ) [ Σ_{i=1}^{n} Σ_{j=1}^{n} Aij xi xj ] = Akℓ + Aℓk = 2Akℓ .
Therefore, it should be clear that ∇2x xT Ax = 2A, which should be entirely expected (and
again analogous to the single-variable fact that ∂ 2 /(∂x2 ) ax2 = 2a).
To recap:

• ∇x b^T x = b

• ∇x x^T A x = 2Ax (if A symmetric)

• ∇²x x^T A x = 2A (if A symmetric)
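These gradient identities can be verified against a central-difference approximation; the `num_grad` helper is an invented name, and A is a symmetrized random matrix:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 4
b = rng.standard_normal(n)
A = rng.standard_normal((n, n))
A = (A + A.T) / 2                      # make A symmetric

x = rng.standard_normal(n)

def num_grad(f, x, eps=1e-6):
    """Central-difference approximation to the gradient of f at x."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = eps
        g[i] = (f(x + e) - f(x - e)) / (2 * eps)
    return g

# grad of b^T x is b; grad of x^T A x is 2Ax for symmetric A.
assert np.allclose(num_grad(lambda v: b @ v, x), b, atol=1e-5)
assert np.allclose(num_grad(lambda v: v @ A @ v, x), 2 * A @ x, atol=1e-4)
```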
Taking the gradient of ‖Ax − b‖2² = x^T A^T Ax − 2b^T Ax + b^T b with respect to x, and
using the properties we derived in the previous section,

    ∇x (x^T A^T Ax − 2b^T Ax + b^T b) = 2A^T Ax − 2A^T b .

Setting this last expression equal to zero and solving for x gives the normal equations

    x = (A^T A)^{−1} A^T b
Now let's consider the function f : S^n_{++} → R, f(A) = log |A|. Note that we have to
restrict the domain of f to be the positive definite matrices, since this ensures that |A| > 0,
so that the log of |A| is a real number. In this case we can use the chain rule (nothing fancy,
just the ordinary chain rule from single-variable calculus) to see that

    ∇_A log |A| = (A^{−1})^T = A^{−1} ,

where we can drop the transpose in the last expression because A is symmetric. Note the
similarity to the single-valued case, where ∂/(∂x) log x = 1/x.
    L(x, λ) = x^T A x − λ x^T x

where λ is called the Lagrange multiplier associated with the equality constraint. It can be
established that for x* to be an optimal point of the problem, the gradient of the Lagrangian
has to be zero at x* (this is not the only condition, but it is required). That is,

    ∇x L(x, λ) = 2Ax − 2λx = 0 .

Notice that this is just the linear equation Ax = λx. This shows that the only points which
can possibly maximize (or minimize) x^T Ax assuming x^T x = 1 are the eigenvectors of A.
² Don't worry if you haven't seen Lagrangians before, as we will cover them in greater detail later in
CS229.
Probability Theory Review for Machine Learning
Samuel Ieong
November 6, 2006
1 Basic Concepts
Broadly speaking, probability theory is the mathematical study of uncertainty. It plays a
central role in machine learning, as the design of learning algorithms often relies on
probabilistic assumptions about the data. This set of notes attempts to cover some basic probability
theory that serves as a background for the class.
• F ⊆ 2Ω (the power set of Ω) is the space of (measurable) events (or event space),
Given the outcome space Ω, there are some restrictions as to what subsets of 2^Ω can be
considered an event space F:
Note that when the outcome space Ω is finite, as in the previous example, we often take
the event space F to be 2Ω . This treatment is not fully general, but it is often sufficient
for practical purposes. However, when the outcome space is infinite, we must be careful to
define what the event space is.
Given an event space F, the probability measure P must satisfy certain axioms.
• (non-negativity) For all α ∈ F , P (α) ≥ 0.
then this distribution P completely specifies the probability of any given event happening
(through the additivity axiom). For example, the probability of an even dice throw will be
P(2) + P(4) + P(6) = 1/2.
When probability is defined more rigorously using measure theory, the distinction between
outcome space and event space will be very important. This topic is too advanced to be
covered in this short review note. In any case, it is good to keep in mind that the event
space is not always simply the power set of the outcome space.
From here onwards, we will talk mostly about probability with respect to random vari-
ables. While some probability concepts can be defined meaningfully without using them,
random variables allow us to provide a more uniform treatment of probability theory. For
notation, the probability of a random variable X taking on the value a will be denoted
by either

    P(X = a)   or   PX(a)

We will also denote the range of a random variable X by Val(X).
Example 4. Let random variable X be defined on the outcome space Ω of a dice throw
(again!). If the dice is fair, then the distribution of X would be

    P(X = 1) = P(X = 2) = · · · = P(X = 6) = 1/6
Note that while this example resembles that of Example 2, they have different semantic
meaning. The probability distribution defined in Example 2 is over events, whereas the one
here is defined over random variables.
For notation, we will use P (X) to denote the distribution of the random variable X.
Sometimes, we speak about the distribution of more than one variable at a time. We
call these distributions joint distributions, as the probability is determined jointly by all the
variables involved. This is best clarified by an example.
Example 5. Let X be a random variable defined on the outcome space of a dice throw. Let
Y be an indicator variable that takes on value 1 if a coin flip turns up head and 0 if tail.
Assuming both the dice and the coin are fair, the joint distribution of X and Y is given by
P X=1 X=2 X=3 X=4 X=5 X=6
Y =0 1/12 1/12 1/12 1/12 1/12 1/12
Y =1 1/12 1/12 1/12 1/12 1/12 1/12
As before, we will denote the probability of X taking value a and Y taking value b by
either the long hand of P (X = a, Y = b), or the short hand of PX,Y (a, b). We refer to their
joint distribution by P (X, Y ).
Given a joint distribution, say over random variables X and Y , we can talk about the
marginal distribution of X or that of Y . The marginal distribution refers to the probability
distribution of a random variable on its own. To find out the marginal distribution of a
3
random variable, we sum out all the other random variables from the distribution. Formally,
we mean

    P(X) = Σ_{b∈Val(Y)} P(X, Y = b)     (1)
The name of marginal distribution comes from the fact that if we add up all the entries
of a row (or a column) of a joint distribution, and write the answer at the end (i.e., margin)
of it, this will be the probability of the random variable taking on that value. Of course,
thinking in this way only helps when the joint distribution involves two variables.
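Equation (1) applied to the joint table of Example 5 can be written as a short sketch (rows index Y, columns index X, as in the table above):

```python
import numpy as np

# Joint distribution from Example 5: every one of the 12 cells is 1/12.
joint = np.full((2, 6), 1 / 12)        # joint[y, x-1] = P(X = x, Y = y)

# Marginalize by summing out the other variable, as in Equation (1).
P_X = joint.sum(axis=0)                # P(X = a) = sum_b P(X = a, Y = b)
P_Y = joint.sum(axis=1)

assert np.allclose(P_X, 1 / 6)         # fair die
assert np.allclose(P_Y, 1 / 2)         # fair coin
```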
The conditional probability of X taking value a, given that Y takes value b, is defined as

    P(X = a | Y = b) = P(X = a, Y = b) / P(Y = b)    (2)
Example 6. Suppose we know that a dice throw was odd, and want to know the probability
that a “one” was thrown. Let X be the random variable of the dice throw, and let Y be an
indicator variable that takes on the value 1 if the dice throw turns up odd. Then we can write
our desired probability as follows:

    P(X = 1 | Y = 1) = P(X = 1, Y = 1) / P(Y = 1) = (1/6) / (1/2) = 1/3
The idea of conditional probability extends naturally to the case when the distribution
of a random variable is conditioned on several variables, namely

    P(X = a | Y = b, Z = c) = P(X = a, Y = b, Z = c) / P(Y = b, Z = c)
1.5 Independence
In probability theory, independence means that the distribution of a random variable does
not change on learning the value of another random variable. In machine learning, we often
make such assumptions about our data. For example, the training samples are assumed to
be drawn independently from some underlying space; the label of sample i is assumed to be
independent of the features of sample j (i ≠ j).
Mathematically, a random variable X is independent of Y when
P (X) = P (X|Y )
(Note that we have dropped what values X and Y are taking. This means the statement
holds true for any values X and Y may take.)
Using Equation (2), it is easy to verify that if X is independent of Y , then Y is also
independent of X. As a notation, we write X ⊥ Y if X and Y are independent.
An equivalent mathematical statement about the independence of random variables X
and Y is
P (X, Y ) = P (X)P (Y )
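As a quick numerical illustration (our own, using the joint distribution of Example 5), we can verify that P(X, Y) = P(X)P(Y) holds for every pair of values:

```python
# Joint distribution of the fair die X and fair coin Y from Example 5.
joint = {(x, y): 1 / 12 for x in range(1, 7) for y in (0, 1)}
p_x = {a: sum(p for (x, y), p in joint.items() if x == a) for a in range(1, 7)}
p_y = {b: sum(p for (x, y), p in joint.items() if y == b) for b in (0, 1)}

# X and Y are independent iff the joint factors into the product of the marginals.
independent = all(abs(joint[a, b] - p_x[a] * p_y[b]) < 1e-12 for (a, b) in joint)
```

Here every joint entry is 1/12 = (1/6)·(1/2), so the check succeeds.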
Sometimes we also talk about conditional independence, meaning that if we know the
value of a random variable (or more generally, a set of random variables), then some other
random variables will be independent of each other. Formally, we say “X and Y are condi-
tionally independent given Z” if
P (X|Z) = P (X|Y, Z)
or, equivalently,
P (X, Y |Z) = P (X|Z)P (Y |Z)
An example of conditional independence that we will see in class is the Naïve Bayes
assumption. This assumption is made in the context of a learning algorithm for classifying
emails as spam or non-spam. It assumes that the probability of a word x appearing
in the email is conditionally independent of a word y appearing, given whether the email is
spam or not. This is clearly not without loss of generality, as some words almost invariably
come in pairs. However, as it turns out, making this simplifying assumption does not hurt
the performance much, and in any case it allows us to learn to classify spam rapidly. Details
can be found in Lecture Notes 2.
The Chain Rule is often used to evaluate the joint probability of some random variables,
and is especially useful when there is (conditional) independence across variables. For
random variables X1, X2, . . . , Xn, it states

    P(X1, X2, . . . , Xn) = P(X1) P(X2 | X1) · · · P(Xn | X1, X2, . . . , Xn−1)

Notice there is a choice in the order in which we unravel the random variables when applying the
Chain Rule; picking the right order can often make evaluating the probability much easier.
The second rule we are going to introduce is the Bayes Rule. The Bayes Rule allows
us to compute the conditional probability P(X|Y) from P(Y|X), in a sense “inverting” the
conditions. It can be derived simply from Equation (2) as well:

    P(X = a | Y = b) = P(Y = b | X = a) P(X = a) / P(Y = b)

where the denominator can be expanded using Equation (1) as P(Y = b) = Σ_{a ∈ Val(X)} P(Y = b, X = a).
This application of Equation (1) is sometimes referred to as the law of total probability.
Extending the Bayes Rule to the case of multiple random variables can sometimes be
tricky. Just to be clear, we will give a few examples. When in doubt, one can always refer
to how conditional probabilities are defined and work out the details.
Example 7. Let’s consider the following conditional probabilities: P(X, Y | Z) and P(X | Y, Z).
To define a discrete distribution, we can simply enumerate the probability of the random
variable taking on each of the possible values. This enumeration is known as the probability
mass function, as it divides up a unit mass (the total probability) and places them on the
different values a random variable can take. This can be extended analogously to joint
distributions and conditional distributions.
More generally, suppose X is distributed uniformly over the range [a, b]. Then the PDF
is

    f(x) = 1/(b − a)   if a ≤ x ≤ b
    f(x) = 0           otherwise
Sometimes we will also speak about the cumulative distribution function. It is a function
that gives the probability of a random variable being smaller than some value. A cumulative
distribution function F is related to the underlying probability density function f as follows:
    F(b) = P(X ≤ b) = ∫_{−∞}^{b} f(x) dx

and hence F(x) = ∫ f(x) dx (in the sense of an indefinite integral).
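As an illustrative sketch (ours, not from the notes), we can recover the uniform CDF numerically by integrating the PDF with a simple midpoint rule; for the uniform distribution on [a, b] the exact answer is F(t) = (t − a)/(b − a) for t in [a, b]:

```python
a, b = 2.0, 5.0   # assumed endpoints for this toy example

def pdf(x):
    """Uniform density on [a, b]."""
    return 1.0 / (b - a) if a <= x <= b else 0.0

def cdf(t, steps=100_000):
    """Midpoint-rule approximation of the integral of the PDF from -inf to t."""
    if t <= a:        # the density is zero below a
        return 0.0
    width = (t - a) / steps
    return sum(pdf(a + (i + 0.5) * width) for i in range(steps)) * width
```

For instance, cdf(3.5) approximates (3.5 − 2)/(5 − 2) = 0.5.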
To extend the definition of continuous distribution to joint distribution, the probability
density function is extended to take multiple arguments, namely,
    P(a1 ≤ X1 ≤ b1, a2 ≤ X2 ≤ b2, . . . , an ≤ Xn ≤ bn)
        = ∫_{a1}^{b1} ∫_{a2}^{b2} · · · ∫_{an}^{bn} f(x1, x2, . . . , xn) dx1 dx2 . . . dxn
the easiest way to think about this is that we define a new random variable Y = f (X), and
compute the expected value of Y instead.
When working with indicator variables, a useful identity is the following:

    E(X) = P(X = 1)   for an indicator variable X
When working with sums of random variables, one of the most important rules is the
linearity of expectations.
Theorem 3 (Linearity of Expectations). Let X1, X2, . . . , Xn be (possibly dependent) random
variables. Then

    E(X1 + X2 + · · · + Xn) = E(X1) + E(X2) + · · · + E(Xn)    (6)
The linearity of expectations is very powerful because there are no restrictions on whether
the random variables are independent or not. When we work with products of random variables,
there is very little we can say in general. However, when the random variables
are independent, we have the following:
Theorem 4. Let X and Y be independent random variables. Then

    E(XY) = E(X)E(Y)
3.2 Variance
The variance of a distribution is a measure of the “spread” of a distribution. Sometimes it
is also referred to as the second moment. It is defined as follows:
    Var(X) = E((X − E(X))²)    (7)

The variance of a random variable is often denoted by σ². The reason the quantity is squared
is that we often want to find σ, known as the standard deviation. The variance and
the standard deviation are related (obviously) by σ = √Var(X).
To find out the variance of a random variable X, it is often easier to compute the following
instead:

    Var(X) = E(X²) − (E(X))²
Note that unlike expectation, variance is not a linear function of a random variable X.
In fact, we can verify that the variance of (aX + b) is

    Var(aX + b) = a² Var(X)
If random variables X and Y are independent, then
    Var(X + Y) = Var(X) + Var(Y)   if X ⊥ Y
Sometimes we also talk about the covariance of two random variables. This is a measure
of how “closely related” two random variables are. Its definition is as follows.
Cov(X, Y ) = E((X − E(X))(Y − E(Y )))
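A quick numeric check of the variance shortcut formula above, using a fair six-sided die (our own example, not from the notes):

```python
# Fair die: faces 1..6, each with probability 1/6.
vals = range(1, 7)
p = 1 / 6
e_x = sum(v * p for v in vals)                     # E(X) = 3.5
e_x2 = sum(v * v * p for v in vals)                # E(X^2) = 91/6
var_def = sum((v - e_x) ** 2 * p for v in vals)    # E((X - E(X))^2), the definition
var_short = e_x2 - e_x ** 2                        # the shortcut E(X^2) - (E(X))^2
```

Both expressions give 35/12 ≈ 2.9167 for a fair die.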
4 Some Important Distributions
In this section, we will review some of the probability distributions that we will see in this
class. This is by no means a comprehensive list of distributions that one should know. In
particular, distributions such as the geometric, hypergeometric, and binomial distributions,
which are very useful in their own right and studied in introductory probability theory, are
not reviewed here.
4.1 Bernoulli
The Bernoulli distribution is one of the most basic distributions. A random variable
distributed according to the Bernoulli distribution can take on two possible values, {0, 1}. It can
be specified by a single parameter p, and by convention we take p to be P(X = 1). It is
often used to indicate whether a trial is successful or not.
Sometimes it is useful to write the probability distribution of a Bernoulli random variable
X as follows
    P(X = x) = p^x (1 − p)^{1−x}   for x ∈ {0, 1}
An example of the Bernoulli distribution in action is the classification task in Lecture
Notes 1. To develop the logistic regression algorithm for the task, we assume that the labels
are distributed according to the Bernoulli distribution given the features.
4.2 Poisson
The Poisson distribution is a very useful distribution that deals with the arrival of events.
It measures the probability of a given number of events happening over a fixed period of time,
given a fixed average rate of occurrence, and assuming that the events take place independently
of the time since the last event. It is parametrized by the average arrival rate λ. The
probability mass function is given by:
    P(X = k) = exp(−λ) λ^k / k!
The mean value of a Poisson random variable is λ, and its variance is also λ.
We will get to work on a learning algorithm that deals with Poisson random variables in
Homework 1, Problem 3.
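A small sketch of the Poisson pmf (our own illustration), checking numerically that the probabilities sum to one and that the mean equals λ:

```python
import math

def poisson_pmf(k, lam):
    """P(X = k) = exp(-lam) * lam^k / k!"""
    return math.exp(-lam) * lam ** k / math.factorial(k)

lam = 3.0
# Truncate the infinite sums at k = 100; the tail is negligible for lam = 3.
total = sum(poisson_pmf(k, lam) for k in range(100))
mean = sum(k * poisson_pmf(k, lam) for k in range(100))
```

Running this gives total ≈ 1 and mean ≈ λ = 3, as the text states.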
4.3 Gaussian
The Gaussian distribution, also known as the normal distribution, is one of the most “ver-
satile” distributions in probability theory, and appears in a wide variety of contexts. For
example, it can be used to approximate the binomial distribution when the number of ex-
periments is large, or the Poisson distribution when the average arrival rate is high. It is
also related to the Law of Large Numbers. For many problems, we will also often assume
that the noise in the system is Gaussian distributed. The list of applications is endless.
Figure 1: Probability density functions of Gaussian(0, 1), Gaussian(1, 1), and Gaussian(0, 2).
The Gaussian distribution is determined by two parameters: the mean µ and the variance
σ². The probability density function is given by

    f(x) = (1 / (√(2π) σ)) exp( −(x − µ)² / (2σ²) )    (8)
To get a better sense of how the distribution changes with respect to the mean and the
variance, we have plotted three different Gaussian distributions in Figure 1.
In our class, we will sometimes work with multi-variate Gaussian distributions. A k-
dimensional multi-variate Gaussian distribution is parametrized by (µ, Σ), where µ is now a
vector of means in Rk , and Σ is the covariance matrix in Rk×k , in other words, Σii = V ar(Xi )
and Σij = Cov(Xi, Xj). The probability density function is now defined over vectors of inputs,
given by

    f(x) = (1 / √((2π)^k |Σ|)) exp( −(1/2) (x − µ)^T Σ^{−1} (x − µ) )    (9)
(Recall that we denote the determinant of a matrix A by |A|, and its inverse by A^{−1}.)
To get a better sense of how a multi-variate Gaussian distribution depends on the covariance
matrix, we can look at the figures in Lecture Notes 2, Pages 3–4.
Working with a multi-variate Gaussian distribution can be tricky and daunting at times.
One way to make our lives easier, at least as a way to get intuition on a problem, is to assume
that the covariances are zero when we first attempt a problem. When the covariances are
zero, the determinant |Σ| will simply be the product of the variances, and the inverse Σ−1
can be found by taking the inverse of the diagonal entries of Σ.
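In the diagonal case the density of Equation (9) factors into a product of one-dimensional Gaussian densities, which the following sketch (ours, not from the notes) exploits:

```python
import math

def gaussian_1d(x, mu, var):
    """One-dimensional Gaussian density of Equation (8)."""
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def diag_gaussian_pdf(x, mu, variances):
    """Density of Equation (9) with Sigma = diag(variances): |Sigma| is the
    product of the variances and Sigma^{-1} inverts each diagonal entry, so
    the joint density is just the product of per-coordinate 1-D densities."""
    out = 1.0
    for xi, mi, vi in zip(x, mu, variances):
        out *= gaussian_1d(xi, mi, vi)
    return out
```

For example, at the mean with variances (1, 4) the density is 1/(2π·√(1·4)) = 1/(4π).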
5 Working with Probabilities
As we will be working with probabilities and distributions a lot in this class, listed below
are a few tips about efficient manipulation of distributions.
I dare say this is a pretty mean-looking function. But by taking the logarithm of it, termed
log-likelihood function, we have instead
    ℓ(θ) = log L(θ) = Σ_{i=1}^{m} [ y^{(i)} log h_θ(x^{(i)}) + (1 − y^{(i)}) log(1 − h_θ(x^{(i)})) ]
Not the world’s prettiest function, but at least it’s more manageable. We can now work
on one term (i.e., one training sample) at a time, because they are summed together rather
than multiplied together.
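The practical benefit is easy to see in code. A small sketch (with made-up per-sample probabilities, not data from the notes) confirming that the log of the product of per-sample terms equals the sum of their logs:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical per-sample probabilities h_theta(x^(i)) for four samples with y = 1.
probs = [sigmoid(z) for z in (2.0, -1.0, 0.5, 3.0)]

likelihood = math.prod(probs)                      # samples multiplied together
log_likelihood = sum(math.log(p) for p in probs)   # one term per sample, summed
```

With thousands of samples the product underflows to zero in floating point, while the sum of logs stays perfectly manageable; that is why we work with ℓ(θ) rather than L(θ).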
Figure 2: An example of a convex function.
While we can show Jensen’s inequality by algebra, it’s easiest to understand it through
a picture. The function in Figure 2 is a convex function. We can see that a straight line
between any two points on the function always lies above the function. This shows that if a
random variable can take on only two values, then Jensen’s inequality holds. It is relatively
straightforward to extend this to general random variables.
Convex Optimization Overview
Zico Kolter
October 19, 2007
1 Introduction
Many situations arise in machine learning where we would like to optimize the value of
some function. That is, given a function f : Rn → R, we want to find x ∈ Rn that minimizes
(or maximizes) f (x). We have already seen several examples of optimization problems in
class: least-squares, logistic regression, and support vector machines can all be framed as
optimization problems.
It turns out that in the general case, finding the global optimum of a function can be a
very difficult task. However, for a special class of optimization problems, known as convex
optimization problems, we can efficiently find the global solution in many cases. Here,
“efficiently” has both practical and theoretical connotations: it means that we can solve
many real-world problems in a reasonable amount of time, and it means that theoretically
we can solve problems in time that depends only polynomially on the problem size.
The goal of these section notes and the accompanying lecture is to give a very brief
overview of the field of convex optimization. Much of the material here (including some
of the figures) is heavily based on the book Convex Optimization [1] by Stephen Boyd and
Lieven Vandenberghe (available for free online), and EE364, a class taught here at Stanford
by Stephen Boyd. If you are interested in pursuing convex optimization further, these are
both excellent resources.
2 Convex Sets
We begin our look at convex optimization with the notion of a convex set.

Definition 2.1 A set C ⊆ R^n is convex if, for any x, y ∈ C and θ ∈ R with 0 ≤ θ ≤ 1,
we have θx + (1 − θ)y ∈ C.

Intuitively, this means that if we take any two elements in C, and draw a line segment
between these two elements, then every point on that line segment also belongs to C. Figure
1 shows an example of one convex and one non-convex set. The point θx + (1 − θ)y is called
a convex combination of the points x and y.
Figure 1: Examples of (a) a convex set and (b) a non-convex set.
2.1 Examples
• All of Rn . It should be fairly obvious that given any x, y ∈ Rn , θx + (1 − θ)y ∈ Rn .
• The non-negative orthant, R^n_+. The non-negative orthant consists of all vectors in
R^n whose elements are all non-negative: R^n_+ = {x : x_i ≥ 0 ∀i = 1, . . . , n}. To show
that this is a convex set, simply note that given any x, y ∈ R^n_+ and 0 ≤ θ ≤ 1, every
entry of θx + (1 − θ)y satisfies

    (θx + (1 − θ)y)_i = θx_i + (1 − θ)y_i ≥ 0.

• Norm balls. Let ‖·‖ be some norm on R^n (e.g., the Euclidean norm, ‖x‖₂ =
√(Σ_{i=1}^{n} x_i²)). Then the set {x : ‖x‖ ≤ 1} is a convex set. To see this, suppose x, y ∈ R^n,
with ‖x‖ ≤ 1, ‖y‖ ≤ 1, and 0 ≤ θ ≤ 1. Then

    ‖θx + (1 − θ)y‖ ≤ ‖θx‖ + ‖(1 − θ)y‖ = θ‖x‖ + (1 − θ)‖y‖ ≤ 1,

where we used the triangle inequality and the positive homogeneity of norms.
• Intersections of convex sets. Suppose C1, C2, . . . , Ck are convex sets. Then their
intersection

    ∩_{i=1}^{k} C_i = {x : x ∈ C_i ∀i = 1, . . . , k}

is also a convex set. To see this, consider x, y ∈ ∩_{i=1}^{k} C_i and 0 ≤ θ ≤ 1. Then,

    θx + (1 − θ)y ∈ C_i   ∀i = 1, . . . , k

by the definition of a convex set. Therefore

    θx + (1 − θ)y ∈ ∩_{i=1}^{k} C_i.
Note, however, that the union of convex sets in general will not be convex.
• Positive semidefinite matrices. The set of all symmetric positive semidefinite
matrices, oftentimes called the positive semidefinite cone and denoted S^n_+, is a convex
set (in general, Sn ⊂ Rn×n denotes the set of symmetric n × n matrices). Recall that
a matrix A ∈ Rn×n is symmetric positive semidefinite if and only if A = AT and for
all x ∈ Rn , xT Ax ≥ 0. Now consider two symmetric positive semidefinite matrices
A, B ∈ Sn+ and 0 ≤ θ ≤ 1. Then for any x ∈ Rn ,
xT (θA + (1 − θ)B)x = θxT Ax + (1 − θ)xT Bx ≥ 0.
The same logic can be used to show that the sets of all positive definite, negative
definite, and negative semidefinite matrices are each also convex.
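A numeric illustration (ours, not from the notes) of this argument: the smallest eigenvalue of θA + (1 − θ)B stays nonnegative for symmetric PSD A and B:

```python
import numpy as np

rng = np.random.default_rng(0)
M1 = rng.standard_normal((3, 3))
M2 = rng.standard_normal((3, 3))
A = M1 @ M1.T   # M M^T is always symmetric positive semidefinite
B = M2 @ M2.T

# Smallest eigenvalue of every convex combination, allowing tiny numerical error.
ok = all(
    np.linalg.eigvalsh(t * A + (1 - t) * B).min() >= -1e-10
    for t in np.linspace(0.0, 1.0, 11)
)
```

This is just a spot check of the algebraic proof above, not a substitute for it.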
3 Convex Functions
A central element in convex optimization is the notion of a convex function.
Definition 3.1 A function f : Rn → R is convex if its domain (denoted D(f )) is a convex
set, and if, for all x, y ∈ D(f ) and θ ∈ R, 0 ≤ θ ≤ 1,
f (θx + (1 − θ)y) ≤ θf (x) + (1 − θ)f (y).
Intuitively, the way to think about this definition is that if we pick any two points on the
graph of a convex function and draw a straight line between them, then the portion of the
function between these two points will lie below this straight line. This situation is pictured
in Figure 2.²
We say a function is strictly convex if Definition 3.1 holds with strict inequality for
x 6= y and 0 < θ < 1. We say that f is concave if −f is convex, and likewise that f is
strictly concave if −f is strictly convex.
²Don’t worry too much about the requirement that the domain of f be a convex set. This is just a
technicality to ensure that f(θx + (1 − θ)y) is actually defined (if D(f) were not convex, then it could be
that f(θx + (1 − θ)y) is undefined even though x, y ∈ D(f)).
Figure 2: Graph of a convex function. By the definition of convex functions, the line connecting two points on the graph must lie above the function.
³Recall that the gradient is defined as ∇_x f(x) ∈ R^n, with (∇_x f(x))_i = ∂f(x)/∂x_i. For a review of
gradients and Hessians, see the previous section notes on linear algebra.
3.2 Second Order Condition for Convexity
Suppose a function f : R^n → R is twice differentiable (i.e., the Hessian ∇²_x f(x) is defined
for all points x in the domain of f). Then f is convex if and only if D(f) is a convex set and
its Hessian is positive semidefinite: i.e., for any x ∈ D(f),

    ∇²_x f(x) ⪰ 0.

Here, the notation ‘⪰’ when used in conjunction with matrices refers to positive semidefiniteness,
rather than componentwise inequality. In one dimension, this is equivalent to the
condition that the second derivative f′′(x) always be nonnegative (i.e., the function always has
nonnegative curvature).
Again analogous to both the definition and first order conditions for convexity, f is strictly
convex if its Hessian is positive definite, concave if the Hessian is negative semidefinite, and
strictly concave if the Hessian is negative definite.
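A sketch (ours, not from the notes) of the second order condition applied to quadratics f(x) = ½xᵀQx + bᵀx, whose Hessian is the constant matrix Q:

```python
import numpy as np

def quadratic_is_convex(Q):
    """f(x) = 0.5 x^T Q x + b^T x has constant Hessian Q; it is convex iff Q is PSD.
    We test positive semidefiniteness via the smallest eigenvalue."""
    Q = 0.5 * (Q + Q.T)                    # symmetrize before calling eigvalsh
    return np.linalg.eigvalsh(Q).min() >= -1e-10

convex = quadratic_is_convex(np.array([[2.0, 0.0], [0.0, 1.0]]))   # PSD Hessian
saddle = quadratic_is_convex(np.array([[1.0, 0.0], [0.0, -1.0]]))  # indefinite Hessian
```

The first quadratic is a bowl (convex); the second is a saddle, and the negative eigenvalue of its Hessian certifies non-convexity.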
3.4 Sublevel Sets
Convex functions give rise to a particularly important type of convex set called an α-sublevel
set. Given a convex function f : Rn → R and a real number α ∈ R, the α-sublevel set is
defined as
{x ∈ D(f ) : f (x) ≤ α}.
In other words, the α-sublevel set is the set of all points x such that f (x) ≤ α.
To show that this is a convex set, consider any x, y ∈ D(f) such that f(x) ≤ α and
f(y) ≤ α. Then, for any 0 ≤ θ ≤ 1,

    f(θx + (1 − θ)y) ≤ θf(x) + (1 − θ)f(y) ≤ θα + (1 − θ)α = α,

so θx + (1 − θ)y also lies in the α-sublevel set.
3.5 Examples
We begin with a few simple examples of convex functions of one variable, then move on to
multivariate functions.
• Quadratic functions. For a quadratic function f(x) = (1/2) x^T A x + b^T x + c with
symmetric A ∈ S^n, the Hessian is the constant matrix

    ∇²_x f(x) = A,

so by the second order condition, f is convex if and only if A is positive semidefinite.
• Norms. Let f : Rn → R be some norm on Rn . Then by the triangle inequality and
positive homogeneity of norms, for x, y ∈ Rn , 0 ≤ θ ≤ 1,
f (θx + (1 − θ)y) ≤ f (θx) + f ((1 − θ)y) = θf (x) + (1 − θ)f (y).
This is an example of a convex function where it is not possible to prove convexity based
on the second or first order conditions, because norms are not generally differentiable
everywhere (e.g., the 1-norm, ‖x‖₁ = Σ_{i=1}^{n} |x_i|, is non-differentiable at all points where
any x_i is equal to zero).
• Nonnegative weighted sums of convex functions. Let f1, f2, . . . , fk be convex
functions and w1, w2, . . . , wk be nonnegative real numbers. Then

    f(x) = Σ_{i=1}^{k} w_i f_i(x)

is also a convex function.
It is important to note the direction of these inequalities: a convex function gi must be
constrained to be less than or equal to zero. This is because the 0-sublevel set of gi is a convex set,
so the feasible region, which is the intersection of many convex sets, is also convex (recall that
affine subspaces are convex sets as well). If we were to require that gi ≥ 0 for some convex gi,
the feasible region would no longer be a convex set, and the algorithms we apply for solving
these problems would no longer be guaranteed to find the global optimum. Also notice that
only affine functions are allowed to be equality constraints. Intuitively, you can think of this
as being due to the fact that an equality constraint is equivalent to the two inequalities hi ≤ 0
and hi ≥ 0. However, these will both be valid constraints if and only if hi is both convex and
concave, i.e., hi must be affine.
The optimal value of an optimization problem is denoted p⋆ (or sometimes f⋆) and is
equal to the minimum possible value of the objective function in the feasible region.⁷
We allow p⋆ to take on the values +∞ and −∞ when the problem is either infeasible (the
feasible region is empty) or unbounded below (there exist feasible points such that f(x) →
−∞), respectively. We say that x⋆ is an optimal point if f(x⋆) = p⋆. Note that there can
be more than one optimal point, even when the optimal value is finite.
Definition 4.1 A point x is locally optimal if it is feasible (i.e., it satisfies the constraints
of the optimization problem) and if there exists some R > 0 such that all feasible points z
with ‖x − z‖₂ ≤ R satisfy f(x) ≤ f(z).
Definition 4.2 A point x is globally optimal if it is feasible and for all feasible points z,
f (x) ≤ f (z).
We now come to the crucial element of convex optimization problems, from which they
derive most of their utility. The key idea is that for a convex optimization problem
all locally optimal points are globally optimal .
Let’s give a quick proof of this property by contradiction. Suppose that x is a locally
optimal point which is not globally optimal, i.e., there exists a feasible point y such that
⁷Math majors might note that the min here should more correctly be an inf. We won’t worry
about such technicalities, and use min for simplicity.
f(x) > f(y). By the definition of local optimality, there exist no feasible points z such that
‖x − z‖₂ ≤ R and f(z) < f(x). But now suppose we choose the point
    z = θy + (1 − θ)x   with   θ = R / (2‖x − y‖₂).

Then

    ‖x − z‖₂ = ‖ x − (R / (2‖x − y‖₂)) y − (1 − R / (2‖x − y‖₂)) x ‖₂
             = ‖ (R / (2‖x − y‖₂)) (x − y) ‖₂
             = R/2 ≤ R.
In addition, by the convexity of f we have
f (z) = f (θy + (1 − θ)x) ≤ θf (y) + (1 − θ)f (x) < f (x).
Furthermore, since the feasible set is a convex set, and since x and y are both feasible,
z = θy + (1 − θ)x will be feasible as well. Therefore, z is a feasible point with ‖x − z‖₂ ≤ R
and f(z) < f(x). This contradicts our assumption, showing that x cannot be locally optimal.
where again x ∈ Rn is the optimization variable, c ∈ Rn , d ∈ R, G ∈ Rm×n , h ∈ Rm ,
A ∈ Rp×n , b ∈ Rp are defined by the problem, but we also have P ∈ Sn+ , a symmetric
positive semidefinite matrix.
• Semidefinite Programming. This last example is a bit more complex than the previous
ones, so don’t worry if it doesn’t make much sense at first. However, semidefinite
programming is becoming more and more prevalent in many different areas of machine
learning research, so you might encounter these at some point, and it is good to have an
idea of what they are. We say that a convex optimization problem is a semidefinite
program (SDP) if it is of the form
    minimize    tr(CX)
    subject to  tr(A_i X) = b_i,  i = 1, . . . , p
                X ⪰ 0

where the symmetric matrix X ∈ S^n is the optimization variable, the symmetric matrices
C, A1, . . . , Ap ∈ S^n are defined by the problem, and the constraint X ⪰ 0 means
that we are constraining X to be positive semidefinite. This looks a bit different than
the problems we have seen previously, since the optimization variable is now a matrix
instead of a vector. If you are curious as to why such a formulation might be useful,
you should look into a more advanced course or book on convex optimization.
It should be fairly obvious from the definitions that quadratic programs are more general
than linear programs (since a linear program is just a special case of a quadratic program
where P = 0), and likewise that quadratically constrained quadratic programs are more
general than quadratic programs. However, what is not obvious at all is that semidefinite
programs are in fact more general than all the previous types. That is, any quadratically
constrained quadratic program (and hence any quadratic program or linear program) can
be expressed as a semidefinite program. We won’t discuss this relationship further in this
document, but this might give you just a small idea as to why semidefinite programming
could be useful.
4.3 Examples
Now that we’ve covered plenty of the boring math and formalisms behind convex optimiza-
tion, we can finally get to the fun part: using these techniques to solve actual problems.
We’ve already encountered a few such optimization problems in class, and in nearly every
field, there is a good chance that someone has tried to apply convex optimization to solve
some problem.
• Support Vector Machines. One of the most prevalent applications of convex op-
timization methods in machine learning is the support vector machine classifier. As
discussed in class, finding the support vector classifier (in the case with slack variables)
can be formulated as the optimization problem
    minimize    (1/2)‖w‖₂² + C Σ_{i=1}^{m} ξ_i
    subject to  y^{(i)}(w^T x^{(i)} + b) ≥ 1 − ξ_i,  i = 1, . . . , m
                ξ_i ≥ 0,  i = 1, . . . , m
with optimization variables w ∈ R^n, ξ ∈ R^m, b ∈ R, and where C ∈ R and x^{(i)}, y^{(i)}, i =
1, . . . , m are defined by the problem. This is an example of a quadratic program, and we can
try to put the problem into the form described in the previous section. In particular,
if we define k = m + n + 1 and let the optimization variable be
    x ∈ R^k ≡ [ w ]
              [ ξ ]
              [ b ]

and define the matrices

    P ∈ R^{k×k} = [ I 0 0 ]        c ∈ R^k = [  0  ]
                  [ 0 0 0 ]                  [ C·1 ]
                  [ 0 0 0 ]                  [  0  ]

    G ∈ R^{2m×k} = [ −diag(y)X  −I  −y ]        h ∈ R^{2m} = [ −1 ]
                   [     0      −I   0 ]                     [  0 ]
where I is the identity, 1 is the vector of all ones, and X and y are defined as in class,
    X ∈ R^{m×n} = [ x^{(1)T} ]        y ∈ R^m = [ y^{(1)} ]
                  [ x^{(2)T} ]                  [ y^{(2)} ]
                  [    ...   ]                  [   ...   ]
                  [ x^{(m)T} ]                  [ y^{(m)} ]
You should try to convince yourself that the quadratic program described in the previous
section, when using the matrices defined above, is equivalent to the SVM
optimization problem. In reality, it is fairly easy to see that the SVM optimization
problem has a quadratic objective and linear constraints, so we typically don’t
need to put it into standard form to “prove” that it is a QP; we would only do so if
we were using an off-the-shelf solver that requires the input to be in standard form.
• Constrained least squares. In class we have also considered the least squares problem,
where we want to minimize ‖Ax − b‖₂² for some matrix A ∈ R^{m×n} and b ∈ R^m.
As we saw, this particular problem can actually be solved analytically via the normal
equations. However, suppose that we also want to constrain the entries in the solution
x to lie within some predefined ranges. In other words, suppose we wanted to solve
the optimization problem,

    minimize    (1/2)‖Ax − b‖₂²
    subject to  l ⪯ x ⪯ u
with optimization variable x and problem data A ∈ Rm×n , b ∈ Rm , l ∈ Rn , and u ∈ Rn .
This might seem like a fairly simple additional constraint, but it turns out that there
will no longer be an analytical solution. However, you should be able to convince
yourself that this optimization problem is a quadratic program, with matrices defined
by

    P ∈ R^{n×n} = A^T A,    c ∈ R^n = −A^T b,    d ∈ R = (1/2) b^T b,

    G ∈ R^{2n×n} = [ −I ]        h ∈ R^{2n} = [ −l ]
                   [  I ]                     [  u ]
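Although there is no analytical solution, the problem is easy to solve numerically. Below is a minimal projected-gradient sketch (our own method choice, not one prescribed in these notes; the step size 1/‖AᵀA‖₂ and the toy data are assumptions made for illustration):

```python
import numpy as np

def box_least_squares(A, b, l, u, iters=500):
    """Minimize 0.5 * ||Ax - b||^2 subject to l <= x <= u by projected gradient."""
    x = np.clip(np.zeros(A.shape[1]), l, u)
    step = 1.0 / np.linalg.norm(A.T @ A, 2)   # 1/L, L = largest eigenvalue of A^T A
    for _ in range(iters):
        grad = A.T @ (A @ x - b)              # gradient of the smooth objective
        x = np.clip(x - step * grad, l, u)    # gradient step, then project onto the box
    return x

# Toy problem (assumed data): the unconstrained optimum is (5/3, 8/3),
# so the box [0, 1]^2 makes both constraints active at the solution.
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b = np.array([2.0, 3.0, 4.0])
l, u = np.zeros(2), np.ones(2)
x_hat = box_least_squares(A, b, l, u)
```

Because the problem is convex, the projected-gradient iterates converge to the global optimum, here the corner (1, 1) of the box.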
• Maximum Likelihood for Logistic Regression. For homework one, you were
required to show that the log-likelihood of the data in a logistic model was concave.
The log-likelihood under such a model is

    ℓ(θ) = Σ_{i=1}^{n} [ y^{(i)} ln g(θ^T x^{(i)}) + (1 − y^{(i)}) ln(1 − g(θ^T x^{(i)})) ]
where g(z) denotes the logistic function g(z) = 1/(1 + e−z ). Finding the maximum
likelihood estimate is then a task of maximizing the log-likelihood (or equivalently,
minimizing the negative log-likelihood, a convex function), i.e.,
minimize −ℓ(θ)
with optimization variable θ ∈ Rn and no constraints.
Unlike the previous two examples, it turns out that it is not so easy to put this prob-
lem into a “standard” form optimization problem. Nevertheless, you’ve seen on the
homework that the fact that ℓ is a concave function means that you can very efficiently
find the global solution using an algorithm such as Newton’s method.
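A minimal sketch (ours, on a small invented dataset) of Newton's method minimizing the negative log-likelihood −ℓ(θ):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Invented four-sample dataset: first column is an intercept feature.
X = np.array([[1.0, -2.0], [1.0, 1.0], [1.0, -1.0], [1.0, 2.0]])
y = np.array([0.0, 0.0, 1.0, 1.0])

theta = np.zeros(2)
for _ in range(15):
    h = sigmoid(X @ theta)
    grad = X.T @ (h - y)                      # gradient of -l(theta)
    H = X.T @ ((h * (1 - h))[:, None] * X)    # Hessian of -l(theta)
    theta -= np.linalg.solve(H, grad)         # Newton step
```

Since −ℓ is convex, the stationary point Newton's method finds (the gradient vanishes at convergence) is the global maximum-likelihood estimate.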
References
[1] Stephen Boyd and Lieven Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
Online: http://www.stanford.edu/~boyd/cvxbook/
Convex Optimization Overview (cnt’d)
Chuong B. Do
October 26, 2007
1 Recap
During last week’s section, we began our study of convex optimization, the study of
mathematical optimization problems of the form,

    minimize_{x ∈ R^n}  f(x)
    subject to  g_i(x) ≤ 0,  i = 1, . . . , m,    (1)
                h_i(x) = 0,  i = 1, . . . , p,
where x ∈ Rn is the optimization variable, f : Rn → R and gi : Rn → R are convex functions,
and hi : Rn → R are affine functions. In a convex optimization problem, the convexity of both
the objective function f and the feasible region (i.e., the set of x’s satisfying all constraints)
allows us to conclude that any feasible locally optimal point must also be globally optimal.
This fact provides the key intuition for why convex optimization problems can in general be
solved efficiently.
In these lecture notes, we continue our foray into the field of convex optimization. In
particular, we will introduce the theory of Lagrange duality for convex optimization problems
with inequality and equality constraints. We will also discuss generic yet efficient algorithms
for solving convex optimization problems, and then briefly mention directions for further
exploration.
2 Duality
To explain the fundamental ideas behind duality theory, we start with a motivating example
based on CS 229 homework grading. We prove a simple weak duality result in this setting,
and then relate it to duality in optimization. We then discuss strong duality and the KKT
optimality conditions.
Suppose the CS 229 TAs decide to economize on their work load for the quarter by grading only one problem on
each submitted problem set. Nevertheless, they also require that every student submit
an attempted solution to every problem (a requirement which, if violated, would lead to
automatic failure of the course).
Because they are extremely cold-hearted¹, the TAs always try to ensure that the students
lose as many points as possible; if the TAs grade a problem that the student did not attempt,
the number of points lost is set to +∞ to denote automatic failure in the course. Conversely,
each student in the course seeks to minimize the number of points lost on his or her assign-
ments, and thus must decide on a strategy—i.e., an allocation of time to problems—that
minimizes the number of points lost on the assignment.
The struggle between student and TAs can be summarized in a matrix A = (aij ) ∈ Rn×m ,
whose columns correspond to different problems that the TAs might grade, and whose rows
correspond to different strategies for time allocation that the student might use for the
problem set. For example, consider the following matrix,

    A = [  5    5    5    5    5 ]
        [  8    8    1    8    8 ]
        [ +∞   +∞   +∞    0   +∞ ]
Here, the student must decide between three strategies (corresponding to the three rows of
the matrix, A):
• i = 1: she invests an equal effort into all five problems and hence loses at most 5 points
on each problem,
• i = 2: she invests more time into problem 3 than the other four problems, and
• i = 3: she skips four problems in order to guarantee no points lost on problem 4.
Similarly, the TAs must decide between five strategies (j ∈ {1, 2, 3, 4, 5}) corresponding to
the choice of problem graded.
If the student is forced to submit the homework without knowing the TAs choice of
problem to be graded, and if the TAs are allowed to decide which problem to grade after
having seen the student’s problem set, then the number of points she loses will be:

    p∗ = min_i max_j a_ij   (= 5 in the example above)    (P)
where the order of the minimization and maximization reflect that for each fixed student time
allocation strategy i, the TAs will have the opportunity to choose the worst scoring problem
max_j a_ij to grade. However, if the TAs announce beforehand which homework problem will
be graded, then the number of points lost will be:

    d∗ = max_j min_i a_ij   (= 0 in the example above)    (D)
where this time, for each possible announced homework problem j to be graded, the student
will have the opportunity to choose the optimal time allocation strategy, mini aij , which loses
¹Clearly, this is a fictional example. The CS 229 TAs want you to succeed. Really, we do.
her the fewest points. Here, (P) is called the primal optimization problem whereas (D) is
called the dual optimization problem. Rows containing +∞ values correspond to strategies
where the student has flagrantly violated the TAs demand that all problems be attempted;
for reasons which will become clear later, we refer to these rows as being primal-infeasible.
In the example, the value of the dual problem is lower than that of the primal problem,
i.e., d∗ = 0 < 5 = p∗ . This intuitively makes sense: the second player in this adversarial
game has the advantage of knowing his/her opponent’s strategy. This principle, however,
holds more generally:
Theorem 2.1 (Weak duality). For any matrix A = (aij) ∈ R^{m×n}, it is always the case that

d∗ = max_j min_i aij ≤ min_i max_j aij = p∗.
Proof. Let (id , jd ) be the row and column associated with d∗ , and let (ip , jp ) be the row and
column associated with p∗. We have

d∗ = a_{id jd} ≤ a_{ip jd} ≤ a_{ip jp} = p∗.
Here, the first inequality follows from the fact that aid jd is the smallest element in the jd th
column (i.e., id was the strategy chosen by the student after the TAs chose problem jd , and
hence, it must correspond to the fewest points lost in that column). Similarly, the second
inequality follows from the fact that aip jp is the largest element in the ip th row (i.e., jp was
the problem chosen by the TAs after the student picked strategy ip , so it must correspond
to the most points lost in that row).
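Weak duality is easy to check numerically. A quick sketch (the 2 × 2 payoff matrix below is made up for illustration; it is not the grading matrix from the example above):

```python
def primal_value(A):
    """p* = min over rows of (max over columns): the row player commits first."""
    return min(max(row) for row in A)

def dual_value(A):
    """d* = max over columns of (min over rows): the column player commits first."""
    return max(min(col) for col in zip(*A))

# A made-up 2x2 game with a duality gap: committing first is a disadvantage.
A = [[1.0, 3.0],
     [4.0, 2.0]]
p_star = primal_value(A)   # min(3, 4) = 3
d_star = dual_value(A)     # max(1, 2) = 2
assert d_star <= p_star    # weak duality holds for any matrix
```

The second player's advantage shows up as the gap p∗ − d∗ = 1 in this made-up game.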
Now consider a general constrained optimization problem of the form

minimize_x  f(x)
subject to  gi(x) ≤ 0,  i = 1, . . . , m,    (1)
            hi(x) = 0,  i = 1, . . . , p.

We define its Lagrangian to be the function

L(x, λ, ν) = f(x) + Σ_{i=1}^m λi gi(x) + Σ_{i=1}^p νi hi(x).

Here, the variables λ and ν are called the dual variables (or Lagrange multipliers).
Analogously, the variables x are known as the primal variables.
The correspondence between primal/dual optimization and game playing can be pictured
informally using an infinite matrix whose rows are indexed by x ∈ Rn and whose columns
are indexed by (λ, ν) ∈ R^m_+ × R^p (i.e., λi ≥ 0, for i = 1, . . . , m). In particular, we have

A = [    ⋱            ⋮
       · · ·    L(x, λ, ν)    · · ·
                 ⋮            ⋱    ]
Here, the “student” manipulates the primal variables x in order to minimize the Lagrangian
L(x, λ, ν) while the “TAs” manipulate the dual variables (λ, ν) in order to maximize the
Lagrangian.
To see the relationship between this game and the original optimization problem, we
formulate the following primal problem:
p∗ = min_x max_{λ,ν : λi ≥ 0} L(x, λ, ν) = min_x θP(x)    (P’)
where θP (x) := maxλ,ν:λi ≥0 L(x, λ, ν). Computing p∗ is equivalent to our original convex
optimization primal in the following sense: for any candidate solution x,
• if gi(x) > 0 for some i ∈ {1, . . . , m}, then setting λi = ∞ gives θP(x) = ∞.
• if hi(x) ≠ 0 for some i ∈ {1, . . . , p}, then setting νi = ∞ · sign(hi(x)) gives θP(x) = ∞.
• if x is feasible (i.e., x obeys all the constraints of our original optimization problem),
then θP (x) = f (x), where the maximum is obtained, for example, by setting all of the
λi ’s and νi ’s to zero.
Intuitively then, θP (x) behaves conceptually like an “unconstrained” version of the original
constrained optimization problem in which the infeasible region of f is “carved away” by
forcing θP (x) = ∞ for any infeasible x; thus, only points in the feasible region are left
as candidate minimizers. This idea of using penalties to ensure that minimizers stay in the
feasible region will come up later when we talk about barrier algorithms for convex optimization.
By analogy to the CS 229 grading example, we can form the following dual problem:
d∗ = max_{λ,ν : λi ≥ 0} min_x L(x, λ, ν) = max_{λ,ν : λi ≥ 0} θD(λ, ν)    (D’)
where θD (λ, ν) := minx L(x, λ, ν). Dual problems can often be easier to solve than their
corresponding primal problems. In the case of SVMs, for instance, SMO is a dual optimiza-
tion algorithm which considers joint optimization of pairs of dual variables. Its simple form
derives largely from the simplicity of the dual objective and the simplicity of the correspond-
ing constraints on the dual variables. Primal-based SVM solutions are indeed possible, but
when the number of training examples is large and the kernel matrix K of inner products
Kij = K(x(i) , x(j) ) is large, dual-based optimization can be considerably more efficient.
Using an argument essentially identical to that presented in Theorem (2.1), we can show
that in this setting, we again have d∗ ≤ p∗ . This is the property of weak duality for
general optimization problems. Weak duality can be particularly useful in the design of
optimization algorithms. For example, suppose that during the course of an optimization
algorithm we have a candidate primal solution x and dual-feasible vector (λ, ν) such that
θP(x) − θD(λ, ν) ≤ ε. From weak duality, we have that

θD(λ, ν) ≤ d∗ ≤ p∗ ≤ θP(x),

implying that x and (λ, ν) must be ε-optimal (i.e., their objective functions differ by no more
than ε from the objective functions of the true optima x∗ and (λ∗, ν∗)).
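As a concrete illustration, take the made-up one-dimensional problem of minimizing x² subject to 1 − x ≤ 0, whose optimal value is p∗ = d∗ = 1. Any primal-feasible x and dual-feasible λ sandwich the optimum:

```python
def theta_P(x):
    """theta_P(x) = max over lam >= 0 of x^2 + lam*(1 - x):
    equals x^2 when x is feasible (1 - x <= 0), +infinity otherwise."""
    return x * x if 1 - x <= 0 else float("inf")

def theta_D(lam):
    """theta_D(lam) = min over x of x^2 + lam*(1 - x), attained at x = lam/2."""
    return lam - lam * lam / 4.0

x_cand, lam_cand = 1.1, 1.8            # feasible primal and dual candidates
gap = theta_P(x_cand) - theta_D(lam_cand)
# theta_D(1.8) = 0.99 <= d* = p* = 1 <= theta_P(1.1) = 1.21,
# so both candidates are certified to be within gap = 0.22 of optimal.
assert theta_D(lam_cand) <= 1.0 <= theta_P(x_cand)
```

The assertion is exactly the weak-duality sandwich from the displayed inequality, evaluated at one pair of candidates.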
In practice, the dual objective θD (λ, ν) can often be found in closed form, thus allowing
the dual problem (D’) to depend only on the dual variables λ and ν. When the Lagrangian is
differentiable with respect to x, then a closed-form for θD (λ, ν) can often be found by setting
the gradient of the Lagrangian to zero, so as to ensure that the Lagrangian is minimized
with respect to x.2 An example derivation of the dual problem for the L1 soft-margin SVM
is shown in the Appendix.
Theorem 2.2. Consider a convex optimization problem of the form (1), whose corresponding
primal and dual problems are given by (P’) and (D’). If there exists a primal feasible x for
²Often, differentiating the Lagrangian with respect to x leads to the generation of additional requirements
on dual variables that must hold at any fixed point of the Lagrangian with respect to x. When these
constraints are not satisfied, one can show that the Lagrangian is unbounded below (i.e., θD(λ, ν) = −∞).
Since such points are clearly not optimal solutions for the dual problem, we can simply exclude them from
the domain of the dual problem altogether by adding the derived constraints to the existing constraints of the
dual problem. An example of this is the derived constraint, Σ_{i=1}^m αi y^(i) = 0, in the SVM formulation. This
procedure of incorporating derived constraints into the dual problem is known as making dual constraints
explicit (see [1], page 224).
which each inequality constraint is strictly satisfied (i.e., gi(x) < 0), then d∗ = p∗.³
The proof of this theorem is beyond the scope of this course. We will, however, point out
its application to the soft-margin SVMs described in class. Recall that soft-margin SVMs
were found by solving
minimize_{w,b,ξ}  (1/2)‖w‖² + C Σ_{i=1}^m ξi
subject to  y^(i)(w^T x^(i) + b) ≥ 1 − ξi,  i = 1, . . . , m,
            ξi ≥ 0,  i = 1, . . . , m.
Slater’s condition applies provided we can find at least one primal feasible setting of w, b,
and ξ where all inequalities are strict. It is easy to verify that w = 0, b = 0, ξ = 2 · 1 satisfies
these conditions (where 0 and 1 denote the vectors of all 0's and all 1's, respectively), since

1 − ξi − y^(i)(w^T x^(i) + b) = 1 − 2 − 0 = −1 < 0,  i = 1, . . . , m,

and the remaining m inequalities (−ξi = −2 < 0) are trivially strictly satisfied. Hence, strong duality holds,
so the optimal values of the primal and dual soft-margin SVM problems will be equal.
Theorem 2.3. If x̃ is primal feasible and (λ̃, ν̃) are dual feasible, and if

∇x L(x̃, λ̃, ν̃) = 0,    (KKT1)
λ̃i gi(x̃) = 0,  i = 1, . . . , m,    (KKT2)

then x̃ is primal optimal, (λ̃, ν̃) are dual optimal, and strong duality holds.
Theorem 2.4. If Slater's condition holds, then the conditions of Theorem 2.3 are necessary for
any (x∗, λ∗, ν∗) such that x∗ is primal optimal and (λ∗, ν∗) are dual optimal.
³One can actually show a more general version of Slater's condition, which requires only strict satisfaction
of non-affine inequality constraints (but allows affine inequalities to be satisfied with equality). See [1],
page 226.
(KKT1) is the standard gradient stationarity condition found for unconstrained differentiable
optimization problems. The set of inequalities corresponding to (KKT2) are known as the
KKT complementarity (or complementary slackness) conditions. In particular,
if x∗ is primal optimal and (λ∗, ν∗) is dual optimal, then (KKT2) implies that

λ∗i > 0 ⇒ gi(x∗) = 0    and    gi(x∗) < 0 ⇒ λ∗i = 0.

That is, whenever λ∗i is greater than zero, its corresponding inequality constraint must be
tight; conversely, any strictly satisfied inequality must have λ∗i equal to zero. Thus, we
can interpret the dual variables λ∗i as measuring the “importance” of a particular constraint
in characterizing the optimal point.
This interpretation provides an intuitive explanation for the difference between hard-
margin and soft-margin SVMs. Recall the dual problem for a hard-margin SVM:

maximize_α  Σ_{i=1}^m αi − (1/2) Σ_{i=1}^m Σ_{j=1}^m αi αj y^(i) y^(j) ⟨x^(i), x^(j)⟩
subject to  αi ≥ 0,  i = 1, . . . , m,    (2)
            Σ_{i=1}^m αi y^(i) = 0.
Note that the only difference in the soft-margin formulation is the introduction of upper
bounds on the dual variables αi . Effectively, this upper bound constraint limits the influence
of any single primal inequality constraint (i.e., any single training example) on the decision
boundary, leading to improved robustness for the L1 soft-margin model.
What consequences do the KKT conditions have for practical optimization algorithms?
When Slater's condition holds, the KKT conditions are both necessary and sufficient for
primal/dual optimality of a candidate primal solution x̃ and a corresponding dual solution
(λ̃, ν̃). Therefore, many optimization algorithms work by trying to guarantee that the KKT
conditions are satisfied; the SMO algorithm, for instance, works by iteratively identifying
Lagrange multipliers for which the corresponding KKT conditions are unsatisfied and then
“fixing” KKT complementarity.4
⁴See [1], pages 244-245 for an example of an optimization problem where the KKT conditions can be
solved directly, thus skipping the need for primal/dual optimization altogether.
3 Algorithms for convex optimization
Thus far, we have talked about convex optimization problems and their properties. But
how does one solve a convex optimization problem in practice? In this section, we describe
a generic strategy for solving convex optimization problems known as the interior-point
method. This method combines a safe-guarded variant of Newton’s algorithm with a “bar-
rier” technique for enforcing inequality constraints.
In the unconstrained setting, Newton's method repeatedly applies the update

x_{t+1} = x_t − (∇²x f(x_t))^{−1} ∇x f(x_t),

assuming ∇²x f(x_t) is positive definite (and hence, full rank). This, of course, is the standard
Newton algorithm for unconstrained minimization.
While Newton’s method converges quickly if given an initial point near the minimum, for
points far from the minimum, Newton’s method can sometimes diverge (as you may have
discovered in problem 1 of Problem Set #1 if you picked an unfortunate initial point!). A
simple fix for this behavior is to use a line-search procedure. Define the search direction d
to be

d := (∇²x f(x_t))^{−1} ∇x f(x_t).
A line-search procedure is an algorithm for finding an appropriate step size γ ≥ 0 such that
the iteration
xt+1 = xt − γ · d
will ensure that the function f decreases by a sufficient amount (relative to the size of the
step taken) during each iteration.
One simple yet effective method for doing this is called a backtracking line search.
In this method, one initially sets γ to 1 and then iteratively reduces γ by a multiplicative
factor β until f(x_t − γ · d) is sufficiently smaller than f(x_t):

Backtracking line-search
1. Choose β ∈ (0, 1) and α ∈ (0, 0.5); set γ := 1.
2. While f(x_t − γ · d) > f(x_t) − α · γ · ∇x f(x_t)^T d, set γ := β · γ.

Since the function f is known to decrease locally near x_t in the direction of −d, such a
step will be found, provided γ is small enough. For more details, see [1], pages 464-466.
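A minimal sketch of damped Newton with a backtracking line search; the test function and the constants α = 0.25, β = 0.5 are arbitrary illustrative choices, not values from the text:

```python
import numpy as np

def newton_backtracking(f, grad, hess, x0, alpha=0.25, beta=0.5, tol=1e-8, max_iter=50):
    """Damped Newton: take the Newton direction d = H^{-1} grad, then shrink
    the step size gamma until a sufficient-decrease condition holds."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        d = np.linalg.solve(hess(x), g)   # Newton search direction
        gamma = 1.0                       # try the full Newton step first
        while f(x - gamma * d) > f(x) - alpha * gamma * g.dot(d):
            gamma *= beta                 # backtrack
        x = x - gamma * d
    return x

# Made-up test function: f(x) = x1^4 + x2^2, minimized at the origin.
f = lambda x: x[0] ** 4 + x[1] ** 2
grad = lambda x: np.array([4 * x[0] ** 3, 2 * x[1]])
hess = lambda x: np.array([[12 * x[0] ** 2, 0.0], [0.0, 2.0]])
x_min = newton_backtracking(f, grad, hess, [2.0, 3.0])
```

On the quadratic coordinate the full Newton step is exact; on the quartic coordinate the iterates shrink geometrically toward zero, and backtracking never has to reject a step for this particular function.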
In order to use Newton’s method, one must be able to compute and invert the Hessian
matrix ∇2x f (xt ), or equivalently, compute the search direction d indirectly without forming
the Hessian. For some problems, the number of primal variables x is sufficiently large
that computing the Hessian can be very difficult. In many cases, this can be dealt with
by clever use of linear algebra. In other cases, however, we can resort to other nonlinear
minimization schemes, such as quasi-Newton methods, which initially behave like gradient
descent but gradually construct approximations of the inverse Hessian based on the gradients
observed throughout the course of the optimization.5 Alternatively, nonlinear conjugate
gradient schemes (which augment the standard conjugate gradient (CG) algorithm for
solving linear least squares systems with a line-search) provide another generic blackbox tool
for multivariable function minimization which is simple to implement, yet highly effective in
practice.6
Returning to constrained optimization, consider a problem with inequality constraints only:⁷

minimize_x  f(x)
subject to  gi(x) ≤ 0,  i = 1, . . . , m.    (4)
⁵For more information on quasi-Newton methods, the standard reference is Jorge Nocedal and Stephen
J. Wright's textbook, Numerical Optimization.
⁶For an excellent tutorial on the conjugate gradient method, see Jonathan Shewchuk's tutorial, available
at: http://www.cs.cmu.edu/∼quake-papers/painless-conjugate-gradient.pdf
⁷In practice, there are many ways of dealing with equality constraints. Sometimes, we can eliminate
equality constraints by either reparameterizing the original primal problem, or converting to the dual
problem. A more general strategy is to rely on equality-constrained variants of Newton's algorithms which
ensure that the equality constraints are satisfied at every iteration of the optimization. For a more complete
treatment of this topic, see [1], Chapter 10.
We will also assume knowledge of a feasible starting point x0 which satisfies all of our
constraints with strict inequality (as needed for Slater’s condition to hold).8
Recall that in our discussion of the Lagrangian-based formulation of the primal problem,
we stated that the inner maximization, maxλ:λi ≥0 L(x, λ), was constructed in such a way
that the infeasible region of f was “carved away”, leaving only points in the feasible region
as candidate minima. The same idea of using penalties to ensure that minimizers stay in the
feasible region is the basis of barrier-based optimization. Specifically, if B(z) is the barrier
function

B(z) = { 0    z < 0
         ∞    z ≥ 0,

then the optimization problem (4) is equivalent to the unconstrained problem

minimize_x  f(x) + Σ_{i=1}^m B(gi(x)).

When gi(x) < 0, the objective of the problem is simply f(x); infeasible points are "carved
away" using the barrier function B(z).
While conceptually correct, optimization using the straight barrier function B(z) is nu-
merically difficult. To ameliorate this, the log-barrier optimization algorithm approximates
the solution to (4) by solving the unconstrained problem

minimize_x  f(x) − (1/t) Σ_{i=1}^m log(−gi(x)),
for some fixed t > 0. Here, the function −(1/t) log(−z) ≈ B(z), and the accuracy of the
approximation increases as t → ∞. Rather than using a large value of t in order to obtain
a good approximation, however, the log-barrier algorithm works by solving a sequence of
unconstrained optimization problems, increasing t each time, and using the solution of the
previous unconstrained optimization problem as the initial point for the next unconstrained
optimization. Furthermore, at each point in the algorithm, the primal solution points stay
strictly in the interior of the feasible region:
⁸For more information on finding feasible starting points for barrier algorithms, see [1], pages 579-585.
For problems where the primal is feasible but not strictly feasible, primal-dual interior point methods are
applicable; these are also described in [1], pages 609-615.
Log-barrier optimization
1. Choose a strictly feasible initial point x₀, an initial t > 0, and a factor μ > 1.
2. Repeat until convergence:
   (a) Compute a solution x′ to the unconstrained problem min_x f(x) − (1/t) Σ_{i=1}^m log(−gi(x))
       using Newton's method, initialized at the current point x.
   (b) Update x := x′, t := μ · t.
One might expect that as t increases, the difficulty of solving each unconstrained minimiza-
tion problem also increases due to numerical issues or ill-conditioning of the optimization
problem. Surprisingly, Nesterov and Nemirovski showed in 1994 that this is not the case
for certain types of barrier functions, including the log-barrier; in particular, by using an
appropriate barrier function, one obtains a general convex optimization algorithm which
takes time polynomial in the dimensionality of the optimization variables and the desired
accuracy!
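A minimal one-variable sketch of the log-barrier outer loop; the test problem (minimize x² subject to x ≥ 1) and every constant are made up for illustration:

```python
def log_barrier_minimize(fp, fpp, g, gp, gpp, x0, t0=1.0, mu=10.0, eps=1e-6):
    """Log-barrier method for: minimize f(x) s.t. g(x) <= 0, with one scalar
    variable and one constraint. Each inner problem
    min_x f(x) - (1/t) log(-g(x)) is solved by Newton's method, warm-started
    at the previous solution; t grows by a factor mu each round."""
    x, t = x0, t0
    while 1.0 / t > eps:                  # m/t suboptimality bound, m = 1 here
        for _ in range(100):              # inner Newton iterations
            # first and second derivatives of f(x) - (1/t) log(-g(x))
            d1 = fp(x) - gp(x) / (t * g(x))
            d2 = fpp(x) - gpp(x) / (t * g(x)) + (gp(x) / g(x)) ** 2 / t
            step = d1 / d2
            while g(x - step) >= 0:       # backtrack to stay strictly feasible
                step /= 2.0
            x -= step
            if abs(step) < 1e-12:
                break
        t *= mu
    return x

# Made-up problem: minimize x^2 subject to x >= 1, i.e. g(x) = 1 - x <= 0;
# the constrained optimum is x = 1. Start from the strictly feasible x0 = 2.
x_star = log_barrier_minimize(
    fp=lambda x: 2.0 * x, fpp=lambda x: 2.0,
    g=lambda x: 1.0 - x, gp=lambda x: -1.0, gpp=lambda x: 0.0,
    x0=2.0)
```

Each outer round tightens the barrier and re-solves from the previous solution, so the iterates approach the boundary point x = 1 from strictly inside the feasible region.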
Also, if you find this material fascinating, make sure to check out Stephen Boyd’s class,
EE364: Convex Optimization I, which will be offered during the Winter Quarter. The
textbook for the class (listed as [1] in the References) has a wealth of information about
convex optimization and is available for browsing online.
References
[1] Stephen Boyd and Lieven Vandenberghe. Convex Optimization. Cambridge UP, 2004.
Online: http://www.stanford.edu/∼boyd/cvxbook/
Appendix: derivation of the L1 soft-margin SVM dual

Recall the primal optimization problem for the L1 soft-margin SVM,

minimize_{w,b,ξ}  (1/2)‖w‖² + C Σ_{i=1}^m ξi
subject to  y^(i)(w^T x^(i) + b) ≥ 1 − ξi,  i = 1, . . . , m,
            ξi ≥ 0,  i = 1, . . . , m.
First, we put this into our standard form, with “≤ 0” inequality constraints and no equality
constraints. That is,
minimize_{w,b,ξ}  (1/2)‖w‖² + C Σ_{i=1}^m ξi
subject to  1 − ξi − y^(i)(w^T x^(i) + b) ≤ 0,  i = 1, . . . , m,
            −ξi ≤ 0,  i = 1, . . . , m.
The corresponding Lagrangian is¹⁰

L(w, b, ξ, α, β) = (1/2)‖w‖² + C Σ_{i=1}^m ξi + Σ_{i=1}^m αi (1 − ξi − y^(i)(w^T x^(i) + b)) − Σ_{i=1}^m βi ξi,
To get the dual problem in the form shown in the lecture notes, however, we still have a
little more work to do. In particular,
¹⁰Here, it is important to note that (w, b, ξ) collectively play the role of the x primal variables. Similarly,
(α, β) collectively play the role of the λ dual variables used for inequality constraints. There are no "ν" dual
variables here since there are no equality constraints in this problem.
1. Eliminating the primal variables. To eliminate the primal variables from the dual
problem, we compute θD(α, β) = min_{w,b,ξ} L(w, b, ξ, α, β) by setting the derivatives of the
Lagrangian with respect to the primal variables equal to zero. Writing (ŵ, b̂, ξ̂) for the
minimizing values, this gives

ŵ = Σ_{i=1}^m αi y^(i) x^(i)  (from ∂L/∂w = 0),
Σ_{i=1}^m αi y^(i) = 0,    (6)
αi + βi = C,  i = 1, . . . , m.    (7)

Adding (6) and (7) to the constraints of our dual optimization problem, we obtain,
θD(α, β) = L(ŵ, b̂, ξ̂)
 = (1/2)‖ŵ‖² + C Σ_{i=1}^m ξ̂i + Σ_{i=1}^m αi (1 − ξ̂i − y^(i)(ŵ^T x^(i) + b̂)) − Σ_{i=1}^m βi ξ̂i
 = (1/2)‖ŵ‖² + C Σ_{i=1}^m ξ̂i + Σ_{i=1}^m αi (1 − ξ̂i − y^(i)(ŵ^T x^(i))) − Σ_{i=1}^m βi ξ̂i
 = (1/2)‖ŵ‖² + Σ_{i=1}^m αi (1 − y^(i)(ŵ^T x^(i))).
Using ŵ = Σ_{i=1}^m αi y^(i) x^(i), we can simplify further:

(1/2)‖ŵ‖² + Σ_{i=1}^m αi (1 − y^(i)(ŵ^T x^(i)))
 = Σ_{i=1}^m αi + (1/2)‖ŵ‖² − ŵ^T Σ_{i=1}^m αi y^(i) x^(i)
 = Σ_{i=1}^m αi + (1/2)‖ŵ‖² − ‖ŵ‖²
 = Σ_{i=1}^m αi − (1/2)‖ŵ‖²
 = Σ_{i=1}^m αi − (1/2) Σ_{i=1}^m Σ_{j=1}^m αi αj y^(i) y^(j) ⟨x^(i), x^(j)⟩.
Therefore, our dual problem (with no more primal variables) is simply
maximize_{α,β}  Σ_{i=1}^m αi − (1/2) Σ_{i=1}^m Σ_{j=1}^m αi αj y^(i) y^(j) ⟨x^(i), x^(j)⟩
subject to  αi ≥ 0,  i = 1, . . . , m,
            βi ≥ 0,  i = 1, . . . , m,
            αi + βi = C,  i = 1, . . . , m,
            Σ_{i=1}^m αi y^(i) = 0.
i=1
2. KKT complementarity. KKT complementarity requires that for any primal optimal
(w∗ , b∗ , ξ ∗ ) and dual optimal (α∗ , β ∗ ),
αi∗ (1 − ξi∗ − y (i) (w∗ T x(i) + b∗ )) = 0
βi∗ ξi∗ = 0
for i = 1, . . . , m. From the first condition, we see that if α∗i > 0, then in order for the
product to be zero, we need 1 − ξ∗i − y^(i)(w∗T x^(i) + b∗) = 0. It follows that
y (i) (w∗ T x(i) + b∗ ) ≤ 1
since ξ ∗ ≥ 0 by primal feasibility. Similarly, if βi∗ > 0, then ξi∗ = 0 to ensure comple-
mentarity. From the primal constraint, y (i) (wT x(i) + b) ≥ 1 − ξi , it follows that
y (i) (w∗ T x(i) + b∗ ) ≥ 1.
Finally, since β∗i > 0 is equivalent to α∗i < C (since α∗i + β∗i = C), we can summarize
the KKT conditions as follows:
αi∗ = 0 ⇒ y (i) (w∗ T x(i) + b∗ ) ≥ 1,
0 < αi∗ < C ⇒ y (i) (w∗ T x(i) + b∗ ) = 1,
αi∗ = C ⇒ y (i) (w∗ T x(i) + b∗ ) ≤ 1.
3. Simplification. We can tidy up our dual problem slightly by observing that each pair
of constraints of the form
βi ≥ 0,    αi + βi = C
is equivalent to the single constraint, αi ≤ C; that is, if we solve the optimization
problem
maximize_α  Σ_{i=1}^m αi − (1/2) Σ_{i=1}^m Σ_{j=1}^m αi αj y^(i) y^(j) ⟨x^(i), x^(j)⟩
subject to  0 ≤ αi ≤ C,  i = 1, . . . , m,    (8)
            Σ_{i=1}^m αi y^(i) = 0,
and subsequently set βi = C − αi , then it follows that (α, β) will be optimal for the
previous dual problem above. This last form, indeed, is the form of the soft-margin
SVM dual given in the lecture notes.
Hidden Markov Models Fundamentals
Daniel Ramage
December 1, 2007
Abstract
How can we apply machine learning to data that is represented as a
sequence of observations over time? For instance, we might be interested
in discovering the sequence of words that someone spoke based on an
audio recording of their speech. Or we might be interested in annotating
a sequence of words with their part-of-speech tags. These notes provide a
thorough mathematical introduction to the concept of Markov Models,
a formalism for reasoning about states over time, and Hidden Markov
Models, where we wish to recover a series of states from a series of
observations. The final section includes some pointers to resources that
present this material from other perspectives.
1 Markov Models
Given a set of states S = {s1 , s2 , ...s|S| } we can observe a series over time
~z ∈ S T . For example, we might have the states from a weather system S =
{sun, cloud, rain} with |S| = 3 and observe the weather over a few days {z1 =
ssun, z2 = scloud, z3 = srain, z4 = srain, z5 = scloud} with T = 5.
The observed states of our weather example represent the output of a random
process over time. Without some further assumptions, state sj at time t could
be a function of any number of variables, including all the states from times 1
to t−1 and possibly many others that we don't even model. However, we will
make two Markov assumptions that will allow us to tractably reason about
time series.
The limited horizon assumption is that the probability of being in a
state at time t depends only on the state at time t − 1. The intuition underlying
this assumption is that the state at time t represents enough summary of the
past to reasonably predict the future. Formally:
P(zt | zt−1, zt−2, . . . , z1) = P(zt | zt−1)

The stationary process assumption is that the conditional distribution over the next
state given the current state does not change over time. Formally:

P(zt | zt−1) = P(z2 | z1);  t ∈ 2 . . . T
As a convention, we will also assume that there is an initial state and initial
observation z0 ≡ s0 , where s0 represents the initial probability distribution over
states at time 0. This notational convenience allows us to encode our belief
about the prior probability of seeing the first real state z1 as P(z1 | z0). Note
that P(zt | zt−1, ..., z1) = P(zt | zt−1, ..., z1, z0) because we've defined z0 = s0 for
any state sequence. (Other presentations of HMMs sometimes represent these
prior beliefs with a vector π ∈ R^|S|.)
We parametrize these transitions by defining a state transition matrix A ∈
R^{(|S|+1)×(|S|+1)}. The value Aij is the probability of transitioning from state i
to state j at any time t. For our sun and rain example, we might have the following
transition matrix:

            s0     ssun   scloud  srain
  s0        0      .33    .33     .33
  ssun      0      .8     .1      .1
  scloud    0      .2     .6      .2
  srain     0      .1     .2      .7
Note that these numbers (which I made up) represent the intuition that the
weather is self-correlated: if it's sunny it will tend to stay sunny, cloudy will
stay cloudy, etc. This pattern is common in many Markov models and can
be observed as a strong diagonal in the transition matrix. Note that in this
example, our initial state s0 shows uniform probability of transitioning to each
of the three states in our weather system.
What is the probability of a particular state sequence ~z? We can compute

P(~z; A) = P(z1, z2, ..., zT ; A)
 = P(z1, z2, ..., zT , z0; A)
 = P(zT | zT−1, zT−2, ..., z0; A) P(zT−1 | zT−2, ..., z0; A) · · · P(z1 | z0; A)
 = Π_{t=1}^T P(zt | zt−1; A)
 = Π_{t=1}^T A_{zt−1 zt}
In the second line we introduce z0 into our joint probability, which is allowed
by the denition of z0 above. The third line is true of any joint distribution
by the chain rule of probabilities or repeated application of Bayes rule. The
fourth line follows from the Markov assumptions and the last line represents
these terms as their elements in our transition matrix A.
Let's compute the probability of our example time sequence from earlier. We
want P (z1 = ssun , z2 = scloud , z3 = srain , z4 = srain , z5 = scloud ) which can be
factored as P (ssun |s0 )P (scloud |ssun )P (srain |scloud )P (srain |srain )P (scloud |srain ) =
.33 × .1 × .2 × .7 × .2.
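The same computation as a sketch in code. The s0 row and the five transition entries used in the product above come from the example; the remaining entries of each row are made-up values chosen to keep the strong diagonal:

```python
# States indexed as: 0 = s0 (initial), 1 = sun, 2 = cloud, 3 = rain.
# Only the s0 row (.33 each) and the five entries used in the text's product
# are fixed by the example; the rest are illustrative assumptions.
A = [
    [0.0, 0.33, 0.33, 0.33],   # s0 -> {sun, cloud, rain}
    [0.0, 0.80, 0.10, 0.10],   # sun -> {sun, cloud, rain}
    [0.0, 0.20, 0.60, 0.20],   # cloud -> {sun, cloud, rain}
    [0.0, 0.10, 0.20, 0.70],   # rain -> {sun, cloud, rain}
]

def sequence_probability(z, A):
    """P(z1..zT; A) = prod over t of A[z_{t-1}][z_t], with z0 = s0 prepended."""
    z = [0] + z                      # prepend the initial state s0
    p = 1.0
    for t in range(1, len(z)):
        p *= A[z[t - 1]][z[t]]
    return p

# sun, cloud, rain, rain, cloud  ->  .33 * .1 * .2 * .7 * .2
p = sequence_probability([1, 2, 3, 3, 2], A)
```

Each factor in the product is just a table lookup into A, which is all the Markov assumptions leave of the joint distribution.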
From a learning perspective, we could seek to find the parameters A that maximize the
log-likelihood of a sequence of observations ~z. This corresponds to finding the likelihoods
of transitioning from sunny to cloudy versus sunny to sunny, etc., that make a set of
observations most likely. Let's define the log-likelihood of a Markov model:

l(A) = log P(~z; A)
 = log Π_{t=1}^T A_{zt−1 zt}
 = Σ_{t=1}^T log A_{zt−1 zt}
 = Σ_{i=1}^{|S|} Σ_{j=1}^{|S|} Σ_{t=1}^T 1{zt−1 = si ∧ zt = sj} log Aij
In the last line, we use an indicator function whose value is one when the
condition holds and zero otherwise to select the observed transition at each
time step. When solving this optimization problem, it's important to ensure
that solved parameters A still make a valid transition matrix. In particular, we
need to enforce that the outgoing probability distribution from state i always
sums to 1 and all elements of A are non-negative. We can solve this optimization
problem using the method of Lagrange multipliers.
max_A  l(A)
s.t.   Σ_{j=1}^{|S|} Aij = 1,  i = 1..|S|
       Aij ≥ 0,  i, j = 1..|S|
This constrained optimization problem can be solved in closed form using the
method of Lagrange multipliers. We'll introduce the equality constraint into the
Lagrangian, but the inequality constraint can safely be ignored: the optimal
solution will produce positive values for Aij anyway. Therefore we construct
the Lagrangian as:

L(A, α) = Σ_{i=1}^{|S|} Σ_{j=1}^{|S|} Σ_{t=1}^T 1{zt−1 = si ∧ zt = sj} log Aij + Σ_{i=1}^{|S|} αi (1 − Σ_{j=1}^{|S|} Aij)

Taking partial derivatives with respect to Aij and setting them equal to zero:

∂L(A, α)/∂Aij = (1/Aij) Σ_{t=1}^T 1{zt−1 = si ∧ zt = sj} − αi ≡ 0
⇒ Aij = (1/αi) Σ_{t=1}^T 1{zt−1 = si ∧ zt = sj}
Substituting back in and setting the partial with respect to α equal to zero:
∂L(A, α)/∂αi = 1 − Σ_{j=1}^{|S|} Aij
 = 1 − Σ_{j=1}^{|S|} (1/αi) Σ_{t=1}^T 1{zt−1 = si ∧ zt = sj} ≡ 0
⇒ αi = Σ_{j=1}^{|S|} Σ_{t=1}^T 1{zt−1 = si ∧ zt = sj}
 = Σ_{t=1}^T 1{zt−1 = si}
Substituting in this value for αi into the expression we derived for Aij, we obtain
the maximum likelihood estimate

Âij = (Σ_{t=1}^T 1{zt−1 = si ∧ zt = sj}) / (Σ_{t=1}^T 1{zt−1 = si})
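The closed-form estimate is just "count each transition, then normalize each row". A sketch, with a toy state sequence made up for illustration:

```python
def fit_transition_matrix(z, num_states):
    """Maximum likelihood transition estimates:
    A_hat[i][j] = count(i -> j) / count(leaving i)."""
    counts = [[0] * num_states for _ in range(num_states)]
    for t in range(1, len(z)):
        counts[z[t - 1]][z[t]] += 1          # one observed transition
    A_hat = [[0.0] * num_states for _ in range(num_states)]
    for i in range(num_states):
        total = sum(counts[i])
        if total > 0:                        # leave unseen rows as zeros
            A_hat[i] = [c / total for c in counts[i]]
    return A_hat

# Toy sequence over states {0, 1}: transitions 0->0, 0->1, 1->1, 1->1, 1->0
A_hat = fit_transition_matrix([0, 0, 1, 1, 1, 0], 2)
```

Here state 0 leaves twice (once to each state), so its row is [0.5, 0.5]; state 1 leaves three times (twice to itself), giving [1/3, 2/3].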
where our alphabet just encodes the number of ice creams consumed, i.e. V =
{v1 = 1 ice cream, v2 = 2 ice creams, v3 = 3 ice creams}. What questions can
an HMM let us answer?
There are three fundamental questions we might ask of an HMM. What is the
probability of an observed sequence (how likely were we to see 3, 2, 1, 2 ice creams
consumed)? What is the most likely series of states to generate the observations
(what was the weather for those four days)? And how can we learn values for
the HMM's parameters A and B given some data?
In an HMM, we assume that our data was generated by the following process:
posit the existence of a series of states ~z over the length of our time series.
This state sequence is generated by a Markov model parametrized by a state
transition matrix A. At each time step t, we select an output xt as a function of
the state zt . Therefore, to get the probability of a sequence of observations, we
need to add up the likelihood of the data ~x given every possible series of states.
P(~x; A, B) = Σ_{~z} P(~x, ~z; A, B)
 = Σ_{~z} P(~x | ~z; A, B) P(~z; A, B)
The formulas above are true for any probability distribution. However, the
HMM assumptions allow us to simplify the expression further:
P(~x; A, B) = Σ_{~z} P(~x | ~z; A, B) P(~z; A, B)
 = Σ_{~z} (Π_{t=1}^T P(xt | zt; B)) (Π_{t=1}^T P(zt | zt−1; A))
 = Σ_{~z} (Π_{t=1}^T B_{zt xt}) (Π_{t=1}^T A_{zt−1 zt})
The good news is that this is a simple expression in terms of our parame-
ters. The derivation follows the HMM assumptions: the output independence
assumption, Markov assumption, and stationary process assumption are all used
to derive the second line. The bad news is that the sum is over every possible
assignment to ~z. Because zt can take one of |S| possible values at each time
step, evaluating this sum directly will require O(|S|^T) operations.
Algorithm 1 Forward Procedure for computing αi (t)
1. Base case: αi(0) = A_{0 i},  i = 1..|S|
2. Recursion: αj(t) = Σ_{i=1}^{|S|} αi(t − 1) Aij B_{j xt},  j = 1..|S|,  t = 1..T
Algorithm 1 presents an efficient way to compute αi(t). At each time step
we must do only O(|S|) operations for each of the |S| states, resulting in a final
algorithm complexity of O(|S|² · T) to compute the total probability of an observed
output sequence P(~x; A, B).
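A sketch of the forward procedure, checked against brute-force enumeration. Conventions here: state 0 plays the role of the non-emitting initial state s0, A[i][j] = P(sj | si), and B[j][k] = P(vk | sj); the two-state model is made up:

```python
def forward(x, A, B):
    """Forward procedure: after processing x[0..t], alpha[j] holds
    alpha_j(t) = P(x_1, ..., x_t, z_t = s_j). Runs in O(|S|^2 * T) time
    instead of enumerating all |S|^T state sequences."""
    S = len(A) - 1                                   # real states are 1..S
    # Base case: enter state j from s0 and emit the first symbol
    alpha = [A[0][j] * B[j][x[0]] for j in range(S + 1)]
    for t in range(1, len(x)):
        alpha = [sum(alpha[i] * A[i][j] for i in range(1, S + 1)) * B[j][x[t]]
                 for j in range(S + 1)]
    return sum(alpha[j] for j in range(1, S + 1))

# Made-up two-state model; B[0] is a dummy row for s0, which never emits.
A = [[0.0, 0.5, 0.5], [0.0, 0.7, 0.3], [0.0, 0.4, 0.6]]
B = [[0.0, 0.0], [0.9, 0.1], [0.2, 0.8]]
p = forward([0, 1, 0], A, B)
```

For a sequence this short the naive sum over the 2³ state paths is feasible, which makes a good sanity check that both computations agree.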
A similar algorithm known as the Backward Procedure can be used to
compute an analogous probability βi (t) = P (xT , xT −1 , .., xt+1 , zt = si ; A, B).
One of the most common queries of a Hidden Markov Model is to ask what
was the most likely series of states ~z ∈ S^T given an observed series of outputs
~x ∈ V^T. Formally, we seek:
arg max_{~z} P(~z | ~x; A, B) = arg max_{~z}  P(~x, ~z; A, B) / Σ_{~z} P(~x, ~z; A, B)  = arg max_{~z} P(~x, ~z; A, B)
The first simplification follows from Bayes rule and the second from the
observation that the denominator does not directly depend on ~z. Naively, we
might try every possible assignment to ~z and take the one with the highest
joint probability assigned by our model. However, this would require O(|S|^T)
operations just to enumerate the set of possible assignments. At this point, you
might think a dynamic programming solution like the Forward Algorithm might
save the day, and you'd be right. Notice that if you replaced the arg max_{~z} with
Σ_{~z}, our current task is exactly analogous to the expression which motivated
the forward procedure.
Algorithm 2 Naive application of EM to HMMs
Repeat until convergence {
  (E-Step) Set
      Q(~z) := P(~z | ~x; A, B)
  (M-Step) Set
      A, B := arg max_{A,B} Σ_{~z} Q(~z) log ( P(~x, ~z; A, B) / Q(~z) )
      s.t.  Σ_{j=1}^{|S|} Aij = 1,  i = 1..|S|;  Aij ≥ 0,  i, j = 1..|S|
            Σ_{k=1}^{|V|} Bik = 1,  i = 1..|S|;  Bik ≥ 0,  i = 1..|S|, k = 1..|V|
}
The Viterbi Algorithm is just like the forward procedure except that
instead of tracking the total probability of generating the observations seen so
far, we need only track the maximum probability and record its corresponding
state sequence.
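A sketch of Viterbi, verified against brute-force search. As before, state 0 is the non-emitting initial state s0, and the two-state model is made up:

```python
def viterbi(x, A, B):
    """Most likely state sequence arg max_z P(x, z; A, B). Identical
    bookkeeping to the forward procedure, with max in place of sum and
    backpointers recording each argmax."""
    S, T = len(A) - 1, len(x)
    delta = [[0.0] * (S + 1) for _ in range(T)]   # best path probabilities
    back = [[0] * (S + 1) for _ in range(T)]      # argmax backpointers
    for j in range(1, S + 1):
        delta[0][j] = A[0][j] * B[j][x[0]]
    for t in range(1, T):
        for j in range(1, S + 1):
            best_i = max(range(1, S + 1), key=lambda i: delta[t - 1][i] * A[i][j])
            back[t][j] = best_i
            delta[t][j] = delta[t - 1][best_i] * A[best_i][j] * B[j][x[t]]
    # Trace the backpointers from the best final state
    z = [max(range(1, S + 1), key=lambda j: delta[T - 1][j])]
    for t in range(T - 1, 0, -1):
        z.append(back[t][z[-1]])
    return list(reversed(z))

# Made-up two-state model (B[0] is a dummy row for s0, which never emits)
A = [[0.0, 0.5, 0.5], [0.0, 0.7, 0.3], [0.0, 0.4, 0.6]]
B = [[0.0, 0.0], [0.9, 0.1], [0.2, 0.8]]
path = viterbi([0, 1, 0], A, B)
```

Because max distributes over products of nonnegative terms the same way sum does, the recursion is the forward recursion with the summation swapped for a maximization.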
The final question to ask of an HMM is: given a set of observations, what
are the values of the state transition probabilities A and the output emission
probabilities B that make the data most likely? For example, solving for the
maximum likelihood parameters based on a speech recognition dataset will allow
us to effectively train the HMM before asking for the maximum likelihood state
assignment of a candidate speech signal.
In this section, we present a derivation of the Expectation Maximization
algorithm for Hidden Markov Models. This proof follows from the general
formulation of EM presented in the CS229 lecture notes. Algorithm 2 shows the
basic EM algorithm. Notice that the optimization problem in the M-Step is now
constrained such that A and B contain valid probabilities. Like the maximum
likelihood solution we found for (non-Hidden) Markov models, we'll be able to
solve this optimization problem with Lagrange multipliers. Notice also that the
E-Step and M-Step both require enumerating all |S|^T possible labelings of ~z.
We'll make use of the Forward and Backward algorithms mentioned earlier to
compute a set of sufficient statistics for our E-Step and M-Step tractably.
First, let's rewrite the objective function using our Markov assumptions.
A, B = arg max_{A,B} Σ_{~z} Q(~z) log ( P(~x, ~z; A, B) / Q(~z) )
 = arg max_{A,B} Σ_{~z} Q(~z) log P(~x, ~z; A, B)
 = arg max_{A,B} Σ_{~z} Q(~z) log ( (Π_{t=1}^T P(xt | zt; B)) (Π_{t=1}^T P(zt | zt−1; A)) )
 = arg max_{A,B} Σ_{~z} Q(~z) Σ_{t=1}^T ( log B_{zt xt} + log A_{zt−1 zt} )
 = arg max_{A,B} Σ_{~z} Q(~z) Σ_{i=1}^{|S|} Σ_{j=1}^{|S|} Σ_{k=1}^{|V|} Σ_{t=1}^T ( 1{zt = sj ∧ xt = vk} log Bjk + 1{zt−1 = si ∧ zt = sj} log Aij )
In the first line we split the log division into a subtraction and note that
the denominator's term does not depend on the parameters A, B. The Markov
assumptions are applied in line 3. Line 5 uses indicator functions to index A
and B by state.
Just as for the maximum likelihood parameters for a visible Markov model,
it is safe to ignore the inequality constraints because the solution form naturally
results in only positive solutions. Constructing the Lagrangian:
L(A, B, δ, ε) = Σ_{~z} Q(~z) Σ_{i=1}^{|S|} Σ_{j=1}^{|S|} Σ_{k=1}^{|V|} Σ_{t=1}^T ( 1{zt = sj ∧ xt = vk} log Bjk + 1{zt−1 = si ∧ zt = sj} log Aij )
 + Σ_{j=1}^{|S|} εj (1 − Σ_{k=1}^{|V|} Bjk) + Σ_{i=1}^{|S|} δi (1 − Σ_{j=1}^{|S|} Aij)

Taking partial derivatives with respect to Aij and Bjk and setting them equal to zero:
∂L(A, B, δ, ε)/∂Aij = (1/Aij) Σ_{~z} Q(~z) Σ_{t=1}^T 1{zt−1 = si ∧ zt = sj} − δi ≡ 0
⇒ Aij = (1/δi) Σ_{~z} Q(~z) Σ_{t=1}^T 1{zt−1 = si ∧ zt = sj}

∂L(A, B, δ, ε)/∂Bjk = (1/Bjk) Σ_{~z} Q(~z) Σ_{t=1}^T 1{zt = sj ∧ xt = vk} − εj ≡ 0
⇒ Bjk = (1/εj) Σ_{~z} Q(~z) Σ_{t=1}^T 1{zt = sj ∧ xt = vk}
Taking partial derivatives with respect to the Lagrange multipliers and sub-
stituting our values of Aij and Bjk above:
∂L(A, B, δ, ε)/∂δi = 1 − Σ_{j=1}^{|S|} Aij
 = 1 − Σ_{j=1}^{|S|} (1/δi) Σ_{~z} Q(~z) Σ_{t=1}^T 1{zt−1 = si ∧ zt = sj} ≡ 0
⇒ δi = Σ_{j=1}^{|S|} Σ_{~z} Q(~z) Σ_{t=1}^T 1{zt−1 = si ∧ zt = sj}
 = Σ_{~z} Q(~z) Σ_{t=1}^T 1{zt−1 = si}

∂L(A, B, δ, ε)/∂εj = 1 − Σ_{k=1}^{|V|} Bjk
 = 1 − Σ_{k=1}^{|V|} (1/εj) Σ_{~z} Q(~z) Σ_{t=1}^T 1{zt = sj ∧ xt = vk} ≡ 0
⇒ εj = Σ_{k=1}^{|V|} Σ_{~z} Q(~z) Σ_{t=1}^T 1{zt = sj ∧ xt = vk}
 = Σ_{~z} Q(~z) Σ_{t=1}^T 1{zt = sj}
Substituting back into our expressions above, we find that the parameters Â and
B̂ that maximize our predicted counts with respect to the dataset are:

Âij = (Σ_{~z} Q(~z) Σ_{t=1}^T 1{zt−1 = si ∧ zt = sj}) / (Σ_{~z} Q(~z) Σ_{t=1}^T 1{zt−1 = si})

B̂jk = (Σ_{~z} Q(~z) Σ_{t=1}^T 1{zt = sj ∧ xt = vk}) / (Σ_{~z} Q(~z) Σ_{t=1}^T 1{zt = sj})
    \sum_{\vec z} Q(\vec z) \sum_{t=1}^{T} 1\{z_{t-1} = s_i \wedge z_t = s_j\}
        = \sum_{t=1}^{T} \sum_{\vec z} 1\{z_{t-1} = s_i \wedge z_t = s_j\} Q(\vec z)
        = \sum_{t=1}^{T} \sum_{\vec z} 1\{z_{t-1} = s_i \wedge z_t = s_j\} P(\vec z \mid \vec x; A, B)
        = \frac{1}{P(\vec x; A, B)} \sum_{t=1}^{T} \sum_{\vec z} 1\{z_{t-1} = s_i \wedge z_t = s_j\} P(\vec z, \vec x; A, B)
        = \frac{1}{P(\vec x; A, B)} \sum_{t=1}^{T} \alpha_i(t) A_{ij} B_{j x_t} \beta_j(t+1)
In the first two steps we rearrange terms and substitute in our definition
of Q. Then we use Bayes' rule in deriving line four, followed by the definitions
of α, β, A, and B in line five. Similarly, the denominator can be represented
by summing out the value of the numerator over j.
    \sum_{\vec z} Q(\vec z) \sum_{t=1}^{T} 1\{z_{t-1} = s_i\}
        = \sum_{j=1}^{|S|} \sum_{\vec z} Q(\vec z) \sum_{t=1}^{T} 1\{z_{t-1} = s_i \wedge z_t = s_j\}
        = \frac{1}{P(\vec x; A, B)} \sum_{j=1}^{|S|} \sum_{t=1}^{T} \alpha_i(t) A_{ij} B_{j x_t} \beta_j(t+1)

    \hat A_{ij} = \frac{\sum_{t=1}^{T} \alpha_i(t) A_{ij} B_{j x_t} \beta_j(t+1)}{\sum_{j=1}^{|S|} \sum_{t=1}^{T} \alpha_i(t) A_{ij} B_{j x_t} \beta_j(t+1)}
    \sum_{\vec z} Q(\vec z) \sum_{t=1}^{T} 1\{z_t = s_j \wedge x_t = v_k\}
        = \frac{1}{P(\vec x; A, B)} \sum_{t=1}^{T} \sum_{\vec z} 1\{z_t = s_j \wedge x_t = v_k\} P(\vec z, \vec x; A, B)
        = \frac{1}{P(\vec x; A, B)} \sum_{i=1}^{|S|} \sum_{t=1}^{T} \sum_{\vec z} 1\{z_{t-1} = s_i \wedge z_t = s_j \wedge x_t = v_k\} P(\vec z, \vec x; A, B)
        = \frac{1}{P(\vec x; A, B)} \sum_{i=1}^{|S|} \sum_{t=1}^{T} 1\{x_t = v_k\} \alpha_i(t) A_{ij} B_{j x_t} \beta_j(t+1)

Algorithm 3 Forward-Backward algorithm for HMM parameter learning (update step)

    \gamma_t(i, j) := \alpha_i(t) A_{ij} B_{j x_t} \beta_j(t+1)

    A_{ij} := \frac{\sum_{t=1}^{T} \gamma_t(i, j)}{\sum_{j=1}^{|S|} \sum_{t=1}^{T} \gamma_t(i, j)}

    B_{jk} := \frac{\sum_{i=1}^{|S|} \sum_{t=1}^{T} 1\{x_t = v_k\} \gamma_t(i, j)}{\sum_{i=1}^{|S|} \sum_{t=1}^{T} \gamma_t(i, j)}
    \sum_{\vec z} Q(\vec z) \sum_{t=1}^{T} 1\{z_t = s_j\}
        = \frac{1}{P(\vec x; A, B)} \sum_{i=1}^{|S|} \sum_{t=1}^{T} \sum_{\vec z} 1\{z_{t-1} = s_i \wedge z_t = s_j\} P(\vec z, \vec x; A, B)
        = \frac{1}{P(\vec x; A, B)} \sum_{i=1}^{|S|} \sum_{t=1}^{T} \alpha_i(t) A_{ij} B_{j x_t} \beta_j(t+1)
Combining these expressions, the maximum likelihood emission probabilities take the form:

    \hat B_{jk} = \frac{\sum_{i=1}^{|S|} \sum_{t=1}^{T} 1\{x_t = v_k\} \alpha_i(t) A_{ij} B_{j x_t} \beta_j(t+1)}{\sum_{i=1}^{|S|} \sum_{t=1}^{T} \alpha_i(t) A_{ij} B_{j x_t} \beta_j(t+1)}
In the E-step, rather than explicitly evaluating Q(~z) for all ~z ∈ S^T, we compute
a sufficient statistic γ_t(i, j) = α_i(t) A_ij B_{j x_t} β_j(t+1) that is proportional to
the probability of transitioning between states s_i and s_j at time t given all of
our observations ~x. The derived expressions for A_ij and B_jk are intuitively
appealing. A_ij is computed as the expected number of transitions from s_i to
s_j divided by the expected number of appearances of s_i. Similarly, B_jk is
computed as the expected number of emissions of v_k from s_j divided by the
expected number of appearances of s_j.
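To make the updates concrete, here is a minimal NumPy sketch of one EM iteration for a single observation sequence. The function name, the uniform initial-state distribution, and the 0-indexed time convention are my own simplifications; the full Algorithm 3 additionally loops these updates until convergence.

```python
import numpy as np

def forward_backward_update(x, A, B):
    """One EM iteration for an HMM on a single observation sequence.

    x : observation sequence, ints in {0, ..., |V|-1}
    A : |S| x |S| transition matrix, A[i, j] = P(z_t = j | z_{t-1} = i)
    B : |S| x |V| emission matrix,  B[j, k] = P(x_t = k | z_t = j)
    """
    S, T = A.shape[0], len(x)

    # Forward pass: alpha[t, i] = P(x_0..x_t, z_t = i)  (uniform initial state assumed)
    alpha = np.zeros((T, S))
    alpha[0] = (1.0 / S) * B[:, x[0]]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, x[t]]

    # Backward pass: beta[t, j] = P(x_{t+1}..x_{T-1} | z_t = j)
    beta = np.ones((T, S))
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[:, x[t + 1]] * beta[t + 1])

    # gamma[t, i, j] = P(z_t = i, z_{t+1} = j | x): the sufficient statistic above
    gamma = np.zeros((T - 1, S, S))
    for t in range(T - 1):
        gamma[t] = alpha[t][:, None] * A * (B[:, x[t + 1]] * beta[t + 1])[None, :]
        gamma[t] /= gamma[t].sum()   # each slice's raw sum equals P(x; A, B)

    # M-step: expected transition counts / expected appearances
    A_new = gamma.sum(axis=0)
    A_new /= A_new.sum(axis=1, keepdims=True)

    # Expected emission counts of v_k from s_j / expected appearances of s_j
    B_new = np.zeros_like(B)
    for k in range(B.shape[1]):
        B_new[:, k] = gamma[np.array(x[1:]) == k].sum(axis=(0, 1))
    B_new /= B_new.sum(axis=1, keepdims=True)
    return A_new, B_new
```

Note that each γ slice is normalized by its own sum, which equals P(~x; A, B), so no separate likelihood computation is needed.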
Like many applications of EM, parameter learning for HMMs is a non-convex
problem with many local maxima. EM will converge to a maximum based on
its initial parameters, so multiple runs might be in order. Also, it is often
important to smooth the probability distributions represented by A and B so
that no transition or emission is assigned 0 probability.
There are many good sources for learning about Hidden Markov Models. For applications
in NLP, I recommend consulting Jurafsky & Martin's draft second edition of
Speech and Language Processing^1 or Manning & Schütze's Foundations of
Statistical Natural Language Processing. Also, Eisner's HMM-in-a-spreadsheet
[1] is a lightweight interactive way to play with an HMM that requires only a
spreadsheet application.
References
[1] Jason Eisner. An interactive spreadsheet for teaching the forward-backward
algorithm. In Dragomir Radev and Chris Brew, editors, Proceedings of the
ACL Workshop on Effective Tools and Methodologies for Teaching NLP and
CL, pages 10–18, 2002.
1 http://www.cs.colorado.edu/~martin/slp2.html
Gaussian processes
Chuong B. Do
December 1, 2007
Many of the classical machine learning algorithms that we talked about during the first
half of this course fit the following pattern: given a training set of i.i.d. examples sampled
from some unknown distribution,
1. solve a convex optimization problem in order to identify the single “best fit” model for
the data, and
2. use this estimated model to make “best guess” predictions for future test input points.
In these notes, we will talk about a different flavor of learning algorithms, known as
Bayesian methods. Unlike classical learning algorithms, Bayesian algorithms do not
attempt to identify “best-fit” models of the data (or similarly, make “best guess” predictions
for new test inputs). Instead, they compute a posterior distribution over models (or similarly,
compute posterior predictive distributions for new test inputs). These distributions provide
a useful way to quantify our uncertainty in model estimates, and to exploit our knowledge
of this uncertainty in order to make more robust predictions on new test points.
We focus on regression problems, where the goal is to learn a mapping from some input
space X = Rn of n-dimensional vectors to an output space Y = R of real-valued targets.
In particular, we will talk about a kernel-based fully Bayesian regression algorithm, known
as Gaussian process regression. The material covered in these notes draws heavily on many
different topics that we discussed previously in class (namely, the probabilistic interpretation
of linear regression1 , Bayesian methods2 , kernels3 , and properties of multivariate Gaussians4 ).
The organization of these notes is as follows. In Section 1, we provide a brief review
of multivariate Gaussian distributions and their properties. In Section 2, we briefly review
Bayesian methods in the context of probabilistic linear regression. The central ideas under-
lying Gaussian processes are presented in Section 3, and we derive the full Gaussian process
regression model in Section 4.
1. See course lecture notes on “Supervised Learning, Discriminative Algorithms.”
2. See course lecture notes on “Regularization and Model Selection.”
3. See course lecture notes on “Support Vector Machines.”
4. See course lecture notes on “Factor Analysis.”
1 Multivariate Gaussians
A vector-valued random variable x ∈ R^n is said to have a multivariate normal (or
Gaussian) distribution with mean µ ∈ R^n and covariance matrix Σ ∈ S^n_{++} if

    p(x; \mu, \Sigma) = \frac{1}{(2\pi)^{n/2} |\Sigma|^{1/2}} \exp\left( -\frac{1}{2} (x - \mu)^T \Sigma^{-1} (x - \mu) \right).    (1)
We write this as x ∼ N(µ, Σ). Here, recall from the section notes on linear algebra that S^n_{++}
refers to the space of symmetric positive definite n × n matrices.^5
Generally speaking, Gaussian random variables are extremely useful in machine learning
and statistics for two main reasons. First, they are extremely common when modeling “noise”
in statistical algorithms. Quite often, noise can be considered to be the accumulation of a
large number of small independent random perturbations affecting the measurement process;
by the Central Limit Theorem, summations of independent random variables will tend to
“look Gaussian.” Second, Gaussian random variables are convenient for many analytical
manipulations, because many of the integrals involving Gaussian distributions that arise in
practice have simple closed form solutions. In the remainder of this section, we will review
a number of useful properties of multivariate Gaussians.
Consider a random vector x ∈ Rn with x ∼ N (µ, Σ). Suppose also that the variables in x
have been partitioned into two sets xA = [x1 · · · xr ]T ∈ Rr and xB = [xr+1 · · · xn ]T ∈ Rn−r
(and similarly for µ and Σ), such that
    x = \begin{bmatrix} x_A \\ x_B \end{bmatrix}, \qquad \mu = \begin{bmatrix} \mu_A \\ \mu_B \end{bmatrix}, \qquad \Sigma = \begin{bmatrix} \Sigma_{AA} & \Sigma_{AB} \\ \Sigma_{BA} & \Sigma_{BB} \end{bmatrix}.
Here, \Sigma_{AB} = \Sigma_{BA}^T since \Sigma = E[(x - \mu)(x - \mu)^T] = \Sigma^T. The following properties hold:
1. Normalization. The density function normalizes, i.e.,

    \int_x p(x; \mu, \Sigma) \, dx = 1.
This property, though seemingly trivial at first glance, turns out to be immensely
useful for evaluating all sorts of integrals, even ones which appear to have no relation
to probability distributions at all (see Appendix A.1)!
2. Marginalization. The marginal densities,

    p(x_A) = \int_{x_B} p(x_A, x_B; \mu, \Sigma) \, dx_B

    p(x_B) = \int_{x_A} p(x_A, x_B; \mu, \Sigma) \, dx_A
5. There are actually cases in which we would want to deal with multivariate Gaussian distributions where
Σ is positive semidefinite but not positive definite (i.e., Σ is not full rank). In such cases, Σ^{-1} does not exist,
so the definition of the Gaussian density given in (1) does not apply. For instance, see the course lecture
notes on “Factor Analysis.”
are Gaussian:

    x_A \sim \mathcal{N}(\mu_A, \Sigma_{AA})

    x_B \sim \mathcal{N}(\mu_B, \Sigma_{BB}).

3. Conditioning. The conditional densities

    p(x_A \mid x_B) = \frac{p(x_A, x_B; \mu, \Sigma)}{\int_{x_A} p(x_A, x_B; \mu, \Sigma) \, dx_A}

    p(x_B \mid x_A) = \frac{p(x_A, x_B; \mu, \Sigma)}{\int_{x_B} p(x_A, x_B; \mu, \Sigma) \, dx_B}

are also Gaussian (the derivation appears in Appendix A.2).
4. Summation. The sum of independent Gaussian random variables (with the same
dimensionality), y ∼ N (µ, Σ) and z ∼ N (µ′ , Σ′ ), is also Gaussian:
y + z ∼ N (µ + µ′ , Σ + Σ′ ).
where the ε^{(i)} are i.i.d. “noise” variables with independent N(0, σ²) distributions. It follows
that y^{(i)} − θ^T x^{(i)} ∼ N(0, σ²), or equivalently,

    P(y^{(i)} \mid x^{(i)}, \theta) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left( -\frac{(y^{(i)} - \theta^T x^{(i)})^2}{2\sigma^2} \right).
[Figure 1 plot: Bayesian linear regression, 95% confidence region]
Figure 1: Bayesian linear regression for a one-dimensional linear regression problem, y (i) =
θx(i) + ǫ(i) , with ǫ(i) ∼ N (0, 1) i.i.d. noise. The green region denotes the 95% confidence
region for predictions of the model. Note that the (vertical) width of the green region is
largest at the ends but narrowest in the middle. This region reflects the uncertainty in the
estimates for the parameter θ. In contrast, a classical linear regression model would display
a confidence region of constant width, reflecting only the N (0, σ 2 ) noise in the outputs.
Assuming the same noise model on testing points as on our training points, the “output” of
Bayesian linear regression on a new test point x∗ is not just a single guess “y∗ ”, but rather
an entire probability distribution over possible outputs, known as the posterior predictive
distribution:
    p(y_* \mid x_*, S) = \int_\theta p(y_* \mid x_*, \theta) \, p(\theta \mid S) \, d\theta.    (3)
For many types of models, the integrals in (2) and (3) are difficult to compute, and hence,
we often resort to approximations, such as MAP estimation (see course lecture notes on
“Regularization and Model Selection”).
In the case of Bayesian linear regression, however, the integrals actually are tractable! In
particular, for Bayesian linear regression, one can show (after much work!) that
    \theta \mid S \sim \mathcal{N}\left( \frac{1}{\sigma^2} A^{-1} X^T \vec y, \; A^{-1} \right)

    y_* \mid x_*, S \sim \mathcal{N}\left( \frac{1}{\sigma^2} x_*^T A^{-1} X^T \vec y, \; x_*^T A^{-1} x_* + \sigma^2 \right)

where A = \frac{1}{\sigma^2} X^T X + \frac{1}{\tau^2} I. The derivation of these formulas is somewhat involved.^6 Nonetheless, from these equations, we get at least a flavor of what Bayesian methods are all about: the
posterior distribution over the test output y∗ for a test input x∗ is a Gaussian distribution—
this distribution reflects the uncertainty in our predictions y∗ = θT x∗ + ε∗ arising from both
the randomness in ε∗ and the uncertainty in our choice of parameters θ. In contrast, classical
probabilistic linear regression models estimate parameters θ directly from the training data
but provide no estimate of how reliable these learned parameters may be (see Figure 1).
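The closed-form expressions above are straightforward to evaluate. A minimal NumPy sketch (the function name and interface are mine, not part of the notes), returning the posterior predictive mean and variance at a single test input:

```python
import numpy as np

def bayes_linreg_posterior_predictive(X, y, x_star, sigma2, tau2):
    """Posterior predictive for Bayesian linear regression with prior
    theta ~ N(0, tau2 * I) and noise variance sigma2.

    Implements: A = (1/sigma2) X^T X + (1/tau2) I,
                mean = (1/sigma2) x*^T A^{-1} X^T y,
                var  = x*^T A^{-1} x* + sigma2.
    """
    n = X.shape[1]
    A = X.T @ X / sigma2 + np.eye(n) / tau2
    mean = x_star @ np.linalg.solve(A, X.T @ y) / sigma2
    var = x_star @ np.linalg.solve(A, x_star) + sigma2
    return mean, var
```

As in Figure 1, the predictive variance exceeds the noise variance σ², since it also includes the x*-dependent term x*ᵀA⁻¹x* coming from parameter uncertainty.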
3 Gaussian processes
As described in Section 1, multivariate Gaussian distributions are useful for modeling finite
collections of real-valued variables because of their nice analytical properties. Gaussian
processes are the extension of multivariate Gaussians to infinite-sized collections of real-
valued variables. In particular, this extension will allow us to think of Gaussian processes as
distributions not just over random vectors but in fact distributions over random functions.7
Since the domain of any h(·) ∈ H has only m elements, we can always represent h(·) compactly as an m-dimensional vector, \vec h = [\, h(x_1) \;\; h(x_2) \;\; \cdots \;\; h(x_m) \,]^T. In order to specify
In the example above, we showed that probability distributions over functions with finite
domains can be represented using a finite-dimensional multivariate Gaussian distribution
over function outputs h(x1 ), . . . , h(xm ) at a finite number of input points x1 , . . . , xm . How
can we specify probability distributions over functions when the domain size may be infinite?
For this, we turn to a fancier type of probability distribution known as a Gaussian process.
Observe that the mean function and covariance function are aptly named since the above
properties imply that

    m(x) = E[h(x)]

    k(x, x') = E[(h(x) - m(x))(h(x') - m(x'))]

for any x, x' ∈ X.
Intuitively, one can think of a function h(·) drawn from a Gaussian process prior as an
extremely high-dimensional vector drawn from an extremely high-dimensional multivariate
Gaussian. Here, each dimension of the Gaussian corresponds to an element x from the index
set X , and the corresponding component of the random vector represents the value of h(x).
Using the marginalization property for multivariate Gaussians, we can obtain the marginal
multivariate Gaussian density corresponding to any finite subcollection of variables.
What sort of functions m(·) and k(·, ·) give rise to valid Gaussian processes? In general,
any real-valued function m(·) is acceptable, but for k(·, ·), it must be the case that for any
8. Often, when X = R, one can interpret the indices x ∈ X as representing times, and hence the variables
h(x) represent the temporal evolution of some random quantity over time. In the models that are used for
Gaussian process regression, however, the index set is taken to be the input space of our regression problem.
[Figure 2 plots: samples from a GP with k(x, z) = exp(−||x − z||²/(2τ²)) for τ = 0.5, τ = 2.0, and τ = 10.0]
Figure 2: Samples from a zero-mean Gaussian process prior with kSE (·, ·) covariance function,
using (a) τ = 0.5, (b) τ = 2, and (c) τ = 10. Note that as the bandwidth parameter τ
increases, points which are farther away have higher correlations than before, and
hence the sampled functions tend to be smoother overall.
for some τ > 0. What do random functions sampled from this Gaussian process look like?
In our example, since we use a zero-mean Gaussian process, we would expect that
the function values from our Gaussian process will tend to be distributed around zero.
Furthermore, for any pair of elements x, x' ∈ X,

• h(x) and h(x') will tend to have high covariance when x and x' are “nearby” in the input
space (i.e., ||x − x'|| = |x − x'| ≈ 0, so exp(−\frac{1}{2\tau^2} ||x − x'||²) ≈ 1);

• h(x) and h(x') will tend to have low covariance when x and x' are “far apart” (i.e.,
||x − x'|| ≫ 0, so exp(−\frac{1}{2\tau^2} ||x − x'||²) ≈ 0).
More simply stated, functions drawn from a zero-mean Gaussian process prior with the
squared exponential kernel will tend to be “locally smooth” with high probability; i.e.,
nearby function values are highly correlated, and the correlation drops off as a function of
distance in the input space (see Figure 2).
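Plots like those in Figure 2 are easy to reproduce. A short NumPy sketch of sampling from the zero-mean GP prior with the squared exponential kernel (the function name and the diagonal jitter are my own choices, not part of the notes):

```python
import numpy as np

def sample_gp_prior(xs, tau, n_samples=3, jitter=1e-6, seed=0):
    """Draw sample functions h(.) from a zero-mean GP prior with kernel
    k(x, z) = exp(-(x - z)^2 / (2 tau^2)), evaluated at a 1-D grid xs."""
    xs = np.asarray(xs, dtype=float)
    # Kernel (covariance) matrix over the finite grid of input points
    K = np.exp(-(xs[:, None] - xs[None, :]) ** 2 / (2.0 * tau ** 2))
    # Small jitter on the diagonal keeps the Cholesky factorization stable
    L = np.linalg.cholesky(K + jitter * np.eye(len(xs)))
    rng = np.random.default_rng(seed)
    # If u ~ N(0, I), then L u ~ N(0, K): each column is one sampled function
    return L @ rng.standard_normal((len(xs), n_samples))
```

Increasing τ makes K closer to the all-ones matrix, so neighboring function values are more strongly correlated and the sampled curves look smoother, exactly as Figure 2 illustrates.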
where the ε(i) are i.i.d. “noise” variables with independent N (0, σ 2 ) distributions. Like in
Bayesian linear regression, we also assume a prior distribution over functions h(·); in
particular, we assume a zero-mean Gaussian process prior,
distribution as S.^10 For notational convenience, we define

    X = \begin{bmatrix} - (x^{(1)})^T - \\ - (x^{(2)})^T - \\ \vdots \\ - (x^{(m)})^T - \end{bmatrix} \in \mathbb{R}^{m \times n}, \quad
    \vec h = \begin{bmatrix} h(x^{(1)}) \\ h(x^{(2)}) \\ \vdots \\ h(x^{(m)}) \end{bmatrix}, \quad
    \vec\epsilon = \begin{bmatrix} \epsilon^{(1)} \\ \epsilon^{(2)} \\ \vdots \\ \epsilon^{(m)} \end{bmatrix}, \quad
    \vec y = \begin{bmatrix} y^{(1)} \\ y^{(2)} \\ \vdots \\ y^{(m)} \end{bmatrix} \in \mathbb{R}^m,

    X_* = \begin{bmatrix} - (x_*^{(1)})^T - \\ - (x_*^{(2)})^T - \\ \vdots \\ - (x_*^{(m_*)})^T - \end{bmatrix} \in \mathbb{R}^{m_* \times n}, \quad
    \vec h_* = \begin{bmatrix} h(x_*^{(1)}) \\ h(x_*^{(2)}) \\ \vdots \\ h(x_*^{(m_*)}) \end{bmatrix}, \quad
    \vec\epsilon_* = \begin{bmatrix} \epsilon_*^{(1)} \\ \epsilon_*^{(2)} \\ \vdots \\ \epsilon_*^{(m_*)} \end{bmatrix}, \quad
    \vec y_* = \begin{bmatrix} y_*^{(1)} \\ y_*^{(2)} \\ \vdots \\ y_*^{(m_*)} \end{bmatrix} \in \mathbb{R}^{m_*}.
Given the training data S, the prior p(h), and the testing inputs X_*, how can we compute
the posterior predictive distribution over the testing outputs \vec y_*? For Bayesian linear
regression in Section 2, we used Bayes's rule in order to compute the parameter posterior, which we
then used to compute the posterior predictive distribution p(y_* | x_*, S) for a new test point x_*.
For Gaussian process regression, however, it turns out that an even simpler solution exists!
4.2 Prediction
Recall that for any function h(·) drawn from our zero-mean Gaussian process prior with
covariance function k(·, ·), the marginal distribution over any set of input points belonging
to X must have a joint multivariate Gaussian distribution. In particular, this must hold for
the training and test points, so we have
    \begin{bmatrix} \vec h \\ \vec h_* \end{bmatrix} \,\Big|\, X, X_* \sim \mathcal{N}\left( \vec 0, \begin{bmatrix} K(X, X) & K(X, X_*) \\ K(X_*, X) & K(X_*, X_*) \end{bmatrix} \right),

where

    \vec h \in \mathbb{R}^m such that \vec h = [\, h(x^{(1)}) \; \cdots \; h(x^{(m)}) \,]^T

    \vec h_* \in \mathbb{R}^{m_*} such that \vec h_* = [\, h(x_*^{(1)}) \; \cdots \; h(x_*^{(m_*)}) \,]^T

    K(X, X) \in \mathbb{R}^{m \times m} such that (K(X, X))_{ij} = k(x^{(i)}, x^{(j)})

    K(X, X_*) \in \mathbb{R}^{m \times m_*} such that (K(X, X_*))_{ij} = k(x^{(i)}, x_*^{(j)})
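Since the joint distribution over training and test outputs is multivariate Gaussian, the posterior predictive follows directly from the conditioning property of Section 1 (derived in Appendix A.2). A minimal NumPy sketch under those assumptions (the kernel choice, noise handling, and names are mine, not the notes' notation):

```python
import numpy as np

def k_se(A, B, tau=1.0):
    """Squared exponential kernel matrix between row-vector inputs A and B."""
    d = A[:, None, :] - B[None, :, :]
    return np.exp(-np.sum(d ** 2, axis=2) / (2.0 * tau ** 2))

def gp_predict(X, y, X_star, sigma2, tau=1.0):
    """Posterior predictive mean/covariance of the noisy test outputs y_star,
    obtained by conditioning the joint Gaussian over [y; y_star] on y."""
    K = k_se(X, X, tau)
    K_s = k_se(X, X_star, tau)
    K_ss = k_se(X_star, X_star, tau)
    C = K + sigma2 * np.eye(len(X))            # Cov(y) = K(X, X) + sigma^2 I
    mu_star = K_s.T @ np.linalg.solve(C, y)
    cov_star = K_ss + sigma2 * np.eye(len(X_star)) - K_s.T @ np.linalg.solve(C, K_s)
    return mu_star, cov_star
```

Consistent with Figure 3, the predictive variance is small near training inputs (where K(X, X_*) is large) and grows back toward the prior variance far away from them.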
[Figure 3 plots: Gaussian process regression, 95% confidence region, for three training-set sizes]
Figure 3: Gaussian process regression using a zero-mean Gaussian process prior with kSE (·, ·)
covariance function (where τ = 0.1), with noise level σ = 1, and (a) m = 10, (b) m = 20, and
(c) m = 40 training examples. The blue line denotes the mean of the posterior predictive
distribution, and the green shaded region denotes the 95% confidence region based on the
model’s variance estimates. As the number of training examples increases, the size of the
confidence region shrinks to reflect the diminishing uncertainty in the model estimates. Note
also that in panel (a), the 95% confidence region shrinks near training points but is much
larger far away from training points, as one would expect.
5 Summary
We close our discussion of Gaussian processes by pointing out some reasons why Gaussian
processes are an attractive model for use in regression problems and in some cases may be
preferable to alternative models (such as linear and locally-weighted linear regression):
11. Interestingly, it turns out that Bayesian linear regression, when “kernelized” in the proper way, is
exactly equivalent to Gaussian process regression! But the derivation of the posterior predictive
distribution is far more complicated for Bayesian linear regression, and the effort needed to kernelize the
algorithm is even greater. The Gaussian process perspective is certainly much easier!
1. As Bayesian methods, Gaussian process models allow one to quantify uncertainty in
predictions resulting not just from intrinsic noise in the problem but also the errors
in the parameter estimation procedure. Furthermore, many methods for model selec-
tion and hyperparameter selection in Bayesian methods are immediately applicable to
Gaussian processes (though we did not address any of these advanced topics here).
3. Gaussian process regression models provide a natural way to introduce kernels into a
regression modeling framework. By careful choice of kernels, Gaussian process regres-
sion models can sometimes take advantage of structure in the data (though, we also
did not examine this issue here).
References
[1] Carl E. Rasmussen and Christopher K. I. Williams. Gaussian Processes for Machine
Learning. MIT Press, 2006. Online: http://www.gaussianprocess.org/gpml/
11
Appendix A.1
In this example, we show how the normalization property for multivariate Gaussians can be
used to compute rather intimidating multidimensional integrals without performing any real
calculus! Suppose you wanted to compute the following multidimensional integral,
    I(A, b, c) = \int_x \exp\left( -\frac{1}{2} x^T A x - x^T b - c \right) dx,

for some A ∈ S^m_{++}, b ∈ R^m, and c ∈ R. Although one could conceivably perform the
multidimensional integration directly (good luck!), a much simpler line of reasoning is based
on a mathematical trick known as “completion-of-squares.” In particular,

    I(A, b, c) = \exp(-c) \cdot \int_x \exp\left( -\frac{1}{2} x^T A x - x^T A A^{-1} b \right) dx

               = \exp(-c) \cdot \int_x \exp\left( -\frac{1}{2} \left[ (x + A^{-1} b)^T A (x + A^{-1} b) - b^T A^{-1} b \right] \right) dx

               = \exp\left( -c + \frac{1}{2} b^T A^{-1} b \right) \cdot \int_x \exp\left( -\frac{1}{2} (x + A^{-1} b)^T A (x + A^{-1} b) \right) dx.

Defining \mu = -A^{-1} b and \Sigma = A^{-1}, it follows that I(A, b, c) is equal to

    \frac{(2\pi)^{m/2} |\Sigma|^{1/2}}{\exp\left( c - \frac{1}{2} b^T A^{-1} b \right)} \cdot \left[ \frac{1}{(2\pi)^{m/2} |\Sigma|^{1/2}} \int_x \exp\left( -\frac{1}{2} (x - \mu)^T \Sigma^{-1} (x - \mu) \right) dx \right].

However, the term in brackets is identical in form to the integral of a multivariate Gaussian!
Since we know that a Gaussian density normalizes, it follows that the term in brackets is
equal to 1. Therefore,

    I(A, b, c) = \frac{(2\pi)^{m/2} |A^{-1}|^{1/2}}{\exp\left( c - \frac{1}{2} b^T A^{-1} b \right)}.
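As a sanity check (mine, not part of the notes), the one-dimensional instance of this identity, where m = 1 and A reduces to a scalar a > 0 so that I(a, b, c) = (2π/a)^{1/2} exp(−c + b²/(2a)), can be verified numerically:

```python
import numpy as np

# 1-D instance of the Gaussian-integral identity:
#   I(a, b, c) = ∫ exp(-a x^2 / 2 - b x - c) dx = sqrt(2*pi/a) * exp(-c + b^2/(2a))
a, b, c = 2.0, 0.7, 0.3

# Simple Riemann sum; the integrand is negligible outside [-20, 20]
xs = np.linspace(-20.0, 20.0, 200001)
dx = xs[1] - xs[0]
numeric = np.sum(np.exp(-0.5 * a * xs ** 2 - b * xs - c)) * dx

closed_form = np.sqrt(2.0 * np.pi / a) * np.exp(-c + b ** 2 / (2.0 * a))
```

No calculus required at evaluation time either: the closed form is just arithmetic, which is the point of the normalization trick.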
Appendix A.2
We derive the form of the distribution of xA given xB ; the other result follows immediately
by symmetry. Note that
    p(x_A \mid x_B) = \frac{1}{\int_{x_A} p(x_A, x_B; \mu, \Sigma) \, dx_A} \cdot \frac{1}{(2\pi)^{n/2} |\Sigma|^{1/2}} \exp\left( -\frac{1}{2} (x - \mu)^T \Sigma^{-1} (x - \mu) \right)

    = \frac{1}{Z_1} \exp\left\{ -\frac{1}{2} \left( \begin{bmatrix} x_A \\ x_B \end{bmatrix} - \begin{bmatrix} \mu_A \\ \mu_B \end{bmatrix} \right)^T \begin{bmatrix} V_{AA} & V_{AB} \\ V_{BA} & V_{BB} \end{bmatrix} \left( \begin{bmatrix} x_A \\ x_B \end{bmatrix} - \begin{bmatrix} \mu_A \\ \mu_B \end{bmatrix} \right) \right\}

where Z_1 is a proportionality constant that does not depend on x_A, and where the blocks V_{AA}, V_{AB}, V_{BA}, V_{BB} partition \Sigma^{-1} conformably with (x_A, x_B).
To simplify this expression, observe that

    \left( \begin{bmatrix} x_A \\ x_B \end{bmatrix} - \begin{bmatrix} \mu_A \\ \mu_B \end{bmatrix} \right)^T \begin{bmatrix} V_{AA} & V_{AB} \\ V_{BA} & V_{BB} \end{bmatrix} \left( \begin{bmatrix} x_A \\ x_B \end{bmatrix} - \begin{bmatrix} \mu_A \\ \mu_B \end{bmatrix} \right)

    = (x_A - \mu_A)^T V_{AA} (x_A - \mu_A) + (x_A - \mu_A)^T V_{AB} (x_B - \mu_B)
      + (x_B - \mu_B)^T V_{BA} (x_A - \mu_A) + (x_B - \mu_B)^T V_{BB} (x_B - \mu_B).
Retaining only terms dependent on x_A (and using the fact that V_{AB}^T = V_{BA}), we have

    p(x_A \mid x_B) = \frac{1}{Z_2} \exp\left( -\frac{1}{2} \left[ x_A^T V_{AA} x_A - 2 x_A^T V_{AA} \mu_A + 2 x_A^T V_{AB} (x_B - \mu_B) \right] \right)
where Z2 is a new proportionality constant which again does not depend on xA . Finally,
using the “completion-of-squares” argument (see Appendix A.1), we have

    p(x_A \mid x_B) = \frac{1}{Z_3} \exp\left( -\frac{1}{2} (x_A - \mu')^T V_{AA} (x_A - \mu') \right)

where Z_3 is again a new proportionality constant not depending on x_A, and where
µ' = µ_A − V_{AA}^{-1} V_{AB} (x_B − µ_B). This last statement shows that the distribution of x_A, conditioned on
xB , again has the form of a multivariate Gaussian. In fact, from the normalization property,
it follows immediately that
    x_A \mid x_B \sim \mathcal{N}\left( \mu_A - V_{AA}^{-1} V_{AB} (x_B - \mu_B), \; V_{AA}^{-1} \right).
follows from standard formulas for the inverse of a partitioned matrix. Substituting the
relevant blocks into the previous expression gives the desired result.
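As a numerical sanity check (mine, not in the notes), the V-based conditional above can be compared against the familiar Schur-complement form obtained from the partitioned-inverse formulas the text alludes to:

```python
import numpy as np

# Check  x_A | x_B ~ N(mu_A - V_AA^{-1} V_AB (x_B - mu_B), V_AA^{-1})
# against x_A | x_B ~ N(mu_A + S_AB S_BB^{-1}(x_B - mu_B),
#                       S_AA - S_AB S_BB^{-1} S_BA),  where V = S^{-1}.
mu = np.array([1.0, -1.0, 0.5])
S = np.array([[2.0, 0.5, 0.3],
              [0.5, 1.5, 0.2],
              [0.3, 0.2, 1.0]])   # symmetric positive definite covariance
r = 1                             # x_A = x_1, x_B = (x_2, x_3)
x_B = np.array([0.2, -0.4])

V = np.linalg.inv(S)
V_AA, V_AB = V[:r, :r], V[:r, r:]
mean_V = mu[:r] - np.linalg.solve(V_AA, V_AB @ (x_B - mu[r:]))
cov_V = np.linalg.inv(V_AA)

S_AA, S_AB, S_BB = S[:r, :r], S[:r, r:], S[r:, r:]
mean_S = mu[:r] + S_AB @ np.linalg.solve(S_BB, x_B - mu[r:])
cov_S = S_AA - S_AB @ np.linalg.solve(S_BB, S[r:, :r])
```

Both routes produce the same conditional mean and covariance, as the partitioned-inverse identities guarantee.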
CS229 Problem Set #1 1
    \nabla_\theta \ell(\theta) = X^T z - \lambda\theta

where z ∈ R^m is defined by

    z_i = w^{(i)} (y^{(i)} - h_\theta(x^{(i)}))

and the Hessian is given by

    H = X^T D X - \lambda I

where D ∈ R^{m×m} is a diagonal matrix with
For the sake of this problem you can just use the above formulas, but you should try to
derive these results for yourself as well.
Given a query point x, we compute the weights
Much like the locally weighted linear regression that was discussed in class, this weighting
scheme gives more weight to the “nearby” points when predicting the class of a new example.
(a) Implement the Newton-Raphson algorithm for optimizing ℓ(θ) for a new query point
x, and use this to predict the class of x.
The q2/ directory contains data and code for this problem. You should implement
the y = lwlr(X_train, y_train, x, tau) function in the lwlr.m file. This function
takes as input the training set (the X_train and y_train matrices, in the form
described in the class notes), a new query point x and the weight bandwidth tau.
Given this input the function should 1) compute weights w^(i) for each training example,
using the formula above, 2) maximize ℓ(θ) using Newton’s method, and finally 3)
output y = 1{h_θ(x) > 0.5} as the prediction.
We provide two additional functions that might help. The [X_train, y_train] =
load_data; function will load the matrices from files in the data/ folder. The function
plot_lwlr(X_train, y_train, tau, resolution) will plot the resulting classifier
(assuming you have properly implemented lwlr.m). This function evaluates the
locally weighted logistic regression classifier over a large grid of points and plots the
resulting prediction as blue (predicting y = 0) or red (predicting y = 1). Depending
on how fast your lwlr function is, creating the plot might take some time, so we
recommend debugging your code with resolution = 50; and later increasing it to at
least 200 to get a better idea of the decision boundary.
(b) Evaluate the system with a variety of different bandwidth parameters τ. In particular,
try τ = 0.01, 0.05, 0.1, 0.5, 1.0, 5.0. How does the classification boundary change when
varying this parameter? Can you predict what the decision boundary of ordinary
(unweighted) logistic regression would look like?
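The assignment asks for a MATLAB lwlr.m; the same steps can be sketched in Python for reference (this is my own NumPy rendering of the three steps above, with the standard weighting w^(i) = exp(−||x − x^(i)||²/(2τ²)) and the Newton update θ := θ − H⁻¹∇_θℓ(θ); the regularization constant λ and iteration count are illustrative assumptions):

```python
import numpy as np

def lwlr(X_train, y_train, x, tau, lam=1e-4, n_iter=30):
    """Locally weighted logistic regression: predict the label of query x."""
    m, n = X_train.shape
    # 1) weights for each training example, centered on the query point
    w = np.exp(-np.sum((X_train - x) ** 2, axis=1) / (2.0 * tau ** 2))
    # 2) maximize the weighted, regularized log-likelihood by Newton's method
    theta = np.zeros(n)
    for _ in range(n_iter):
        h = 1.0 / (1.0 + np.exp(-X_train @ theta))
        z = w * (y_train - h)
        grad = X_train.T @ z - lam * theta          # gradient from the handout
        D = np.diag(-w * h * (1.0 - h))             # diagonal D in H = X^T D X - lam I
        H = X_train.T @ D @ X_train - lam * np.eye(n)
        theta = theta - np.linalg.solve(H, grad)
    # 3) output y = 1{h_theta(x) > 0.5}
    return int(1.0 / (1.0 + np.exp(-x @ theta)) > 0.5)
```

Since the weights depend on the query point, a fresh optimization is solved for every prediction, which is why the plotting helper can be slow at high resolution.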
Thus for each training example, y (i) is vector-valued, with p entries. We wish to use a linear
model to predict the outputs, as in least squares, by specifying the parameter matrix Θ in
y = ΘT x,
where Θ ∈ Rn×p .
Write J(Θ) in matrix-vector notation (i.e., without using any summations). [Hint:
Start with the m × n design matrix

    X = \begin{bmatrix} - (x^{(1)})^T - \\ - (x^{(2)})^T - \\ \vdots \\ - (x^{(m)})^T - \end{bmatrix}

and the m × p target matrix

    Y = \begin{bmatrix} - (y^{(1)})^T - \\ - (y^{(2)})^T - \\ \vdots \\ - (y^{(m)})^T - \end{bmatrix}

and then work out how to express J(Θ) in terms of these matrices.]
(b) Find the closed form solution for Θ which minimizes J(Θ). This is the equivalent to
the normal equations for the multivariate case.
(c) Suppose instead of considering the multivariate vectors y^(i) all at once, we instead
compute each variable y_j^(i) separately for each j = 1, . . . , p. In this case, we have p
individual linear models, of the form

    y_j^{(i)} = \theta_j^T x^{(i)}, \qquad j = 1, \ldots, p.

(So here, each θ_j ∈ R^n.) How do the parameters from these p independent least
squares problems compare to the multivariate solution?
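A quick numerical experiment (my own sketch, not part of the problem set) can be used to check a derived answer to part (c), by comparing the joint normal-equations solution against solving the p output columns independently:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, p = 30, 4, 3
X = rng.normal(size=(m, n))   # design matrix, rows (x^(i))^T
Y = rng.normal(size=(m, p))   # target matrix, rows (y^(i))^T

# Joint solution via the multivariate normal equations: (X^T X) Theta = X^T Y
Theta_joint = np.linalg.solve(X.T @ X, X.T @ Y)

# p separate univariate least-squares problems, one per output column
Theta_cols = np.column_stack(
    [np.linalg.lstsq(X, Y[:, j], rcond=None)[0] for j in range(p)]
)
```

Whatever relationship your derivation predicts between the two solutions, this experiment lets you confirm it on random data before writing up the argument.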
4. Naive Bayes
In this problem, we look at maximum likelihood parameter estimation using the naive
Bayes assumption. Here, the input features xj , j = 1, . . . , n to our model are discrete,
binary-valued variables, so x_j ∈ {0, 1}. We let x = [x_1 x_2 · · · x_n]^T denote the input vector.
For each training example, our output target is a single binary value y ∈ {0, 1}. Our
model is then parameterized by φ_{j|y=0} = p(x_j = 1|y = 0), φ_{j|y=1} = p(x_j = 1|y = 1), and
φ_y = p(y = 1). We model the joint distribution of (x, y) according to

(a) Find the joint likelihood function ℓ(ϕ) = log \prod_{i=1}^{m} p(x^{(i)}, y^{(i)}; ϕ) in terms of the
model parameters given above. Here, ϕ represents the entire set of parameters
{φ_y, φ_{j|y=0}, φ_{j|y=1}, j = 1, . . . , n}.
(b) Show that the parameters which maximize the likelihood function are the same as
(c) Consider making a prediction on some new data point x using the most likely class
estimate generated by the naive Bayes algorithm. Show that the hypothesis returned
by naive Bayes is a linear classifier—i.e., if p(y = 0|x) and p(y = 1|x) are the class
probabilities returned by naive Bayes, show that there exists some θ ∈ Rn+1 such
that
    p(y = 1|x) \geq p(y = 0|x) \quad \text{if and only if} \quad \theta^T \begin{bmatrix} 1 \\ x \end{bmatrix} \geq 0.
(Assume θ0 is an intercept term.)
    p(y; \phi) = (1 - \phi)^{y-1} \phi, \qquad y = 1, 2, 3, \ldots.
Show that the geometric distribution is in the exponential family, and give b(y), η,
T (y), and a(η).
(b) Consider performing regression using a GLM model with a geometric response vari-
able. What is the canonical response function for the family? You may use the fact
that the mean of a geometric distribution is given by 1/φ.
(c) For a training set {(x(i) , y (i) ); i = 1, . . . , m}, let the log-likelihood of an example
be log p(y (i) |x(i) ; θ). By taking the derivative of the log-likelihood with respect to
θ_j, derive the stochastic gradient ascent rule for learning using a GLM model with
geometric responses y and the canonical response function.
CS229 Problem Set #2 1
we can also add a term that penalizes large weights in θ. In ridge regression, our least
squares cost is regularized by adding a term λ‖θ‖², where λ > 0 is a fixed (known) constant
(regularization will be discussed at greater length in an upcoming course lecture). The ridge
regression cost function is then

    J(\theta) = \frac{1}{2} \sum_{i=1}^{m} (\theta^T x^{(i)} - y^{(i)})^2 + \frac{\lambda}{2} \|\theta\|^2.
(a) Use the vector notation described in class to find a closed-form expression for the
value of θ which minimizes the ridge regression cost function.
(b) Suppose that we want to use kernels to implicitly represent our feature vectors in a
high-dimensional (possibly infinite dimensional) space. Using a feature mapping φ,
the ridge regression cost function becomes

    J(\theta) = \frac{1}{2} \sum_{i=1}^{m} (\theta^T \phi(x^{(i)}) - y^{(i)})^2 + \frac{\lambda}{2} \|\theta\|^2.

Making a prediction on a new input x_new would now be done by computing θ^T φ(x_new).
Show how we can use the “kernel trick” to obtain a closed form for the prediction
on the new input without ever explicitly computing φ(x_new). You may assume that
the parameter vector θ can be expressed as a linear combination of the input feature
vectors; i.e., θ = \sum_{i=1}^{m} \alpha_i \phi(x^{(i)}) for some set of parameters α_i.
[Hint: You may find the following identity useful:
(λI + BA)−1 B = B(λI + AB)−1 .
If you want, you can try to prove this as well, though this is not required for the
problem.]
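The hint identity is easy to check numerically before relying on it in a derivation (a small sketch of my own, with arbitrary shapes and a random seed chosen for illustration):

```python
import numpy as np

# Numerical check of the hint identity: (lam*I + B A)^{-1} B = B (lam*I + A B)^{-1}
rng = np.random.default_rng(0)
lam = 0.5
A = rng.normal(size=(3, 5))
B = rng.normal(size=(5, 3))

# Left side: solve the (5 x 5) system; right side: invert the smaller (3 x 3) matrix
lhs = np.linalg.solve(lam * np.eye(5) + B @ A, B)
rhs = B @ np.linalg.inv(lam * np.eye(3) + A @ B)
```

The identity follows from B(λI + AB) = (λI + BA)B, and it is exactly what lets the kernelized solution work with the m × m Gram matrix instead of the (possibly infinite-dimensional) feature space.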
2. ℓ2 norm soft margin SVMs
In class, we saw that if our data is not linearly separable, then we need to modify our
support vector machine algorithm by introducing an error margin that must be minimized.
Specifically, the formulation we have looked at is known as the ℓ1 norm soft margin SVM.
In this problem we will consider an alternative method, known as the ℓ2 norm soft margin
SVM. This new algorithm is given by the following optimization problem (notice that the
slack penalties are now squared):
    \min_{w, b, \xi} \quad \frac{1}{2} \|w\|^2 + \frac{C}{2} \sum_{i=1}^{m} \xi_i^2

    \text{s.t.} \quad y^{(i)} (w^T x^{(i)} + b) \geq 1 - \xi_i, \quad i = 1, \ldots, m
(a) Notice that we have dropped the ξi ≥ 0 constraint in the ℓ2 problem. Show that these
non-negativity constraints can be removed. That is, show that the optimal value of
the objective will be the same whether or not these constraints are present.
(b) What is the Lagrangian of the ℓ2 soft margin SVM optimization problem?
(c) Minimize the Lagrangian with respect to w, b, and ξ by taking the following gradients:
\nabla_w \mathcal{L}, \frac{\partial \mathcal{L}}{\partial b}, and \nabla_\xi \mathcal{L}, and then setting them equal to 0. Here, \xi = [\xi_1, \xi_2, \ldots, \xi_m]^T.
(d) What is the dual of the ℓ2 soft margin SVM optimization problem?
(a) Recall from class that the decision function learned by the support vector machine
can be written as
    f(x) = \sum_{i=1}^{m} \alpha_i y^{(i)} K(x^{(i)}, x) + b.
Assume that the training data {(x^(1), y^(1)), . . . , (x^(m), y^(m))} consists of points which
are separated by at least a distance of ε; that is, ||x^(j) − x^(i)|| ≥ ε for any i ≠ j.
Find values for the set of parameters {α1 , . . . , αm , b} and Gaussian kernel width τ
such that x(i) is correctly classified, for all i = 1, . . . , m. [Hint: Let αi = 1 for all i
and b = 0. Now notice that for y ∈ {−1, +1} the prediction on x(i) will be correct if
|f (x(i) ) − y (i) | < 1, so find a value of τ that satisfies this inequality for all i.]
(b) Suppose we run a SVM with slack variables using the parameter τ you found in part
(a). Will the resulting classifier necessarily obtain zero training error? Why or why
not? A short explanation (without proof) will suffice.
(c) Suppose we run the SMO algorithm to train an SVM with slack variables, under
the conditions stated above, using the value of τ you picked in the previous part,
and using some arbitrary value of C (which you do not know beforehand). Will this
necessarily result in a classifier that achieves zero training error? Why or why not?
Again, a short explanation is sufficient.
The spam classification dataset in the q4/ directory was provided courtesy of Christian
Shelton (cshelton@cs.ucr.edu). Each example corresponds to a particular email, and each
feature corresponds to a particular word. For privacy reasons we have removed the actual
words themselves from the data set, and instead label the features generically as f1, f2, etc.
However, the data set is from a real spam classification task, so the results demonstrate the
performance of these algorithms on a real-world problem. The q4/ directory actually
contains several different training files, named spam_train_50.arff, spam_train_100.arff,
etc (the “.arff” format is the default format used by WEKA), each containing the corresponding
number of training examples. There is also a single test set spam_test.arff, which is a
held-out set used for evaluating the classifier’s performance.
5. Uniform convergence
In class we proved that for any finite set of hypotheses H = {h1 , . . . , hk }, if we pick the
hypothesis ĥ that minimizes the training error on a set of m examples, then with probability
at least (1 − δ),

    \varepsilon(\hat h) \leq \min_i \varepsilon(h_i) + 2 \sqrt{ \frac{1}{2m} \log \frac{2k}{\delta} },
where ε(hi ) is the generalization error of hypothesis hi . Now consider a special case (often
called the realizable case) where we know, a priori, that there is some hypothesis in our
class H that achieves zero error on the distribution from which the data is drawn. Then
we could obviously just use the above bound with $\min_i \varepsilon(h_i) = 0$; however, we can prove a
better bound than this.
(a) Consider a learning algorithm which, after looking at m training examples, chooses
some hypothesis ĥ ∈ H that makes zero mistakes on this training data. (By our
assumption, there is at least one such hypothesis, possibly more.) Show that with
probability 1 − δ,

$$\varepsilon(\hat h) \;\le\; \frac{1}{m}\log\frac{k}{\delta}.$$
Notice that since we do not have a square root here, this bound is much tighter. [Hint:
Consider the probability that a hypothesis with generalization error greater than γ
makes no mistakes on the training data. Instead of the Hoeffding bound, you might
also find the following inequality useful: (1 − γ)m ≤ e−γm .]
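To see how much tighter the realizable-case bound is, one can simply evaluate both bounds numerically; the functions below are an illustrative sketch whose expressions are taken directly from the statements above:

```python
import math

def class_bound(k, m, delta):
    # excess-error term of the agnostic bound from class:
    # 2 * sqrt( (1/(2m)) * log(2k/delta) )
    return 2.0 * math.sqrt(math.log(2.0 * k / delta) / (2.0 * m))

def realizable_bound(k, m, delta):
    # the tighter realizable-case bound from part (a): (1/m) * log(k/delta)
    return math.log(k / delta) / m
```

Since the realizable-case bound shrinks like 1/m rather than 1/√m, it is smaller for any moderately large m; the hint's inequality (1 − γ)^m ≤ e^{−γm} can be checked numerically the same way.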
CS229 Problem Set #2 4
(b) Rewrite the above bound as a sample complexity bound, i.e., in the form: for fixed
δ and γ, for ε(ĥ) ≤ γ to hold with probability at least (1 − δ), it suffices that m ≥
f (k, γ, δ) (i.e., f (·) is some function of k, γ, and δ).
CS229 Problem Set #3 1
For this question you will prove the following bound. Let any δ > 0 be fixed. Then with
probability at least 1 − δ, we have that

$$\varepsilon(\hat h) \;\le\; \min_{i=1,\dots,k}\left(\varepsilon(h_i^\star) + \sqrt{\frac{2}{(1-\beta)m}\log\frac{4|H_i|}{\delta}}\right) + \sqrt{\frac{2}{2\beta m}\log\frac{4k}{\delta}}.$$
(c) Let $j = \arg\min_i \varepsilon(\hat h_i)$. We know from class that for $H_j$, with probability 1 − δ/2,

$$\left|\varepsilon(h_j) - \hat\varepsilon_{S_{\text{train}}}(h_j)\right| \;\le\; \sqrt{\frac{2}{(1-\beta)m}\log\frac{4|H_j|}{\delta}}, \qquad \forall\, h_j \in H_j.$$
Use this to prove the final bound given at the beginning of this problem.
2. VC Dimension
Let the input domain of a learning problem be X = R. Give the VC dimension for each
of the following classes of hypotheses. In each case, if you claim that the VC dimension is
d, then you need to show that the hypothesis class can shatter d points, and explain why
there are no d + 1 points it can shatter.
There has been a great deal of recent interest in ℓ1 regularization, which, as we will see,
has the benefit of outputting sparse solutions (i.e., many components of the resulting θ are
equal to zero).
The ℓ1 regularized least squares problem is more difficult than the unregularized or ℓ2
regularized case, because the ℓ1 term is not differentiable. However, there have been many
efficient algorithms developed for this problem that work very well in practice. One very
straightforward approach, which we have already seen in class, is the coordinate descent
method. In this problem you’ll derive and implement a coordinate descent algorithm for
ℓ1 regularized least squares, and apply it to test data.
(a) Here we’ll derive the coordinate descent update for a given θi . Given the X and
~y matrices, as defined in the class notes, as well as a parameter vector θ, how can we
adjust θi so as to minimize the optimization objective? To answer this question, we’ll
rewrite the optimization objective above as
$$J(\theta) = \frac{1}{2}\|X\theta - \vec y\|_2^2 + \lambda\|\theta\|_1 = \frac{1}{2}\|X\bar\theta + X_i\theta_i - \vec y\|_2^2 + \lambda\|\bar\theta\|_1 + \lambda|\theta_i|$$
where Xi ∈ Rm denotes the ith column of X, and θ̄ is equal to θ except with θ̄i = 0;
all we have done in rewriting the above expression is to make the θi term explicit in
the objective. However, this still contains the |θi | term, which is non-differentiable
and therefore difficult to optimize. To get around this, we make the observation that
θi must be either non-negative or non-positive. But if we knew the sign of
θi, then |θi| becomes just a linear term. That is, we can rewrite the objective as
$$J(\theta) = \frac{1}{2}\|X\bar\theta + X_i\theta_i - \vec y\|_2^2 + \lambda\|\bar\theta\|_1 + \lambda s_i \theta_i$$
where si denotes the sign of θi , si ∈ {−1, 1}. In order to update θi , we can just
compute the optimal θi for both possible values of si (making sure that we restrict
the optimal θi to obey the sign restriction we used to solve for it), then look to see
which achieves the best objective value.
For each of the possible values of si , compute the resulting optimal value of θi . [Hint:
to do this, you can fix si in the above equation, then differentiate with respect to θi
to find the best value. Finally, clip θi so that it lies in the allowable range — i.e., for
si = 1, you need to clip θi such that θi ≥ 0.]
(b) Implement the above coordinate descent algorithm using the updates you found in
the previous part. We have provided a skeleton theta = l1ls(X,y,lambda) function
in the q3/ directory. To implement the coordinate descent algorithm, you should
repeatedly iterate over all the θi ’s, adjusting each as you found above. You can
terminate the process when θ changes by less than 10^{-5} after all n of the updates.
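As a sanity check on the update from part (a): for a fixed sign $s_i$, setting the derivative to zero gives $\theta_i = (X_i^T(\vec y - X\bar\theta) - \lambda s_i)/(X_i^T X_i)$, clipped to the assumed sign. The sketch below is a Python/NumPy version of this coordinate descent loop (the course skeleton l1ls is MATLAB; the names here are illustrative):

```python
import numpy as np

def l1ls(X, y, lam, tol=1e-5, max_iter=1000):
    """Coordinate descent for (1/2)||X theta - y||_2^2 + lam * ||theta||_1.

    For each coordinate we solve for the optimal theta_i under each
    assumed sign s_i in {+1, -1}, clip to that sign, and keep whichever
    candidate gives the lower objective. Illustrative sketch.
    """
    m, n = X.shape
    theta = np.zeros(n)

    def objective(t):
        return 0.5 * np.sum((X @ t - y) ** 2) + lam * np.sum(np.abs(t))

    for _ in range(max_iter):
        old = theta.copy()
        for i in range(n):
            theta_bar = theta.copy()
            theta_bar[i] = 0.0
            r = y - X @ theta_bar            # residual with theta_i removed
            xi = X[:, i]
            denom = xi @ xi
            # closed-form optimum under each sign, clipped to that sign
            plus = max((xi @ r - lam) / denom, 0.0)    # s_i = +1 => theta_i >= 0
            minus = min((xi @ r + lam) / denom, 0.0)   # s_i = -1 => theta_i <= 0
            t_plus, t_minus = theta.copy(), theta.copy()
            t_plus[i], t_minus[i] = plus, minus
            theta = t_plus if objective(t_plus) <= objective(t_minus) else t_minus
        if np.max(np.abs(theta - old)) < tol:
            break  # no coordinate moved by more than tol in a full sweep
    return theta
```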
(c) Test your implementation on the data provided in the q3/ directory. The [X, y,
theta true] = load data; function will load all the data — the data was generated
by y = X*theta true + 0.05*randn(20,1), but theta true is sparse, so that very
few of the columns of X actually contain relevant features. Run your l1ls.m imple-
mentation on this data set, ranging λ from 0.001 to 10. Comment briefly on how this
algorithm might be used for feature selection.
4. K-Means Clustering
In this problem you’ll implement the K-means clustering algorithm on a synthetic data
set. There is code and data for this problem in the q4/ directory. Run load ’X.dat’;
to load the data file for clustering. Implement the [clusters, centers] = k means(X,
k) function in this directory. As input, this function takes the m × n data matrix X and
the number of clusters k. It should output an m-element vector, clusters, which indicates
which of the clusters each data point belongs to, and a k × n matrix, centers, which
contains the centroids of each cluster. Run the algorithm on the data provided, with k = 3
and k = 4. Plot the cluster assignments and centroids for each iteration of the algorithm
using the draw clusters(X, clusters, centroids) function. For each k, be sure to run
the algorithm several times using different initial centroids.
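For reference, the algorithm can be sketched in a few lines; this Python/NumPy version (the course skeleton is MATLAB, so the names and 0-based indexing here are illustrative) alternates the assignment and centroid-update steps until the assignments stop changing:

```python
import numpy as np

def k_means(X, k, max_iter=100, rng=None):
    """Plain K-means on an m x n data matrix X.

    Returns (clusters, centers): clusters is an m-vector of cluster
    indices in {0, ..., k-1}, and centers is a k x n matrix of centroids.
    Illustrative sketch.
    """
    rng = np.random.default_rng(rng)
    m, _ = X.shape
    # initialize centroids as k distinct randomly chosen data points
    centers = X[rng.choice(m, size=k, replace=False)].copy()
    clusters = np.full(m, -1)
    for _ in range(max_iter):
        # assignment step: each point goes to its nearest centroid
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        new_clusters = dists.argmin(axis=1)
        if np.array_equal(new_clusters, clusters):
            break  # assignments stable, so the algorithm has converged
        clusters = new_clusters
        # update step: each centroid becomes the mean of its assigned points
        for j in range(k):
            members = X[clusters == j]
            if len(members) > 0:
                centers[j] = members.mean(axis=0)
    return clusters, centers
```

Since K-means only finds a local optimum, rerunning with different initial centroids, as the problem asks, can give different clusterings.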
5. The Generalized EM algorithm
When attempting to run the EM algorithm, it may sometimes be difficult to perform the M
step exactly — recall that we often need to implement numerical optimization to perform
the maximization, which can be costly. Therefore, instead of finding the global maximum
of our lower bound on the log-likelihood, an alternative is to just increase this lower bound
a little bit, by taking one step of gradient ascent, for example. This is commonly known
as the Generalized EM (GEM) algorithm.
Put slightly more formally, recall that the M-step of the standard EM algorithm performs
the maximization
$$\theta := \arg\max_\theta \sum_i \sum_{z^{(i)}} Q_i(z^{(i)}) \log \frac{p(x^{(i)}, z^{(i)}; \theta)}{Q_i(z^{(i)})}.$$
The GEM algorithm, in contrast, performs the following update in the M-step:
$$\theta := \theta + \alpha \nabla_\theta \sum_i \sum_{z^{(i)}} Q_i(z^{(i)}) \log \frac{p(x^{(i)}, z^{(i)}; \theta)}{Q_i(z^{(i)})}$$
where α is a learning rate which we assume is chosen small enough such that we do not
decrease the objective function when taking this gradient step.
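As a concrete, illustrative instance, consider a one-dimensional mixture of two unit-variance Gaussians with equal mixing weights, where only the means are learned. A GEM iteration does the usual E-step and then a single gradient-ascent step on the lower bound; the names and step size below are assumptions for this sketch, not part of the problem:

```python
import numpy as np

def gem_step(x, mu, alpha=0.002):
    """One GEM iteration for a 1-D mixture of two unit-variance Gaussians
    with equal weights; only the means mu = (mu0, mu1) are learned.
    The M-step is a single gradient-ascent step on the EM lower bound
    rather than a full maximization. Illustrative sketch.
    """
    # E-step: responsibilities Q_i(z = j) = p(z = j | x_i; mu)
    logp = -0.5 * (x[:, None] - mu[None, :]) ** 2   # up to shared constants
    p = np.exp(logp)
    r = p / p.sum(axis=1, keepdims=True)            # shape (m, 2)
    # generalized M-step: gradient of the lower bound wrt mu_j is
    # sum_i r_ij * (x_i - mu_j) for unit variance
    grad = (r * (x[:, None] - mu[None, :])).sum(axis=0)
    return mu + alpha * grad

def log_likelihood(x, mu):
    """Incomplete-data log-likelihood sum_i log sum_z p(x_i, z; mu)."""
    p = 0.5 * np.exp(-0.5 * (x[:, None] - mu[None, :]) ** 2) / np.sqrt(2 * np.pi)
    return float(np.log(p.sum(axis=1)).sum())
```

Running gem_step repeatedly, the log-likelihood never decreases, which is exactly the property part (a) asks you to prove in general.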
(a) Prove that the GEM algorithm described above converges. To do this, you should
show that the likelihood is monotonically improving, as it does for the EM algo-
rithm — i.e., show that $\ell(\theta^{(t+1)}) \ge \ell(\theta^{(t)})$.
(b) Instead of using the EM algorithm at all, suppose we just want to apply gradient ascent
to maximize the log-likelihood directly. In other words, we are trying to maximize
the (non-convex) function
$$\ell(\theta) = \sum_i \log \sum_{z^{(i)}} p(x^{(i)}, z^{(i)}; \theta)$$
Show that this procedure in fact gives the same update as the GEM algorithm de-
scribed above.
CS229 Problem Set #4 1
However, EM can also be applied to the supervised learning setting, and in this problem we
discuss a “mixture of linear regressors” model; this is an instance of what is often called the
Hierarchical Mixture of Experts model. We want to represent p(y|x), x ∈ Rn and y ∈ R,
and we do so by again introducing a discrete latent random variable
$$p(y|x) = \sum_z p(y, z|x) = \sum_z p(y|x, z)\,p(z|x).$$
For simplicity we’ll assume that z is binary valued, that p(y|x, z) is a Gaussian density,
and that p(z|x) is given by a logistic regression model. More formally
$$p(z|x; \phi) = g(\phi^T x)^z (1 - g(\phi^T x))^{1-z}$$

$$p(y|x, z = i; \theta_i) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left(-\frac{(y - \theta_i^T x)^2}{2\sigma^2}\right), \qquad i = 0, 1$$
where σ is a known parameter and φ, θ0 , θ1 ∈ Rn are parameters of the model (here we
use the subscript on θ to denote two different parameter vectors, not to index a particular
entry in these vectors).
Intuitively, the process behind the model can be thought of as follows. Given a data point x,
we first determine whether the data point belongs to one of two hidden classes z = 0 or
z = 1, using a logistic regression model. We then determine y as a linear function of x
(different linear functions for different values of z) plus Gaussian noise, as in the standard
linear regression model. For example, the following data set could be well-represented by
the model, but not by standard linear regression.
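To make the generative story concrete, the following sketch samples (y, z) from the model just described; the function and parameter names are illustrative, not part of the problem's code:

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def sample_mixture_of_regressors(X, phi, theta0, theta1, sigma, rng=None):
    """Sample y from the model above: z|x ~ Bernoulli(g(phi^T x)), then
    y = theta_z^T x + N(0, sigma^2) noise. Illustrative sketch."""
    rng = np.random.default_rng(rng)
    z = rng.random(X.shape[0]) < sigmoid(X @ phi)   # latent class per point
    theta = np.where(z[:, None], theta1, theta0)    # pick the regressor by z
    y = np.sum(theta * X, axis=1) + sigma * rng.normal(size=X.shape[0])
    return y, z.astype(int)
```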
(a) Suppose x, y, and z are all observed, so that we obtain a training set
{(x(1) , y (1) , z (1) ), . . . , (x(m) , y (m) , z (m) )}. Write the log-likelihood of the parameters,
and derive the maximum likelihood estimates for φ, θ0 , and θ1 . Note that because
p(z|x) is a logistic regression model, there will not exist a closed form estimate of φ.
In this case, derive the gradient and the Hessian of the likelihood with respect to φ;
in practice, these quantities can be used to numerically compute the ML estimate.
(b) Now suppose z is a latent (unobserved) random variable. Write the log-likelihood of
the parameters, and derive an EM algorithm to maximize the log-likelihood. Clearly
specify the E-step and M-step (again, the M-step will require a numerical solution,
so find the appropriate gradients and Hessians).
$$z \sim \mathcal{N}(0, I)$$
$$x|z \sim \mathcal{N}(Uz, \sigma^2 I).$$
(a) Use the rules for manipulating Gaussian distributions to determine the joint distri-
bution over (x, z) and the conditional distribution of z|x. [Hint: for later parts of
this problem, it will help significantly if you simplify your solution for the conditional
distribution using the identity we first mentioned in problem set #1:
$(\lambda I + BA)^{-1}B = B(\lambda I + AB)^{-1}$.]
(b) Using these distributions, derive an EM algorithm for the model. Clearly state the
E-step and the M-step of the algorithm.
(c) As σ² → 0, show that if the EM algorithm converges to a parameter vector U⋆
(and such convergence is guaranteed by the argument presented in class), then U⋆
must be an eigenvector of the sample covariance matrix $\Sigma = \frac{1}{m}\sum_{i=1}^m x^{(i)} x^{(i)T}$ — i.e.,
U⋆ must satisfy

$$\lambda U^\star = \Sigma U^\star.$$
[Hint: When σ² → 0, $\Sigma_{z|x} \to 0$, so the E step only needs to compute the means
$\mu_{z|x}$ and not the variances. Let $w \in \mathbb{R}^m$ be a vector containing all these means,
$w_i = \mu_{z^{(i)}|x^{(i)}}$, and show that the E step and M step can be expressed as

$$w = \frac{XU}{U^T U}, \qquad U = \frac{X^T w}{w^T w}$$
respectively. Finally, show that if U doesn’t change after this update, it must satisfy
the eigenvector equation shown above. ]
For reference, computing the ICA W matrix for the entire set of image patches takes about
5 minutes on a 1.6 GHz laptop using our implementation.
After you’ve learned the U matrix for PCA (the columns of U should contain the principal
components of the data) and the W matrix of ICA, you can plot the basis functions using
the plot ica bases(W); and plot pca bases(U); functions we have provided. Comment
briefly on the difference between the two sets of basis functions.
1 Recall that the first step of performing PCA is to subtract the mean and normalize the variance of the features.
For the image data we’re using, the preprocessing step for the ICA algorithm is slightly different, though the
precise mechanism and justification is not important for the sake of this problem. Those who are curious about
the details should read Bell and Sejnowski’s paper “The ’Independent Components’ of Natural Scenes are Edge
Filters,” which provided the basis for the implementation we use in this problem.
(a) Prove that if V1 (s) ≤ V2 (s) for all s ∈ S, then B(V1 )(s) ≤ B(V2 )(s) for all s ∈ S.
(b) Prove that for any V,

$$\|B^\pi(V) - V^\pi\|_\infty \le \gamma \|V - V^\pi\|_\infty$$

where $\|V\|_\infty = \max_{s \in S} |V(s)|$. Intuitively, this means that applying the Bellman
operator B π to any value function V , brings that value function “closer” to the value
function for π, V π . This also means that applying B π repeatedly (an infinite number
of times)
$$B^\pi(B^\pi(\cdots B^\pi(V)\cdots))$$
will result in the value function V π (a little bit more is needed to make this completely
formal, but we won’t worry about that here).
[Hint: Use the fact that for any $\alpha, x \in \mathbb{R}^n$, if $\sum_i \alpha_i = 1$ and $\alpha_i \ge 0$, then
$\sum_i \alpha_i x_i \le \max_i x_i$.]
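For a finite MDP the fixed-policy Bellman operator is just an affine map, which makes the properties in parts (a) and (b) easy to check numerically. The following sketch (illustrative names, random MDP) is one way to set that up:

```python
import numpy as np

def bellman_pi(V, R, P, gamma):
    """Bellman backup for a fixed policy pi:
    (B^pi V)(s) = R(s) + gamma * sum_{s'} P[s, s'] * V(s'),
    where P is the row-stochastic transition matrix under pi.
    Illustrative sketch.
    """
    return R + gamma * P @ V
```

On a random MDP one can verify that the backup is monotone, that it is a γ-contraction in the max norm, and that $V^\pi$ (obtained by solving the linear system $(I - \gamma P)V = R$) is its fixed point.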
(c) Now suppose that we have some policy π, and use Policy Iteration to choose a new
policy π ′ according to
$$\pi'(s) = \arg\max_{a \in A} \sum_{s' \in S} P_{sa}(s')\,V^\pi(s').$$
Show that this policy will never perform worse than the previous one — i.e., show
that for all s ∈ S, $V^\pi(s) \le V^{\pi'}(s)$.
[Hint: First show that $V^\pi(s) \le B^{\pi'}(V^\pi)(s)$, then use the preceding exercises to
show that $B^{\pi'}(V^\pi)(s) \le V^{\pi'}(s)$.]
(d) Use the preceding exercises to show that policy iteration will eventually converge
(i.e., produce a policy π ′ = π). Furthermore, show that it must converge to the
optimal policy π ⋆ . For the latter part, you may use the property that if some value
function satisfies
$$V(s) = R(s) + \gamma \max_{a \in A} \sum_{s' \in S} P_{sa}(s')\,V(s')$$
then V = V ⋆ .
All states except those at the top of the hill have a constant reward R(s) = −1, while the
goal state at the hilltop has reward R(s) = 0; thus an optimal agent will try to get to the
top of the hill as fast as possible (when the car reaches the top of the hill, the episode is
over, and the car is reset to its initial position). However, when starting at the bottom
of the hill, the car does not have enough power to reach the top by driving forward, so
it must first accelerate backwards, building up enough momentum to reach
the top of the hill. This strategy of moving away from the goal in order to reach the goal
makes the problem difficult for many classical control algorithms.
As discussed in class, Q-learning maintains a table of Q-values, Q(s, a), for each state and
action. These Q-values are useful because, in order to select an action in state s, we only
need to check to see which Q-value is greatest. That is, in state s we take the action
$$\arg\max_{a \in A} Q(s, a).$$
The Q-learning algorithm adjusts its estimates of the Q-values as follows. If an agent is in
state s, takes action a, then ends up in state s′ , Q-learning will update Q(s, a) by
$$Q(s, a) := (1 - \alpha)Q(s, a) + \alpha\left(R(s') + \gamma \max_{a' \in A} Q(s', a')\right).$$
At each time step, your implementation of Q-learning can execute the greedy policy
$\pi(s) = \arg\max_{a \in A} Q(s, a)$.
Implement the [q, steps per episode] = qlearning(episodes) function in the q5/
directory. As input, the function takes the total number of episodes (each episode starts
with the car at the bottom of the hill, and lasts until the car reaches the top), and outputs
a matrix of the Q-values and a vector indicating how many steps it took before the car was
able to reach the top of the hill. You should use the [x, s, absorb] = mountain car(x,
actions(a)) function to simulate one control cycle for the task — the x variable describes
the true (continuous) state of the system, whereas the s variable describes the discrete
index of the state, which you’ll use to build the Q values.
Plot a graph showing the average number of steps before the car reaches the top of the
hill versus the episode number (there is quite a bit of variation in this quantity, so you will
probably want to average these over a large number of episodes, as this will give you a
better idea of how the number of steps before reaching the hilltop is decreasing). You can
also visualize your resulting controller by calling the draw mountain car(q) function.
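For intuition, here is a generic tabular Q-learning sketch on a toy environment; it applies the update above with an ε-greedy policy. It is not the mountain_car interface from q5/ — all names are illustrative:

```python
import numpy as np

def q_learning_episode(Q, step, s0, alpha=0.5, gamma=0.9, eps=0.1,
                       max_steps=1000, rng=None):
    """Run one episode of tabular Q-learning with an epsilon-greedy
    policy (ties broken at random). `step(s, a)` must return
    (s_next, reward, done); Q is an |S| x |A| array updated in place.
    Illustrative sketch, not the q5/ interface.
    """
    rng = np.random.default_rng(rng)
    s = s0
    for _ in range(max_steps):
        if rng.random() < eps:
            a = int(rng.integers(Q.shape[1]))           # explore
        else:
            best = np.flatnonzero(Q[s] == Q[s].max())
            a = int(rng.choice(best))                   # greedy, random tie-break
        s2, r, done = step(s, a)
        # Q(s,a) := (1 - alpha) Q(s,a) + alpha (r + gamma max_a' Q(s',a'))
        target = r if done else r + gamma * np.max(Q[s2])
        Q[s, a] = (1 - alpha) * Q[s, a] + alpha * target
        if done:
            break
        s = s2
    return Q
```

On a simple 5-state chain where moving right eventually reaches a rewarding goal state, repeated episodes drive the greedy policy toward always moving right, mirroring how your mountain-car Q-values should improve with more episodes.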
List of related AI Classes
CS229 covered a broad swath of topics in machine learning, compressed into a single
quarter. Machine learning is a hugely interdisciplinary topic, and there are many
other sub-communities of AI working on related topics, or working on applying machine
learning to different problems.
Stanford has one of the best and broadest sets of AI courses of pretty much any
university. It offers a wide range of classes, covering most of the scope of AI issues. Here
are some classes in which you can learn more about topics related to CS229:
AI Overview
• CS221 (Aut): Artificial Intelligence: Principles and Techniques. Broad overview
of AI and applications, including robotics, vision, NLP, search, Bayesian networks,
and learning. Taught by Professor Andrew Ng.
Robotics
• CS223A (Win): Robotics from the perspective of building the robot and controlling
it; focus on manipulation. Taught by Professor Oussama Khatib (who builds the
big robots in the Robotics Lab).
• CS225A (Spr): A lab course from the same perspective, taught by Professor Khatib.
• CS225B (Aut): A lab course where you get to play around with making mobile
robots navigate in the real world. Taught by Dr. Kurt Konolige (SRI).
• CS277 (Spr): Experimental Haptics. Teaches haptics programming and touch
feedback in virtual reality. Taught by Professor Ken Salisbury, who works on
robot design, haptic devices/teleoperation, robotic surgery, and more.
• CS326A (Latombe): Motion planning. An algorithmic robot motion planning
course, by Professor Jean-Claude Latombe, who (literally) wrote the book on the
topic.
Probabilistic Methods
• CS228 (Win): Probabilistic models in AI. Bayesian networks, hidden Markov mod-
els, and planning under uncertainty. Taught by Professor Daphne Koller, who
works on computational biology, Bayes nets, learning, computational game theory,
and more.
Perception & Understanding
Multi-agent systems
Convex Optimization
AI Project courses