
A Mini-Course on Convex Optimization

with a view toward designing fast algorithms

Nisheeth K. Vishnoi

Preface
These notes are largely based on lectures delivered by the author in the
Fall of 2014. The first three lectures are meant to be an introduction to
the core techniques in convex optimization which are increasingly being
deployed to design fast algorithms for discrete problems. Many thanks
to the scribes: Elisa Celis, Slobodan Mitrovic, Damian Straszak and
Jakub Tarnawski. The material presented in these lectures is inspired
by several excellent sources, such as Boyd and Vandenberghe's Convex
Optimization, Nesterov's Introductory Lectures on Convex Optimization
and Arora et al.'s survey on the multiplicative weights update method.
Thanks to Sushant Sachdeva for many enlightening discussions about
Interior point methods which have influenced the last part of these
notes. Finally, these notes are quite preliminary and will be revised.
Please feel free to email your comments and suggestions.

Nisheeth K. Vishnoi
7 Oct. 2015, EPFL
nisheeth.vishnoi@epfl.ch
Contents

1 Basics, Gradient Descent and Its Variants
1.1 Overview
1.2 Basic Notions
1.3 Gradient Descent
1.4 Convex Functions, Oracles and their Properties
1.5 Proofs of Convergence Rates
1.6 Strong Convexity - Solving PSD linear systems
1.7 Discussion

2 Multiplicative Weights Update vs. Gradient Descent
2.1 Overview
2.2 The Multiplicative Weights Update Method
2.3 Multiplicative Weights Update as Gradient Descent
2.4 Appendix: A Multiplicative Weights Update for Matrices

3 Newton's Method and the Interior Point Method
3.1 Overview
3.2 Newton's Method and its Quadratic Convergence
3.3 Constrained Convex Optimization via Barriers
3.4 Interior Point Method for Linear Programming
3.5 Self-Concordant Barrier Functions
3.6 Appendix: The Dikin Ellipsoid
3.7 Appendix: The Length of Newton Step

4 Volumetric Barrier, Universal Barrier and John Ellipsoids
4.1 Overview
4.2 Restating the Interior Point Method for LPs
4.3 Vaidya's Volumetric Barrier
4.4 Appendix: The Universal Barrier
4.5 Appendix: Calculating the Gradient and the Hessian of the Volumetric Barrier
4.6 Appendix: John Ellipsoid

5 A Primal-Dual IPM for Linear Programs (without the Barrier)
5.1 Overview
5.2 Linear Programming and Duality
5.3 A Primal-Dual Interior Point Method

References
1
Basics, Gradient Descent and Its Variants

1.1 Overview
Convex optimization is about minimizing a convex function over a con-
vex set. For a convex set K, and a convex function f whose domain
contains K, the goal is to solve the following problem:

inf f (x).
x∈K

Convex optimization is a classical area with a long and rich history


and diverse applications. Traditionally, a large fraction of algorithms in
TCS have been obtained by formulating the underlying discrete prob-
lem (for example: Shortest Path; Maximum Flow; Matching,
Graph Partitioning) as a linear (or a semi-definite) program and,
consequently, deploying standard methods such as the ellipsoid method
or the interior point methods from convex programming to solve these
programs. However, with the growth in the sizes of the real-world in-
stances, there has been a pressing need to design faster and faster
algorithms, ideally in almost the same time as it takes to read the
data. In the last decade or so, TCS researchers have come remarkably
close to what would have been considered a pipe dream a few years
ago. And tools and techniques from convex optimization are playing
a bigger and bigger role in this goal. Since general methods of solv-
ing convex programs have runtimes which are far from linear, often
this goal requires adapting known methods from convex optimization
and, more often than not, coming up with novel problem-specific so-
lutions. Thus, techniques from convex optimization have become an
indispensable tool in the toolkit of any algorithm designer and in these
three lectures, we cover a lot of ground in this regard. The methods we
present include gradient descent and its many variants, multiplicative
weight update methods and their application to approximately solving
linear programs and semi-definite programs quickly, Newton’s method
and its application to path-following interior point methods. Through
these methods, we will see a very intimate connection between convex
optimization and online convex optimization. With a good understand-
ing of the material covered in these three lectures, a student should be
well-equipped to understand many recent breakthrough works in TCS
and, hopefully, push the state-of-the-art.

1.2 Basic Notions


1.2.1 Notation

We are concerned with functions f : Rn → R. By x, y, . . . we typically
mean vectors in Rn, and we use x1, x2, . . . to denote their coordinates.
When the function is smooth enough, we can talk about its gradient
and Hessian. The gradient is denoted by

    ∇f(x) := ( ∂f/∂x1 (x), . . . , ∂f/∂xn (x) ).

The Hessian of f at a point x is the matrix denoted by ∇2 f(x) whose
(i, j)-th entry is ∂2 f/∂xi ∂xj (x). We say that a symmetric matrix M ⪰ lI if
all its eigenvalues are at least l. When l ≥ 0, the matrix is said to be
positive semi-definite (psd). Similarly, we denote by M ⪯ LI the fact
that all eigenvalues of M are at most L. ⟨x, y⟩ denotes the inner product
between x and y, and ‖x‖ denotes the ℓ2 norm unless stated otherwise. An
important but easy-to-prove result is the Cauchy-Schwarz inequality

which states that

    ⟨x, y⟩ ≤ ‖x‖ · ‖y‖.

An important tool from calculus is the Taylor expansion of a function f at
y around a point x,

    f(y) = f(x) + ⟨∇f(x), y − x⟩ + (1/2) (y − x)ᵀ ∇2 f(x) (y − x) + · · · ,

when the function is infinitely differentiable. The k-th order Taylor
series approximation is obtained by truncating the above expression
at k. However, if the function has k + 1 derivatives, then one can use
the Mean Value Theorem to truncate the above expression at k by
a suitable modification and know the exact error. For instance, when
k = 1,

    f(y) − (f(x) + ⟨∇f(x), y − x⟩) = (1/2) (y − x)ᵀ ∇2 f(ζ) (y − x)

for some ζ on the line segment joining x and y.

1.2.2 Convexity
We start by defining the basic notions of a convex set and a convex
function.

Definition 1.1. A set K ⊆ Rn is convex if, for every two points in


K, the line segment connecting them is contained in K, i.e., for any
x, y ∈ K and λ ∈ [0, 1] we have

λx + (1 − λ)y ∈ K. (1.1)

Definition 1.2. A function f : K → R is convex if it satisfies

f (λx + (1 − λ)y) ≤ λf (x) + (1 − λ)f (y) (1.2)

for any x, y ∈ K and λ ∈ [0, 1]. If inequality (1.2) is strict for all x 6= y
and λ ∈ (0, 1), then we say that f is strictly convex.

Fig. 1.1 A convex function.

Geometrically, this means that for any x, y ∈ K, the segment connect-


ing points (x, f (x)) and (y, f (y)) lies above the graph of f (in the space
Rn × R): see Fig. 1.1.
While this is the most general definition of convexity, it is not always
the most useful. When f is smooth enough, we can give other equivalent
definitions:

Proposition 1.3. If f : K → R is differentiable, then it is convex iff


f (y) ≥ f (x) + h∇f (x), y − xi (1.3)
for any x, y ∈ K.

The geometric interpretation of this condition is that for any x ∈ K,


the tangent space of f at x should lie below the graph of f : see Fig. 1.2.
In other words, the right-hand side of Eq. (1.3), which is the first-order
Taylor approximation of f at x, is an under-estimation for the value of
f at every other point y ∈ K.

Proof. We prove the one-dimensional case first.


Suppose f is convex, and fix any x, y ∈ K. Then for every λ ∈ (0, 1]
we have
(1 − λ)f (x) + λf (y) ≥ f (λy + (1 − λ)x) = f (x + λ(y − x)).
Subtracting (1 − λ)f(x) and dividing by λ yields

    f(y) ≥ f(x) + [ f(x + λ(y − x)) − f(x) ] / λ,

Fig. 1.2 The first-order condition for convexity.

and now it suffices to take the limit λ → 0. (When λ = 0, there is


nothing to prove.)
Conversely, suppose f satisfies Eq. (1.3) and fix x, y ∈ K and λ ∈
[0, 1]. Let z = λx + (1 − λ)y. The first-order approximation of f at z
underestimates both f (x) and f (y); the difference between the linear
approximation of f at z using the values f (x) and f (y) and the actual
value f (z) is a weighted average of these underestimations and thus
also nonnegative. Formally, we have

    f(x) ≥ f(z) + f′(z)(x − z),      (1.4)

    f(y) ≥ f(z) + f′(z)(y − z),      (1.5)

and multiplying (1.4) by λ and (1.5) by 1 − λ yields

λf (x) + (1 − λ)f (y) ≥ f (z).

To extend the proof to many dimensions, just note that after fixing
points x, y ∈ K it is enough to restrict our attention to the line segment
connecting them.

If the function has second derivatives, we can provide another charac-


terization of convexity below. We leave the proof as an exercise.

Proposition 1.4. Suppose K is convex and open. If f : K → R is


twice differentiable, then it is convex iff

    ∇2 f(x) ⪰ 0

for any x ∈ K. If the inequality is strict for all x ∈ K, the function is


called strictly convex.

In the rest of this lecture we will most often use the characterization in
Proposition 1.3.

1.3 Gradient Descent


We now introduce perhaps the simplest but extremely powerful gradient
descent method to solve a convex program. Actually, it is not a single
method, but a general framework with many possible realizations. We
describe some concrete variants and analyze their performance. The
performance guarantees that we are going to obtain will depend on
assumptions that we make about f .
We describe the core ideas of the gradient descent methods in the
unconstrained setting, i.e., K = Rn . In Section 1.7.7 we discuss the
constrained setting. Also, let us assume for simplicity that f has a
unique minimum x? (this follows if f is strictly convex) and that it is
differentiable.
What does the gradient have to do with optimizing a convex func-
tion? We discuss this informally below. Suppose that we are at a point
x, along with a value f (x) that we want to decrease as much as possible.
We want to accomplish this by moving to a point y at some pre-specified
small distance from x. If we consider points y in a near neighbourhood
of x, we should expect that the first-order Taylor approximation
    f(y) ≈ f(x) + ⟨∇f(x), y − x⟩

will be very tight: essentially an equality. Thus, if we set ∆x := y − x,
then we are tasked with minimizing the scalar product

    ⟨∇f(x), y − x⟩ = ⟨∇f(x), ∆x⟩,

where x is fixed, and ‖∆x‖ is also given. The optimal solution of this
problem has the form1

    ∆x = −η · ∇f(x)
1 To see this, consider the problem of minimizing ⟨v, w⟩ over all v with ‖v‖ = 1 for a fixed w.
One cannot do better than −‖w‖, because |⟨v, w⟩| ≤ ‖v‖ · ‖w‖ = ‖w‖ by Cauchy-Schwarz.

and it yields
f (y) ≈ f (x) − η k∇f (x)k2 .

Thus we see that the norm of the gradient controls the rate at which
we can decrease f . More vividly, the gradient of f at x is the direction
in which f grows the fastest, so if we are looking to minimize f , it
makes the most sense to go in the opposite direction.
Any gradient descent method will work by constructing a sequence
of points x1 , x2 , . . . , xT with the objective of getting very close to the
optimum x? after a small number of iterations. Usually, it will try to
ensure that f (x1 ) ≥ · · · ≥ f (xT ) (although the first method that we
will show does not guarantee this). We will not be able to find the exact
optimum; we can only hope to get ε-close to it, for a given accuracy
requirement ε.
The first nice property of convex functions that we can discover in
this context, and a reason why gradient descent does not work for a
general f , is the following: if we keep decreasing the value of f , we will
not get ”stuck” in a local minimum which is not a global minimum.
This is guaranteed by the following fact, which again points out the
role of the gradient in optimizing f .

Proposition 1.5. For a differentiable convex function f : Rn → R


and a point x, the following conditions are equivalent:

(a) x is a global minimum of f ,


(b) x is a local minimum of f ,
(c) ∇f (x) = 0.

Proof. The direction (a) ⇒ (b) is trivial, and (b) ⇒ (c) holds for all f . For
(c) ⇒ (a), just note that for any y ∈ K,

f (y) ≥ f (x) + h∇f (x), y − xi = f (x).



Remark 1.6. The above proposition generalizes to the case K ⊆ Rn


as follows: for a closed convex set K and a convex function f : K → R,
a point x ∈ K minimizes f iff

h∇f (x), y − xi ≥ 0

for all y ∈ K. In other words, x is a minimizer when there is no direction
within K along which f decreases to first order.

Let us proceed to the description of a general variant of gradient de-


scent.
The algorithm will construct a sequence of points x1 , x2 , . . . , xT , for
a value T which will depend on the assumptions we make about the
structure of f and will be specified later. At the t-th iteration, knowing
xt , the algorithm takes xt+1 to be

    xt+1 := xt − ηt ∇f(xt),

where ηt is the step size (which will also depend on f and will be
determined as per the setting). For a given ε, our objective is to get
ε-close to the optimum f(x?) in as few iterations as possible. We want
to understand what values of ηt will enable us to achieve this goal.
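To make the framework concrete, here is a minimal Python sketch of the generic iteration; the first-order oracle grad, the step-size schedule and the quadratic example are illustrative choices, not part of the notes. The methods discussed below differ only in how the step sizes ηt are chosen.

import numpy as np

def gradient_descent(grad, x1, steps, T):
    """Generic gradient descent: x_{t+1} = x_t - eta_t * grad(x_t).

    grad  : callable returning the gradient at a point (first-order oracle)
    x1    : starting point (numpy array)
    steps : callable t -> eta_t, the step-size schedule
    T     : number of iterations
    Returns the list of iterates x_1, ..., x_{T+1}.
    """
    xs = [np.array(x1, dtype=float)]
    for t in range(1, T + 1):
        xs.append(xs[-1] - steps(t) * grad(xs[-1]))
    return xs

# Example: minimize f(x) = ||x||^2 / 2, whose gradient is x.
iterates = gradient_descent(grad=lambda x: x,
                            x1=np.array([3.0, -4.0]),
                            steps=lambda t: 0.1,
                            T=100)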

1.4 Convex Functions, Oracles and their Properties


Till now we have ignored a very important detail: how f is given to the
algorithm. In particular, to use the gradient algorithmically, one must
be able to compute it. Here we will assume that the algorithm has access
to an oracle which returns the values of f (x) and ∇f (x) for a query point
x ∈ K. Such methods, which work in this oracle setting, are referred
to as first-order methods. Later, we will consider the setting where we
would need access to a second-order oracle for f : given x, y output
(∇2 f (x))−1 y. Such methods are referred to as second order methods.
We also assume that we have a bound on the distance between the
starting point and the optimum:

kx1 − x? k ≤ D.

If K were a bounded set, then D = diam(K) would be a valid choice;


in the unconstrained setting, we must assume something about how far
from the optimum we are starting.
Unfortunately, in general, it is still difficult to optimize f if we have
no control at all over the magnitude of its gradient. We have to impose
additional conditions on f .
We propose three such conditions. In each of these settings, we will
obtain a guarantee on the convergence rate of our method, as well as
a way to set the step lengths ηt .

1.4.1 Bounded gradient (Lipschitz f )


The simplest condition that we can propose is that the gradient be
bounded from above: there should exist a G > 0 such that for all
x ∈ K,
k∇f (x)k ≤ G.
Why is this useful? Intuitively, a large gradient means that the function
decreases very fast in the direction in which we are moving, which
should be desirable. However, as we will see, this would make us unable
to control the step size well enough.

Remark 1.7. This condition is equivalent to saying that f is G-


Lipschitz, that is,

|f (y) − f (x)| ≤ G · ‖y − x‖

for all x, y ∈ K. We will use this term in the sequel.

In this case we can get the following bound, which we prove in Sec-
tion 1.5.1:

Theorem 1.8. There is a gradient-descent method which, given an ε,


a starting point x1 satisfying kx1 − x? k ≤ D, and a function f which
is G-Lipschitz, produces a sequence of points x1 , . . . , xT such that

    f( (1/T) ∑_{t=1}^{T} xt ) − f(x?) ≤ ε,

where

    T = (DG/ε)².

We mostly focus on the dependence of T on ε, and in this case it is

    T = O(ε⁻²),    ε = O(1/√T).

Note that we did not get any guarantee for the points x1 , . . . , xT – just
for their running averages.

Remark 1.9. We are using the Euclidean norm here and assuming
that ‖∇f‖2 = O(1). But sometimes we might find ourselves in a setting
where ‖∇f‖∞ = O(1), and a naive bound would give us just ‖∇f‖2 =
O(√n). We will see in later lectures how to deal with such situations
(see also Section 1.7.6).

1.4.2 Lipschitz gradient (smooth f )

We could also demand that the gradient be Lipschitz continuous.

Definition 1.10. We say that a function f is L-smooth (with a con-


stant L > 0) if for all x, y ∈ K,

k∇f (x) − ∇f (y)k ≤ L · kx − yk .

Remark 1.11. L is the Lipschitz constant of ∇f . We can equivalently


say that ∇2 f(x) ⪯ LI for all x ∈ K.

To understand the usefulness of this condition, note that it implies the


following crucial property:
Fig. 1.3 The Bregman divergence.

Lemma 1.12. If f is L-smooth, then for any x, y ∈ K we have

    0 ≤ f(y) − f(x) − ⟨∇f(x), y − x⟩ ≤ L · ‖x − y‖².

This means that the distance of f (y) from its first-order Taylor approx-
imation at x is between 0 and L · kx − yk2 , which is to be thought of
as small; see Fig. 1.3. This distance is called the Bregman divergence
with respect to the `2 -norm.

Proof. We begin from the definition of convexity: f(x) ≥ f(y) +
⟨∇f(y), x − y⟩, and use Cauchy-Schwarz along the way:

    f(y) − f(x) ≤ ⟨∇f(y), y − x⟩
               = ⟨∇f(x), y − x⟩ + ⟨∇f(y) − ∇f(x), y − x⟩
               ≤ ⟨∇f(x), y − x⟩ + ‖∇f(y) − ∇f(x)‖ · ‖y − x‖
               ≤ ⟨∇f(x), y − x⟩ + L · ‖x − y‖².

On the other hand we have (again from convexity)

    f(y) − f(x) ≥ ⟨∇f(x), y − x⟩.

This will enable us to obtain a better dependence of T on ε. The fol-


lowing bound is proved in Section 1.5.2:

Theorem 1.13. There is a gradient-descent method which, given an


ε, a starting point x1 and an L-smooth function f , produces a sequence
of points x1 , . . . , xT such that

f (xT ) − f (x? ) ≤ ε

and

    T = O( LD²/ε ),

where D = ‖x1 − x?‖.

Again suppressing the constants, we see the improved dependence

    T = O(ε⁻¹),    ε = O(1/T).
Moreover, this time we can show that we are really getting closer and
closer to the optimum (not just on average).
While we will not cover it in the lectures, it is possible to at-
tain a quadratic improvement using the important accelerated gradient
method of Nesterov:

Theorem 1.14. There is an algorithm which, given an ε, a starting


point x1 satisfying kx1 − x? k ≤ D, and an L-smooth function f , pro-
duces a sequence of points x1 , . . . , xT such that

f (xT ) − f (x? ) ≤ ε

and

    T = O( √L · D / √ε ).

1.4.3 Strong convexity


Another natural restriction would be to strongly convex functions, i.e.,
those that have all eigenvalues of the Hessian bounded below by a
constant ` > 0. In other words:

Definition 1.15 (strong convexity). We say that f is ℓ-strongly
convex (with ℓ > 0) if for all x ∈ K we have

    ∇2 f(x) ⪰ ℓI.

Remark 1.16. This is equivalent to saying that

    f(y) ≥ f(x) + ⟨∇f(x), y − x⟩ + (ℓ/2) ‖x − y‖²      (1.6)

for all x, y ∈ K. In other words, the corresponding Bregman divergence
is lower bounded by (ℓ/2) ‖x − y‖².

If we also assume that f is G-Lipschitz, then we are able to get the


following bound, whose proof appears in Section 1.5.3.

Theorem 1.17. There is an algorithm which, given an ε, a starting


point x1 , and a function f which is both G-Lipschitz and `-strongly
convex, produces a sequence of points x1 , . . . , xT such that

f (xT ) − f (x? ) ≤ ε

and

    T = O( G² / (ℓε) ).

1.5 Proofs of Convergence Rates


Now we prove the first two theorems stated in the previous section.

1.5.1 Lipschitz functions – proof of Theorem 1.8


Recall that we are guaranteed that f is G-Lipschitz. We will fix our
step size η to be constant (independent of t).

We will be interested in understanding the quantity f(xt) − f(x?)
– in particular, how fast it decreases with t. We begin by applying
convexity and get

    f(xt) − f(x?) ≤ ⟨∇f(xt), xt − x?⟩
                 = (1/η) ⟨xt − xt+1, xt − x?⟩
                 = (1/(2η)) ( ‖xt − xt+1‖² + ‖xt − x?‖² − ‖xt+1 − x?‖² ),      (1.7)

where we used the well-known equality 2⟨a, b⟩ = ‖a‖² + ‖b‖² − ‖a − b‖².
We can now take advantage of our assumption about f and write

    ‖xt − xt+1‖² = η² ‖∇f(xt)‖² ≤ η² G².      (1.8)

From (1.7) and (1.8) we get

    f(xt) − f(x?) ≤ (1/(2η)) ( η²G² + ‖xt − x?‖² − ‖xt+1 − x?‖² )

for every t = 1, . . . , T . Notice that if we add all these inequalities, we
will get a telescoping sum:

    ∑_{t=1}^{T} (f(xt) − f(x?)) ≤ (1/(2η)) ( Tη²G² + ‖x1 − x?‖² − ‖xT+1 − x?‖² ).

We may bound the last term by simply zero. For the middle one, recall
that we have introduced D which bounds the distance from the starting
point to the optimum. Thus, using ‖x1 − x?‖² ≤ D², we obtain

    ∑_{t=1}^{T} (f(xt) − f(x?)) ≤ (1/(2η)) ( Tη²G² + D² ) = (1/2) ( TηG² + D²/η ).

The minimizing choice of η here is the one which satisfies TηG² = D²/η,
so that

    η = D / (G√T).
We have bounded the quantity (1/T) ∑_{t=1}^{T} f(xt) − f(x?). But from con-
vexity of f (extend Eq. (1.2) to T points) we have that f( (1/T) ∑_{t=1}^{T} xt ) ≤
(1/T) ∑_{t=1}^{T} f(xt). Thus we get

    f( (1/T) ∑_{t=1}^{T} xt ) − f(x?) ≤ (1/(2Tη)) ( Tη²G² + D² ) = DG/√T.

In order to get DG/√T ≤ ε we need to set T ≥ (DG/ε)².
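The analysis above translates directly into a procedure. Below is a minimal Python sketch of this fixed-step method with iterate averaging, under the assumptions of Theorem 1.8; the example objective f(x) = ‖x‖2 (which is 1-Lipschitz) and the interface are illustrative choices, not part of the notes.

import numpy as np

def averaged_gradient_descent(grad, x1, D, G, eps):
    """Sketch of the fixed-step method behind Theorem 1.8.

    Uses T = (D*G/eps)^2 iterations and the constant step eta = D/(G*sqrt(T)),
    and returns the running average of the iterates, for which the bound
    f(average) - f(x*) <= eps is proved above.
    """
    T = int(np.ceil((D * G / eps) ** 2))
    eta = D / (G * np.sqrt(T))
    x = np.array(x1, dtype=float)
    avg = np.zeros_like(x)
    for t in range(T):
        avg += x / T
        x = x - eta * grad(x)
    return avg

# Example: f(x) = ||x||_2, whose (sub)gradient x/||x|| has norm at most 1 (G = 1).
f_grad = lambda x: x / max(np.linalg.norm(x), 1e-12)
x_avg = averaged_gradient_descent(f_grad, x1=np.array([2.0, -1.0]), D=3.0, G=1.0, eps=0.1)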

1.5.2 Smooth functions – proof of Theorem 1.13


Recall that we are assuming that f is L-smooth, i.e., that ∇f is L-
Lipschitz. This implies Lemma 1.12 (a bound on the Bregman diver-
gence).
We prove this theorem for D = max{ ‖x − x?‖ : f(x) ≤ f(x1) }. To
prove the above theorem with D = ‖x1 − x?‖ one needs to show that
in all the subsequent steps ‖xt − x?‖ ≤ ‖x1 − x?‖. This requires us to
prove the following inequality, which follows from lower bounding the
Bregman divergence. We omit its proof.

    ‖∇f(x) − ∇f(y)‖² ≤ L · ⟨x − y, ∇f(x) − ∇f(y)⟩.

Once again, we will keep the step length η constant and take

xt+1 − xt = −η · ∇f (xt ).

Let us see how the value of f changes as we iterate. Use Lemma 1.12
to get

    f(xt+1) − f(xt) ≤ ⟨∇f(xt), xt+1 − xt⟩ + L · ‖xt+1 − xt‖²
                   = −η ‖∇f(xt)‖² + Lη² ‖∇f(xt)‖².

The minimizing value of η is 1/(2L). By substituting it, we get

    f(xt+1) − f(xt) ≤ −(1/(4L)) ‖∇f(xt)‖².      (1.9)
Intuitively, this means that, at every step, either the gradient is large
and we are making good progress, or it is small, which means that we

are already close to the optimum. Indeed, if we are at a distance θ from
the optimum value:

    f(xt) − f(x?) ≥ θ,

then by convexity and Cauchy-Schwarz,

    θ ≤ f(xt) − f(x?) ≤ ⟨∇f(xt), xt − x?⟩ ≤ ‖∇f(xt)‖ · ‖xt − x?‖,

and if we bound ‖xt − x?‖ by D (this follows since f(xt) ≤ f(x1)), then
we get

    ‖∇f(xt)‖ ≥ θ/D

and, by (1.9),

    f(xt+1) − f(xt) ≤ − θ² / (4LD²).

Until our distance from the optimum goes down below θ/2, we will make
a progress of Ω( θ²/(LD²) ) at every step, so we will take at most O( LD²/θ )
steps before this happens. Then this process of halving is repeated,
and so on, until we reach the required distance ε. Bringing the distance
down from θ/2^i to θ/2^{i+1} requires O( LD²·2^i/θ ) steps, so we get

    ∑_{i=0}^{log(θ/ε)} O( LD²·2^i/θ ) = O( LD²·2^{log(θ/ε)}/θ ) = O( LD²/ε )

steps before we are ε-close to the optimum.
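As a sketch of the smooth case, the following Python snippet runs gradient descent with the constant step η = 1/(2L) used in the analysis; the quadratic example and the choice of T are illustrative assumptions, not part of the notes.

import numpy as np

def smooth_gradient_descent(grad, x1, L, T):
    """Sketch of gradient descent for an L-smooth f with the fixed step 1/(2L);
    each step decreases f by at least ||grad f(x_t)||^2 / (4L), cf. (1.9)."""
    x = np.array(x1, dtype=float)
    eta = 1.0 / (2.0 * L)
    for _ in range(T):
        x = x - eta * grad(x)
    return x

# Example: f(x) = x^T A x / 2 with A = diag(1, 4) is 4-smooth (L = largest eigenvalue).
A = np.diag([1.0, 4.0])
x_T = smooth_gradient_descent(lambda x: A @ x, x1=np.array([5.0, 5.0]), L=4.0, T=200)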

1.5.3 Strongly convex functions – proof of Theorem 1.17


Recall that f is `-strongly convex and G-Lipschitz. As a start, we will
try to mimic the proof from Section 1.5.1. However, we will not fix
the step size ηt to be constant, and we use the first-order condition for
convexity in the strong form (Eq. (1.6)). We easily get

    f(xt) − f(x?) ≤ (1/(2ηt)) ( ηt²G² + ‖xt − x?‖² − ‖xt+1 − x?‖² ) − (ℓ/2) ‖xt − x?‖²      (1.10)

for every t = 1, . . . , T . Now, to obtain a better bound than previously,
we must make good use of the new term (the last one). Intuitively, if
‖xt − x?‖ is large, then it is very helpful, but as t grows and ‖xt − x?‖
decreases, its importance is diminished. We will try to prevent this
from happening by multiplying our inequality by t. And we will still
aim to obtain a telescoping sum. So let us sum all the t-multiples of
Eq. (1.10):

    ∑_{t=1}^{T} t (f(xt) − f(x?)) ≤ (G²/2) ∑_{t=1}^{T} t·ηt + ∑_{t=2}^{T} ‖xt − x?‖² · ( t/(2ηt) − ℓt/2 − (t−1)/(2η_{t−1}) )
                                   + ‖x1 − x?‖² · ( 1/(2η1) − ℓ/2 ) − ‖xT+1 − x?‖² · T/(2ηT).

As before, we bound the last term by just zero. Now, to make the
sum telescoping, we would like to get t/(2ηt) − ℓt/2 − (t−1)/(2η_{t−1}) = 0 for every
t = 2, . . . , T . As for the term 1/(2η1) − ℓ/2, we would also prefer to remove
it, so as not to have any dependence on ‖x1 − x?‖. Solving this system
of equations (beginning with 1/(2η1) − ℓ/2 = 0) yields the following setting
of ηt's:

    ηt = 2 / (ℓ(t + 1)).

We are thus left with:

    ∑_{t=1}^{T} t (f(xt) − f(x?)) ≤ (G²/2) ∑_{t=1}^{T} t·ηt = ∑_{t=1}^{T} G²t/(ℓ(t + 1)) ≤ ∑_{t=1}^{T} G²/ℓ = G²T/ℓ.

The rest is straightforward: we normalize by (1 + · · · + T) = T(T+1)/2
and use convexity of f to get

    f( (2/(T(T+1))) ∑_{t=1}^{T} t·xt ) − f(x?) ≤ 2G² / (ℓ(T+1)),

and bringing this down to ε requires setting

    T = O( G² / (ℓε) ).
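A minimal Python sketch of the resulting method for an ℓ-strongly convex, G-Lipschitz f, with the decreasing steps ηt = 2/(ℓ(t+1)) and the weighted averaging used above, is given below; the interface is an assumption made for illustration.

import numpy as np

def strongly_convex_gd(grad, x1, ell, T):
    """Sketch of the decreasing-step method of Section 1.5.3:
    eta_t = 2/(ell*(t+1)), output is the weighted average of the iterates,
    sum_t t*x_t divided by T(T+1)/2."""
    x = np.array(x1, dtype=float)
    weighted_sum = np.zeros_like(x)
    for t in range(1, T + 1):
        weighted_sum += t * x
        x = x - (2.0 / (ell * (t + 1))) * grad(x)
    return weighted_sum / (T * (T + 1) / 2.0)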

1.6 Strong Convexity - Solving PSD linear systems


Suppose we have a linear system with a positive definite constraint
matrix, i.e., we want to find a vector x satisfying Ax = b, where A ≻ 0.
Let us formulate it as a convex problem. Define

    f(x) = (1/2) xᵀAx − xᵀb.

Then f is strongly convex:

    ∇2 f(x) = A ≻ 0,

and one can verify that

    ∇f(x) = Ax − b,

so that x? minimizes f iff it is a solution to the linear system Ax = b. So


solving the system amounts to minimizing f , and we will do this using
gradient descent. We will construct a sequence of points x1 , . . . , xT
using the formula
xt+1 = xt − ηt · ∇f (xt )

as previously. However, this time we will not use the same step length
ηt for every iteration, but instead (exercise) analytically find a closed-
form expression for the best ηt , i.e.

ηt = argminη f (xt − η · ∇f (xt )) .

So at each step we will decrease f as much as possible by going in the


direction opposite to the gradient. This is called the steepest descent
method.
If we do this, then one can show (exercise) that the convergence
rate of this method (to a predefined accuracy ε: f (xT ) − f (x? ) ≤ ε) is
given by

    T = O( κ(A) · log(1/ε) ),

where κ(A) = L/ℓ, the ratio between the largest and the smallest eigen-
value of A, is the condition number of the matrix A.3

3 For more information on this and the related Conjugate Gradient method, see Chapter 15
of the monograph [Vis13].
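For concreteness, here is a minimal Python sketch of the steepest descent iteration just described; the exact line-search step follows from minimizing f(x − ηg) for the quadratic f above, while the stopping rule on the gradient norm and the small example system are illustrative choices.

import numpy as np

def steepest_descent_psd(A, b, x1, eps, max_iter=10000):
    """Sketch of steepest descent for Ax = b with A positive definite:
    move along -grad f(x) = b - Ax with the exact line-search step
    eta_t = (g^T g)/(g^T A g), where g = Ax - b."""
    x = np.array(x1, dtype=float)
    for _ in range(max_iter):
        g = A @ x - b                     # gradient of f(x) = x^T A x / 2 - x^T b
        if np.linalg.norm(g) <= eps:
            break
        eta = (g @ g) / (g @ (A @ g))     # exact minimizer of f(x - eta*g)
        x = x - eta * g
    return x

# Example usage on a small positive definite system.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x_sol = steepest_descent_psd(A, b, x1=np.zeros(2), eps=1e-8)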

1.7 Discussion
1.7.1 Gradient vs. Sub-gradient

Throughout this lecture we assumed the function f to be differen-


tiable or twice differentiable as and when we needed. However, in
many applications (practical and theoretical), f is not differentiable
everywhere. Could any of the methods presented in this section work
in this non-differentiable setting? Start by noting that if f is convex
then it is continuous. Hence, we need to only consider the issue of non-
differentiable f. Non-differentiability poses the problem that there may
be some points at which the gradient is not unique. In general, at a
non-differentiable point x, the gradients form a set which
we denote by ∂f (x), and an element of it is referred to as a sub-gradient.
First we note that the convexity of f implies that for any x, y

f (y) ≥ f (x) + hg, y − xi

for all g ∈ ∂f (x). Thus, we could modify the gradient descent in a


natural manner:
xt+1 = xt − ηt g
for any g ∈ ∂f (xt ) given to us by the first order oracle for f. Further, it
can be easily checked that the guarantee claimed in Theorem 1.8 and
its corollaries carry over with the following parameter:

    G := sup_x sup_{g∈∂f(x)} ‖g‖.

1.7.2 Dependence on ε

We have seen in Theorems 1.8, 1.13, 1.14 and 1.17 that our methods
converge to the ε-approximate solution in a number of iterations which
is O(poly(1/ε)), with the exponent equal to 2, 1 or 1/2. However, a poly-
nomial dependence on ε does not look very good if we hope to use these
methods in computer science. For such applications, the right answer
should be O(poly log(1/ε)). And, if the function is both strongly convex
and smooth, it is possible to obtain a convergence rate similar to the
one in Section 1.6: see e.g. Section 3.4.2 of [Bub14].

1.7.3 Dependence on n
Note that n does not appear anywhere in the oracle complexity4 of
the presented algorithms. This is an important feature of first-order
methods. (Of course, the time complexity of a single iteration will still
depend on n.)

1.7.4 Coordinate Descent


Instead of moving in the exact direction in which f decreases the fastest,
we may limit ourselves only to moving along the standard basis vectors,
i.e. to changing only one coordinate of x at a time.
For example, rather than computing the whole gradient

    ∇f(xt) = ∂f/∂x1 (xt) · e1 + · · · + ∂f/∂xn (xt) · en

(where e1 , . . . , en are standard basis vectors), we can pick just one
random coordinate i and update

    xt+1 = xt − ηt · ∂f/∂xi (xt) · ei .
We can analyze this random coordinate descent by examining the ex-
pected decrease in the function value. One can prove the following
theorem. We omit its simple proof.

Theorem 1.18. There is an algorithm which, given an ε, a starting


point x1 satisfying kx1 − x? k ≤ D, and a function f which is G-
Lipschitz, produces a sequence of (random) points x1 , . . . , xT such that

    E[ f( (1/T) ∑_{t=1}^{T} xt ) ] − f(x?) ≤ ε

and

    T = O( (DG/ε)² · n ).

4 The number of queries for f (x), ∇f (x) etc.



As we would expect, changing only one coordinate at a time comes at


a cost of an n-fold increase in the number of iterations (compared to
our first method of Theorem 1.8).
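A minimal Python sketch of random coordinate descent, assuming an oracle partial(x, i) that returns a single partial derivative, is given below; the example function is an illustrative choice.

import numpy as np

def random_coordinate_descent(partial, n, x1, eta, T, rng=None):
    """Sketch of random coordinate descent: at each step pick a coordinate i
    uniformly at random and move only along e_i.

    partial : callable (x, i) -> df/dx_i (x), a single partial derivative
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.array(x1, dtype=float)
    for _ in range(T):
        i = rng.integers(n)
        x[i] -= eta * partial(x, i)
    return x

# Example: f(x) = ||x||^2 / 2, whose i-th partial derivative is x[i].
x_out = random_coordinate_descent(lambda x, i: x[i], n=3,
                                  x1=np.array([1.0, -2.0, 3.0]), eta=0.5, T=300)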

1.7.5 Online Convex Optimization


Consider the following game against an adversary. Given are: a convex
set of strategies K, and a family of convex functions F (for example, all
convex functions which are G-Lipschitz). The game proceeds in rounds.
In the t-th round, the player picks a point xt ∈ K, and then the adver-
sary picks a function ft ∈ F. The value ft (xt ) is considered to be the
player’s loss at time t. The objective of the game is to minimize the
regret of the player up to a time T :

Definition 1.19. For a sequence of points x1 , . . . , xT and functions


f1 , . . . , fT , the regret up to time T is defined as

    Regret_T = ∑_{t=1}^{T} ft(xt) − min_{x∈K} ∑_{t=1}^{T} ft(x).

I.e., we compare the player’s loss to the minimum loss which could be
achieved if one knew all the functions ft beforehand, but were only
allowed to pick a single argument x for all of them.5 The player is
trying to minimize the regret, and the adversary is trying to maximize
it. This model has many practical applications, such as stock pricing
and portfolio selection, see [Haz14].
One can describe a strategy for the player which uses gradient de-
scent:
xt+1 = xt − ηt · ∇ft (xt )
for a suitable choice of ηt . Even though the functions ft change from
iteration to iteration, one can still obtain a similar guarantee to the
one from our first analysis. Namely, we are able to make the average
regret go to 0 at a rate inversely proportional to the square root of T :

    Regret_T / T = O( 1/√T ).
5 For some choices of x1 , . . . , xt , f1 , . . . , ft , regret can be negative.

Indeed, note that in the proof of Section 1.5.1 we never used the fact
that the function f remains the same between iterations. We actually
proved the following:

Corollary 1.20. There is an algorithm which, given a starting point


x1 , parameters ε and D, and a family f1 , . . . , fT of convex functions
which are G-Lipschitz, produces a sequence of points x1 , . . . , xT such
that

    (1/T) ∑_{t=1}^{T} ( ft(xt) − ft(x?) ) ≤ DG/√T

for any x? satisfying ‖x1 − x?‖ ≤ D, and T being

    T = (DG/ε)².
Moreover, it produces each point xt+1 knowing only the functions
f1 , . . . , ft .

However, we will see in the next lecture that multiplicative weight


update methods are often able to perform much better in the online
learning setting.
There are also other optimization scenarios where access to infor-
mation is even more limited (and such models are thus closer to real
life applications). For example, we might be only given oracle access to
the function (i.e., we do not see any ”formula” for f ). In particular, we
cannot compute the gradient directly. (This is called the bandit setting.)
Even then, methods based on gradient descent are very competitive.
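The online strategy described above can be sketched as follows in Python; modelling the adversary simply as a list of gradient oracles, one per round, is an assumption made for illustration.

import numpy as np

def online_gradient_descent(grads, x1, eta):
    """Sketch of online gradient descent: after playing x_t the adversary
    reveals f_t, and we update x_{t+1} = x_t - eta * grad f_t(x_t).

    grads : list of callables, grads[t](x) = gradient of f_t at x
    Returns the list of points played.
    """
    x = np.array(x1, dtype=float)
    played = []
    for g in grads:
        played.append(x.copy())
        x = x - eta * g(x)
    return played

# Toy example: the adversary alternates between f_t(x) = <c, x> and f_t(x) = <-c, x>.
c = np.array([1.0, -1.0])
grads = [(lambda x, s=s: s * c) for s in [+1, -1] * 10]
points = online_gradient_descent(grads, x1=np.zeros(2), eta=0.1)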

1.7.6 Non-Euclidean Norms


Our algorithm for the L-smooth case can be easily adapted to work with
functions which are smooth with respect to any pair of dual norms.6
6 Let ‖·‖ be any norm. We define the dual norm ‖·‖* as follows:

    ‖y‖* = sup_{x∈Rn : ‖x‖=1} |⟨x, y⟩|.

Then we say that a function f is L-smooth with respect to ‖·‖ if for any x, y ∈ K,
‖∇f(x) − ∇f(y)‖* ≤ L · ‖x − y‖.

Now, moving in the direction of the gradient makes progress which we
can measure using the Euclidean norm, because then

    ⟨∇f(xt), xt+1 − xt⟩ = ⟨∇f(xt), −η∇f(xt)⟩ = −η ‖∇f(xt)‖2².

If we want to utilize a different pair of dual norms, we must adapt our
direction of descent to the geometry that they induce. To this end, we
move instead in the direction

    xt+1 = xt − η (∇f(xt))#,

where for any x ∈ Rn we define x# to be the minimizer of

    (1/2) ‖x#‖² − ⟨x#, x⟩.      (1.11)

(Note that for ‖·‖ = ‖·‖2 we have x# = x.) Lemma 1.12 still holds in
the new setting, and one can show that a step size of η = 1/L allows us
to get

    f(xt+1) − f(xt) ≤ −(1/(2L)) ‖∇f(xt)‖*².
Then the analysis proceeds as in Section 1.5.2, giving us the following
generalization of Theorem 1.13:

Theorem 1.21. There is an algorithm which, given an ε, a starting


point x1 satisfying kx1 − x? k ≤ D, and a function f which is L-smooth
with respect to the norm k·k, produces a sequence of points x1 , . . . , xT
such that
f (xT ) − f (x? ) ≤ ε
and

    T = O( LD²/ε ).
To produce each point xt , it makes one call to a routine which computes
x# for a given x, i.e. minimizes an expression of the form (1.11).

1.7.7 Constrained setting – projection


If K is not the whole space Rn , then our choice of the next iterate xt+1
in the gradient descent method might fall outside of the convex body

K. In this case we need to project it back onto K: to find the point in


K with the minimum distance to x, and take it to be our new iterate
instead:
xt+1 = projK (xt − ηt · ∇f (xt )) .
The convergence rates remain the same (for example, the first proof
just carries over to this new setting, since the value of f at the new
point will be at least as good as the bound that we had derived for its
value at the original point). However, depending on K, the projection
may or may not be difficult (or computationally expensive) to perform.
More precisely, as long as the algorithm has access to an oracle
which, given a query point x, returns the projection projK (x) of x onto
K, then for the G-Lipschitz case we have the following analogue of
Theorem 1.8:

Theorem 1.22. There is an algorithm which, given an ε, a function


f : K → R which is G-Lipschitz and has a minimum x? , and a start-
ing point x1 satisfying kx1 − x? k ≤ D, produces a sequence of points
x1 , . . . , xT ∈ K such that

    f( (1/T) ∑_{t=1}^{T} xt ) − f(x?) ≤ ε

and

    T = (DG/ε)².
To compute each point xt , it uses one gradient query and one projection
query.

And for the L-smooth case we get the following analogue of Theo-
rem 1.13.

Theorem 1.23. There is an algorithm which, given an ε, a function f :


K → R which is L-smooth and has a minimum x? , and a starting point
x1 satisfying kx1 − x? k ≤ D, produces a sequence of points x1 , . . . , xT ∈
K such that
f (xT ) − f (x? ) ≤ ε

and

    T = O( ( LD² + f(x1) − f(x?) ) / ε ).
To compute each point xt , it uses one gradient query and one projection
query.
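A minimal Python sketch of projected gradient descent, assuming access to a projection oracle for K, is given below; the Euclidean-ball example of K and the example objective are illustrative choices, not part of the notes.

import numpy as np

def projected_gradient_descent(grad, project, x1, eta, T):
    """Sketch of projected gradient descent: take a gradient step and then
    project back onto K using a projection oracle; return the average iterate."""
    x = project(np.array(x1, dtype=float))
    avg = np.zeros_like(x)
    for _ in range(T):
        avg += x / T
        x = project(x - eta * grad(x))
    return avg

# Example: K is the Euclidean ball of radius 1; projection just rescales.
def project_ball(x, r=1.0):
    nrm = np.linalg.norm(x)
    return x if nrm <= r else (r / nrm) * x

# Minimize f(x) = ||x - (2, 0)||^2 / 2 over the unit ball (optimum at (1, 0)).
avg_point = projected_gradient_descent(grad=lambda x: x - np.array([2.0, 0.0]),
                                       project=project_ball,
                                       x1=np.zeros(2), eta=0.05, T=500)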

1.7.8 Constrained setting – the Frank-Wolfe algorithm

In cases where computing projections is prohibitively difficult (for ex-


ample, harder than the original problem), one can use another variant
of gradient descent: the Frank-Wolfe algorithm7 . At the t-th step, it
first considers the first-order Taylor approximation of f around xt , and
tries to minimize this function over K, obtaining a minimizer yt . Then
it takes the new point xt+1 to be a weighted average of xt and yt ,
which is guaranteed to also be in K. More precisely, it first minimizes
the function
f (xt ) + ⟨∇f (xt ), yt − xt ⟩

over yt ∈ K. But this function is linear in yt , and the task amounts


to finding the minimizer of hyt , ∇f (xt )i over yt ∈ K. The new point is
selected as
xt+1 = (1 − γt )xt + γt yt

with γt ∈ [0, 1] being a parameter8 . Because this is a convex combi-


nation, we get that xt+1 ∈ K (as long as the starting point x1 was in
K).
This way, instead of having to project onto K, we need to be able
to optimize linear functions over K, which may very well be easier. For
example, if K is a polytope, then we are left with a linear programming
subproblem.
Using this method one can obtain the same bound on the number
of iterations as in Theorem 1.13: (see also Theorem 3.4 in [Bub14]):

7 Also called the conditional gradient descent method.


8 A good choice is γt = 2/(t + 1).

Theorem 1.24. There is an algorithm which, given parameters ε and


D, where D = diam(K), a function f : K → R which is L-smooth (with
respect to any norm k·k) and has a minimum x? , and any starting point
x1 ∈ K, produces a sequence of points x1 , . . . , xT ∈ K such that

f (xT ) − f (x? ) ≤ ε

and

    T = O( LD²/ε ).
To compute each point xt , it uses one gradient query and one call to a
routine which optimizes a linear function over K.

This algorithm has another important feature: if K is a polytope and


x1 is a vertex of K, then any iterate xt can be written as a convex
combination of t vertices of K. In contrast, Carathéodory's theorem
guarantees that any x ∈ K can be written as a convex combination of at
most n + 1 vertices of K. Because T is independent of n (and will often
be much smaller), we can say that xT is sparse in the polytope-vertex
representation.
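For concreteness, here is a minimal Python sketch of the Frank-Wolfe iteration with the step γt = 2/(t + 1); the linear-minimization oracle over the probability simplex and the example objective are illustrative choices, not part of the notes.

import numpy as np

def frank_wolfe(grad, linear_min, x1, T):
    """Sketch of the Frank-Wolfe (conditional gradient) method: at step t,
    minimize the linear function <y, grad f(x_t)> over K via linear_min, then
    set x_{t+1} = (1 - gamma_t) x_t + gamma_t y_t with gamma_t = 2/(t+1)."""
    x = np.array(x1, dtype=float)
    for t in range(1, T + 1):
        y = linear_min(grad(x))          # argmin over K of <y, grad f(x)>
        gamma = 2.0 / (t + 1)
        x = (1 - gamma) * x + gamma * y
    return x

# Example: K is the probability simplex; a linear function is minimized at a vertex e_i.
def simplex_linear_min(c):
    y = np.zeros_like(c)
    y[np.argmin(c)] = 1.0
    return y

# Minimize f(x) = ||x - p||^2 / 2 over the simplex for some target p.
p = np.array([0.2, 0.5, 0.3])
x_T = frank_wolfe(lambda x: x - p, simplex_linear_min, x1=np.array([1.0, 0.0, 0.0]), T=200)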
2
Multiplicative Weights Update vs. Gradient Descent

2.1 Overview
In the previous lecture we discussed gradient descent-type methods
from convex optimization and showed how they can be straight-
forwardly translated into regret-minimizing algorithms in the online
convex optimization setting. The key observation was that several
gradient-descent type methods are oblivious to the fact that the same
function is used in every round. Today we turn this idea on its head:
namely, we begin with an algorithm that solves an online problem,
and end up with a (new) convex optimization method. As a result, we
can get improved dependence on the number of iterations when we are
in settings other than the Euclidean space, or have guarantees on the
function in a norm other than the Euclidean norm. For instance, suppose we know that
‖∇f (x)‖∞ = O(1) which, at best, implies a bound ‖∇f (x)‖2 = O(√n).
We obtain a gradient-descent type algorithm that takes O(log n/ε2 ) it-
erations as opposed to O(√n/ε2 ) iterations if we use the algorithm from
the previous lecture. The journey we take in deducing this result is
long and rich. The central player is the old, famous and versatile
Multiplicative Weights Update (MWU) method. In the process we also


explain how to bet on stocks, how to solve linear programs approxi-


mately and quickly, and how MWU can be thought of as a method
in convex optimization. We also hint how one may solve semi-definite
programs approximately using a matrix variant of the MWU.

2.2 The Multiplicative Weights Update Method


MWU has appeared in the literature at least as early as the 1950s and
has been rediscovered many times in many fields since then. It has ap-
plications in optimization (solving LPs), game theory, finance (portfolio
optimization), machine learning (Winnow, AdaBoost, Hedge), theoret-
ical computer science (devising fast algorithms for LPs and SDPs), and
many more. We refer the reader to Arora, Hazan, and
Kale [AHK12] for a survey of the Multiplicative Weights Update method.
To describe MWU, consider the following game. We want to invest
in the stock market, i.e., to buy/sell stocks and make money. To sim-
plify, assume that each day we can either buy or sell one share of one
particular kind of stock. Now, our goal is to trade stock each day in
order to maximize our revenue.
Unfortunately, we do not know much about the stock market. How-
ever, we have access to n experts that know something (or a lot) about
the stock market. At the beginning of each day, each expert i advises
us on whether we should buy or sell. Once we hear all of the advice, we
decide what to do. At the end of each day we observe whether our profit
went up or down. If the advice of an expert was incorrect (i.e., they
said ”buy” when the best option was ”sell”), we say that the expert
made a mistake.
Now, of course, experts may make mistakes, or may even be ad-
versarial and collude against us. Nevertheless, our goal is to, based on
advice given by the experts, do as well as possible; i.e., if ? denotes the
expert who gave the most correct advice, our aim is to do almost as
well as expert ?.
To formalize the setup, we define m_i^t and M^t, for each expert i and
each day t, as follows:

    m_i^t := number of mistakes made by expert i until day t,

and

    M^t := number of mistakes made by us until day t.
Having these definitions in hand, our goal is to ensure that the regret

    Regret^t := M^t − m_?^t

is minimized, where m_?^t = min_i m_i^t. Note that M^t may even be smaller
than m_?^t.
Let us first consider some obvious approaches to regret-
minimization given the advice of experts. One natural strategy is to
take advice from an expert who was right on the previous day. However,
one can very easily design examples where this strategy will perform
very poorly.1 Alternately, we can consider Majority Vote in which we
simply see the advice for a given day, and take the majority option. In
other words, we will buy if and only if the majority of the experts say
to buy. However, in this case we can also design examples where this
algorithm performs poorly.

2.2.1 The Weighted Majority Algorithm


In the two aforementioned approaches we were either too radical (i.e.,
ignoring experts who just made a mistake) or too forgiving (i.e., treat-
ing all experts equally regardless of their history). Instead, we could
consider a strategy which combines the best from both the worlds by
considering Weighted Majority. Here, we take an expert’s past advice
into consideration using weights, and take the weighted majority in
order to determine which advice to follow.
Let wit be the weight of expert i at the beginning of day t. We can
also think of wit as our confidence in expert i – the larger wit the more
confidence we have. We are unbiased in the beginning, and let

wi1 = 1, ∀i.

If expert i is right on day t, then we do not change wit , but otherwise

1 For example, if half the experts are right on even days and wrong on odd days, and the
other half of the experts are wrong on even days and right on odd days, we will always
make the wrong choice.

penalise that expert. Formally, define f_i^t to be

    f_i^t := 1 if expert i makes a mistake on day t, and 0 otherwise.

Then, we set

    w_i^{t+1} := w_i^t (1 − ε f_i^t)

for an ε > 0. This introduction of ε is crucial and is a reflection of


our trust in the predictions of the experts. ε = 1 would correspond
to the setting when we know that there is some expert who is always
correct. On the other hand, if we set ε = 0, we are discarding the his-
tory. Morally, this parameter plays the same role as the η-parameter in
gradient-descent type methods and could depend on t and, in princi-
ple, be different for each expert. For now, we just think of ε as a small
enough number.
How do we make a decision given these weights? Towards this, let
Φ^t := ∑_i w_i^t denote the sum of all the weights of the experts. On day t
we decide to buy if the sum of the weights of the experts saying to buy is
at least Φ^t/2, and sell otherwise. This completes the description of the
Weighted Majority Algorithm (WMA).
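A minimal Python sketch of the Weighted Majority Algorithm follows; encoding the advice and outcomes as ±1 arrays is an assumption made for illustration.

import numpy as np

def weighted_majority(advice, outcomes, eps=0.1):
    """Sketch of the Weighted Majority Algorithm.

    advice   : (T, n) array, advice[t, i] in {+1, -1} ("buy"/"sell") from expert i on day t
    outcomes : (T,) array, the correct action each day, in {+1, -1}
    Returns the number of mistakes made and the final weights.
    """
    T, n = advice.shape
    w = np.ones(n)
    mistakes = 0
    for t in range(T):
        # Weighted majority vote: buy iff the weight of "buy" experts is at least half.
        decision = 1 if w[advice[t] == 1].sum() >= w.sum() / 2 else -1
        if decision != outcomes[t]:
            mistakes += 1
        # Penalize every expert that was wrong today.
        w *= np.where(advice[t] != outcomes[t], 1 - eps, 1.0)
    return mistakes, w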
How well does WMA perform? Before we delve into the proofs, let
us state two facts that we will use repeatedly.

Fact 2.1. For any x the following holds

(1 − x) ≤ e−x .

Fact 2.2. For any |x| ≤ 1/2 we have

− ln (1 − x) ≤ x + x2 .

Now we are ready to prove the following theorem.



Theorem 2.3. After t steps, let mti denote the number of mistakes
made by expert i and M t the number of mistakes WMA had made,
and let 0 < ε ≤ 1/2. Then, for every i we have
    M^t − 2(1 + ε) m_i^t ≤ (2 ln n)/ε.

In other words, the performance of the WMA is within a factor of


2(1 + ε) of the best expert up to an additive factor of about ln n/ε. If we
consider the average after time t, i.e., M t/t, the above theorem implies
that if t ≥ 2 ln n/ε2 , then the average number of mistakes made by the
WMA is, up to the factor 2(1 + ε), within an additive ε of that of the best
expert. This is formalized in the following corollary:

Corollary 2.4. After T days, let mT? be the number of mistakes the
best expert has made and let M T be the number of mistakes WMA
has made. If T ≥ 2 ln n/ε² where 0 < ε ≤ 1/2, then

    (1/T) ( M^T − 2(1 + ε) m_?^T ) ≤ ε.

Note that the corollary says that in the long run, i.e., for large T , the
algorithm essentially makes at most two times more mistakes than the
best expert in hindsight!2
The proof is easy and relies on observing that every time the WMA
makes a mistake, the total weight on the experts reduces by a multi-
plicative factor of (1 − ε/2).

Proof. Assume that we make a mistake on day t. Then, at least half of


the total weight of the experts will get decreased by (1 − ε), otherwise
the weighted majority would have told us to take the other option.
Hence,

    Φ^{t+1} ≤ Φ^t ( (1 − ε)/2 + 1/2 ) = Φ^t ( 1 − ε/2 ).
2 Interestingly, the factor of 2 in the bound is optimal for a deterministic strategy. We will
later see that randomization can help.

Therefore, every time we make a mistake the total weight of the experts
decreases by the multiplicative factor of 1 − ε/2. Hence, we can upper
bound Φ^{t+1} as follows

    Φ^{t+1} ≤ Φ^1 (1 − ε/2)^{M^t} = n (1 − ε/2)^{M^t} ≤ n e^{−εM^t/2},      (2.1)
where we recall that all weights were initialized to 1 so Φ1 = n and use
Fact 2.1.
On the other hand, since we have that Φt+1 is the sum of all the
weights of the experts on day t + 1, we also have the following lower
bound ∀i:

    Φ^{t+1} ≥ w_i^{t+1} = w_i^1 (1 − ε)^{m_i^t} = (1 − ε)^{m_i^t},      (2.2)

where again we use the fact that wi1 = 1.


Now, putting together the upper bound (2.1) and lower bound (2.2),
we obtain

    (1 − ε)^{m_i^t} ≤ n e^{−εM^t/2}.

Taking logarithms on both sides we get that

    m_i^t ln(1 − ε) ≤ ln n − (ε/2) M^t.      (2.3)

Using Fact 2.2, from (2.3) we conclude that

    −ε(1 + ε) m_i^t ≤ ln n − (ε/2) M^t,

which further implies that

    M^t − 2(1 + ε) m_i^t ≤ (2 ln n)/ε.
This gives a bound on the regret as desired.

2.2.2 The Multiplicative Weights Update Algorithm


The algorithm we saw so far makes, up to a factor of two, optimal
choices about trading a single item. Let us now consider a more general
setting. As before, there are n experts, and following solely expert i's
advice on day t incurs a cost f_i^t where f^t : [n] → [−1, 1]. However,
in many cases, as with stocks, we can take fractional decisions. Hence,
for a given vector p^t ∈ ∆n which represents a convex combination of
experts,3 we incur a loss of ⟨p^t, f^t⟩.

Equivalently, we can think of p^t as a probability distribution from
which we select a single expert x^t, and incur a loss of f^t(x^t). In this case,
the expected loss is E_{x^t ∼ p^t}[f^t(x^t)] = ⟨p^t, f^t⟩. Furthermore, we compete
against the best convex combination of the experts, and are interested
in minimizing the regret

    ∑_{t=1}^{T} ⟨p^t, f^t⟩ − min_{p∈∆n} ∑_{t=1}^{T} ⟨p, f^t⟩.

How do we update p^t? The strategy is similar to the WMA: maintain
a weight w_i^t for the i-th expert, where we start with w_i^1 = 1. Given the
loss function f^t, the weights are updated as

    w_i^{t+1} := w_i^t (1 − ε f_i^t)

for a parameter ε > 0 which has the same intention as before. Since
f_i^t could be negative or positive, the weight can increase or decrease.
However, since ‖f^t‖∞ ≤ 1 for all t, the weights always remain non-
negative. Finally, since we need a probability distribution (or convex
combination) p^t ∈ ∆n, we normalize the weight vector w^t by the total
weight Φ^t := ∑_i w_i^t to obtain

    p^t := w^t / Φ^t.
We call this method the Multiplicative Weights Update (MWU) algo-
rithm. We are now ready to prove the main theorem of this section.
The only difference from Theorem 2.3 is that the factor of 2 goes away!
The proof is similar to that of Theorem 2.3.

3 A point p is in ∆n if ‖p‖1 = 1 and p_i ≥ 0 for all i.
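Before stating the regret bound, here is a minimal Python sketch of the MWU algorithm just described; representing the losses as a (T × n) array is an assumption made for illustration.

import numpy as np

def mwu(losses, eps=0.1):
    """Sketch of the Multiplicative Weights Update algorithm.

    losses : (T, n) array with entries in [-1, 1]; losses[t, i] is f_i^t.
    Returns the distributions p^t played in each round; the weights follow
    w_i^{t+1} = w_i^t * (1 - eps * f_i^t).
    """
    T, n = losses.shape
    w = np.ones(n)
    distributions = []
    for t in range(T):
        p = w / w.sum()
        distributions.append(p)
        w = w * (1 - eps * losses[t])
    return distributions

# Example: two experts, one always suffering loss 1, the other loss 0;
# the distribution quickly concentrates on the better expert.
losses = np.column_stack([np.ones(50), np.zeros(50)])
ps = mwu(losses, eps=0.1)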



Theorem 2.5. Assume that kf t k∞ ≤ 1 for every t, and let 0 < ε ≤ 1/2.
Then, the MWU algorithm produces a sequence of probability distri-
butions p1 , . . . , pT such that

    ∑_{t=1}^{T} ⟨p^t, f^t⟩ − inf_{p∈∆n} ∑_{t=1}^{T} ⟨p, f^t⟩ ≤ (ln n)/ε + εT.      (2.4)

Thus, the number of iterations after which the average regret becomes
less than 2ε is at most (ln n)/ε².

Proof. We have

    Φ^{t+1} = ∑_i w_i^{t+1} = ∑_i w_i^t (1 − ε f_i^t).

Now, using the facts that p_i^t = w_i^t/Φ^t, ‖p^t‖1 = 1 and p_i^t ≥ 0, we rewrite
the above equation as follows

    Φ^{t+1} = ∑_i p_i^t Φ^t (1 − ε f_i^t)
            = Φ^t − ε Φ^t ∑_i p_i^t f_i^t
            = Φ^t ( 1 − ε ⟨p^t, f^t⟩ ).      (2.5)

Following Fact 2.1, equality (2.5) can be written as

    Φ^{t+1} ≤ Φ^t e^{−ε⟨p^t, f^t⟩}.

Therefore, after T rounds we have

    Φ^{T+1} ≤ Φ^1 e^{−ε ∑_{t=1}^{T} ⟨p^t, f^t⟩} = n e^{−ε ∑_{t=1}^{T} ⟨p^t, f^t⟩},      (2.6)

since Φ^1 = n.
Our next step is to provide a lower bound on Φ^{T+1}. As before, we
observe that Φ^t ≥ w_i^t for any t and any i, and obtain

    Φ^{T+1} ≥ w_i^{T+1} = ∏_{t=1}^{T} (1 − ε f_i^t).
Using Fact 2.2 we can further write

    Φ^{T+1} ≥ e^{−ε ∑_{t=1}^{T} f_i^t − ε² ∑_{t=1}^{T} (f_i^t)²}.      (2.7)

Putting together the lower and the upper bound on Φ^{T+1}, i.e., equations
(2.7) and (2.6), and taking logarithms we obtain

    ln n − ε ∑_{t=1}^{T} ⟨p^t, f^t⟩ ≥ −ε ∑_{t=1}^{T} f_i^t − ε² ∑_{t=1}^{T} (f_i^t)²,

which after rearranging and dividing by ε gives

    ∑_{t=1}^{T} ⟨p^t, f^t⟩ − ∑_{t=1}^{T} f_i^t ≤ (ln n)/ε + ε ∑_{t=1}^{T} (f_i^t)² ≤ (ln n)/ε + εT.

The last inequality is true since ‖f^t‖∞ ≤ 1. Since this is true for any
i, by convexity we obtain that

    ∑_{t=1}^{T} ⟨p^t, f^t⟩ − ∑_{t=1}^{T} ⟨p, f^t⟩ ≤ (ln n)/ε + εT

for all p ∈ ∆n. This completes the proof of the theorem.

The width. What happens if instead of ‖f^t‖∞ ≤ 1, we have ‖f^t‖∞ ≤
ρ (for all t) for some ρ > 1? This parameter is often referred to as the width
of the loss function. In this case, to maintain the non-negativity of the
weights, the update rule must be modified to

    w_i^{t+1} := w_i^t ( 1 − ε f_i^t / ρ ).

In this case, everything goes through as before except that the term (ln n)/ε
in the regret bound in Theorem 2.5 becomes (ρ² ln n)/ε. Thus, the number
of iterations after which the average regret becomes less than 2ε is
(ρ² ln n)/ε².

2.2.3 Solving Linear Programs Approximately using MWU


In this section we illustrate one of the many applications of the MWU
algorithm: solving linear programs. Rather than solving the optimiza-
tion version of linear programming, we choose a very simple setting to
illustrate the powerful idea of using the MWU algorithm. The method
has been used in numerous ways to speed up solutions to linear pro-
grams for fundamental problems. We consider the following feasibility
problem: given an m × n matrix A and b ∈ Rm, does there exist an x
such that Ax ≥ b?

    ∃? x : Ax ≥ b.      (2.8)

For this problem we give an algorithm that, given an error parameter
ε > 0, either outputs a point x such that Ax ≥ b − ε1 or proves that
there is no solution to this linear system of inequalities. We also assume
the existence of an oracle that, given a vector p ∈ ∆m, solves the following
relaxed problem

    ∃? x : pᵀAx ≥ pᵀb.      (2.9)

Note that the above feasibility problem involves only one inequality.
This, often, may be significantly easier algorithmically than the original
feasibility problem. In any case, we will assume access to such an oracle. Clearly, if there is
an x that satisfies (2.8), then x satisfies (2.9) as well for all p ∈ ∆m. On
the other hand, if there is no solution to (2.9) for some p, then the system (2.8)
is infeasible. Finally, we assume that when the oracle returns a feasible
solution for a p, the solution x that it returns is not arbitrary but has
bounded width:

    max_i |(Ax)_i − b_i| ≤ 1.
In this setting, we can prove the following theorem:

Theorem 2.6. If there is a solution to (2.8), then there is an algorithm


that outputs an x which satisfies the system (2.8) up to an additive error
of 2ε. The algorithm makes at most (ln m)/ε² calls to a width-bounded oracle
for the problem (2.9). On the other hand, if there is no solution to (2.8),
then the algorithm says so.

The algorithm in the proof of the theorem is the MWU: we just have to
identify the loss function at time t. We maintain a probability distribu-
tion p^t at any time t and pass it to the oracle. As is usual, the starting
probability distribution, p^1, is uniform.
If the oracle returns that the system (2.9) is infeasible, the algorithm
returns that the system (2.8) is infeasible. As feasibility is preserved
under taking convex combinations, we know that the algorithm is cor-
rect. On the other hand, if the oracle returns an x^t, we set the loss
function to

    f^t := Ax^t − b.

Since |(Ax^t)_i − b_i| ≤ 1, ‖f^t‖∞ ≤ 1. Finally, if the algorithm succeeds
for T := (ln m)/ε² iterations, then we let

    x̃ := (1/T) ∑_t x^t.

Thus, to prove the theorem, it is sufficient to prove that x̃ satisfies (2.8)


up to an additive error of 2ε.
Towards that end, we begin by observing that, since x^t is feasible
for the system (2.9) for p^t for every t, we have

    ⟨p^t, f^t⟩ = ⟨Ax^t − b, p^t⟩ = (p^t)ᵀAx^t − (p^t)ᵀb ≥ 0.

Thus, the loss at every step is at least 0. Hence, from Theorem 2.5, for
the choice of T = (ln m)/ε² we obtain that for every i, −(1/T) ∑_{t=1}^{T} f_i^t ≤ 2ε.
This is the same as

    −(1/T) ∑_{t=1}^{T} ( (Ax^t)_i − b_i ) ≤ 2ε.

Now, using the definition of x̃ = (1/T) ∑_t x^t, we obtain that for all i,

    (Ax̃)_i ≥ b_i − 2ε.

Thus, x̃ is 2ε-feasible for (2.8). This concludes the proof of Theorem 2.6.
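A minimal Python sketch of this MWU-based feasibility algorithm is given below; the interface of the width-bounded oracle for (2.9) is an assumption made for illustration.

import numpy as np

def mwu_lp_feasibility(A, b, oracle, eps):
    """Sketch of the MWU-based feasibility algorithm of Theorem 2.6.

    oracle(p) should either return an x with p @ A @ x >= p @ b and
    max_i |(A @ x - b)[i]| <= 1, or None if no such x exists (the assumed
    width-bounded oracle for (2.9)).
    Returns a point satisfying Ax >= b - 2*eps componentwise, or None.
    """
    m, n = A.shape
    T = int(np.ceil(np.log(m) / eps ** 2))
    w = np.ones(m)
    x_sum = np.zeros(n)
    for _ in range(T):
        p = w / w.sum()
        x = oracle(p)
        if x is None:
            return None                  # (2.8) is infeasible
        loss = A @ x - b                 # f^t, with ||f^t||_inf <= 1 by assumption
        w = w * (1 - eps * loss)
        x_sum += x
    return x_sum / T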

2.3 Multiplicative Weights Update as Gradient Descent


Now we begin our journey towards using the ideas developed in the
previous sections to give a new algorithm for convex optimization. In
this section we first see how the MWU algorithm is in fact a form of
gradient descent. The connection is via the entropy function.

For a non-negative vector w, let H(w) denote the function
H(w) def= Σ_i w_i ln w_i.
(This function is the negative entropy when w ∈ ∆n .) The gradient of
H(w) is the vector x(w) with coordinates
x_i = 1 + ln w_i.
Thus, if w varies over the non-negative orthant, x(w) varies over Rn .
Moreover, this map is invertible: given x ∈ Rn , one can find a w such
that x = x(w) :
w_i = e^{x_i − 1}.
Thus, at iteration t, for wt we let xt def= x(wt ). The loss function is
f t , and the loss is F t (w) def= hf t , wi. Thus, ∇w F t = f t , and our
assumption that kf t k∞ ≤ 1 is the same as
k∇w F t k∞ ≤ 1.
Suppose we take the gradient step of size η in the x-space with respect
to ∇w F t . Then, we obtain a new point in the x-space:
xt+1 = xt − η∇w F t = xt − ηf t .
The corresponding wt+1 is obtained by equating, for each i,
1 + ln wit+1 = 1 + ln wit − ηfit .
Exponentiating, the update step in the w-space is
w_i^{t+1} = w_i^t · e^{−η f_i^t}.
While this is not quite the update in the MWU algorithm, it is the
Hedge variant of the MWU. For these updates we can derive the same
regret bound as Theorem 2.5. It is the same as the MWU algorithm
when η f_i^t ≪ 1 for all t, i, since we can then use the approximation
e−x ≈ 1 − x, giving us the familiar update step:
wit+1 = wit (1 − ηfit ).
Thus, if we allow going back and forth to a space via the gradient,
there is a function, namely the negative entropy function, for which
the MWU algorithm is nothing but a gradient-descent type method!
How general is this paradigm?

2.3.1 A Gradient Descent Method Inspired from MWU


It turns out that there is a method for convex optimization that can be
abstracted out from the previous section which works well beyond the
MWU setting. As we mentioned in the beginning of this lecture, the
method we will derive can give a better convergence rate than the
usual gradient descent in the ambient space. We are back in the setting
where we are given a convex function f and a convex set K ⊆ K 0 , and
the goal is to find a point x ∈ K such that

f (x) − f (x? ) ≤ ε.

(The reader can keep in mind K = ∆n and K′ = Rn_{≥0} to draw the
analogy with the previous section.) Assume that the gradient of f is
bounded by 1 with respect to some norm k · k. (This corresponds to
the assumption kf t k∞ ≤ 1 for all t in the MWU setting.) Just as we
chose the negative entropy map H in the previous section, the method
depends on a map M which is assumed to be a continuously differentiable
and strictly convex function over K′. Further, similar to H, the map
∇M : K′ → S should be one-to-one and invertible. (S = Rn in the
previous section.) We can now generalize the algorithm in the previous
section.
The initial point x1 is chosen to be
x1 def= arg inf_{x∈K} M (x).
(For negative entropy, this results in the choice of the vector where all
the coordinates are the same, i.e., the uniform distribution over the
experts or the vector (1/n)1.) Let xt ∈ K. Once we map xt to ∇M (xt )
and do the gradient step, we may move out of K but should remain
in K 0 . Let y t+1 be the point in K 0 which corresponds to the resulting
point in S. In other words, given xt and ∇f, for t ≥ 1 we let y t+1 be
the point such that

∇M (y t+1 ) = ∇M (xt ) − η∇f (xt ).        (2.10)

As y t+1 might lie outside of K, we project it back to K and let xt+1 be


its projection. We obtain the projection by minimizing the Bregman

divergence as follows:
xt+1 def= arg inf_{x∈K} DM (x, y t+1 ).

Recall that the Bregman divergence DM (x, y) associated with M for


points x, y ∈ K 0 is defined to be
DM (x, y) def= M (x) − M (y) − h∇M (y), x − yi .

(In the MWU setting, the Bregman divergence of the negative entropy
function corresponds to the Kullback-Leibler divergence, and projecting
according to it is nothing but normalizing a non-negative vector to
make its ℓ1 norm 1.) In this setting, we can prove the following theorem,
which is a generalization of the MWU method (and from which we can
recover the MWU method):

Theorem 2.7. Let the gradient of f be bounded in a norm k · k
by G. Let M be a map satisfying the properties listed above which,
in addition, is l-strongly convex with respect to the norm k · k? , the
dual norm of k · k. Let D def= supx∈K M (x) − M (x1 ). Then, for
η def= √(D/T), the algorithm above produces a sequence of points
x1 , . . . , xT such that
f((1/T) Σ_{t=1}^T xt ) − f (x? ) ≤ ε,
where
T = (1/l) · (DG/ε)².

To complete the analogy to the MWU method in the previous section,
we note that if we let x1 def= (1/n)1, then D = ln n. Further, Pinsker’s
inequality implies that the negative entropy function is 1-strongly con-
vex with respect to the k · k1 norm, the dual norm to k · k∞ . The proof
of this theorem is left as an exercise. No new ideas, other than those
discussed in this lecture and the last, are required to complete the proof.
Thus, we have achieved the goal promised in the beginning of the
lecture. In conclusion, we started with the MWU algorithm in the online

convex optimization setting, interpreted it as a gradient descent method


and then abstracted the essence to obtain a new method in convex
optimization. This method is often referred to as Mirror Descent or
Dual Averaging algorithm. In the appendix we present a matrix version
of the MWU method which has applications to approximately solving
SDPs quickly.
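As an illustration, here is a small Python sketch of this scheme instantiated with the negative-entropy mirror map on the simplex (i.e., exponentiated gradient). The function `grad_f`, the step size and the function names are placeholders supplied by the reader; the normalization step is exactly the Bregman (KL) projection onto ∆n mentioned above.

```python
import numpy as np

def mirror_descent_entropy(grad_f, n, T, eta):
    """Sketch of mirror descent over the simplex with the negative-entropy
    mirror map: the w-space update is multiplicative and the Bregman (KL)
    projection onto the simplex is plain l1-normalization."""
    x = np.ones(n) / n                  # x^1 minimizes negative entropy over the simplex
    iterates = []
    for _ in range(T):
        iterates.append(x)
        g = grad_f(x)                   # assumed bounded in the infinity-norm
        y = x * np.exp(-eta * g)        # gradient step in the dual space, mapped back
        x = y / y.sum()                 # Bregman projection onto the simplex
    return np.mean(iterates, axis=0)    # the theorem bounds f at the average iterate
```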

2.4 Appendix: A Multiplicative Weights Update for Matrices


In this section we discuss the generalization of our MWU framework
to the matrix world. As before we have n experts. So far, the decision
for round t was essentially the index i of the expert we follow in this
round. After this decision is made, the adversary reveals the cost vector
f t ∈ [−1, 1]n , and we pay fit . This is equivalent to choosing on every
round t a vector v t ∈ {e1 , e2 , ..., en } and having loss hv t , f t i. Recall that
the strategy we devised for the game was randomized and our v t was
a random vector chosen according to the distribution P [v t = ei ] = pti ,
where pt was the probability distribution at round t. Then the expected
loss was simply Σ_i p_i^t f_i^t = hpt , f t i.

In the matrix case we allow the decision v t to be any vector of `2 -


norm 1 (i.e. vector from unit sphere Sn−1 ). The loss at round t is given
by
hv t , F t v t i = (v t )> F t v t ,
where F t is the loss matrix chosen by the adversary right after our
decision at round t. We assume that this matrix is symmetric. Similarly
to the basic case, where we had the requirement |fit | ≤ 1, we impose a
bound on the cost matrix: kF t k ≤ 1 (k · k is the spectral norm, i.e. we
require that the eigenvalues of F t lie all in the interval [−1, 1]). Observe
that in our previous setting all the loss matrices were diagonal and we
could pick our decision only from a finite set {e1 , e2 , ..., en } ⊆ Sn−1 .
Like before we will give a randomized strategy for this online problem.
Our goal is, of course, to minimize the expected total loss compared
to the best expert. However, now the set of experts is much bigger: it
consists of all the unit length vectors.
Suppose our algorithm chooses the vector v def= v t according to dis-
tribution µ def= µt and the cost matrix is F def= F t . We calculate the loss

at round t:

Ev∼µ [hv, F vi] = Ev∼µ [(vv > ) • F ] = Ev∼µ [vv > ] • F,
where A • B def= Tr(A> B) is the usual matrix inner product. So if
we denote P t = Ev∼µ [vv > ], then the loss at round t is simply P t •
F t . Observe that P t is PSD and Tr(P t ) = 1. Our algorithm, instead
of working directly with a distribution over Sn−1 , will keep a matrix
P t . Our objective is to ensure that Σ_{t=1}^T P t • F t is as small as possible
compared to min_{kwk2=1} Σ_{t=1}^T w> F t w. The latter is nothing but the
smallest eigenvalue of Σ_{t=1}^T F t . We proceed with the description of our

algorithm based on MWU.


Matrix Multiplicative Weights Update (MMWU):

(1) Initialize Q1 = I.
(2) In round t, use the matrix
        P t = Qt / Φt ,
    where Φt def= Tr(Qt ).
(3) Observe the loss matrix F t , and update Qt as follows:
        Qt+1 = e^{−ε Σ_{k=1}^t F^k},
    where eA def= Σ_{k=0}^∞ A^k/k!.
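A minimal NumPy sketch (ours) of MMWU is given below; the matrix exponential is computed via an eigendecomposition of the (symmetric) accumulated loss, and the loss matrices F^t are assumed to be symmetric with spectral norm at most 1.

```python
import numpy as np

def matrix_mwu(loss_matrices, eps):
    """Sketch of MMWU: maintains P^t = exp(-eps * sum_{k<t} F^k) / trace."""
    n = loss_matrices[0].shape[0]
    S = np.zeros((n, n))                # running sum of the loss matrices seen so far
    Ps = []
    for F in loss_matrices:
        # Q^t = exp(-eps * S); S is symmetric, so use its eigendecomposition.
        eigvals, eigvecs = np.linalg.eigh(S)
        Q = (eigvecs * np.exp(-eps * eigvals)) @ eigvecs.T
        P = Q / np.trace(Q)             # density matrix: PSD with trace 1
        Ps.append(P)
        S = S + F                       # observe the loss matrix of this round
    return Ps
```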

We will now prove an analogue of Theorem 2.5 in the matrix world.

Theorem 2.8. Assume that −I ⪯ F t ⪯ I for every t, and let 0 <


ε ≤ 1. Then, the algorithm MMWU produces a sequence of matrices
P 1 , . . . , P T with Tr(P t ) = 1 such that:

Σ_{t=1}^T P t • F t − inf_{v∈Sn−1} Σ_{t=1}^T v > F t v ≤ (ln n)/ε + εT.    (2.11)

Thus, after T = (ln n)/ε² rounds,
(1/T) Σ_{t=1}^T P t • F t ≤ λmin((1/T) Σ_{t=1}^T F t ) + 2ε.

Thus, if we know that P t • F t ≥ 0 for all t, then
−2εI ⪯ (1/T) Σ_{t=1}^T F t .

This is a powerful implication and has far reaching consequences when


one carefully constructs the setting, just as in Section 2.2.3. For in-
stance, this theorem is sufficient to construct fast and approximate
SDP solvers and show the existence of near-linear spectral sparsifiers.
We omit the details.
The proof has the same structure as the proof of Theorem 2.5. How-
ever, in the matrix world we are plagued with non-commutativity:
AB ≠ BA in general. For instance, it would have been great
if eA+B = eA eB , but this is false in general. What is true instead is the
following fact, known as the Golden–Thompson inequality:

Fact 2.9. Let A, B be symmetric matrices, then:

Tr(eA+B ) ≤ Tr(eA eB ).

We note that this is not true for three matrices A, B, C! We also need
some simple inequalities, used in the proof, which are essentially
consequences of the corresponding scalar inequalities.

Fact 2.10. Let A, B, C be symmetric matrices with A PSD and B ⪯ C. Then:
Tr(AB) ≤ Tr(AC).

Fact 2.11. Let A be a symmetric matrix and v be a vector of unit
length, then:
e^{v> Av} ≤ Tr(eA ).


Proof. Suppose the eigenvalues of A are λ1 ≤ · · · ≤ λn . The eigenvalues
of eA are precisely eλ1 , . . . , eλn . We know that v > Av ≤ λn . Therefore
e^{v> Av} ≤ eλn ≤ Tr(eA ).


Fact 2.12. Let A be a symmetric matrix satisfying kAk ≤ 1, then:

e−A ⪯ I − A + A2 .

This is easy to see because the eigenspaces of all the matrices


A, A2 , A3 , ..., eA are the same, so it is enough to prove the above in-
equality for A being a number in the interval [−1, 1]. Now we are ready
to prove the theorem.

Proof. As in the proof of Theorem 2.5 we try to obtain upper and lower
bounds on the potential function Φt+1 , which in our case is defined to
be Φt+1 def= Tr(Qt+1 ). Let us start with the upper bound:
Φt+1 = Tr(Qt+1 ) = Tr(e^{−ε Σ_{k=1}^t F^k})
     ≤ Tr(e^{−ε Σ_{k=1}^{t−1} F^k} e^{−εF^t})          (by Golden–Thompson)
     ≤ Tr(e^{−ε Σ_{k=1}^{t−1} F^k} (I − εF t + ε²(F t )²))
     = Tr(Qt ) − εTr(Qt F t ) + ε²Tr(Qt (F t )²).

Now we use the fact that Qt = P t Tr(Qt ):
Tr(Qt F t ) = Tr(Qt )Tr(P t F t ) = Φt (P t • F t ).
Similarly:
Tr(Qt (F t )²) = Tr(Qt )Tr(P t (F t )²) = Φt (P t • (F t )²).

Therefore we can conclude that:
Φt+1 ≤ Φt (1 − ε(P t • F t ) + ε²(P t • (F t )²)) ≤ Φt e^{−ε(P t •F t )+ε²(P t •(F t )²)} .
Expanding further,
Φt+1 ≤ n e^{−ε Σ_{k=1}^t (P k •F k ) + ε² Σ_{k=1}^t (P k •(F k )²)} .    (2.12)

It remains to obtain a suitable lower bound for Φt+1 . To this end let us
fix any vector v with kvk2 = 1. We use Fact 2.11 with A = −ε Σ_{k=1}^t F^k :
e^{−ε Σ_{k=1}^t v> F^k v} ≤ Tr(e^{−ε Σ_{k=1}^t F^k}) = Φt+1 .    (2.13)

Putting inequalities (2.12) and (2.13) together and taking logarithms of
both sides yields
−ε Σ_{k=1}^t v > F k v ≤ −ε Σ_{k=1}^t (P k • F k ) + ε² Σ_{k=1}^t (P k • (F k )²) + ln n.

After dividing by ε and rearranging, we get for every v ∈ Sn−1 :
Σ_{k=1}^t (P k • F k ) ≤ Σ_{k=1}^t v > F k v + ε Σ_{k=1}^t (P k • (F k )²) + (ln n)/ε.
Since kF k k ≤ 1 and Tr(P k ) = 1, each term P k • (F k )² is at most 1, and the bound (2.11) follows.
3 Newton's Method and the Interior Point Method

3.1 Overview
In the last of the three foundational lectures we continue our journey
towards faster (and better) algorithms for convex programs. The al-
gorithms introduced till now assume access only to an oracle for the
value of the convex function and its gradient at a specified point. In
this lecture we assume that we are also given second-order access to
f : namely, given vectors x and y, we can obtain (∇2 f (x))−1 y. The
resulting method is Newton’s method for unconstrained optimization;
it has the property that if one starts close enough to the optimal
solution, it converges in roughly log log(1/ε) iterations! Finally, we
present the application of Newton’s method to solving constrained
convex programs. This is achieved by moving from constrained to
unconstrained optimization via a barrier function. The resulting
methods are broadly termed interior point methods. We analyze one
such method, referred to as the primal path-following interior point
method, for linear programs. We end this lecture with a discussion of
self-concordant barrier functions, which allow us to go beyond linear
programs to more general convex programs.


Fig. 3.1 One step of Newton’s Method

3.2 Newton’s Method and its Quadratic Convergence


Our starting point is the versatile Newton’s method which we first
explain in the simplest setting of finding a root for a univariate poly-
nomial.

3.2.1 Newton-Raphson method

In numerical analysis, Newton’s method (also known as the Newton-


Raphson method), named after Isaac Newton and Joseph Raphson, is
a method for finding successively better approximations to the roots
(or zeroes) of a real-valued function. Suppose we are given a function
g : R 7→ R and we want to find its root (or one of its roots). Assume we
are given a point x0 which is likely to be close to a zero of g. We consider
the point (x0 , g(x0 )) and draw a line through it which is tangent to the
graph of g. Let x1 be the intersection of the line with the x-axis (see
Figure 3.2.1). Then it is reasonable (at least if one were to believe the
figure above) to suppose that by moving from x0 to x1 we have made

progress in reaching a zero of g. First note that:

x1 def= x0 − g(x0 )/g 0 (x0 ).

From x1 , by the same method we obtain x2 , then x3 etc. Hence, the


general formula is:

xk+1 def= xk − g(xk )/g 0 (xk ) for all k ≥ 0.    (3.1)

Of course we require differentiability of g, in fact we will assume even


more – that g is twice continuously differentiable. Let us now analyze
how fast the distance to the root decreases with k.
Let r, be the root of g, that is g(r) = 0. Expand g into Taylor
series at the point xk and, use the Mean Value Theorem, to obtain the
following:
g(r) = g(xk ) + (r − xk )g 0 (xk ) + (1/2)(r − xk )² g 00 (θ)
for some θ in the interval [r, xk ]. From (3.1) we know that

g(xk ) = g 0 (xk )(xk − xk+1 ).

Recall also that g(r) = 0. Hence, we get:

0 = g 0 (xk )(xk − xk+1 ) + (r − xk )g 0 (xk ) + (1/2)(r − xk )² g 00 (θ),
which implies that
g 0 (xk )(r − xk+1 ) = −(1/2)(r − xk )² g 00 (θ).
This gives us the relation between the new distance from the root in
terms of the old distance from it:
|r − xk+1 | = |g 00 (θ)/(2g 0 (xk ))| · |r − xk |².

This can be summarized in the following theorem:



Theorem 3.1. Suppose g : R 7→ R is a C 2 function,1 r ∈ R is a root
of g, x0 ∈ R is a starting point and x1 = x0 − g(x0 )/g 0 (x0 ). Then:
|r − x1 | ≤ M |r − x0 |²,
where M = sup_{x,θ∈[r,x0 ]} |g 00 (θ)/(2g 0 (x))|.

Thus, assuming that M is a small constant, say M ≤ 1 (and remains


so throughout the execution of this method) and that |x0 − r| < 1, we
obtain quadratically fast convergence of xk to r. For the error |xk − r|
to became less then ε one needs to take k = log log 1/ε. As one can
imagine, for this reason Newton’s Method is very efficient and powerful.
In practice, it gives very good results even when no reasonable bounds
on M or |x0 − r| are available.
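As a quick illustration, here is a minimal Python sketch of the Newton–Raphson iteration (3.1); the example root-finding problem (the square root of 2 via g(x) = x² − 2) is ours, chosen only to exhibit the quadratic convergence.

```python
def newton_raphson(g, g_prime, x0, iters=6):
    """Iterate x_{k+1} = x_k - g(x_k)/g'(x_k), printing the successive iterates."""
    x = x0
    for k in range(iters):
        x = x - g(x) / g_prime(x)
        print(k + 1, x)
    return x

# Example (ours): find the root of g(x) = x^2 - 2, i.e. sqrt(2).
# The number of correct digits roughly doubles at every iteration.
newton_raphson(lambda x: x * x - 2, lambda x: 2 * x, x0=1.5)
```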

3.2.2 Newton’s Method for Convex Optimization


How could the benign looking Newton-Raphson method be useful to
solve convex programs? The key lies in the observation from the first
lecture that the task of minimization of a differentiable convex func-
tion in the unconstrained setting is equivalent to finding a root of its
derivative. In this section we abstract out the method from the previous
section and present Newton’s method for convex programming.
Recall that the problem is to find
x? def= arg inf_{x∈Rn} f (x),

where f is a convex (smooth enough) function. The gradient ∇f of f is


a function Rn 7→ Rn and its derivative ∇2 f maps Rn to n×n symmetric
matrices. Hence, the right analog of the update formula (3.1) to our
setting can be immediately seen to be:
xk+1 def= xk − (∇2 f (xk ))−1 ∇f (xk ) for all k ≥ 0.    (3.2)
For notational convenience we define the Newton step at point x to be
n(x) def= −(∇2 f (x))−1 ∇f (x),
1 The function is twice differentiable and the second derivative is continuous.

then (3.2) gets abbreviated to xk+1 = xk + n(xk ). One may convince
oneself that (3.2) is meaningful by applying it to f being a strictly
convex quadratic function (i.e. f (x) = x> M x for M positive definite).
Then, no matter at which point we start, after one iteration we land in
the unique minimizer. This phenomenon can be explained as follows:
suppose f˜ is the second order approximation of f at point x,
f˜(y) = f (x) + (y − x)> ∇f (x) + (1/2)(y − x)> ∇2 f (x)(y − x).
If f is strictly convex then its Hessian is positive definite, hence the
minimizer of f˜(y) is
y ? = x − (∇2 f (x))−1 ∇f (x) = x + n(x).

For this reason Newton’s method is called a second-order method, be-


cause it takes advantage of the second order approximation of a func-
tion to make a step towards the minimum. All the algorithms we looked
at in the previous lecture were first-order methods. They used only the
gradient (first order approximation) to perform steps. However, com-
putationally our task has increased as now we would need a second
order oracle to the function: given x and y, we would need to solve the
system of equations ∇2 f (x)y = ∇f (x).
The next question is if, and under what conditions, xk converges
to the minimizer of f . It turns out that it is possible to obtain a sim-
ilar quadratic convergence guarantee assuming that x0 is sufficiently
close to the minimizer x? . We can prove the following theorem whose
hypothesis and implications should be compared to Theorem 3.1.

Theorem 3.2. Let f : Rn → 7 R be a C 2 function and x? be its mini-


mizer. Denote the Hessian ∇2 f (x) by H(x) and assume that the fol-
lowing hold:

• There is some constant h > 0 and a ball B(x? , r) around x?


such that, whenever x ∈ B(x? , r), kH(x)−1 k ≤ 1/h.
• There is some constant L > 0 and a ball B(x? , r) around
x? such that, whenever x, y ∈ B(x? , r), kH(x) − H(y)k ≤
Lkx − yk.

If x0 is a starting point, sufficiently close to x? , and x1 = x0 + n(x0 ),


then:

kx1 − x? k ≤ M kx0 − x? k2

for some constant M . For example M = L/(2h) will do.

Thus, if M ≤ 1, then with a starting point close enough to the opti-


mal solution, Newton’s method converges quadratically fast. One can
present a rough analogy of this theorem with Theorem 3.1. There, for
the method to have quadratic convergence, |g 0 (x)| should be bigger in
comparison to |g 00 (x)| (to end up with small M ). Here, the role of g is
played by the derivative f 0 of f (the gradient in the one-dimensional
case). The first condition on H(x) says basically that |f 00 (x)| is big. The
second condition may be a bit more tricky to decipher, it says that f 00 (x)
is Lipschitz-continuous, and upper-bounds the Lipschitz constant. As-
suming f is thrice continuously differentiable, this essentially gives an
upper bound on |f 000 (x)|. Note that this intuitive explanation does not
make any formal sense, since f 0 (x), f 00 (x), f 000 (x) are not numbers, but
vectors, matrices and 3-tensors respectively. We only wanted to em-
phasize that the spirit of Theorem 3.2 still remains the same as Theo-
rem 3.1.

Proof. [Proof of Theorem 3.2] The basic idea of the proof is the same
as in Theorem 3.1. We need a tool similar to the Taylor expansion used
in the previous section. To obtain one, we consider the function φ : [0, 1] →
Rn , φ(t) = ∇f (x + t(y − x)). Applying the fundamental theorem of
calculus to φ (to every coordinate separately) yields:
φ(1) − φ(0) = ∫_0^1 φ′(t) dt,
that is,
∇f (y) − ∇f (x) = ∫_0^1 H(x + t(y − x))(y − x) dt.    (3.3)

Let x = x0 for notational convenience and write x1 − x? in a convenient



form:
x1 − x? = x − x? + n(x)
        = x − x? − H(x)−1 ∇f (x)
        = x − x? + H(x)−1 (∇f (x? ) − ∇f (x))
        = x − x? + H(x)−1 ∫_0^1 H(x + t(x? − x))(x? − x) dt
        = H(x)−1 ∫_0^1 (H(x) − H(x + t(x? − x)))(x − x? ) dt.

Now take norms:


kx1 − x? k ≤ kH(x)−1 k ∫_0^1 k(H(x + t(x? − x)) − H(x))(x − x? )k dt
          ≤ kH(x)−1 k kx − x? k ∫_0^1 kH(x + t(x? − x)) − H(x)k dt.    (3.4)
We use the Lipschitz condition on H to bound the integral:
∫_0^1 kH(x + t(x? − x)) − H(x)k dt ≤ ∫_0^1 L kt(x? − x)k dt
                                   ≤ L kx? − xk ∫_0^1 t dt
                                   = (L/2) kx? − xk.
Together with (3.4) this implies:
kx1 − x? k ≤ (L kH(x)−1 k / 2) kx? − xk²,    (3.5)
which completes the proof. We can take M = L kH(x)−1 k / 2 ≤ L/(2h).
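The iteration (3.2) is straightforward to implement once gradients and Hessians are available; below is a minimal NumPy sketch (our own illustration) in which each step solves the linear system ∇²f(x)·d = −∇f(x) rather than forming an explicit inverse.

```python
import numpy as np

def newton_minimize(grad, hess, x0, iters=20, tol=1e-12):
    """Newton's method for unconstrained minimization: x <- x + n(x),
    where the Newton step n(x) solves  hess(x) n(x) = -grad(x)."""
    x = np.array(x0, dtype=float)
    for _ in range(iters):
        g = grad(x)
        step = np.linalg.solve(hess(x), -g)   # second-order oracle: one linear solve
        x = x + step
        if np.linalg.norm(step) < tol:        # stop once the step is negligible
            break
    return x
```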

3.3 Constrained Convex Optimization via Barriers


In this section we return to constrained convex optimization problems of
the form:
inf_{x∈K} f (x)    (3.6)

where f is a convex, real valued function and K ⊆ Rn is a convex


set. 2 In the first lecture we discussed how gradient-descent type meth-
ods could be adapted in this setting by projecting onto K at every
step. We took an improvement step xk 7→ x0k+1 with respect to f and
then we projected x0k+1 onto K to obtain xk+1 . There are a few prob-
lems with this method. One of them is that in most cases computing
projections is prohibitively hard. Furthermore, even if we ignore this
issue, the projection-based methods are not quite efficient. To get an
ε-approximation to the solution, the number of iterations depends poly-
nomially on ε−1 (i.e. the number of iterations is proportional to 1/εO(1) ).
Unfortunately such dependence on ε is often unsatisfactory. For exam-
ple, to obtain the optimal solution for a linear program we need to take
ε of the form 2−L , where L is the size of the instance.3 Hence, to get a
polynomial time algorithm, we need the running time dependence on ε
to be logO(1) (1/ε). Today we will see an interior point algorithm which
achieves such a guarantee.

3.3.1 Following the Central Path


We are going to present one very general idea for solving constrained
optimization problems. Recall that our aim is to minimize a given con-
vex function f (x) subject to x ∈ K. To simplify our discussion, we
assume that the objective function is linear, 4 i.e. f (x) = c> x and
the convex body K is bounded and full-dimensional (it has positive
volume).
Suppose we have a point x0 ∈ K and we want to perform an im-
provement step maintaining the condition of being inside K. The sim-
plest idea would be to move in the direction −c to decrease our objective
value as much as possible. Our step will then end up on the bound-
ary of K. The second and further points would lie very close to the
2K is given to us either explicitly – by a collection of constraints defining it, or by a
separation oracle.
3 One should think of L as the total length of all the binary encodings of numbers in the

description of the linear program. In the linear programming setting, LO(1) can be shown
to bound the number of binary digits required to represent the coordinates of the optimal
solution.
4 Actually we are not losing on generality here. Every convex problem can be stated equiv-

alently with a linear objective function.



boundary, which will force our steps to be short and thus inefficient. In
case of K being a polytope, such a method would be equivalent to the
simplex algorithm, which is known to have an exponential worst case
running time. For this reason we need to set some force, which would
repel us from the boundary of K. More formally, we want to move our
constraints to the objective function and consider c> x + F (x),5 where
F (x) can be regarded as a “fee” for violating constraints. F (x) should
become big for x close to ∂K. Of course, if we would like the methods
developed in the previous sections for unconstrained optimization to be
applicable here, we would also like f to be strongly convex. Thus, this
is another route we could take to convert a constrained minimization
problem into unconstrained minimization, but with a slightly altered
objective function. To formalize this approach we introduce the notion
of a Barrier Function. Instead of giving a precise definition, we list
some properties, which we wish to hold for a barrier function F :

• F is defined in the interior of K, i.e. dom(F ) = int(K) , (3.7)


• for every point b ∈ ∂K we have: limx→b F (x) = +∞, (3.8)
• F is strictly convex. (3.9)

Suppose F is such a barrier function, let us define a perturbed objective


function fη , where η > 0 is a real parameter:
fη (x) def= ηc> x + F (x).    (3.10)

We may imagine that fη is defined on all of Rn but attains finite values


only on int(K). Intuitively, making η bigger and bigger reduces the
influence of F (x) on the optimal value of fη (x). Furthermore, observe
that since c> x is a linear function, the second order behavior of fη is
completely determined by F , that is ∇2 fη = ∇2 F . In particular, fη is
strictly convex and it has a unique minimizer x?η . The set

{x?η : η ≥ 0}

5 One seemingly perfect choice of F would be a function which is 0 on K and +∞ on the


complement of K. This reduces our problem to unconstrained minimization of c> x+F (x).
However, note that we have not gained anything by this reformulation, even worse: our
objective is not continuous anymore.

Fig. 3.2 Example of a central path

can be seen to be continuous due to the Implicit Function Theorem


and is referred to as a central path, starting at x?0 and approaching x? ,
the solution to our convex problem (3.6). In other words:
lim_{η→∞} x?η = x? .

A method which follows this general approach is called a path-following


interior point method. Now, the key question is: “how fast is this conver-
gence?”. Needless to say, this would depend on the choice of the barrier
function and the method for solving the unconstrained optimization
problem. We answer it in the next section for linear programs: using
the logarithmic barrier function along with Newton’s method from the
previous section we can give an algorithm to solve linear programs in
time polynomial in the encoding length.

3.4 Interior Point Method for Linear Programming


3.4.1 A Brief History of IPMs
Interior point methods (IPMs), as alluded to in the previous section,
first appear in a recognizable form in the Ph.D. thesis of Ilya Dikin

in the 60s. However, there was no analysis for the time needed for
convergence. In 1984 Narendra Karmarkar announced a polynomial-
time algorithm to solve linear programs using an IPM. At that point
there was already a known polynomial time algorithm for solving LPs,
namely the Ellipsoid Algorithm from 1979. Further, the method of
choice in practice, despite being known to be inefficient in the worst case, was
the Simplex method. However, in his paper, he also presented empirical
results which showed that his algorithm was consistently 50 times faster
than the simplex method. This event, which received publicity around
the world throughout the popular press and media, marks the beginning
of the interior-point revolution. For a nice historical perspective on this
revolution, we refer the reader to the survey of Wright [Wri05].
Karmarkar’s algorithm needed roughly O(m log 1/ε) iterations to
find a solution (which is optimal up to an additive error of ε) to a
linear program, where m is the number of constraints. Each such it-
eration involved solving a linear system of equations, which could be
easily performed in polynomial time. Thanks to the logarithmic depen-
dence on the error ε it can be used to find the exact, optimal solution
to a linear program in time polynomial with respect to the encoding
size of the problem. Thus, Karmarkar’s algorithm was the first efficient
algorithm with provable worst case polynomial running time. Subse-
quently, James Renegar proposed an interior point algorithm with re-

duced number of iterations: O(√m log(1/ε)). Around the same time,
Nesterov and Nemirovski abstracted out the essence of interior point
methods and came up with the notion of self-concordance, which
in turn was used to provide efficient, polynomial time algorithms for
many nonlinear convex problems such as semi-definite programs.
We begin our discussion by introducing the logarithmic barrier func-
tion and then Newton’s Method, which is at the core of all interior-
point algorithms. Then we present Renegar’s primal path following
method. We give a full analysis of this algorithm, i.e. we prove that
after Õ(√m log(1/ε)) iterations it outputs a solution which is ε-close to
the optimum.

3.4.2 The Logarithmic Barrier


In this section we switch from a general discussion on constrained con-
vex optimization to a particular problem: linear programming. We will
use ideas from the previous section to obtain an efficient algorithm for
solving linear programs in the form:
min_{x∈Rn} c> x    s.t. Ax ≤ b    (3.11)
where x is the vector of variables of length n, c ∈ Rn , b ∈ Rm and
A ∈ Rm×n . We will denote the rows of A by a1 , a2 , . . . , am (but treat
them as column vectors). We will assume that the set of constraints
Ax ≤ b defines a bounded polytope P of nonzero volume in Rn .6
We are going to use the following barrier function which is often
called the logarithmic barrier function:
F (x) def= − Σ_{i=1}^m log(bi − a>_i x).

It is easy to see that F (x) is well defined on int(P ) and tends to infinity
when approaching the boundary of P . Let us now write down formulas
for the first and second derivatives of F ; they will prove useful a couple
of times in the analysis. To simplify the notation, we will often write
si (x) for bi − a>_i x.

Fact 3.3. If x is a point in the interior of P , then:

(1) ∇F (x) = Σ_{i=1}^m ai / si (x),
(2) ∇2 F (x) = Σ_{i=1}^m ai a>_i / si (x)².
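For concreteness, here is a small NumPy sketch (ours) of the logarithmic barrier and the formulas of Fact 3.3; it assumes the rows of A are the vectors a_i> and that x is strictly feasible.

```python
import numpy as np

def log_barrier(A, b, x):
    """Value, gradient and Hessian of F(x) = -sum_i log(b_i - a_i^T x)."""
    s = b - A @ x                      # slacks s_i(x); must be positive
    assert np.all(s > 0), "x must lie in the interior of P"
    F = -np.sum(np.log(s))
    grad = A.T @ (1.0 / s)             # sum_i a_i / s_i
    H = A.T @ np.diag(1.0 / s**2) @ A  # sum_i a_i a_i^T / s_i^2
    return F, grad, H
```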

Using the above formulas we can investigate the convexity of F . It is


clear that ∇2 F (x)  0, which implies convexity. Strong convexity of F
6 Of course this is not always the case for general linear programs, however if the program
is feasible then we can perturb the constraints by an exponentially small ε > 0 to force
the feasible set to be full dimensional. Moreover, it will turn out soon that infeasibility is
not a serious issue.

(i.e. ∇2 F (x) ≻ 0) is equivalent to the fact that a1 , a2 , . . . , am span the


whole Rn , in our case this is true, because of the assumption that P
is full dimensional and bounded.
From now on, whenever we talk about a function F we will write
H(x) for its Hessian ∇2 F (x) and g(x) for the gradient ∇F (x). Note
however that at some places in the text we refer to F as a general
function and sometimes we fix F to be a specific one: the logarithmic
barrier; which is which should be clear from the context.

3.4.3 Local Norms


In this section we introduce an important concept of a local norm. It
plays a crucial role in understanding interior point methods. Rather
than working with the Euclidean norm, the analysis of our algorithm
will be based on bounding the local norm of the Newton step. Let
A ∈ S n be a positive definite matrix; we associate with it the inner
product h·, ·iA defined as:
hx, yiA def= x> Ay,

and the norm k · kA :


kxkA def= √(x> Ax).
Note that a ball of radius 1 with respect to such a norm corresponds
to an ellipsoid in the Euclidean space. Formally, we define the ellipsoid
associated with the matrix A, centered at x0 ∈ Rn as:
Ex0 (A) def= {x : (x − x0 )> A(x − x0 ) ≤ 1}.

Of particular interest to us are the Hessians of strictly convex


functions F : Rn 7→ R (such as the logarithmic barrier). For them we
usually write in short:
kzkx def= kzkH(x)
whenever the function F is clear from the context. This is called a local
norm at x with respect to F . From now on, let F (x) = −Σ_{i=1}^m log(bi − a>_i x)
be the logarithmic barrier function. For the logarithmic barrier,
Ex (∇2 F (x)) is called the Dikin Ellipsoid centered at x.

property of this ellipsoid is that it is contained in P and that the


Hessian does not change much. Thus, the curvature of the central path
with respect to the local norm does not change much and, hence, it is
close to a straight line. Thus, taking a short Newton step inside this
ellipsoid from the center keeps one close to the central path.
We return to this intimate connection between the Dikin Ellipsoid
and IPMs in the appendix. We conclude this section by noting that
there is another way to motivate measuring progress in the local norm:
that Newton’s method is affinely invariant. This means that if, for an
invertible linear transformation A, we do a change of variables x = Ay,
then the Newton step is just a transformation by A: n(x) = An(y). On
the other hand the Lipschitz condition on the Hessian in Theorem 3.2
is not affine invariant w.r.t. the Euclidean norm. However, if we redefine it as
kH(x) − H(y)k ≤ Lkx − ykx ,
then it becomes affine invariant.

3.4.4 A Primal Path-Following IPM


We come back to the description of the path following algorithm. Recall
that we defined the central path as: {x?η : η > 0} consisting of optimal
solutions to the perturbed linear program. We want to start at x0 = x?η0
for some small η0 > 0 and move along the path by taking discrete steps
of certain length. This corresponds essentially to increasing η in every
step. So one idea for our iterative procedure would be to produce a
sequence of pairs (x0 , η0 ), (x1 , η1 ), . . . such that η0 < η1 < · · · and
xk = x?ηk for every k. We should finish at the moment when we are
close enough to the optimum, that is when c> xk < c> x? + ε. Our first
lemma gives a bound on when to stop:

Lemma 3.4. For every η > 0 we have c> x?η − c> x? < m/η.

Proof. Calculate first the derivative of fη (x):

∇fη (x) = ∇(ηc> x + F (x)) = ηc + ∇F (x) = ηc + g(x)



The point x?η is the minimum of fη , hence ∇fη (x?η ) = 0 and so:

g(x?η ) = −ηc (3.12)

Using this observation we obtain that
c> x?η − c> x? = − hc, x? − x?η i = (1/η) hg(x?η ), x? − x?η i.
To complete the proof it remains to argue that hg(x?η ), x? − x?η i < m.

We will show even more, that for every two points x, y in the interior
of P , we have hg(x), y − xi < m. This follows by a simple calculation:

hg(x), y − xi = Σ_{i=1}^m a>_i (y − x) / si (x)
             = Σ_{i=1}^m ((bi − a>_i x) − (bi − a>_i y)) / si (x)
             = Σ_{i=1}^m (si (x) − si (y)) / si (x) = m − Σ_{i=1}^m si (y)/si (x) < m,
where in the last inequality we make use of the fact that our points
x, y are strictly feasible, i.e. si (x), si (y) > 0 for all i.

This lemma tells us that if we want an ε-additive solution, we could stop
our path following procedure when
η = Ω(m/ε).
At this point one may wonder why instead of working in an iterative
fashion we do not just set η = m/ε and solve the perturbed linear
problem to optimality. It turns out that it is hard to compute x?η when
some arbitrary η > 0 is given (essentially, it is at least as hard as
solving linear optimization problems). What we are able to do is, given
x?η for some η > 0, calculate x?η′ for η′ a little bit larger than η. This
immediately raises another question: how do we find x?η0 at the
very beginning of the algorithm? We will discuss this problem in detail

later, for now let us assume that it is possible to provide some η0 > 0
and the corresponding point x0 = x?η0 .
One step of our algorithm will essentially correspond to one step
of Newton’s method. Before we give a full description of the algorithm
recall that n(x) is the Newton step at point x with respect to some
underlying function f . Whenever we write n(xk ), we have the function
fηk (x) = ηk c> x + F (x) in mind. So
n(xk ) = −H(xk )−1 ∇fηk (xk ).
We now give the algorithm.

Primal Path Following IPM for Linear Programming:

(1) Find an initial η0 and x0 with kn(x0 )kx0 ≤ 1/2.
(2) At iteration k (k = 0, 1, 2, . . .):
    • compute xk+1 according to the rule:
        xk+1 def= xk + n(xk );
    • set ηk+1 def= ηk (1 + 1/(8√m)).
(3) Stop at the first iteration K for which ηK > m/ε.
(4) Calculate x̂ def= x?ηK by Newton’s method (starting at xK ), and output x̂.
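The following Python sketch (our own, building on the `log_barrier` helper above) implements the four steps; the routine for finding the initial pair (η0, x0) is assumed to be given, since Section 3.4.6 describes one way to obtain it.

```python
import numpy as np

def newton_step(A, b, c, eta, x):
    """Newton step n(x) for f_eta(x) = eta*c^T x + F(x), and its local norm."""
    _, g, H = log_barrier(A, b, x)            # from the earlier sketch
    grad = eta * c + g
    n = np.linalg.solve(H, -grad)
    return n, np.sqrt(n @ H @ n)              # (n(x), ||n(x)||_x)

def primal_path_following(A, b, c, x0, eta0, eps):
    """Sketch of the primal path-following IPM; (eta0, x0) is assumed
    to satisfy ||n(x0)||_{x0} <= 1/2."""
    m = A.shape[0]
    x, eta = x0, eta0
    while eta <= m / eps:                     # stopping rule of step (3)
        n, _ = newton_step(A, b, c, eta, x)
        x = x + n                             # one Newton step
        eta *= 1 + 1 / (8 * np.sqrt(m))       # increase eta
    # step (4): a few more Newton steps at the final eta
    # (x is in the quadratic convergence region, cf. Remark 3.5)
    for _ in range(5):
        n, _ = newton_step(A, b, c, eta, x)
        x = x + n
    return x
```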

In this algorithm we do not ensure that xk = x?ηk at every step. This


means that as opposed to what we have said before, our points do not
lie on the central path. All we care about is that the points are close
enough to the central path. By close enough, we mean kn(xk )kxk ≤ 1/2.
It may not be clear why this is the right notion of distance from the
central path. We will discuss this in Section 3.5. Let us conclude
the description of the algorithm by the following remark.

Remark 3.5. At step 4. of the Primal Path Following IPM we apply


Newton’s method to obtain a point on the central path. At this moment
it is not obvious that it will converge and, if yes, how quickly. We will

see later that the point xK is in the quadratic convergence region of


Newton’s method, so in fact we will need only a few iterations to reach
x?ηK . Instead of doing this final computation, one could also output
simply xK . It can be shown that the optimality guarantee at point xK
is O(ε); see Appendix 3.7.

We summarize what we prove about the algorithm above:

Theorem 3.6. The primal path following algorithm, given a linear
program with m constraints and a precision parameter ε > 0, performs
O(√m log(m/(η0 ε))) iterations and outputs a point x̂ satisfying:
c> x̂ ≤ c> x? + ε.

3.4.5 Analysis of the Algorithm

This section is devoted to the proof of Theorem 3.6. The statement


does not provide any bounds on the cost of a single iteration. It is easy
to see that the only expensive operation we perform at every iteration
is computing the Newton step. This computation can be viewed as
solving a linear system of the form H(x)z = g(x), where z is the vector
of variables. Of course, a trivial bound would be O(n3 ) or O(nω ), but
these may be significantly improved for specific problems. For example,
computing a Newton step for an LP max-flow formulation can be done
in Õ(|E|) time.
Recall that we initialize our algorithm with some η0 > 0. Then,
at every step we increase η by a factor of (1 + 1/(8√m)), thus after
O(√m log(m/(η0 ε))) iterations we reach Ω(m/ε). This establishes the bound on the
number of iterations. It remains to prove the correctness (the optimal-
ity guarantee).
The following lemma plays a crucial role in the proof of correctness.
It asserts that the points we produce lie close to the central path:

Lemma 3.7. For every k = 0, 1, . . . , K it holds that kn(xk )kxk ≤ 1/2.



To prove this, we consider one iteration and show that, if started with
kn(xk )kxk ≤ 1/2, then we will end up with kn(xk+1 )kxk+1 ≤ 1/2. Ev-
ery iteration consists of two steps: the Newton step w.r.t. fηk and the
increase of ηk to ηk+1 . We formulate another two lemmas explaining
what happens when performing those steps.

Lemma 3.8. After taking one Newton step at point x w.r.t. the func-
tion fη , the new point x0 satisfies:
kn(x0 )kx0 ≤ kn(x)k2x .

Lemma 3.9. For every two positive η, η′ > 0, we have:
kH −1 (x)∇fη′ (x)kx ≤ (η′/η) kH −1 (x)∇fη (x)kx + √m · |η′/η − 1| .

It is easy to verify that the above two lemmas together imply that if
kn(xk )kxk ≤ 1/2, then
kn(xk+1 )kxk+1 ≤ 1/4 + 1/8 + o(1) < 1/2.
Hence, Lemma 3.7 follows by induction.
It remains to prove Lemmas 3.8 and 3.9. We start with the latter,
since its proof is a bit simpler.

Proof. [Proof of Lemma 3.9] We have
H −1 (x)∇fη′ (x) = H −1 (x)(η′ c + g(x))
               = (η′/η) H −1 (x)(ηc + g(x)) + (1 − η′/η) H −1 (x) g(x)
               = (η′/η) H −1 (x)∇fη (x) + (1 − η′/η) H −1 (x) g(x).

After taking norms and applying the triangle inequality w.r.t. k · kx :
kH −1 (x)∇fη′ (x)kx ≤ (η′/η) kH −1 (x)∇fη (x)kx + |1 − η′/η| · kH −1 (x) g(x)kx .

Let us stop here for a moment and try to understand the
significance of the specific terms in the last expression. In our analysis
of the algorithm, the term kH −1 (x)∇fη (x)kx is a small constant. The
goal is to bound the left hand side by a small constant as well. We
should think of η′ as η(1 + δ) for some small δ > 0. In such a setting
(η′/η) kH −1 (x)∇fη (x)kx will still be a small constant, so what prevents
us from choosing a large δ is the second term |1 − η′/η| · kH −1 (x) g(x)kx .
We need to derive an upper bound on kH −1 (x) g(x)kx . We show that
kH −1 (x) g(x)kx ≤ √m. Let us denote z def= H −1 (x) g(x). We get:
kzk²x = z > g(x) = Σ_{i=1}^m z > ai / si (x) ≤ √m · ( Σ_{i=1}^m (z > ai )² / si (x)² )^{1/2} .    (3.13)

The last inequality follows from Cauchy-Schwarz. Further:

Σ_{i=1}^m (z > ai )² / si (x)² = z > ( Σ_{i=1}^m ai a>_i / si (x)² ) z = z > H(x)z = kzk²x .    (3.14)

Putting (3.13) and (3.14) together we obtain that kzk²x ≤ √m · kzkx , so
in fact kzkx ≤ √m.

Before we proceed with the proof of Lemma 3.8 let us remark that it
can be seen as an assertion that x belongs to the quadratic convergence
region of Newton’s method.

Proof. [Proof of Lemma 3.8] We know that the point x0 is the minimizer
of the second order approximation of fη at point x, hence:

∇fη (x) + H(x)(x0 − x) = 0,
which implies that:
Σ_{i=1}^m ai / si (x) + ( Σ_{i=1}^m ai a>_i / si (x)² ) (x0 − x) = −ηc.

We use it to compute ∇fη (x0 ):


∇fη (x0 ) = ηc + Σ_{i=1}^m ai / si (x0 )
          = − ( Σ_{i=1}^m ai / si (x) + Σ_{i=1}^m ai a>_i (x0 − x) / si (x)² ) + Σ_{i=1}^m ai / si (x0 )
          = Σ_{i=1}^m ( ai / si (x0 ) − ai / si (x) − ai a>_i (x0 − x) / si (x)² )
          = Σ_{i=1}^m ( ai a>_i (x0 − x) / (si (x) si (x0 )) − ai a>_i (x0 − x) / si (x)² )
          = Σ_{i=1}^m ai (a>_i (x0 − x))² / (si (x)² si (x0 )).

Our goal is to show that kn(x0 )kx0 ≤ kn(x)k2x . Instead 7 , we will prove
that for every vector z, we have that hz, n(x0 )ix0 ≤ kzkx0 kn(x)k2x .
Indeed:
hz, n(x0 )ix0 = z > ∇fη (x0 ) = Σ_{i=1}^m z > ai (a>_i (x0 − x))² / (si (x)² si (x0 ))
             ≤ ( Σ_{i=1}^m (z > ai )² / si (x0 )² )^{1/2} · ( Σ_{i=1}^m (a>_i (x0 − x))⁴ / si (x)⁴ )^{1/2}    (Cauchy–Schwarz)
             ≤ kzkx0 · Σ_{i=1}^m (a>_i (x0 − x))² / si (x)² = kzkx0 kn(x)k²x ,
which completes the proof.

3.4.6 The Starting Point


In this section we give a method for finding a valid starting point. More
precisely, we show how to find efficiently some η0 > 0 and x0 such that
7 The following fact is true for every Hilbert space H: the norm of an element u ∈ H is
given by the formula kuk = max{ hz, ui / kzk : z ∈ H \ {0}}. We work with H = Rn but with
the nonstandard inner product hu, vix0 = u> H(x0 )v.

kn(x0 )kx0 ≤ 1/2.


Before we start, we would like to remark that this discussion pro-
vides a very small η0 , of order 2−L . This enables us to prove that in
fact IPM can solve linear programming in polynomial time, but does
not seem promising when trying to apply IPM to devise fast algorithms
for combinatorial problems. Indeed, there is a factor of log η0−1 in the
bound on number of iterations, which translates to L for such a tiny η0 .
To make an algorithm fast, we need to have η0 = Ω(1/poly(m)). How-
ever, it turns out that for specific problems (such as maximum flow)
we can often devise some specialized methods for finding satisfying η0
and x0 and thus solve this issue.
First, we will show how given a point x0 ∈ int(P ) we can find some
starting pair (η0 , x0 ). Then finally we show how to obtain such a point
x0 ∈ int(P ). Let us assume now that such a point is given. Furthermore,
we assume that each of its coordinates is written using O(L) bits and
each constraint is satisfied with slack at least 2−L , that is bi − a>_i x0 ≥
2−L . Our procedure for finding x0 will provide such an x0 based on our
assumption that P is full dimensional. 8
Recall that we want to find a point x0 close to the central path
Γc = {x?η : η ≥ 0}, which corresponds to the objective function c> x.
Note that as η → 0, x?η → x?0 = xc , the analytic center of P . So finding
a point x0 , very close to the analytic center and choosing η0 to be some
very tiny number should be a good strategy. In fact it is, but how to
find a point close to xc ?
The central path Γc is of main interest for us, because it tends to
the optimal solution to our linear program. In general, if d ∈ Rn we
may define Γd to be the path consisting of minimizers to the functions
νd> x + F (x) for ν ≥ 0. What do all the paths Γd have in common?
The origin! They all start at the same point: analytic center of P . Our
strategy will be to pick one such path on which x0 lies and traverse it
backwards to reach a point very close to the origin of this path, which
at the same time will be a good choice for x0 .
Recall that g is the gradient of the logarithmic barrier F and define

8 In this presentation we will not discuss some details, e.g. the full-dimensionality assump-
tion. These are easy to deal with and can be found in any book on linear programming.

d = −g(x0 ). Now it turns out that x0 ∈ Γd . Why is this? Denote
fν′ (x) = νd> x + F (x) and let the minimizer of fν′ be denoted x′ν . Then
x′1 = x0 , since ∇f1′ (x0 ) = 0. We will use n′ (x) for the Newton step at x
with respect to fν′ . We see in particular that n′ (x0 ) = 0.
As mentioned above, our goal is to move along the Γd path in the
direction of decreasing ν. We will use exactly the same method as in
the Primal Path Following. We perform steps, in each of them we make
one Newton step w.r.t. the current ν and then decrease ν by a factor of
(1 − 1/(8√m)). At each step it holds that kn′ (x)kx ≤ 1/2, by an argument
identical to the proof of Lemma 3.7. It remains to see how small ν we
need to have (how many iterations we need to perform).

Lemma 3.10. If kH(x)−1 g(x)kx ≤ 1/4 and we take η0 =
(4kH(x)−1 ckx )−1 , then the point x is a good candidate for x0 . The
Newton step n(x) w.r.t. fη0 satisfies:
kn(x)kx ≤ 1/2.

Proof. This is just a simple calculation:
kn(x)kx = kH −1 (x)(η0 c + g(x))kx ≤ η0 kH(x)−1 ckx + kH(x)−1 g(x)kx ≤ 1/4 + 1/4.

Suppose now we have some very small ν > 0 and a point x such that
kn′ (x)kx ≤ 1/2. By performing two further Newton steps, we may as-
sume kn′ (x)kx ≤ 1/16. We have
n′ (x) = H(x)−1 (−νg(x0 ) + g(x)),
hence
kH(x)−1 g(x)kx ≤ νkH(x)−1 g(x0 )kx + kn′ (x)kx ≤ νkH(x)−1 g(x0 )kx + 1/16.
So it turns out, by Lemma 3.10, that it is enough to make
νkH(x)−1 g(x0 )kx smaller than a constant. We can make ν as small
as we want, but what about kH(x)−1 g(x0 )kx ? Let us present a technical
claim, which we leave without proof:

Claim 3.11. All the calculations during the backward walk along Γd
can be performed with 2O(L) precision.

This means that we may assume that all the points x computed during
the procedure have only O(L) bits after the decimal point, and are of ab-
solute value at most 2O(L) . This allows us to provide an upper bound
on kH(x)−1 g(x0 )kx .

Claim 3.12. If the description sizes of x0 and x are O(L), then


kH(x)−1 g(x0 )kx ≤ 2poly(L) .

This follows from the fact that a solution to a linear system is of polyno-
mial size. Now it remains to take ν so small that νkH(x)−1 g(x0 )kx ≤ 1/8.
By our previous reasoning this is enough to get suitable (η0 , x0 ). To
reach such a ν we need to perform Õ(√m log(1/ν)) iterations, but log(1/ν)
is polynomial in L, so we are done.
To complete our considerations we show how to find a suitable x0 ∈
int(P ). To this end, we consider an auxiliary linear program:
min_{(t,x)∈Rn+1} t    s.t.  a>_i x ≤ bi + t for all i.    (3.15)
Taking x = 0 and t big enough (t = 1 + Σ_i |bi |), we get a strictly
feasible solution. Having such a point, we may solve (3.15) up to an additive
error of 2−O(L) in polynomial time (by IPM). This will give us a solution
(t0 , x0 ) with t0 = −2−Ω(L) , so x0 ∈ int(P ) is a suitable starting point for
our walk along Γd .

3.5 Self-Concordant Barrier Functions


Finally, in this section we present the definition of a self-concordant
barrier function for a convex set P. Armed with this definition, the
task of bounding the number of iterations of the path following method
for minimizing a linear function over P reduces to analyzing certain
structural properties of the self-concordant barrier function. We omit
the straightforward details about how, once we have a self-concordant

barrier function for a convex set, one can design a path following IPM
for it.

Definition 3.13. Let F : int(P ) 7→ R be a C 3 function. We say that
F is self-concordant with parameter ν if:

(1) F is a barrier function: F (x) → ∞ as x → ∂P.
(2) F is strictly convex.
(3) For all x ∈ int(P ), ∇2 F (x) ⪰ (1/ν) ∇F (x)∇F (x)> .
(4) For all x ∈ int(P ) and all h, ∇3 F (x)[h, h, h] ≤ 2khk3x = 2 (∇2 F (x)[h, h])^{3/2} .9

With this definition in hand, it is not difficult to give an IPM for P



which takes about √ν log(1/ε) iterations. In fact, a careful reader would
already have observed that Lemmas 3.9 and 3.8 prove exactly (3) and
(4) for the log-barrier function with the complexity parameter m.
Similar to the log-barrier function for the positive real-orthant, for
semidefinite programs over the cone of positive-definite matrices (or an
affine transformation of it), one can use the following barrier function:
F (X) = − log det X
for X ≻ 0.
In the next lecture we will discuss self-concordant barrier functions
with complexity parameters ≪ m. The main object of study will be
the volumetric barrier of Vaidya.

3.6 Appendix: The Dikin Ellipsoid


Recall our setting:
min hc, xi
s.t. Ax ≤ b
where Ax ≤ b defines a bounded, nonempty and full-dimensional poly-
tope P . Recall that we can write the constraints as hai , xi ≤ bi and
9 Here
we interpret ∇3 F (x)[h, h, h] as the inner product between the 3-tensor ∇3 F (x) and
h⊗h⊗h, namely, h∇3 F (x), h⊗h⊗hi, and ∇2 F (x)[h, h] = h> ∇2 F (x)h = h∇2 F (x), h⊗hi.

then consider the slacks si (x) = bi − hai , xi which capture the extent
to which a constraint is satisfied. Clearly, if x is a feasible point, then
si (x) ≥ 0 for all i. Note that, when clear from context, we will often
write si for si (x) for conciseness. The log-barrier function and its
differentials are:
F (x) = − Σ_i log(si (x)),
∇F (x) = Σ_i ai / si , and
∇2 F (x) = Σ_i ai a>_i / si² .

We often write H(x) = ∇2 F (x) for the Hessian of F at x, and note that
H(x) ⪰ 0 (in fact H(x) ≻ 0 when P is full-dimensional and bounded).
Now, we can consider the ellipsoid centred at x and defined by H,
namely
Ex (H(x)) = {y : (y − x)> H(x)(y − x) ≤ 1}.
This is the Dikin ellipsoid centred at x.

Definition 3.14. The Dikin Ellipsoid at x ∈ int(P ) is defined as the


ball of radius 1 around x in the local norm defined by H(x) at x.

We will show that the Dikin ellipsoid is always completely contained in


the polytope P . Furthermore, if we denote by xc the Analytic Center
of P , i.e. the unique minimizer of F (x), then the Dikin ellipsoid at xc
inflated m times contains the whole polytope P (we will also show that

m blow-up is sufficient for symmetric polytopes). We start by proving
the first claim.

Theorem 3.15. For every x ∈ int(P ), the Dikin ellipsoid at x is fully


contained in P .10

10 In fact, this also holds when P is not bounded in certain cases, such as when P is the positive
orthant.

Proof. This is easy to see since, for any y ∈ Ex , all slacks are non-
negative, and hence y ∈ P . Formally, let y ∈ Ex (H(x)). Then,
(y − x)> H(x)(y − x) ≤ 1
⇒ Σ_i hai , y − xi² / si² ≤ 1
⇒ hai , y − xi² / si² ≤ 1 ∀i    (since all summands are non-negative)
⇒ ((si (x) − si (y)) / si (x))² ≤ 1 ∀i
⇒ |1 − si (y)/si (x)| ≤ 1 ∀i.
Hence, for all i we know that 0 ≤ si (y)/si (x) ≤ 2, and, in particular,
si (y) ≥ 0 ∀i.

Let us start with a very interesting lemma describing one crucial prop-
erty of the analytic center of P :

Lemma 3.16. If xc is the analytic center of P then for every point
x ∈ int(P ) it holds that:
Σ_{i=1}^m si (x)/si (xc ) = m.

Proof. Let us denote r(x) = Σ_{i=1}^m si (x)/si (xc ). We know that r(xc ) = m. To
show that the function is constant, it remains to find its derivative and
argue that it is zero:
∇r(x) = ∇( Σ_{i=1}^m si (x)/si (xc ) ) = − Σ_{i=1}^m ai /si (xc ) = −∇F (xc ),
but xc is the minimizer of F (x) so the gradient at this point vanishes.

Now we are ready to prove the second theorem describing the Dikin
ellipsoid. This time we look at the Dikin ellipsoid centered at a specific
point xc :

Theorem 3.17. The Dikin ellipsoid at xc inflated m times contains P


inside: P ⊆ mExc (H(xc )).

Proof. Take any point x ∈ P ; the goal is to show that
(x − xc )> H(xc )(x − xc ) ≤ m². We compute:
(x − xc )> H(xc )(x − xc )
= (x − xc )> ( Σ_{i=1}^m ai a>_i / si (xc )² ) (x − xc )
= Σ_{i=1}^m (a>_i (x − xc ))² / si (xc )² = Σ_{i=1}^m (si (xc ) − si (x))² / si (xc )²
= Σ_{i=1}^m si (x)² / si (xc )² + Σ_{i=1}^m si (xc )² / si (xc )² − 2 Σ_{i=1}^m si (xc ) si (x) / si (xc )² .

Now, the middle term Σ_{i=1}^m si (xc )²/si (xc )² is equal to m, and
Σ_{i=1}^m si (xc ) si (x)/si (xc )² = Σ_{i=1}^m si (x)/si (xc ) = m by Lemma 3.16. So we obtain:

(x − xc )> H(xc )(x − xc ) = Σ_{i=1}^m si (x)²/si (xc )² − m.

Observe that all the terms in the sum Σ_{i=1}^m si (x)²/si (xc )² are nonnegative, so
we can use the simple bound Σ_{i=1}^m si (x)²/si (xc )² ≤ ( Σ_{i=1}^m si (x)/si (xc ) )² = m² (the
last equality again by Lemma 3.16) and obtain finally:
(x − xc )> H(xc )(x − xc ) ≤ m² − m ≤ m².

The last theorem we want to prove about the Dikin ellipsoid asserts
that when we restrict ourselves to symmetric polytopes (i.e. symmetric
with respect to the center of P ) then the Dikin ellipsoid at the center
inflated only √m times contains the polytope.

Theorem 3.18. Let P be a symmetric polytope. Then P ⊆ √m Exc (H(xc )).

Remark 3.19. Note that the Dikin ellipsoid depends not exactly on
P but rather on the description of P (on the set of constraints). In the
above theorem we assume that the description of P is reasonable. Since
P is symmetric (with respect to the origin), we can assume that for
every constraint x> ai ≤ bi there is a corresponding constraint −x> ai ≤
bi .

Proof. [Proof of Theorem 3.18] We assume that P is symmetric w.r.t.
the origin, so xc = 0. Further, since all the bi ’s are nonzero, we may
rescale the constraints and assume bi = 1 for all i. After these reductions,
our ellipsoid is E = {x : Σ_{i=1}^m (x> ai )² ≤ 1}.
Take any point x on the boundary of E, that is, Σ_{i=1}^m (x> ai )² = 1.
We need to show that √m x ∉ int(P ). This will finish the proof.
Since Σ_{i=1}^m (x> ai )² = 1, there exists i with |x> ai | ≥ 1/√m. The set
of constraints is symmetric, so we can assume that x> ai ≥ 1/√m. Hence
(√m x)> ai ≥ 1, so √m x ∉ int(P ).

Theorems 3.17 and 3.18 together with Theorem 3.15 demonstrate that
the Dikin ellipsoid centered at xc is a very good approximation of the
polytope P . We obtain an ellipsoid with a blow-up ratio of m (that is,
Exc (H(xc )) ⊆ P ⊆ mExc (H(xc ))), or √m for the symmetric case. One
can ask if this is the best we can do. It turns out that we can obtain a
better ratio by taking the so-called John ellipsoid. It is defined as the
largest-volume ellipsoid contained in the polytope P . When we inflate
it by a factor of n, it contains P inside (similarly, √n is enough for sym-
metric polytopes). This means that the John ellipsoid achieves a better
blow-up ratio, because n ≤ m (we need at least n linear inequalities
to define a non-degenerate, bounded polytope in n-dimensional space).
One can also prove that this is indeed tight: the best possible blow-up
ratio for the n-dimensional simplex is exactly n.

3.6.1 The Dikin Algorithm and some Intuition for √m Iterations

The results above imply a particularly simple greedy IPM: start with
the analytic centre xc of a polytope P and move to the boundary of
Exc (H(xc )) in the direction of −c, where c is the cost vector. Repeat the
same from the next point. First note that algorithmically this makes
sense, as Theorem 3.15 implies that throughout the execution of the
algorithm we are always inside P. The cost is also decreasing. However,
we can only argue that the cost becomes 1/√m of the optimal cost in
the first iteration in the symmetric case (or 1/m in the non-symmetric case).
To see this assume that P is symmetric. Thus, xc = 0 and the cost of
this point is 0. Let x be the point on the boundary of Exc (H(xc )) in the
direction of −c. We know from Theorem 3.15 that x ∈ P. However, we
also know from Theorem 3.18 that hc, √m xi ≤ hc, x? i = opt, where x?
is the optimal solution to the linear program. This is because √m x lies
on the boundary of √m Exc , which contains P, and is in the direction of
−c. Thus, hc, xi ≤ (1/√m) opt. If this were to magically continue at every
step then one would expect the cost to come around opt in about √m
iterations. However, we cannot prove this and the analogy ends here.

3.7 Appendix: The Length of Newton Step


In this section we explain the relation between the length of the Newton
step at point x (with respect to the function fη ) and the distance to
the optimum x?η . We show that whenever kn(x)kx is sufficiently small,
kx − x?η kx is small as well. This together with a certain strenghtening of
Lemma 3.4 imply that in the last step of Primal Path Following IPM
we do not need to go with xK to optimality. In fact, only 2 additional
Newton steps bring us (2ε)-close to the optimum.
Let us start by a generalization of Lemma 3.4, which shows that
to get a decent approximation of the optimum we do not neccessarily
need to be on the central path, but only close enough to it.

Lemma 3.20. For every point x ∈ int(P ) and every η > 0, if kx −


3.7. Appendix: The Length of Newton Step 75

x?η kx < 1 then:


m
c> x − c> x? ≤ (1 − kx − x?η kx )−1 .
η

Proof. For every y ∈ int(P ) we have:

c> x − c> y = c> (x − y)


= hc, x − yi
= hcx , x − yix
≤ kcx kx kx − ykx

Where cx = H −1 (x) and the last inequality follows from Cauchy-


Schwarz. Now, we want to bound kcx kx . Imagine we are at point x and
we move in the direction of −cx until hitting the boundary of Dikin’s
ellipsoid. We will land in the point x − kccxxkx , which is still inside P by
Theorem 3.15. Therefore:
 
cx
c, x − ≥ hc, x? i.
kcx kx

Since hc, cx i = kcx k2x , we get:

kcx kx ≤ hc, xi − hc, x? i.

We have obtained:

c> x − c> y ≤ kx − ykx (c> x − c> x? ). (3.16)

Now, let us express

c> x − c> x? = (c> x − c> x?η ) + (c> x?η − c> x? )

and use 3.16 with y = x?η . We obtain

c> x − c> x? ≤ (c> x − c> x? )kx − ykx + (c> x?η − c> x? ).

Thus
(c> x − c> x? )(1 − kx − ykx ) ≤ c> x?η − c> x? .
By applying Lemma 3.4 we get the result.
76 Newton’s Method and the Interior Point Method

Note that in the algorithm we never literally mention the condition that
kx − x?η kx is small. However, we will show that it follows from kn(x)kx
being small. We will need the following simple fact about logarithmic
barrier. A proof appears in the notes on Volumetric Barrier in the next
lecture.

1
Fact 3.21. If x, z ∈ int(P ) are close to each other, i.e. kz − xkz ≤ 4
then H(x) and H(z) are close as well:
4
H(x)  H(z)  4H(x).
9

We are ready to prove the main theorem of this section.

Theorem 3.22. Let x be any point in int(P ) and η > 0. Consider the
1
Newton step n(x) at point x with respect to fη . If kn(x)kx ≤ 18 , then
?
kx − xη kx ≤ 5kn(x)kx .

Proof. Pick any h such that khkx ≤ 41 . Expand fη (x + h) into a Taylor


series around x:
1
fη (x + h) = fη (x) + h> ∇fη (x) + h> ∇2 fη (θ)h (3.17)
2
for some point θ lying on the segment [x, x + h]. We proceed by lower-
bounding the linear term. Note that:

>
h ∇f (x) = |hh, n(x)ix | ≤ khkx kn(x)kx (3.18)

η

by Cauchy-Schwarz. Next:
4
h> ∇2 fη (θ)h = h> H(θ)h ≥ h> H(x)h (3.19)
9
where we used Fact 3.21. Applying bounds 3.18, 3.19 to the expansion
3.17 results in:
2
fη (x + h) ≥ fη (x) − khkx kn(x)kx + khk2x . (3.20)
9
3.7. Appendix: The Length of Newton Step 77

Set r = 92 kn(x)kx and consider points y satysfing kx − ykx = r, i.e.


points on the boundary of the local norm ball of radius r centered at
x. Then 3.20 simplifies to:

fη (x + h) ≥ fη (x)

which implies that x?η , the unique minimizer of fη , belongs to the above
mentioned ball and the theorem follows.

m
Combining Lemma 3.20 with Theorem 3.22 we get that if η ≥ ε and
1
kn(x)kx ≤ 18 then c> x − c> x? < 2ε.
4
Volumetric Barrier, Universal Barrier and John
Ellipsoids

4.1 Overview

In this lecture we take a step towards improving the m in the number
of iterations the path-following IPM presented in the previous lecture

to n. This will be achieved by considering more involved barrier func-
tions rather than altering the basic framework of IPMs. The log-barrier
function was particularly nice as working with it was computationally
not harder than solving linear systems. However, it had many short-
comings. An obvious one is that it has no way to take into account
that a constraint could be repeated many time. One way to handle
this would be to work with weighted barrier functions which have the
following form: X
− wi (x) log si (x)
i
where we could allow the weights to depend on the current point as
well. Analyzing such methods, however, poses great technical challeges
as now the gradients and Hessians become more unwieldy. Amazingly,
Vaidya laid the foundations of analyzing such weighted barrier func-
tions. He introduced one such function, the volumetric barrrier and

showed how it allows us to improve the m to (mn)1/4 . Computation-

78
4.2. Restating the Interior Point Method for LPs 79

ally, the volumetric barrier was no harder than computing determi-


nants.
Subsequently, Nesterov-Nemirovskii discovered the Universal Bar-

rier which achieves the stated goal of n iterations. Their result held
far beyond the setting of LPs and worked for almost any convex body.
However, computationally the barrier is at least hard as solving the
convex program itself!

The search for a n barrier which is computationally efficient got
a major boost by a recent result of Lee-Sidford who demonstrated a
computationally efficient barrier (about the same complexity as solving

linear systems) which achieves Õ( n) iterations. In my opinion, their
result is inspired by Vaidya’s work: while Vaidya considered the vol-
ume of the Dikin ellipsoid as a barrier, Lee-Sidford consider an object
which can be considered a smoothening of John ellipsoid. It remains
an outstanding problem to remove the log-factors from the work of
Lee-Sidford.

We begin by presenting yet another proof of m convergence and
then show how the volumetric barrier allows us to improve this to
(mn)1/4√. Towards the end, we give a brief presentation (without proofs)
of the rank Universal Barrier by Nesterov-Nemirovskii for any con-
vex set. As for the work of Lee-Sidford, currently, explaining it simply
remains a challenge. However, in the appendix we introduce the John
ellipsoid and give some basic intuition about why one may expect the
√ √
m to be possibly replaced by n.

4.2 Restating the Interior Point Method for LPs


We start by restating the interior point method in a slightly different
but equivalent way.

Theorem 4.1. Let F be a barrier function such that there exists δ


and θ as follows:

(1) For all x, z ∈ int(P ) such that kx − zk2∇2 F (z) ≤ δ, we have


O(1)
∇2 F (z) ≈ ∇2 F (x),1 and
1 Formally, there exist constants c1 , c2 such that c1 ∇2 F (z)  ∇2 F (x)  c2 ∇2 F (z).
80 Volumetric Barrier, Universal Barrier and John Ellipsoids

(2) k∇F (x)k(∇2 F (x))−1 ≤ θ.2

Then, there is an interior point method that produces an ε-approximate


solution to LPs of the form min c> x, such that Ax ≤ b in O(θ/δ log 1/ε)
iterations. Here m is the number of rows of A.

Note that the first property captures the fact that, if two points x and z
are close in the local norm, then the Hessian does not change by much.
These two requirements along with differentiability are equivalent to
self-concordance with complexity parameter θ/ε. We leave the proof to
the reader.

4.2.1 The Logarithmic Barrier Revisited

Before we proceed to the volumetric barrier, as an exercise, we revisit



the log-barrier funtion and derive the O( m)-iteration convergence
result for it, yet again. Recall that the log-barrier function is:
m
def
X
F (x) = − log(bi − a>
i x).
i=1


We will show that for this function θ = m and δ = O(1). Hence, via
Theorem 4.1, it follows that this results in an IPM which converges in

roughly m iterations.

4.2.1.1 θ for log-barrier

Let S be the diagonal matrix where Sii = si (x). We can now restate the
gradient and Hessian of F with more convenient notation as follows:

∇F (x) = A> S −1 1, and


∇2 F (x) = A> S −2 A.

2 Equivalently, we could write (∇F (x))(∇F (x))−1  θ∇2 F (x).


4.2. Restating the Interior Point Method for LPs 81

Hence,
k∇F (x)k2(∇2 F (x))−1 ) = (∇F (x))> (∇2 F (x))−1 (∇F (x))
= (A> S −1 1)> (A> S −2 A)−1 (A> S −1 1)
= 1> (S −1 A(A> S −2 A)−1 A> S −1 )1.

Denote
def
Π = S −1 A(A> S −2 A)−1 A> S −1 ,
and note that Π is a projection matrix; i.e., Π2 = Π. Hence,
k∇F (x)k2(∇2 F (x))−1 ) = 1> Π1 = 1> Π2 1.
= kΠ1k22 ≤ k1k22
= m.


Thus, k∇F (x)k(∇2 F (x))−1 ) ≤ m as desired.

4.2.1.2 δ for the log-barrier


O(1)
We will show that ∇2 F (z) ≈ ∇2 F (x) for δ = 1/4. Let x, z be such
that kx − zkz ≤ δ. Hence,
X hai , x − zi2
(x − z)> (∇2 F (z))(x − z) =
i
s2i (z)
X (a> (x − z))2
i
=
i
s2i (z)
X  si (x) − si (z) 2
= .
si (z)
i
Therefore,
si (x) − si (z) √

≤ δ ∀i.
si (z)
√ √
In particular, for all i, 1 − δ ≤ si (x)/si (z) ≤ 1 + δ, and hence, for
δ = 1/4,
s2i (z) s2i (x) 9
2 ≤ 4 and 2 ≤
si (x) si (z) 4
82 Volumetric Barrier, Universal Barrier and John Ellipsoids

s2i (z) s2i (z)


   
for all i. Since mini s2i (x)
·∇2 F (z)  ∇2 F (x)  maxi s2i (x)
·∇2 F (z),
we have that
4 2
∇ F (z)  ∇2 F (x)  4∇2 F (z)
9
O(1)
which gives ∇F (x) ≈ ∇F (z) as desired.

4.3 Vaidya’s Volumetric Barrier


√ √
Now, we go beyond m; though we would like to get roughly n it-
erations, we will not quite reach this target, but will come somewhere
in-between. The problem with the log-barrier function is that it weighs
each constraint equally. Thus, if, for instance, a constraint is repeated
many times, there is no way for the log-barrier function to know this.
Vaidya introduced the volumetric barrier function which aims to bypass
this limitation by assigning weights to each constraint:
X
− wi log(bi − a>
i x).
i

Of course, a useful choice of weights must depend on the current point


x, and this causes several technical difficulties. His work laid the foun-
dations for and developed many techniques to analyze such weighted
barrier functions.

4.3.1 The Volumetric Barrier


The proposed barrier function by Vaidya uses the log-volume of the
Dikin Ellipsoid.

Definition 4.2. The volumetric barrier is defined to be


def 1
V (x) = − · log det ∇2 F (x) .

2

Note that, whenever we come up with a new barrier function, we must


also worry about its computability along with the computability of
its first and second order oracle. Using the restatement of the barrier
function as the determinant of a positive definite matrix, we see that
the volumetric barrier is efficiently computable.
4.3. Vaidya’s Volumetric Barrier 83

Notation and basic properties. Notation will play an important


role in understanding the rather technical sections that are to follow.
Towards this, we introduce new notation, some of which may conflict
with what we have used previously. Instead of F (x), V (x), H(x), and
so on, we denote the same quantities by Fx , Vx , and Hx . Similarly, the
slack vector will be denoted by sx and its components by sx,i = si (x).
Let the i-th row of A be the vector ai . For an x ∈ int(P ), let
def
Ax = Sx−1 A

where Sx is the diagonal matrix corresponding to the vector sx . Then,


the Hessian of the log-barrier function can be written as

Hx = A> > −2
x Ax = A S A.

Thus, if Vx = − 12 log det Hx , then

∇Vx = A>
x σx (4.1)

where
def a> −1
i Hx ai
σx,i = .
s2x,i
An informed reader could compare σx,i with leverage scores, or effective
resistances that arise when dealing with graph Laplacians. We note
some simple properties of σx which will allow us to build towards finding
the θ and δ necessary for applying Thoerem 4.1.

Fact 4.3.

(1) σx ≤ 1.
(2) σx> 1 = n.

−1
a>
i Hx ai
Proof. The first fact follows from the fact that s2x,i
≤ 1 if and only
ai a>
if i
s2x,i
 Hx . But the latter is true since Hx is the sum over all i of
84 Volumetric Barrier, Universal Barrier and John Ellipsoids

quantites on the left hand side. For the second part note that
X a> H −1 ai
σx> 1 = i x

i
s2x,i
!
X ai a>
= Tr Hx−1 i

i
s2x,i
−1

= Tr Hx Hx
= n.

Thus, each σx,i is at most one and they sum up to n. Thus, σx,i assigns
a relative importance to each constraint while maintaining a budget
of n. This is unlike in the setting of log-barrier where each constraint
has a weight of 1 and, hence, requires a budget of m. Further, note the
following straightforward fact:

−1
P (a>
i Hx aj )
2
Fact 4.4. σx,i = j 2 2
sx,i sx,j
.

Proof.
a> −1
i Hx ai
σx,i =
s2x,i
a> −1 −1
i Hx Hx Hx ai
=
s2x,i
 
> −1
P aj a>
ai Hx j s2
j
Hx−1 ai
x,j
=
s2x,i
X (a> −1
i Hx aj )
2
= .
j
s2x,i s2x,j

Let Σx denote the diagonal matrix corresponding to σx . Let


def
Px = Ax Hx−1 A> > −1 >
x = Ax (Ax Ax ) Ax .
4.3. Vaidya’s Volumetric Barrier 85

Note that Px  0, is symmetric and is a projection matrix since

Px2 = Ax (A> −1 > > −1 >


x Ax ) Ax Ax (Ax Ax ) Ax = Px .
(2)
Further, let Px denote the matrix whose each entry is the square of
the corresponding entry of Px . Then, Fact 4.4 above can be restated as

σx = Px(2) 1.
(2) (2) (2)
Thus, Px ≥ 0, Σx − Px is symmetric and (Σx − Px )1 = 0. Thus, it
is a graph Laplacian. As a consequence we obtain the following fact.

def (2)
Fact 4.5. Λx = Σx − Px  0.

Gradient and the Hessian of the Volumetric Barrier. In the


appendix we show the following by a straightforward differentiation of
the volumetric barrier:

Lemma 4.6.

(1) ∇Vx = A>


x σx and
(2)
(2) ∇ Vx = A>
2
x (3Σx − 2Px )Ax .

Thus, using Fact 4.5, we obtain the following corollary which allows
us to think of the Hessian of the volumetric barrier as the log-barrier
weighted with leverage scores. This lemma greatly simplifies the calcu-
lations to come.

Corollary 4.7.

A> 2 >
x Σx Ax  ∇ Vx  5Ax Σx Ax .

Proof. The lower bound is a direct consequence of Fact 4.5.


For the upper bound, note that for a graph Laplacian, it follows
from the inequality (a − b)2 ≤ 2(a2 + b2 ) that

Λx  2(Σx + Px(2) ).
86 Volumetric Barrier, Universal Barrier and John Ellipsoids

Thus,

∇ 2 V x = A> (2) > >


x (3Σx − 2Px )Ax = Ax (Σx + 2Λx )Ax  5Ax Σx Ax .

def
We now let Qx = A> x Σx Ax . The next lemma relates Qx back the
Hessian of the log-barrier function.

1
Lemma 4.8. 4m Hx  Qx  Hx .

This lemma is responsible for us losing the grounds we gained by


the weighted barrier function. If we could somehow improve the lower
1 √
bound to 100 Hx , then we would achieve our goal of n. In the next
section we show how to alter the barrier function to get a lower bound
n
of m Hx which will enable us to get the bound of (mn)1/4 .

Proof. The upper bound follows straightforwardly from Fact 4.3 which
states that Σx  I.
For the lower bound, we first break the sum into two parts as follows
X σx,i ai a>
i
Qx =
i
s2x,i
X σx,i ai a>
i
X σx,i ai a>
i
= +
s2x,i s2x,i
i:σx,i ≥1/2m σx,i <1/2m
X σx,i ai a>
i
 .
s2x,i
i:σx,i ≥1/2m

Thus,
 
1 X ai a>i 1  X ai a>i 
Qx  = Hx − . (4.2)
2m s2x,i 2m s2x,i
i:σx,i ≥1/2m i:σx,i <1/2m

Now note that by the definition of σx,i , we obtain for all i,

ai a>i
 σx,i Hx .
s2x,i
4.3. Vaidya’s Volumetric Barrier 87

Thus,
X ai a>i
X 1 X 1
 σx,i Hx  Hx  Hx .
s2x,i 2m
i
2
i:σx,i <1/2m i:σx,i <1/2m

Thus, plugging in this estimate in (4.2), we obtain that


 
1 1 1
Qx  Hx − Hx = Hx .
2m 2 4m
This completes the proof of the lemma.

This allows us to get the following crucial bounds on σx .

a> −1
i Qx ai
n
1
o √
Lemma 4.9. 2
sx,i
≤ min σx,i , 4mσx,i ≤ 2 m.

Proof. From the lower bound estimate of Lemma 4.8, it follows that
a> −1
i Qx ai 4ma> −1
i Hx ai
2 ≤ 2 = 4mσx,i .
sx,i sx,i

The other inequality follows from the following simple inequality


σx,i ai a>
i
 Qx .
s2x,i

We can now conclude by stating the key properties of the volumetric


barrier function necessary to get a desired bound on the number of
iterations of the IPM:

Theorem 4.10.

(1) ∇Vx> (∇2 Vx )−1 ∇Vx ≤ n.


1
(2) For all x, y ∈ int(P ) such that kx − ykx ≤ ,
8m1/2

O(1)
∇2 Vx ≈ ∇2 Vy .
88 Volumetric Barrier, Universal Barrier and John Ellipsoids

Proof. For the first part, note that

ζ > ∇Vx ∇Vx> ζ = hζ, A>


x σx i
2

= hζ, A>
x Σx 1i
2

1 1
= hΣx/2 Ax ζ, Σx/2 1i2
  
≤ ζ > A>
x Σ x Ax ζ 1>
Σ x 1
 
≤ ζ > Qx ζ n
 
≤ n ζ > ∇2 Vx ζ .

For the second part, we start by noting that kx − ykx ≤ δ is the same
as (x − y)> Qx (x − y) ≤ δ 2 . This in turn implies that
1 2
|a> 2 2 > −1
i (x − y)| ≤ δ ai Qx ai ≤ ·s .
4 x,i
Thus, an argument similar to what we did for the log-barrier implies
that

1 − s y,i 1
≤ .
sx,i 2
This implies that Hx ≈ Hy and σx,i ≈ σy,i for all i. Thus, Qx ≈ Qy .
This implies that ∇2 Vx ≈ ∇2 Vy , completing the proof.

4.3.2 The Hybrid Barrier


Now consider the barrier function
n
G x = Vx + Fx ,
m
where Vx is the volumetric barrier function and Fx the log-barrier func-
tion.
n
∇2 Gx = ∇2 Vx + Hx .
m
This adding a scaled version of the log-barrier function changes nothing
significantly in the previous section. However, crucially this addition
implies, via Lemmas 4.7 and 4.8 that
n
∇ 2 Gx  Hx
m
4.4. Appendix: The Universal Barrier 89

1
rather than 4m Hx in the case of volumetric barrier.
p This implies that
the upper bound in Lemma 4.9 comes down to m/n. This then implies
that, while the first part of Theorem 4.10 continues to hold with 4n

instead of n, the second part holds with (mn)1/4 rather than m. Hence,
from Theorem 4.1, we obtain an (mn)1/4 iteration IPM for LPs.
One can interpret Vaidya’s technique as starting with a barrier F
and replacing it with
1 n
− log det ∇2 F (x) + F (x).
2 m
One may ask if repeating this helps improve the bound of (mn)1/4 . In
particular, it is an interesting question to understand the fixed point of
this functional equation and to what extent is it self-concordant.

4.4 Appendix: The Universal Barrier


In this section we introduce Nesterov-Nemirovskii’s result that every
bounded and convex body K in n-dimensions admits an O(n) -self
concordant barrier function. This implies that the number of iterations

scale like n. This not only improves the self-concordance results for
polytopes we have proved till now, but also significantly generalizes it
to all convex bodies. However, as we will see, computationally it is not
very attractive. Designing a barrier function that does not require much
more computation than solving linear system of equations, at least for
polytopes, remains an outstanding open problem.
What is their barrier function that works in such a general setting?
It is a volumetric barrier; however, not of any ellipsoid, rather of the
polar of the body K centered at the current point x. More precisely,
for a point x ∈ int(K), let
def
Kx◦ = {z : z > (y − x) ≤ 1 ∀y ∈ K}.

Then, the Nesterov-Nemirovskii barrier function is defined to be


def
F (x) = log volKx◦ .

Since K is bounded, this is well-defined in the interior of K. Moreover, it


is easy to see that as x approaches the boundary of K from the interior,
90 Volumetric Barrier, Universal Barrier and John Ellipsoids

the volume blows up to h infinity.


i For instance, if K is the interval [0, 1]
then Kx◦ = − x , 0 ∪ 0, 1−x . Thus, as x → 1, the right end of the
1 1
 

polar goes to +∞ and as x → 0, the left end of the polar goes to −∞.
In either case, the volume goes to infinity.
In fact this one-dimensional intuition goes a long way in understand-
ing F (x). This is due to the following simple observation which allows
us to think of Kx◦ as a collection of line segments. Let v ∈ Sn−1 be a
unit vector and let p(v) denote the maximum over y ∈ K of v > y. Thus,
we can rewrite the polar as a collection of the following line segments;
each along a unit vector v.
[  1


Kx = 0, v.
n−1
p(v) − v > x
v∈S

This representation allows us to easily rewrite the volume of Kx◦ as


Z
def ◦ 1
f (x) = vol Kx = (p(v) − v > x)−n dS(v)
n v∈Sn−1
where dS(v) is the (n − 1)-dimensional volume of the surface of the
unit sphere around v.
What we achieve by this transformation is an understanding of the
derivatives of f (x) in terms of moment integrals over the polar. In fact
the following claim can now be seen easily:
(−1)l (n + l)!
Dl f (x)[h, h, . . . , h] = Il (h)
n!
where Z
def
Il (h) = (z > h)l dz.
Kx◦

However, we are interested in the differentials of F (x) = log f (x). How


do we calculate the differentials of this function? This is also easy since
d log g(x) 1 dg(x)
= .
dx g(x) dx
This allows us to easily derive the following expressions:

(1) DF (x)[h] = −(n + 1) I1I(h)


0
.
4.5. Appendix: Calculating the Gradient and the Hessian of the Volumetric Barrier 91

I12 (h)
(2) D2 F (x)[h, h] = (n + 1)(n + 2) I2I(h)
0
− (n + 1)2 I02
.
I3 (h)
(3) D3 F (x)[h, h, h] = −(n + 3)(n + 2)(n + 1) I0 + 3(n + 2)(n +
(h)3
1)2 I2 (h)I
I2
1 (h)
− 2(n + 1)3 I1I 3 .
0 0

It follows that F (x) is convex since a straightforward calculation shows


that
D2 F (x)[h, h] > 0.
Further, it is not difficult to see that
√ p
|DF (x)[h, h]| ≤ n + 1 |D2 F (x)[h, h]|,

which establishes the required bound on the complexity parameter. The


self-concordance property remains to be proved; that does not seem to
have a very illuminating proof and hence we omit the details here.
In summary, we sketched the proof of the fact that log-volume of the

polar is a n self-concordant barrier function. Could this be improved

beyond n? The answer turns out to be no in general. This is as far
as the theory of self-concordant barriers takes us. However, that is not
to say that interior point methods cannot take us further, at least in
the case of combinatorial polytopes!

4.5 Appendix: Calculating the Gradient and the Hessian of


the Volumetric Barrier
In this section we give a sketch of the gradiant and Hessian calculation
of the volumetric barrier. In particular, we prove the followin lemma
mentioned earlier.

Lemma 4.11. (1) ∇Vx = A> x σx and


2 > (2)
(2) ∇ Vx = Ax (3Σx − 2Px )Ax .

P ai a> 1
Proof. Recall that Hx = i
i s2x,i and Vx = 2 log det Hx . Let ζ be a
vector and let t ∈ R be an infinitesimal. First note that
X ai a> X ai a> X ai a> a> ζ
Hx+tζ − Hx = 2
i
− 2
i
= 2t 2
i i
+ O(t2 ).
s x+tζ,i s x,i s x,i s x,i
i i i
92 Volumetric Barrier, Universal Barrier and John Ellipsoids

def P ai a> >


i ai ζ
Let ∆ = 2t i s2x,i sx,i . Thus, for an infinitesimal t,
1 1 1 1
log det Hx+tζ − log det Hx ≈ log det(Hx + ∆) − log det Hx
2 2 2 2
1
log det Hx (I + Hx ∆Hx− /2 )Hx/2
−1/2
1/2 1 1
=
2
1
− log det Hx
2
1
log det(I + Hx− /2 ∆Hx− /2 )
1 1
=
2
1
Tr(Hx− /2 ∆Hx− /2 )
1 1

2 !
X ai a> a> ζ
−1/2 i i −1/2
= tTr Hx H
s2x,i sx,i x
i
X a> ζ
= t σx,i i .
sx,i
i

Thus, ∇Vx = A>


x σx .
To get the formula for the Hessian, start by noting that
∂ 2 Vx ∂ > > ∂ X > −1 Ail
= el Ax σx = ai Hx ai 3
∂xk ∂xl ∂xk ∂xk sx,ii
!
X Ail ∂ > −1 ∂ Ail
= 3 · ai Hx ai + a> −1
i Hx ai ·
sx,i ∂xk ∂xk s3x,i
i
!
X Ail ∂ > −1 Ail Aik
= 3 · ai Hx ai + 3a> −1
i Hx ai · .
i
sx,i ∂xk s4x,i

It remains to compute ∂x∂ k a> −1


i Hx ai . Towards this first note that similar
to the calculation we did above,
−1
Hx+tζ − Hx−1 ≈ −Hx−1 ∆Hx−1 .
 
> −1 > −1 > −1 aj a> >
j aj ζ
Hx−1 ai =
P
Thus, ai Hx+tζ ai − ai Hx ai = −ai Hx 2t j 2
sx,j sx,j
(a> −1 2 a> ζ
i Hx aj ) j
P
−2t j s2x,j sx,j . Thus,

Ail ∂ > −1 X (a> H −1 aj )2 Ajk Ail


· a H a i = −2 i x
= −2A> (2)
x Px Ax .
s3x,i ∂xk i x j
s2 s2
x,j x,i sx,j sx,i
4.6. Appendix: John ellipsoid 93

Thus taking ζ = ek and letting t → 0, we obtain

∇ 2 V x = A> (2)
x (3Σx − 2Px )Ax .

4.6 Appendix: John ellipsoid


In this section we introduce the John ellipsoid and contrast it with
the Dikin ellipsoid introduced in the previous lecture. In particular,
we show a fundamental result that, for symmetric polytopes, the John
√ √
ellipsoid n approximates the body (as opposed to the m achieved
by the Dikin ellipsoid). Thus, just as we did with the Dikin ellipsoid,
one can in principle define “The John Algorithm” and hope that it

converges in n iterations. Whether it does remains far from clear.
Recently, Lee-Sidford show that a smoothened version of John ellipsoid

actually does converge in n logO(1) (m) iterations.

def
Definition 4.12. Given a bounded polytope P = {x ∈ Rn : a> i x ≤
bi for i = 1, . . . , m} and a point x ∈ int(P ), the John ellipsoid of P at
x is defined to be the ellipsoid of maximum volume which is centered
at x and contained in P .

Let us try to describe the John ellipsoid Ex centered at a point x using


a quadratic form. Towards this, let Bx be a PSD matrix such that

Ex = {y : ky − xk2Bx ≤ 1}.

Then, the constraint that Ex is contained inside the polytope P would


require that for every i = 1, . . . , m

Ex ⊆ {y : hai , yi ≤ bi }.

In other words for all i,

max hai , yi ≤ bi
y∈Ex

or equivalently
max hai , y + xi ≤ bi
y:y > Bx y≤1
94 Volumetric Barrier, Universal Barrier and John Ellipsoids

or equivalently
max hai , yi ≤ bi − hai , xi = si (x).
y:y > Bx y≤1

It can be checked that the left-hand side is equal to kai kBx−1 . Because
si (x) ≥ 0, we can square both sides and obtain the constraint
a> −1 2
i Bx ai ≤ si (x) .

As for the volume of Ex (which we are maximizing), we have that vol(Ex )


equals
 
vol({y : y > Bx y ≤ 1}) = vol({Bx− /2 v : kvk2 ≤ 1}) = Vn det(Bx− /2 )
1 1

where Vn is the volume of the unit `2 ball in Rn . Ignoring the Vn term


(by just redefining the volume relative to Vn ) we obtain
1
log det(Bx−1 ).
log vol Ex =
2
We take the logarithm here because this will allow us to obtain a convex
program.
We can now write the following program, called (John-Primal):
min − log det B −1
a> −1
i B ai
s.t. ≤1 for i = 1, . . . , m.
si (x)2
Note that we deal with the constraint B  0 (equivalently, B −1  0)
by encoding it into the domain of the function log det.
Let us denote C = B −1 . The program (John-Primal) is convex in
the variables {Cij }.

4.6.1 Duality
Let us now write the dual program (John-Dual). We multiply the i-th
constraint by a multiplier wi ≥ 0, obtaining the Lagrangian
X  a> Cai 
i
L(C, w) = − log det C + wi −1
si (x)2
i
!
X wi X
>
= − log det C + C • a i ai − wi .
si (x)2
i i
4.6. Appendix: John ellipsoid 95

The KKT optimality condition ∇C L(C, w) = 0 yields


X wi
−C −1 + ai a>
i =0
si (x)2
i

and thus X wi
B = C −1 = ai a>
i .
si (x)2
i
The dual objective function is
!−1
X wi X
g(w) = inf L(C, w) = − log det ai a>
i +n− wi ,
C si (x)2
i i

with constraints wi ≥ 0 and i siw a a>  0 (the latter arising from


P i
(x)2 i i
our restriction of log det to PD matrices).
Slater’s condition holds (just consider a small ball certifying that
x ∈ int(P )) and we have strong duality. Thus, at optimal values of C
and w: X
− log det C = − log det C − wi + n,
i
and hence X
wi = n.
i
We can interpret wi as a measure of how strongly the ellipsoid Ex is
supported on the hyperplane defined by the i-th constraint. For ex-
ample, from the complementary slackness conditions we have that if
a> −1 2
i B ai < si (x) (i.e., the ellipsoid does not touch the hyperplane),
then wi = 0.

4.6.2 Approximating symmetric polytopes



Now we show that the John ellipsoid improves the m of the Dikin

ellipsoid to n for symmetric polytopes.

Proposition 4.13. Let P be a polytope which is symmetric around


the origin (i.e., P = −P ), 0 ∈ int(P ), and let E be the John ellipsoid

of P at 0. Then the ellipsoid nE contains P .
96 Volumetric Barrier, Universal Barrier and John Ellipsoids

Proof. The proof resembles the one of a similar statement for Dikin
ellipsoids. Without loss of generality assume that all bi = 1. Also,
because P is symmetric, we assume that for every constraint hai , xi ≤ 1
there is a corresponding constraint h−ai , xi ≤ 1. Now all si (x) = 1 and
the John ellipsoid E is given by
( )
X X 2
> >
B= wi ai ai , E = {x : x Bx ≤ 1} = x : wi hai , xi ≤ 1 .
i i

Pick a point x on the boundary of E. It is enough to show that nx is
not inside K. Since x ∈ ∂E, we have
X
wi hai , xi2 = 1.
i

If it were the case that nx ∈ int(P ), then for every i we would have
√ √
hai , nxi < 1 and h−ai , nxi < 1, so that

√ 2
ai , nx < 1,

and multiplying by wi/nand summing over all i we would get


X X wi
√ 2 P wi
wi hai , xi2 = ai , nx < i = 1,
n n
i i

a contradiction.

4.6.3 The John Algorithm?


Thus, the John ellipsoid provides us with an alternative to the Dikin
ellipsoid and one may ask what is stopping us from defining a barrier
similar to Vaidya’s volumetric barrier. After all, it does give us a barrier
of the form
X ai a>
i
wi (x)
si (x)2
i
with weights summing up to n at every point raising the possibility of a

n-iteration algorithm. Unfortunately, the weights wi (x) are not even
continuous (let alone differentiable). For example, consider the square
[−1, 1]2 defined by the four natural constraints (suppose w1 corresponds
to x ≤ 1 and w2 corresponds to −x ≤ 1). At a point (ε, 0), with ε very
4.6. Appendix: John ellipsoid 97

small, we have w1 = 1 and w2 = 0, but at (−ε, 0) the converse is true.


However, Lee-Sidford were able to find a workaround by smoothening
the weights at a poly-log cost in the number of iterations. The details
are quite involved and omitted here.
5
A Primal-Dual IPM for Linear Programs
(without the Barrier)

5.1 Overview
In this lecture we will present an elementary proof of Ye’s primal-
dual interior point method (IPM) for solving linear programming (LP)
problems. The primal-dual approach presented here can be cast in the
language of central path using a barrier function. The formulation and
analysis here are simpler and make no reference to barrier functions,
central paths, Newton step, affine scaling, predictor step, corrector step,
centering etc., however an avid who is familiar with path following
methods is encouraged to see the similarities.

5.2 Linear Programming and Duality


It is well-known that any LP problem can be formulated as below,
along with the corresponding dual program.

98
5.2. Linear Programming and Duality 99

Primal (P)

min hc, xi
s.t.
Bx = b
∀e ∈ [m] xe ≥ 0
Dual (D)

max hb, yi
Here, m is the num-
s.t.
B>y + s = c
∀e ∈ [m] se ≥ 0
ber of variables and n is the number of constraints, i.e.,

B ∈ Rn×m , b ∈ Rn , c ∈ Rm , x ∈ Rm , y ∈ Rn , s ∈ Rm .

Example 5.1. As an important example, if we think of B as a vertex-


edge incidence matrix1 of a directed graph G with edge costs c, then the
primal problem above encodes the MinCostUncapacitatedFlow in
G with demands given by the vector b and the cost by c. In view of this
application, we will be indexing coordinates of x and s with the letter
e.

Motivated by this example, we will present our method in the context


of solving a combinatorial problem, that is, one where the input data
B, b, c contains only entries of bounded length (e.g. only from the set
{−1, 0, 1}). Therefore, the encoding length L of the instance will not be
a factor in our considerations. Moreover, we will not solve programs (P)
and (D) to optimality; instead, once we are able to bring the duality
gap hc, xi − hb, yi down to O(m−1 ), we will stop and assume that the

1 That is, Bve = −1 and Bwe = 1 for e = hv, wi.


100 A Primal-Dual IPM for Linear Programs (without the Barrier)

solution we output will be good enough for the purpose at hand (for
example, that it can be rounded to an optimal solution).2
Consider an optimal primal-dual solution (x, y, s). It satisfies all
four (sets of) conditions from the programs (P) and (D), as well as the
following complementary slackness condition:

∀e ∈ [m] xe se = 0.

Moreover, it follows from the theory of linear programming that any


triple (x, y, s) satisfying all these constraints, that is,

Bx = b
B>y + s = c
∀e ∈ [m] xe ≥ 0
∀e ∈ [m] se ≥ 0
∀e ∈ [m] xe s e = 0

is in fact an optimal primal-dual solution (i.e. it satisfies hc, xi = hb, yi).


Thus, we have turned the task of finding the optimum solution of a
linear program into the task of finding a feasible solution to a quadratic
program. At first sight, this does not seem promising, especially given
that the new program is not even convex. Nevertheless, we will be able
to come up with a polynomial-time algorithm for approximately solving
this new program. The following is the main result:

Theorem 5.2. There is an algorithm, which produces a solution


(x, y, s) which is feasible for the primal-dual linear program mentioned

above such that hc, xi − hb, yi ≤ δ in O( m log m/δ) iterations. Each
iteration corresponds to solving an m × m positive semi-definite (PSD)
linear system of equations.

2 Extremal solution to linear programming problems have this property that they are
rational with the denominator bounded by 2O(L) , where L is the encoding length. Further,
for many combinatorial polytopes, the vertices have all coordinates 0/1. Due to this, once
the duality gap is small enough, of the order 2−O(L) , one can round the solutions to get
an actual vertex solution. For the MinCostUncapacitatedFlow problem, a duality gap
of m−O(1) suffices. This is left as an easy exercise.
5.3. A Primal-Dual Interior Point Method 101

5.3 A Primal-Dual Interior Point Method


The most important challenge seems to be handling the quadratic con-
straints xe se = 0. To this end, we will introduce a parameter µ ∈ R+ ,
which is going to serve as an approximation of the 0 in these constraints.
And we define the relaxed program called KKT(µ) as follows:3

Bx = b (5.1)
>
B y+s=c (5.2)
∀e ∈ [m] xe ≥ 0 (5.3)
∀e ∈ [m] se ≥ 0 (5.4)
∀e ∈ [m] xe se = µ (5.5)

It is easy to see that as µ → 0, the optimal solution to KKT (µ) tends to


the optimal solution of the underlying linear program. The rough idea
of our approach is to start with an initial solution (x0 , s0 , y 0 ) satisfying
KKT(µ) for a large value of µ. Then we hope to turn it into solutions
(xt , st , y t ) of programs KKT(µ) for smaller and smaller values of µ using
some iterative process. When µ is small enough, we can stop. Why?
P
Fact 5.3. The duality gap hc, xi − hb, yi is equal to e xe se = hx, si.

Proof.
D E
hx, ci − hb, yi = x, B > y + s − hBx, yi
D E D E
= x, B > y + hx, si − x, B > y
= hx, si .

Therefore, a setting of µ = O(m−2 ) would give us a duality gap of


−1
P
e xe se = mµ = O(m ), which, as we discussed before, we are as-
suming to be good enough.
3 The KKT stands for Karush-Kuhn-Tucker. The reason we call it KKT(µ) is because these
are the KKT-conditions for the following convex programming problem obtained from
our primal LP by removing
P the x ≥ 0 constraints and replacing them by the log barrier
function: infhc, xi − µ e ln xe subject to Ax = b.
102 A Primal-Dual IPM for Linear Programs (without the Barrier)

Unfortunately, this is still too difficult a task. Namely, approximat-


ing 0 by µ is not enough: it is unreasonable to expect that we will be
able to make all the terms xe se equal to any given parameter (and also,
we are still left with a quadratic constraint). Indeed, we will not try to
solve KKT(µ) exactly. Instead, we will allow some leeway in the con-
straints xe se = µ, but we will take care to measure the total relative
error in the approximate equality (xe se )e ≈ µ · 1 and keep it under
control.
We will encode this measurement using the following potential func-
tion:  
def def xe s e − µ
v = v(x, s, µ) =
µ
e 2

That is,
v
u  2
uX xe se
v(x, s, µ) = t −1 .
e
µ

Our main invariant is that we will keep this quantity bounded from
above by 1/2 at all times. In other words, we will always be close to the
actual solution of KKT(µ).4 Hence, the program KKT’(µ) that we will
be solving is the following:

Bx = b
>
B y+s=c
∀e ∈ [m] xe ≥ 0
∀e ∈ [m] se ≥ 0
1
v(x, s, µ) ≤
2
It can be checked that one can find an initial solution (x0 , s0 , y 0 ) to the
above problem for some µ0 = mO(1) .5 Let us quickly verify that this
relaxation is good enough for our purposes.

4 The solution of KKT(µ) traces a path in the interior of the primal polytope and the dual
polytope simultaneoulsly as µ → 0 continuously. This is the primal-dual central path.
5 Actually, we will need to introduce a constant number of new x, y, s variables (with very

high costs) that enable us to find a starting solution.


5.3. A Primal-Dual Interior Point Method 103

Fact 5.4. Let µ = O(m−2 ). If (x, y, s) is a solution of the program


KKT’(µ), then the duality gap is O(m−1 ).

Proof. We have that


 
xe s e − µ
v(x, s, µ) =
≤ 1,
µ
e 2 2

hence6   √
xe se − µ
≤ m,
µ
e 1 2
and hence,

X µ m
(xe se − µ) ≤
e
2
and, using Theorem 5.3, the duality gap is

X µ m
xe se ≤ + µm = O(m−1 ).
e
2

Hence, the gist of our approach will be to produce solutions of the


program KKT’(µ) for values of µ that decrease geometrically until we
reach µ = O(m−2 ).
We will always maintain the invariant that x and s are strictly
positive, i.e., that we are in the interior of the primal and dual bodies.
This is why this method can be considered an interior point method.
Let (xt , st , y t ) be the solutions at iteration t.

Invariant : xt  0, st  0. (5.6)

Again, we are tacitly assuming that we are able to somehow produce


an initial solution (x0 , y 0 , s0 ) for KKT’(µ) satisfying x0  0, s0  0,
for some setting of µ which is polynomial in m.

6 We √
are using the basic fact that kvk1 ≤ mkvk2 (actually, a factor of m would suffice).
104 A Primal-Dual IPM for Linear Programs (without the Barrier)

We will now describe a single update step. It will consist of two


stages:
def
from (x, y, s, µ) = (xt , y t , st , µt ) s.t. v ≤ 1/2
def
through (x + ∆x, y + ∆y, s + ∆s, µ) = (xt+1 , y t+1 , st+1 , µt ) s.t. v ≤ 1/4
def
to (x + ∆x, y + ∆y, s + ∆s, µ(1 − γ)) = (xt+1 , y t+1 , st+1 , µt+1 ) s.t. v ≤ 1/2

That is: in the first stage, we will try to bring v down as much as
possible by changing our solution (but keeping µ intact). In the second
stage we will decrease µ as much as possible while still keeping v below
def
1/2 (we decrease µ using the multiplicative update µt+1 = (1 − γ)µt ).

Thus our task amounts to finding (for the first stage) a way to decrease
v, and (for the second stage) a maximum value of γ which will guarantee
v ≤ 1/2.
Let us focus on the first stage for now. We have a solution which
satisfies all constraints of KKT(µ) except for (5.5), and our goal is
to satisfy this constraint (as much as possible). We have denoted by
(∆x, ∆s, ∆y) the change in the solution. Let us write the old and new
linear equality constraints:

Bx = b
B>y + s = c
B(x + ∆x) = b
>
B (y + ∆y) + s + ∆s = c

Straightforward subtraction shows that the two new ones will be sat-
isfied iff

B∆x = 0,
>
B ∆y + ∆s = 0.

Ideally, we would want to satisfy (5.5), that is, to have

(xe + ∆xe )(se + ∆se ) = µ.

But this would lead to having to solve a program with a quadratic


constraint – precisely what we are trying to avoid. Thus instead, we
5.3. A Primal-Dual Interior Point Method 105

use the following as an approximation:

∀e ∈ [m], xe se + ∆xe se + xe ∆se = µ.

I.e., we just dropped the quadratic term ∆xe ∆se .7


Thus, the linear system of equations LS which we need to solve to
find (∆x, ∆s, ∆y) is:

B∆x = 0 (5.7)
>
B ∆y + ∆s = 0 (5.8)
∀e ∈ [m], xe se + ∆xe se + xe ∆se = µ. (5.9)

Note that we should still enforce that x + ∆x, s + ∆s  0. But it turns


out that this is true for any solution, as Theorem 5.9 will show.
We begin by showing that this system has a solution.

Lemma 5.5. The linear system LS has a solution.

def def
Proof. Let8 L = BXS −1 B > , and let z = − µ1 Bs−1 , where we denote
 def
s−1 e = s−1
e . Then ∆y can be found by solving:

L∆y = z.

It can be checked that this has a solution since x, s  0 and, by def-


inition, z lies in the column space of B. Vectors ∆s and ∆x are now
determined by (5.8) and (5.9), respectively:

∆s = −B > ∆y,
µ − xe ∆se
(∆x)e = − xe .
se
We are left with verifying that B∆x = 0, which is an easy exercise.

7 This is the Newton step which approximates the function by its first order Taylor series
and finds the best direction to move.
8 We use the notation that for a vector x, let X denote the diagonal matrix with the entries

of x in the diagonal.
106 A Primal-Dual IPM for Linear Programs (without the Barrier)

Example 5.6. Coming back to Theorem 5.1, we can see that the ma-
trix BXS −1 B > from the proof of Theorem 5.5 is the Laplacian matrix
of a graph where each edge e has conductance xsee . Solving the system
for ∆y (which is the only computationally nontrivial step in solving the
system LS) amounts to computing an electrical flow in this graph. It
can be done in nearly-linear time using a fast approximate Laplacian
solver due to Spielman-Teng. It is important to note that the Lapla-
cian solver of Spielman-Teng can find a solution in Õ(m log 1/ε) time
where ε is the relative error of the solution. It can be checked that an
approximate solution with ε = 1/mO(1) suffices for our purpose. In sum-
mary, our method solves the MinCostUncapacitatedFlow problem
by computing a sequence of electrical flows where the resistances and
the demand vector depend on the current point.

Now let us fix a solution (∆x, ∆y, ∆s) and proceed to analyzing its
properties. We begin with a simple but very useful observation.

Lemma 5.7. If (∆x, ∆s) satisfy LS, then


X
∆xe ∆se = 0.
e∈[m]

As a corollary, on average the duality gap is satisfied:


1 X
(xe + ∆xe )(se + ∆se ) = µ.
m e
The proof of the lemma is immediate.

Proof. Start by multiplying (5.8) by ∆x> and use (5.7) to obtain


0 = ∆x> B > ∆y + ∆x> ∆s = (B∆x)> ∆y + ∆x> ∆s = ∆x> ∆s.

The following crucial lemma establishes that potential reduces quadrat-


ically9 and lets us analyze the first stage.
9 Thisis similar to the analysis of Newton’s method (see Lecture 3) and this similarity is
not coincidental. The Taylor approximation we performed can be seen as a Newton step
5.3. A Primal-Dual Interior Point Method 107

Lemma 5.8. For (x, s, µ) such that v(x, s, µ) < 1 and ∆x, ∆s that
satisfy LS, we have
1 v(x, s, µ)2
v(x + ∆x, s + ∆s, µ) ≤ · .
2 1 − v(x, s, µ)

It follows that if v(x, s, µ) ≤ 12 , then v(x + ∆x, s + ∆s, µ) ≤ 1/2 · 1/2 ·


1/2 · 2 = 1/4. The proof relies on the Cauchy-Schwarz inequality and

Theorem 5.7.
def def
q q
se xe
Proof. Define dxe = µxe ∆xe and dse = µse ∆se . Thus,
r  
µ xe se
dxe + dse = − −1 .
xe se µ

Hence, v(x + ∆x, s + ∆s, µ)2 = e∈[m] dx2e ds2e which is at most
P

 2
 2
1 X 2 2 2 1 X
2 1 µ
(dxe +dse ) ≤  (dxe + dse )  = max v(x, s, µ)4 .
4 4 4 e∈[m] xe se
e∈[m] e∈[m]

Here we have used that


X X X
(dxe + dse )2 = dx2e + ds2e + 2dxe dse = dx2e + ds2e
e∈[m] e∈[m] e∈[m]
P xe s e
since e∈[m] dxe dse = 0. Thus, for all e, 1 − µ ≤ v(x, s, µ). This im-
 
plies that 1 − v(x, s, µ) ≤ xe s e
µ . Since v(x, s, µ) < 1, maxe∈[m] xeµse ≤
1
1−v(x,s,µ) . This completes the proof of the lemma.

As we remarked above, the following fact proves that just solving


this linear system (consisting only of equalities) is actually enough to
guarantee that the linear inequalities are satisifed as well (i.e. that we
are in the interior of the primal and dual bodies).

for the system of quadratic equations we started with and the condition that v ≤ 1/2
corresponds to the domain of quadratic convergence in Newton’s method.
108 A Primal-Dual IPM for Linear Programs (without the Barrier)

Lemma 5.9. If (x, s, µ) are such that v(x + ∆x, s + ∆s, µ) < 1, where
∆x, ∆s satisfy LS, then x + ∆x > 0 and s + ∆s > 0.

Proof. v(x + ∆x, s + ∆s, µ) < 1 implies that for all e, −µ < ∆xe ∆se <
µ. Thus,
(xe + ∆xe )(se + ∆se ) = µ + ∆xe ∆se > 0.

Thus, for each e, both xe + ∆xe and se + ∆se are either both positive
or both negative. If they are both negative then,

∆xe < −xe and ∆se < −se .

Since xe > 0 and se > 0, multiplying the above inequalities by se and


xe respectively and adding we obtain

∆xe se + ∆se xe < −2xe se .

Thus, µ = ∆xe se + ∆se xe + xe se < −xe se which is a contradiction as


µ > 0.

Finally, the following lemma lets us analyze the second stage. It gives
an estimate of the rate of convergence of the whole method. That is,
it answers the question of what values of γ are guaranteed to keep v
below 1/2.

Lemma 5.10. For (x, s, µ) such that v(x, s, µ) < 1 and ∆x, ∆s that
satisfy LS we have
1 p
v(x + ∆x, s + ∆s, (1 − γ)µ) ≤ · v(x + ∆x, s + ∆s, µ)2 + γ 2 m.
1−γ

Thus, if v(x, s, µ) ≤ 1/2 and consequently v(x + ∆x, s + ∆s, µ) ≤ 1/4,



then we must take γ = Ω (1/ m) in order to maintain the invariant that
v(x + ∆x, s + ∆s, (1 − γ)µ) ≤ 1/2.
5.3. A Primal-Dual Interior Point Method 109

Proof.
X  (xe + ∆xe )(se + ∆se ) 2
2
v(x + ∆x, s + ∆s, (1 − γ)µ) = −1
µ(1 − γ)
e∈[m]
 
 2
1 X ∆xe ∆se
= 2
 +γ 
(1 − γ) µ
e∈[m]

1 X  ∆xe ∆se 2
= 
(1 − γ)2 µ
e∈[m]

X 2∆xe ∆se γ
+ + mγ 2  .
µ
e∈[m]

Using Theorem 5.7, we obtain

1 X  ∆xe ∆se 2 mγ 2
v(x + ∆x, s + ∆s, (1 − γ)µ)2 = 2
+
(1 − γ) µ (1 − γ)2
e∈[m]

v(x + ∆x, s + ∆s, µ)2 mγ 2


= + .
(1 − γ)2 (1 − γ)2

To finish, we calculate the number of iterations is takes to bring µ0 =


poly(m) down to µT = O(m−2 ). Note that after T iterations we have
µT = µ0 (1 − T T −1
 γ) , and hence, we need (1 − γ) = poly(m ). Since √
γ = Ω √1m , a constant decrease in µ is brought about by O( m)
iterations, and we need to iterate this O(log m) times. We conclude
that
√ 
T = O m log m .
Each iteration involves solving a system of linear equations.

Example. Coming back to Theorems 5.1 and 5.6, notice that our
result, together with a fast Laplacian solver, implies that we are able to
solve the MinCostUncapacitatedFlow problem in time Õ m3/2 .

References

[AHK12] Sanjeev Arora, Elad Hazan, and Satyen Kale. The multiplicative weights
update method: a meta-algorithm and applications. Theory of Computing,
8(1):121–164, 2012.
[Bub14] Sebastien Bubeck. Theory of convex optimization for machine learning.
Internet draft, 2014. URL: http://arxiv.org/abs/1405.4980.
[Haz14] Elad Hazan. Introduction to online convex optimization. Internet draft,
2014. URL: http://ocobook.cs.princeton.edu/.
[Vis13] Nisheeth K. Vishnoi. Lx = b. Foundations and Trends in Theoretical Com-
puter Science, 8(1-2):1–141, 2013. URL: http://research.microsoft.
com/en-us/um/people/nvishno/site/Lxb-Web.pdf.
[Wri05] Margaret H. Wright. The interior-point revolution in optimization: his-
tory, recent developments, and lasting consequences. Bull. Amer. Math.
Soc. (N.S, 42:39–56, 2005.

110

You might also like