LMI Methods in Optimal and Robust Control
Matthew M. Peet
Arizona State University
Topics to Cover:
Formulating Constraints
• Tricks of the Trade for expressing constraints.
• Converting everything to equality and inequality constraints.
Equivalence:
• How to Recognize if Two Optimization Problems are Equivalent.
• May be true even when the variables, constraints, and objectives differ.
Least Squares
Unconstrained Optimization

The least-squares problem $\min_x \|Ax - b\|^2$ has the closed-form solution
$$x^* = (A^T A)^{-1} A^T b.$$
With the additional constraint $x_i^2 = 1$, the problem becomes nonconvex and hard.
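A minimal numerical sketch of the unconstrained case in NumPy (the data here are hypothetical; `np.linalg.lstsq` is used as a numerically stable alternative to forming $(A^T A)^{-1}$ explicitly):

```python
import numpy as np

# Hypothetical overdetermined system: 100 equations, 3 unknowns
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 3))
b = rng.standard_normal(100)

# Closed-form solution x* = (A^T A)^{-1} A^T b from the slide
x_closed = np.linalg.inv(A.T @ A) @ A.T @ b

# lstsq solves the same problem with better numerical stability
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)

assert np.allclose(x_closed, x_lstsq)
```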
Combined:
$$\min_{x,u} \; x(N)^T S x(N) + \sum_{k=1}^{N-1} \big( x(k)^T Q x(k) + u(k)^T R u(k) \big)$$
subject to
$$x(k+1) = A x(k) + B u(k), \quad k = 0, \ldots, N, \qquad x(0) = 1.$$
Dynamic Programming
Closed-Loop Case
Combined:
$$\min_{x,u} \; x(N)^T S x(N) + \sum_{k=1}^{N-1} \big( x(k)^T Q x(k) + u(k)^T R u(k) \big)$$
subject to
$$x(k+1) = A x(k) + B u(k), \quad k = 0, \ldots, N, \qquad u(k) = K x(k), \quad x(0) = 1.$$
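The slide states the problem; one standard way to compute optimal feedback gains for it is the backward Riccati recursion of dynamic programming. A minimal sketch, assuming time-varying gains and the sign convention $u(k) = -K(k)x(k)$ (the function name and example system are hypothetical):

```python
import numpy as np

def finite_horizon_lqr(A, B, Q, R, S, N):
    """Backward Riccati recursion for the finite-horizon problem above.

    Returns gains K[0..N-1]; the optimal input is u(k) = -K[k] @ x(k)
    (note the sign convention; the slide writes u(k) = K x(k)).
    """
    P = S                                  # terminal cost weight x(N)^T S x(N)
    gains = [None] * N
    for k in reversed(range(N)):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)      # cost-to-go update
        gains[k] = K
    return gains

# Example: discrete-time double integrator (hypothetical data)
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.0], [1.0]])
gains = finite_horizon_lqr(A, B, Q=np.eye(2), R=np.eye(1), S=np.eye(2), N=20)
```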
Formulating Optimization Problems
Equivalence
Definition 1.
Two optimization problems are Equivalent if a solution to one can be used to
construct a solution to the other.
Example 1: Equivalent Objective Functions
In Classification, we are given data inputs $x_i$, each of which has a binary label $y_i \in \{-1, +1\}$:
• $y_i = +1$ means $x_i$ belongs to group 1
• $y_i = -1$ means $x_i$ belongs to group 2
We classify with the sign of an affine function $f(x) = w^T x - b$.
Definition 2.
• A Hyperplane is the generalization of the concept of a line/plane to multiple dimensions: $\{x : w^T x - b = 0\}$
• A Half-Space is one of the two parts into which a hyperplane divides the space: $\{x : w^T x - b \ge 0\}$ or $\{x : w^T x - b \le 0\}$
We want to separate the data into disjoint half-spaces and maximize the distance between these half-spaces:
$$y_i (w^T x_i - b) \ge 1, \quad \forall i = 1, \ldots, K.$$

[Figure: two separating hyperplanes $H_{w_1}$ and $H_{w_2}$ with different margins]

The maximum-margin classifier solves
$$\min_{w \in \mathbb{R}^p,\, b \in \mathbb{R}} \; \frac{1}{2} \|w\|^2 \quad \text{subject to} \quad y_i (w^T x_i - b) \ge 1, \quad i = 1, \ldots, K.$$
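As a sanity check, this max-margin problem can be handed directly to a convex-optimization modeling tool. A minimal sketch using CVXPY (the data points are hypothetical; `cp.sum_squares` and `cp.multiply` are standard CVXPY atoms):

```python
import numpy as np
import cvxpy as cp

# Hypothetical separable data in R^2 with labels in {-1, +1}
X = np.array([[1.0, 2.0], [2.0, 1.0], [6.0, 7.0], [7.0, 6.0]])
y = np.array([-1.0, -1.0, 1.0, 1.0])

w = cp.Variable(2)
b = cp.Variable()

# Maximize the margin: minimize (1/2)||w||^2 s.t. y_i (w^T x_i - b) >= 1
constraints = [cp.multiply(y, X @ w - b) >= 1]
problem = cp.Problem(cp.Minimize(0.5 * cp.sum_squares(w)), constraints)
problem.solve()

print(w.value, b.value)   # defines the separating hyperplane w^T x - b = 0
```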
[Figure: feasibility of $S_\gamma$ along the $\gamma$-axis, with $\gamma_u$ feasible and $\gamma_l$ infeasible]

Bisection Algorithm:
1. Initialize feasible $\gamma_u = b$
2. Initialize infeasible $\gamma_l = a$
3. Set $\gamma = \frac{\gamma_u + \gamma_l}{2}$
4. If $S_\gamma$ is feasible, set $\gamma_u = \frac{\gamma_u + \gamma_l}{2}$
5. If $S_\gamma$ is infeasible, set $\gamma_l = \frac{\gamma_u + \gamma_l}{2}$
6. $k = k + 1$
7. Go to 3

Then $\gamma^* \in [\gamma_l, \gamma_u]$ and $|\gamma_u - \gamma_l| \le \frac{b - a}{2^k}$.
Bisection with a feasibility oracle also solves the Primary Problem ($\max \gamma : S_\gamma = \emptyset$); a Python sketch follows below.
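A minimal sketch of the bisection loop above, assuming a black-box feasibility oracle for $S_\gamma$ (the example oracle, $\gamma^2 \ge 2$, is a stand-in so the code runs end to end with $\gamma^* = \sqrt{2}$):

```python
def bisection(oracle, gamma_l, gamma_u, k_max=50):
    """Bisection with a feasibility oracle.

    oracle(gamma) returns True if S_gamma is feasible.
    Assumes gamma_u is feasible and gamma_l is infeasible (steps 1-2).
    """
    for _ in range(k_max):
        gamma = (gamma_u + gamma_l) / 2
        if oracle(gamma):
            gamma_u = gamma     # step 4: feasible, move the upper bound down
        else:
            gamma_l = gamma     # step 5: infeasible, move the lower bound up
    return gamma_l, gamma_u     # gamma* lies in this interval of width (b-a)/2^k

# Example: gamma* = sqrt(2), with feasibility oracle gamma^2 >= 2
lo, hi = bisection(lambda g: g * g >= 2.0, gamma_l=0.0, gamma_u=2.0)
```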
Computational Complexity
Question: Is it easy or hard to test feasibility of $(\gamma, x) \in S'$?
Answer: Easy (P) if $S'$ is a Convex Set.
Convex Functions
Definition 3.
An OBJECTIVE FUNCTION or CONSTRAINT function $f$ is convex if
$$f(\lambda x_1 + (1 - \lambda) x_2) \le \lambda f(x_1) + (1 - \lambda) f(x_2) \quad \text{for all } x_1, x_2 \text{ and all } \lambda \in [0, 1].$$
Useful Facts:
• $e^{ax}$ and $\|x\|$ are convex. $x^n$ ($n \ge 1$ or $n \le 0$) and $-\log x$ are convex on $x > 0$.
• If f1 is convex and f2 is convex, then f3 (x) := f1 (x) + f2 (x) is convex.
• A function is convex if $\nabla^2 f(x)$ is positive semidefinite for all $x$ (this test is sketched numerically after this list).
• If f1 , f2 are convex, then f3 (x) := max(f1 (x), f2 (x)) is convex.
• If f1 , f2 are convex, and f1 is increasing, then f3 (x) := f1 (f2 (x)) is convex.
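The Hessian test above can be spot-checked numerically. A small sketch for $f(x) = e^{a^T x}$, whose Hessian $e^{a^T x} a a^T$ is positive semidefinite everywhere (the sampling scheme is just for illustration):

```python
import numpy as np

# f(x) = exp(a^T x) has Hessian exp(a^T x) * a a^T, which is PSD everywhere
a = np.array([1.0, -2.0])

def hessian_f(x):
    return np.exp(a @ x) * np.outer(a, a)

# Sample a few points and verify all Hessian eigenvalues are >= 0
rng = np.random.default_rng(1)
for _ in range(5):
    x = rng.standard_normal(2)
    assert np.all(np.linalg.eigvalsh(hessian_f(x)) >= -1e-9)
```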
Convex Sets
Definition 4.
A FEASIBLE SET $Q$ is convex if for any $x, y \in Q$ and any $\lambda \in [0, 1]$, $\lambda x + (1 - \lambda) y \in Q$.
Facts:
• If f is convex, then {x : f (x) ≤ 0} is convex.
• The intersection of convex sets is convex.
  ◦ If $S_1$ and $S_2$ are convex, then $S_3 := \{x : x \in S_1, x \in S_2\} = S_1 \cap S_2$ is convex.
Descent Algorithms (Why Convex Optimization is Easy)
Unconstrained Optimization
All descent algorithms are iterative, with a search direction ($\Delta x \in \mathbb{R}^n$) and step size ($t \ge 0$):
$$x_{k+1} = x_k + t\, \Delta x$$
1. For convex optimization problems, descent methods always find the globally optimal point.
2. For non-convex optimization, descent algorithms may get stuck at local optima.
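A minimal sketch of such a descent iteration, using the negative gradient as the search direction and a fixed step size (the objective $f(x) = \|x - \mathbf{1}\|^2$ and all parameter values are illustrative):

```python
import numpy as np

def gradient_descent(grad, x0, t=0.1, tol=1e-8, max_iter=10_000):
    """Fixed-step descent x_{k+1} = x_k + t * dx with dx = -grad(x_k)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        dx = -grad(x)                    # steepest-descent search direction
        if np.linalg.norm(dx) < tol:     # gradient nearly zero: stop
            break
        x = x + t * dx
    return x

# Convex example: f(x) = ||x - 1||^2 has its global minimum at x = (1, 1)
x_star = gradient_descent(lambda x: 2 * (x - np.ones(2)), x0=[5.0, -3.0])
```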
Example
Linear Programming (LP)
$$\min_{x \in \mathbb{R}^n} c^T x \quad \text{subject to} \quad Ax \le b, \quad A_0 x = b_0$$

Example instance:
minimize $x_1 + x_2$
subject to $3x_1 + x_2 \ge 3$, $\; x_2 \ge 1$, $\; x_1 \le 4$, $\; -x_1 + 5x_2 \le 20$, $\; x_1 + 4x_2 \le 20$
• EASY: Simplex/Ellipsoid Algorithm (P)
• Can solve for >10,000 variables
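For instance, the example LP above can be handed to an off-the-shelf solver. A sketch using `scipy.optimize.linprog`, with the $\ge$ rows negated into $\le$ form and the solver's default $x \ge 0$ bounds removed:

```python
from scipy.optimize import linprog

c = [1, 1]                         # minimize x1 + x2
A_ub = [[-3, -1],                  # -3x1 -  x2 <= -3   (3x1 + x2 >= 3)
        [ 0, -1],                  #       -  x2 <= -1  (x2 >= 1)
        [ 1,  0],                  #   x1        <=  4
        [-1,  5],                  #  -x1 + 5x2  <= 20
        [ 1,  4]]                  #   x1 + 4x2  <= 20
b_ub = [-3, -1, 4, 20, 20]

# bounds=(None, None) removes linprog's default x >= 0 restriction
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * 2)
print(res.x, res.fun)
```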
Quadratic Programming (QP)
$$\min_{x \in \mathbb{R}^n} x^T Q x + c^T x \quad \text{subject to} \quad Ax \le b$$
• EASY (P): If $Q \succeq 0$.
• HARD (NP-Hard): Otherwise
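A sketch of a convex instance (here $Q \succ 0$, so the problem is in P; the data are hypothetical) using CVXPY, which verifies convexity of the objective before solving:

```python
import numpy as np
import cvxpy as cp

# Hypothetical convex instance: Q is positive definite
Q = np.array([[2.0, 0.5], [0.5, 1.0]])
c = np.array([-1.0, 1.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])

x = cp.Variable(2)
problem = cp.Problem(cp.Minimize(cp.quad_form(x, Q) + c @ x), [A @ x <= b])
problem.solve()   # CVXPY rejects the problem if Q is not PSD

print(x.value, problem.value)
```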
Integer Programming (IP)
$$\min_{x \in \mathbb{R}^n} c^T x \quad \text{subject to} \quad Ax \le b, \quad x_i \in \mathbb{Z}, \; i = 1, \ldots, K$$
• HARD (NP-Hard)
• Very hard in practice.
Next Time: