Optimal Control
Motivation
Bellman's Principle of Optimality
Discrete-Time Systems
Continuous-Time Systems
Steady-State Infinite Horizon Optimal Control
Illustrative Examples
Motivation
Control design based on pole placement often has non-unique solutions
The best locations for the closed-loop eigenvalues are difficult to determine
Optimal control instead minimizes a performance index based on the time response
The control gains result from solving the optimal control problem
Quadratic Functions
Single-variable quadratic function:
f(x) = q x^2 + b x + c
Vector case:
f(x) = x^T Q x + b^T x + c
where Q is a symmetric (Q^T = Q) n \times n matrix, b is an n \times 1 vector, and
x = \begin{bmatrix} x_1 & x_2 & \cdots & x_n \end{bmatrix}^T
2-Variable Quadratic Example
Quadratic function of 2 variables:
f(x) = x_1^2 + 2 x_2^2 + 2 x_1 x_2 + 2 x_1
Matrix representation:
f(x) = \begin{bmatrix} x_1 & x_2 \end{bmatrix} \begin{bmatrix} 1 & 1 \\ 1 & 2 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} + \begin{bmatrix} 2 & 0 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}
Quadratic Optimization
The value of x that minimizes f(x) (denoted x^*) sets
\frac{\partial f}{\partial x} = 2 x^T Q + b^T = 0 \quad \Rightarrow \quad 2 Q x + b = 0
or equivalently
x^* = -\frac{1}{2} Q^{-1} b
which is a minimum provided H = 2Q is positive definite
Positive Definite Matrices
Definition: A symmetric matrix H is said to be positive definite (denoted H > 0) if x^T H x > 0 for any nonzero vector x (positive semidefinite, denoted H \ge 0, if it only satisfies x^T H x \ge 0 for any x).
Positive definiteness (Sylvester) test: H is positive definite iff all the leading principal minors of H are positive:
h_{11} > 0, \quad \begin{vmatrix} h_{11} & h_{12} \\ h_{21} & h_{22} \end{vmatrix} > 0, \quad \ldots, \quad \begin{vmatrix} h_{11} & h_{12} & \cdots & h_{1n} \\ h_{21} & h_{22} & \cdots & h_{2n} \\ \vdots & \vdots & & \vdots \\ h_{n1} & h_{n2} & \cdots & h_{nn} \end{vmatrix} > 0
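The Sylvester test above can be sketched in code. This is an illustrative Python version (standard library only; the matrices are examples, not from the slides) that checks the leading principal minors of a small symmetric matrix:

```python
# Sylvester's test (sketch): H > 0 iff all leading principal minors are positive.
# The determinant is computed by Laplace expansion; fine for small matrices.

def det(M):
    """Determinant by Laplace expansion along the first row."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0.0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

def is_positive_definite(H):
    """Check that every leading principal minor of the symmetric matrix H is positive."""
    n = len(H)
    return all(det([row[:k] for row in H[:k]]) > 0 for k in range(1, n + 1))

print(is_positive_definite([[2.0, 2.0],
                            [2.0, 4.0]]))   # leading minors 2 and 4 are positive
print(is_positive_definite([[1.0, 3.0],
                            [3.0, 1.0]]))   # second minor is 1 - 9 = -8, so not PD
```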
2-Variable Quadratic Optimization Example
f(x) = \begin{bmatrix} x_1 & x_2 \end{bmatrix} \begin{bmatrix} 1 & 1 \\ 1 & 2 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} + \begin{bmatrix} 2 & 0 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}
Optimal solution:
x^* = -\frac{1}{2} Q^{-1} b = -\frac{1}{2} \begin{bmatrix} 1 & 1 \\ 1 & 2 \end{bmatrix}^{-1} \begin{bmatrix} 2 \\ 0 \end{bmatrix} = \begin{bmatrix} -2 \\ 1 \end{bmatrix}
H = 2Q = \begin{bmatrix} 2 & 2 \\ 2 & 4 \end{bmatrix} > 0, since its leading principal minors are 2 > 0 and 2 \cdot 4 - 2 \cdot 2 = 4 > 0
Thus x^* minimizes f(x)
[Figure: contour plot of f(x) in the (x_1, x_2) plane showing the minimum at x^* = (-2, 1)]
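The closed-form minimizer x^* = -(1/2) Q^{-1} b for this example can be checked numerically. A small Python sketch, standard library only, with the 2x2 inverse written out by hand:

```python
# Minimizer of f(x) = x'Qx + b'x + c for the slide's 2-variable example,
# using the closed form x* = -(1/2) * Q^{-1} * b.

Q = [[1.0, 1.0],
     [1.0, 2.0]]
b = [2.0, 0.0]

# Inverse of a 2x2 matrix: (1/det) * [[q22, -q12], [-q21, q11]]
d = Q[0][0] * Q[1][1] - Q[0][1] * Q[1][0]
Qinv = [[ Q[1][1] / d, -Q[0][1] / d],
        [-Q[1][0] / d,  Q[0][0] / d]]

x_star = [-0.5 * (Qinv[i][0] * b[0] + Qinv[i][1] * b[1]) for i in range(2)]
print(x_star)  # [-2.0, 1.0]
```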
Discrete-Time Linear Quadratic (LQ) Optimal Control
Given the discrete-time state equation
x(k+1) = G x(k) + H u(k)
find the input sequence u(k) that minimizes the quadratic performance index
J = \frac{1}{2} x(N)^T S x(N) + \frac{1}{2} \sum_{k=0}^{N-1} \left[ x(k)^T Q x(k) + u(k)^T R u(k) \right]
Principle of Optimality
[Figure: directed graph of numbered intermediate states illustrating alternative paths from x_0 to x_f]
Bellman's Principle of Optimality: at any intermediate state x_i on an optimal path from x_0 to x_f, the remaining policy from x_i to the goal x_f must itself constitute an optimal policy
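The principle can be illustrated with backward induction on a small stage graph. The node names and edge costs below are hypothetical, chosen only to show that the optimal path reuses each intermediate node's optimal tail (Python, standard library only):

```python
# Backward induction on a small directed graph: the optimal cost-to-go is
# computed from the goal 'f' backward, and the optimal path from the start
# follows each node's best successor (Bellman's principle).

# edges[node] = list of (next_node, edge_cost); 'f' plays the role of x_f
edges = {
    '0': [('a', 4), ('b', 2)],
    'a': [('f', 5)],
    'b': [('a', 1), ('f', 7)],
    'f': [],
}

cost_to_go = {'f': 0}
best_next = {}
for node in ['a', 'b', '0']:  # backward (reverse topological) order
    nxt, c = min(((n, c + cost_to_go[n]) for n, c in edges[node]),
                 key=lambda t: t[1])
    best_next[node] = nxt
    cost_to_go[node] = c

# Recover the optimal path from the start by following best_next to the goal
path, node = ['0'], '0'
while node != 'f':
    node = best_next[node]
    path.append(node)
print(path, cost_to_go['0'])  # ['0', 'b', 'a', 'f'] 8
```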
Discrete-Time LQ Formulation
Optimization of the last input (k = N-1), with P_N = S:
J_N = \frac{1}{2} x(N)^T P_N x(N) + \frac{1}{2} x(N-1)^T Q x(N-1) + \frac{1}{2} u(N-1)^T R u(N-1)
Minimizing over u(N-1) yields u(N-1) = -K x(N-1) with K = (R + H^T P_N H)^{-1} H^T P_N G, and the cost-to-go keeps the same quadratic form at the previous step:
J_{N-1} = \frac{1}{2} x(N-1)^T P_{N-1} x(N-1) + \frac{1}{2} u(N-2)^T R u(N-2) + \cdots
where P_{N-1} = (G - HK)^T P_N (G - HK) + K^T R K + Q
Riccati Equation
P_k = (G - H K_k)^T P_{k+1} (G - H K_k) + Q + K_k^T R K_k, \quad k = N-1, \ldots, 1, 0
with gain K_k = (R + H^T P_{k+1} H)^{-1} H^T P_{k+1} G and terminal condition P_N = S
Optimal Cost:
J^* = \frac{1}{2} x(0)^T P_0 x(0)
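The backward recursion is easy to exercise for a scalar system, where G, H, P_k, and K_k are all scalars. A Python sketch with illustrative numbers (g, h, Q, R, S, N are assumptions, not from the slides):

```python
# Backward Riccati recursion from the slides, specialized to a scalar system
# x(k+1) = g x(k) + h u(k):
#   K_k = (R + h P_{k+1} h)^{-1} h P_{k+1} g
#   P_k = (g - h K_k)^2 P_{k+1} + Q + K_k R K_k,   with P_N = S.

g, h = 1.0, 0.5
Q, R, S, N = 1.0, 2.0, 10.0, 50

P = S
gains = []                      # gains[0] = K_{N-1}, ..., gains[-1] = K_0
for k in range(N - 1, -1, -1):
    K = (h * P * g) / (R + h * P * h)
    P = (g - h * K) ** 2 * P + Q + K * R * K
    gains.append(K)

# Optimal cost for initial state x0 is (1/2) x0' P_0 x0
x0 = 1.0
print(gains[0], gains[-1], 0.5 * P * x0 ** 2)
```

For a horizon this long the gain settles to a constant well before k = 0, which motivates the steady-state regulator at the end of the slides.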
Comments on Continuous-Time LQ Solution
Discretized Equation:
\begin{bmatrix} x_1(k+1) \\ x_2(k+1) \end{bmatrix} = \begin{bmatrix} 1 & T \\ 0 & 1 \end{bmatrix} \begin{bmatrix} x_1(k) \\ x_2(k) \end{bmatrix} + \begin{bmatrix} T^2/2 \\ T \end{bmatrix} u(k)
with the weights R, Q and terminal matrix P_N set in the Matlab code below
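As a sanity check on this discretization (G = [[1, T], [0, 1]], H = [T^2/2, T]^T for the double integrator \ddot{y} = u), a Python sketch comparing one exact discrete step with a fine Euler integration of the continuous model; T and the test state/input values are arbitrary:

```python
# One exact discrete step of the double integrator vs. fine Euler integration
# of xdot = [[0,1],[0,0]] x + [0,1]' u over one sample period with u held.

T = 0.1
G = [[1.0, T], [0.0, 1.0]]
H = [T * T / 2.0, T]

def zoh_step(x, u):
    """Exact discrete step x(k+1) = G x(k) + H u(k)."""
    return [G[0][0] * x[0] + G[0][1] * x[1] + H[0] * u,
            G[1][0] * x[0] + G[1][1] * x[1] + H[1] * u]

def euler_step(x, u, substeps=20000):
    """Integrate the continuous model over one sample with u held constant."""
    dt = T / substeps
    x1, x2 = x
    for _ in range(substeps):
        x1, x2 = x1 + dt * x2, x2 + dt * u
    return [x1, x2]

x, u = [1.0, -0.5], 0.3
print(zoh_step(x, u), euler_step(x, u))  # the two should agree closely
```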
System Definition in Matlab
%System Matrices
Ac=[0 1;0 0]; Bc=[0;1];
[G,H]=c2d(Ac,Bc,0.1);
%Performance Index Matrices
N=100;
PN=[10 0;0 0]; Q=[1 0;0 0]; R=2;
Riccati Equation Computation
%Riccati recursion (backward): K(k,:) is the gain applied at step k of the simulation
P=PN; K=zeros(N,2);
for k=N:-1:1
    K(k,:)=(R+H'*P*H)\(H'*P*G);
    P=(G-H*K(k,:))'*P*(G-H*K(k,:))+Q+K(k,:)'*R*K(k,:);
end
%Simulation
x=zeros(N+1,2); x0=[1;0];
x(1,:)=x0';
for k=1:N
    xk=x(k,:)';
    uk=-K(k,:)*xk;
    xkp1=G*xk+H*uk;
    x(k+1,:)=xkp1';
end
%plot results
Plot of P and K Matrices
[Figure: elements of P (P11, P12, P22) and elements of K (K1, K2) versus time (sec)]
Plot of State Variables
[Figure: states x1, x2 and the control input versus time (sec)]
Linear Quadratic Regulator
J = \frac{1}{2} \sum_{k=0}^{\infty} \left[ x(k)^T Q x(k) + u(k)^T R u(k) \right]
Q \ge 0, R > 0, both symmetric
Control law:
u(k) = -K x(k), \quad K = (R + H^T P H)^{-1} H^T P G
Riccati Equation (algebraic, steady-state):
P = (G - HK)^T P (G - HK) + Q + K^T R K
Optimal Cost:
J^* = \frac{1}{2} x(0)^T P x(0)
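The steady-state gain can be obtained by iterating the finite-horizon Riccati recursion until P stops changing. A scalar Python sketch with illustrative numbers (g, h, Q, R are assumptions), followed by a closed-loop stability check:

```python
# Steady-state LQR for a scalar system x(k+1) = g x(k) + h u(k):
# iterate  K = (R + h P h)^{-1} h P g,  P <- (g - hK)^2 P + Q + K R K
# until P reaches a fixed point of the algebraic Riccati equation.

g, h, Q, R = 1.0, 0.5, 1.0, 2.0

P = Q
for _ in range(1000):
    K = (h * P * g) / (R + h * P * h)
    P_new = (g - h * K) ** 2 * P + Q + K * R * K
    if abs(P_new - P) < 1e-12:
        P = P_new
        break
    P = P_new

K = (h * P * g) / (R + h * P * h)

# Closed loop x(k+1) = (g - hK) x(k): the magnitude |g - hK| < 1 means
# u(k) = -K x(k) drives the state to zero.
traj = [1.0]
for _ in range(30):
    traj.append((g - h * K) * traj[-1])
print(P, K, abs(traj[-1]))
```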