Dynamics of Linear Systems
Department of Applied Physics
University of Calcutta
EE 902
COURSE OUTCOMES (CO)
Module 2: State-space representation of discrete-time systems; solving the discrete-time state equation.
[Block diagram: two-integrator simulation diagram ($s^{-1}$ blocks, feedback gains −1 and −2, initial conditions $x_1(0)$, $x_2(0)$); the mode driven directly by $u$ is labeled controllable, the other uncontrollable.]
Linear system Analysis
DEPARTMENT OF APPLIED PHYSICS, UNIVERSITY OF CALCUTTA
MOTIVATION

$$\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \end{bmatrix} = \begin{bmatrix} -2 & 0 \\ 0 & -1 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} + \begin{bmatrix} 3 \\ 1 \end{bmatrix} u(t)$$

$$y = \begin{bmatrix} 1 & 0 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}$$
[Block diagram: simulation diagram of the system above ($s^{-1}$ blocks, feedback gains −2 and −1, input gains 3 and 1, initial conditions $x_1(0)$, $x_2(0)$); $x_1$ is labeled observable and $x_2$ unobservable.]
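The controllability and observability of this example can be checked numerically; a minimal sketch using NumPy, with matrices taken from the equations above:

```python
import numpy as np

# System from the motivation example
A = np.array([[-2.0, 0.0],
              [0.0, -1.0]])
B = np.array([[3.0],
              [1.0]])
C = np.array([[1.0, 0.0]])

n = A.shape[0]

# Controllability matrix [B  AB] and observability matrix [C; CA]
ctrb = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
obsv = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])

print(np.linalg.matrix_rank(ctrb))  # 2 -> controllable
print(np.linalg.matrix_rank(obsv))  # 1 -> x2 does not appear in y: unobservable
```

The rank-1 observability matrix confirms what the diagram shows: $x_2$ never reaches the output $y$.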
INTRODUCTION TO CONTROLLABILITY
A system is said to be controllable at time $t_0$ if it is possible, by means of an unconstrained control vector, to transfer the system from any initial state $x(t_0)$ to any other state in a finite interval of time.
The concept was introduced by Kalman.
Without loss of generality, it can be assumed that the final state is the origin of the state space and that the initial time is zero, i.e. $t_0 = 0$.
Consider $\dot{x}(t) = Ax(t) + Bu(t)$, with the final state taken as the origin at $t = t_1$. The solution of the state equation is

$$x(t) = e^{At}x(0) + \int_0^{t} e^{A(t-\tau)}Bu(\tau)\,d\tau \qquad \text{(ii)}$$

Applying $x(t_1) = 0$,

$$x(0) = -\int_0^{t_1} e^{-A\tau}Bu(\tau)\,d\tau \qquad \text{(iii)}$$

By the Cayley–Hamilton theorem, the matrix exponential can be expressed as a finite sum,

$$e^{-A\tau} = \sum_{k=0}^{n-1}\alpha_k(\tau)A^k \qquad \text{(iv)}$$

so that, writing $\beta_k = \int_0^{t_1}\alpha_k(\tau)u(\tau)\,d\tau$,

$$x(0) = -\sum_{k=0}^{n-1}A^kB\,\beta_k \qquad \text{(vi)}$$

$$x(0) = -\begin{bmatrix} B & AB & \cdots & A^{n-1}B \end{bmatrix}\begin{bmatrix}\beta_0 \\ \beta_1 \\ \vdots \\ \beta_{n-1}\end{bmatrix} \qquad \text{(vii)}$$

It can be implied:
✓ To solve equation (vii) for an arbitrary $x(0)$, the columns of the matrix on the RHS must be linearly independent.
✓ This is possible only if the controllability matrix $\begin{bmatrix} B & AB & \cdots & A^{n-1}B \end{bmatrix}$ has full rank $n$.
✓ Full rank guarantees that every state is linked with the input.
e.g. 2
[Block diagram: simulation diagram ($s^{-1}$ blocks, gains 1, 3, −1, −2) illustrating which state is observable and which is unobservable.]
Thus, to place all eigenvalues of $(A - BK)$ arbitrarily, the system must be completely state controllable.
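This fact can be illustrated numerically. The sketch below uses a hypothetical controllable pair (a double integrator, chosen only for the example) and SciPy's `place_poles` to place the eigenvalues of $(A - BK)$:

```python
import numpy as np
from scipy.signal import place_poles

# Hypothetical controllable pair (double integrator)
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])

# Because (A, B) is completely state controllable, the eigenvalues of
# (A - B K) can be placed anywhere (complex poles in conjugate pairs).
desired = [-2.0, -3.0]
K = place_poles(A, B, desired).gain_matrix

closed_loop_eigs = np.linalg.eigvals(A - B @ K)
print(np.sort(closed_loop_eigs.real))  # approximately [-3, -2]
```

For an uncontrollable pair, the uncontrollable eigenvalues cannot be moved, and pole placement fails.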
PROOF OF SUFFICIENT CONDITION
Simplified version. Assuming the state variables are measurable and available for feedback, the system can be written

$$\dot{x}(t) = A_{n\times n}x(t) + B_{n\times 1}u(t)$$
$$y(t) = C_{1\times n}x(t) + Du(t) \qquad \text{(1)}$$
From the Cayley–Hamilton theorem,
But $\bar{A}$ and $\bar{B}$ depend on the system, so only $K$ can be chosen arbitrarily.
Hence,
$$\tilde{A} = A - K_eC, \qquad \tilde{B} = B$$

$$K_e = \begin{bmatrix}\alpha_n - a_n & \cdots & \alpha_1 - a_1\end{bmatrix}^T \quad \text{(in the transformed, observable-canonical coordinates)}$$

Equivalently, by Ackermann's formula,

$$K_e = \phi(A)\begin{bmatrix} C \\ CA \\ \vdots \\ CA^{n-1} \end{bmatrix}^{-1}\begin{bmatrix} 0 \\ \vdots \\ 0 \\ 1 \end{bmatrix}$$

with $\phi(A) = A^n + \alpha_1 A^{n-1} + \cdots + \alpha_n I$.
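The observer-gain formula above (Ackermann's formula) can be evaluated directly. The sketch below implements it in NumPy; the system matrices and desired poles are hypothetical, chosen only for illustration:

```python
import numpy as np

def acker_observer_gain(A, C, alphas):
    """Observer gain K_e = phi(A) @ inv([C; CA; ...; CA^(n-1)]) @ [0,...,0,1]^T,
    where phi(s) = s^n + alpha_1 s^(n-1) + ... + alpha_n is the desired
    characteristic polynomial, alphas = [alpha_1, ..., alpha_n]."""
    n = A.shape[0]
    # phi(A) = A^n + alpha_1 A^(n-1) + ... + alpha_n I
    phiA = np.linalg.matrix_power(A, n)
    for i, a in enumerate(alphas):
        phiA = phiA + a * np.linalg.matrix_power(A, n - 1 - i)
    # Observability matrix [C; CA; ...; CA^(n-1)]
    obsv = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])
    e_n = np.zeros((n, 1))
    e_n[-1, 0] = 1.0
    return phiA @ np.linalg.solve(obsv, e_n)

# Hypothetical example: place both observer poles at s = -5,
# i.e. phi(s) = s^2 + 10 s + 25  ->  alpha_1 = 10, alpha_2 = 25
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
C = np.array([[1.0, 0.0]])
Ke = acker_observer_gain(A, C, [10.0, 25.0])
print(np.linalg.eigvals(A - Ke @ C))  # both eigenvalues at -5
```

The formula requires the observability matrix to be invertible, i.e. the system must be completely observable.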
The state is partitioned into a measured (known) portion and an unmeasured (unknown) portion. The equation for the unmeasured portion of the state is of order $(n-1)$; the observer estimates this portion, and the estimation error is defined accordingly.
Necessary condition:

$$\left.\frac{d^{n}J_1}{dx^{n}}\right|_{x=x^*} = 0$$

Sufficient condition, at a candidate point $x^*$:

$$\text{RHS positive if } \left.\frac{d^{n+1}J_1}{dx^{n+1}}\right|_{x=x^*} > 0 \;\Rightarrow\; \text{minimization}$$

$$\text{RHS negative if } \left.\frac{d^{n+1}J_1}{dx^{n+1}}\right|_{x=x^*} < 0 \;\Rightarrow\; \text{maximization}$$

Here,

$$\left.\frac{d^{2}J_1}{dx^{2}}\right|_{x=x^*} > 0$$

so $x^* = \begin{bmatrix} 0 \\ 0 \end{bmatrix}$ is a minimum point.
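The derivative test above can be checked numerically with central finite differences. The cost $J_1(x) = x^2$ and candidate point $x^* = 0$ below are a hypothetical scalar illustration:

```python
# Central-difference check of the first- and second-derivative conditions
# on a hypothetical scalar cost J1(x) = x**2 with candidate point x* = 0.
def J1(x):
    return x ** 2

def d1(f, x, h=1e-5):
    # central difference for the first derivative
    return (f(x + h) - f(x - h)) / (2 * h)

def d2(f, x, h=1e-4):
    # central difference for the second derivative
    return (f(x + h) - 2 * f(x) + f(x - h)) / h ** 2

x_star = 0.0
print(d1(J1, x_star))  # ~0: necessary condition holds
print(d2(J1, x_star))  # ~2 > 0: x* is a minimum point
```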
5) Minimize the control effort while the final state $X_f$ reaches close to a constant $C$:

$$J = \frac{1}{2}\left(X_f - C\right)^T S_f \left(X_f - C\right) + \frac{1}{2}\int_{t_0}^{t_f} U^T R U \, dt, \qquad S_f \ge 0,\; R > 0$$
➢ $\dot{\lambda} = \dot{P}X + P(AX + BU)$
➢ $\dot{\lambda} = \dot{P}X + P(AX - BR^{-1}B^T\lambda)$
➢ $\dot{\lambda} = \dot{P}X + P(AX - BR^{-1}B^TPX) = \dot{P}X + P(A - BR^{-1}B^TP)X$
➢ $-(QX + A^T\lambda) = \dot{P}X + P(A - BR^{-1}B^TP)X$
➢ $\left(\dot{P} + PA + A^TP - PBR^{-1}B^TP + Q\right)X = 0$
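In steady state ($\dot{P} = 0$) the bracketed equation above becomes the algebraic Riccati equation. A sketch that solves it with SciPy for hypothetical $A$, $B$, $Q$, $R$ and verifies the residual:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical system and weights
A = np.array([[0.0, 1.0],
              [0.0, -1.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)          # Q >= 0
R = np.array([[1.0]])  # R > 0

# Steady-state solution of  P A + A^T P - P B R^{-1} B^T P + Q = 0
P = solve_continuous_are(A, B, Q, R)

residual = P @ A + A.T @ P - P @ B @ np.linalg.inv(R) @ B.T @ P + Q
print(np.max(np.abs(residual)))  # ~0: P satisfies the Riccati equation

# Optimal state feedback  U = -R^{-1} B^T P X
K = np.linalg.inv(R) @ B.T @ P
```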
Why?
State feedback control designs need the state information for control computation, but all state variables may not be available for feedback:
✓ Non-availability of sensors
✓ Expensive sensors
✓ Noisy measurements
A state observer estimates the state variables based on the measurement of some of the output variables as well as the process information.
• System: $\dot{x} = Ax + Bu$, $\quad y = Cx$
• Observer dynamics (full-order observer): $\dot{\hat{x}} = A\hat{x} + Bu + K_e(y - C\hat{x})$
• Error dynamics: with $e = x - \hat{x}$,

$$\dot{e} = \tilde{A}e = (A - K_eC)e$$
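A quick Euler simulation confirms that the estimation error decays under these error dynamics. The matrices and gain below are hypothetical; the gain is chosen so that both error eigenvalues sit at $-5$:

```python
import numpy as np

# Hypothetical matrices; Ke is chosen so that (A - Ke C) is stable
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
C = np.array([[1.0, 0.0]])
Ke = np.array([[7.0],
               [2.0]])   # places both error eigenvalues at -5

Atil = A - Ke @ C

# Euler integration of  e' = (A - Ke C) e
e = np.array([1.0, -1.0])   # initial estimation error
dt = 1e-3
for _ in range(5000):        # simulate 5 seconds
    e = e + dt * (Atil @ e)

print(np.linalg.norm(e))  # ~0: the estimate converges to the true state
```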
LQ observer

$$\dot{E} = \tilde{A}E = (A - K_eC)E$$
$$K_e^T = R^{-1}CP, \qquad P > 0$$
Information required:
✓ System model
✓ Measurements and their statistical behavior
✓ Statistical models characterizing the process and measurement noise
✓ Initial condition of the states
Assume:
✓ $W(t)$, $V(t)$: zero-mean white noise
✓ $X(0) \sim (\bar{X}_0, P_0)$: $X(0)$ is unknown, but the mean and covariance matrix of the initial condition are known
✓ $P = E\left[\tilde{X}\tilde{X}^T\right]$, where $\tilde{X} = X - \hat{X}$
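The slides concern the continuous-time filter, but the role of the error covariance $P = E[\tilde{X}\tilde{X}^T]$ is easiest to see in a single discrete-time measurement update. Everything below ($X_0$, $P_0$, $C$, $V$, and the measurement $y$) is a hypothetical illustration:

```python
import numpy as np

# One discrete-time Kalman measurement update, sketching how the error
# covariance P = E[X_tilde X_tilde^T] is propagated.  All numbers are
# hypothetical: X0/P0 are the assumed initial mean and covariance,
# C the measurement matrix, V the measurement-noise covariance.
X0 = np.array([[0.0],
               [0.0]])          # prior mean  (X(0) ~ (X0, P0))
P0 = np.eye(2)                  # prior covariance
C = np.array([[1.0, 0.0]])
V = np.array([[0.5]])           # measurement-noise covariance
y = np.array([[1.2]])           # one (hypothetical) measurement

# Kalman gain and measurement update
S = C @ P0 @ C.T + V                 # innovation covariance
K = P0 @ C.T @ np.linalg.inv(S)      # Kalman gain
X1 = X0 + K @ (y - C @ X0)           # updated state estimate
P1 = (np.eye(2) - K @ C) @ P0        # updated error covariance

print(np.trace(P1) < np.trace(P0))  # True: the update reduces uncertainty
```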