Am04 Ch3-10oct04
Dynamic Behavior
ẋ = f(x, u)
y = g(x)        (3.1)
3.1 Solving Differential Equations
A given differential equation may have many solutions. We will most often
be interested in the initial value problem, where x(t) is prescribed at a given
time t0 ∈ R and we wish to find a solution valid for all future time, t > t0 .
We say that x(t) is a solution of the differential equation (3.2) with initial
value x0 ∈ Rn at t0 ∈ R if

  x(t0) = x0   and   ẋ(t) = F(x(t)) for all t ≥ t0.
mq̈ + bq̇ + kq = 0,
ẋ1 = x2
ẋ2 = −(k/m) x1 − (b/m) x2.
[Figure: time response of the spring–mass system, showing the states x1 and x2 versus time (sec).]
Numerical Solutions
One benefit of the computer revolution is that it is very easy to obtain a
numerical solution of a differential equation once the initial condition is
given. A nice consequence is that as soon as we have a model in the form
of (3.2), it is straightforward to generate the behavior of x for different
initial conditions, as we saw briefly in the previous chapter.
Modern computing environments allow simulation of differential equa-
tions as a basic operation. In particular, MATLAB provides several tools
for representing, simulating, and analyzing ordinary differential equations
of the form in equation (3.2). To define an ODE in MATLAB, we define a
function representing the right hand side of equation (3.2):
Each function Fi(x) takes a (column) vector x and returns the ith element
of the differential equation. The first argument to the function sysname, t,
represents the current time and allows for the possibility of time-varying dif-
ferential equations, in which the right hand side of the ODE in equation (3.2)
depends explicitly on time.
ODEs defined in this fashion can be simulated using the MATLAB
ode45 command:
The first argument is the name of the file containing the ODE declaration,
the second argument gives the time interval over which the simulation should
be performed and the final argument gives the vector of initial conditions.
The default action of the ode45 command is to plot the time response of
each of the states of the system.
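Since the MATLAB listings did not survive in this copy, here is a sketch of the same workflow in Python (our translation, using scipy's solve_ivp in place of ode45), applied to the damped spring–mass system above; the parameter values are illustrative, not taken from the text:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameter values for the spring-mass system.
m, b, k = 1.0, 0.2, 1.0

def springmass(t, x):
    # Right hand side of the ODE.  The first argument t is the current
    # time, which allows for time-varying differential equations.
    return [x[1], -(k / m) * x[0] - (b / m) * x[1]]

# Analogue of the ode45 call: integrate over a time interval, starting
# from a vector of initial conditions, here x(0) = (1, 0) on 0 <= t <= 20.
sol = solve_ivp(springmass, (0.0, 20.0), [1.0, 0.0], rtol=1e-8, atol=1e-10)
x1_final, x2_final = sol.y[0, -1], sol.y[1, -1]
```

Plotting sol.t against the rows of sol.y shows the decaying oscillation of the two states.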
Example 3.2 (Balance system). Consider the balance system given in Example 2.1
and reproduced in Figure 3.2a. Suppose that a coworker has designed
Figure 3.2: Balance system: (a) simplified diagram and (b) initial condition
response.
a control law that will hold the position of the system steady in the upright
position at p = 0. The form of the control law is
F = Kx,
where x = (p, θ, ṗ, θ̇) ∈ R4 is the state of the system, F is the input, and
K = (k1 , k2 , k3 , k4 ) is the vector of “gains” for the control law.
The equations of motion for the system, in state space form, are
  dp/dt = ṗ,   dθ/dt = θ̇,

  [ M + m      ml cos θ ] [ p̈ ]   [ −bṗ + ml sin θ θ̇² + Kx ]
  [ ml cos θ   J + ml²  ] [ θ̈ ] = [ −mgl sin θ              ]

  y = ( p, θ ).
We use the following parameters for the system (corresponding roughly to
a human being balanced on a stabilizing cart):
  M = 10 kg,   m = 80 kg,   b = 0.1 N s/m,
  J = 100 kg m²,   l = 1 m,   g = 9.8 m/s²,
  K = ( −1   120   −4   20 ).
This system can now be simulated using MATLAB or a similar numerical
tool. The results are shown in Figure 3.2b, with initial condition x0 =
(1, 0, 0, 0). We see from the plot that after an initial transient, the angle and
position of the system return to zero (and remain there).
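The simulation can be reproduced in outline as follows (a sketch in Python rather than the MATLAB used in the text; the function and variable names are ours):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Parameters from the text.
M, m, b = 10.0, 80.0, 0.1
J, l, g = 100.0, 1.0, 9.8
K = np.array([-1.0, 120.0, -4.0, 20.0])  # control gains, F = K x

def balance(t, x):
    p, th, pd, thd = x
    F = K @ x  # control law
    # Mass matrix and right hand side of the equations of motion.
    Mmat = np.array([[M + m, m * l * np.cos(th)],
                     [m * l * np.cos(th), J + m * l**2]])
    rhs = np.array([-b * pd + m * l * np.sin(th) * thd**2 + F,
                    -m * g * l * np.sin(th)])
    pdd, thdd = np.linalg.solve(Mmat, rhs)
    return [pd, thd, pdd, thdd]

# Initial condition x0 = (1, 0, 0, 0).  We integrate well past the 20 s
# window of the figure so that the slow closed-loop modes have decayed.
sol = solve_ivp(balance, (0.0, 500.0), [1.0, 0.0, 0.0, 0.0],
                rtol=1e-8, atol=1e-10)
p_final, th_final = sol.y[0, -1], sol.y[1, -1]
```

After the transient, both the position and the angle settle to zero, in agreement with Figure 3.2b.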
Figure 3.3: Solutions to the differential equations in Examples 3.3 (a) and 3.4 (b).
Existence and Uniqueness
Without imposing some conditions on the function F, the differential equation
(3.2) may not have a solution for all t, and there is no guarantee that
the solution is unique. We illustrate this with two examples.
Example 3.3 (Finite escape time). Let x ∈ R and consider the differential
equation
  dx/dt = x²        (3.3)
with initial condition x(0) = 1. By differentiation we can verify that the
function
  x(t) = 1/(1 − t)        (3.4)
satisfies the differential equation and it also satisfies the initial condition. A
graph of the solution is given in Figure 3.3a; notice that the solution goes
to infinity as t goes to 1. Thus the solution only exists in the time interval
0 ≤ t < 1.
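A quick numerical check (ours) confirms both properties: the formula satisfies the equation, and it grows without bound as t approaches 1:

```python
import numpy as np

# Check that x(t) = 1/(1 - t) satisfies dx/dt = x^2 on 0 <= t < 1.
t = np.linspace(0.0, 0.9, 91)
x = 1.0 / (1.0 - t)
dxdt = 1.0 / (1.0 - t)**2  # derivative of 1/(1 - t)
max_residual = np.max(np.abs(dxdt - x**2))

# The solution escapes to infinity in finite time: it exceeds any fixed
# bound before t reaches 1.
x_near_one = 1.0 / (1.0 - 0.999999)
```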
Example 3.4 (No unique solution). Let x ∈ R and consider the differential
equation
  dx/dt = √x
with initial condition x(0) = 0. By differentiation we can verify that the
function

  x(t) = 0 if t ≤ a,   x(t) = (t − a)²/4 if t > a
satisfies the differential equation for all values of the parameter a ≥ 0. The
function also satisfies the initial condition. A graph of some of the possible
solutions is given in Figure 3.3b. Notice that in this case there are many
solutions to the differential equation.
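We can verify numerically (our sketch) that every member of this family satisfies the differential equation and the initial condition, for several values of a:

```python
import numpy as np

def x_sol(t, a):
    # Candidate solution: 0 for t <= a, (t - a)^2 / 4 for t > a.
    t = np.asarray(t, dtype=float)
    return np.where(t <= a, 0.0, 0.25 * (t - a)**2)

def residual(a, t):
    # dx/dt - sqrt(x), with the derivative written piecewise.
    dxdt = np.where(t <= a, 0.0, 0.5 * (t - a))
    return dxdt - np.sqrt(x_sol(t, a))

t = np.linspace(0.0, 10.0, 201)
worst = max(np.max(np.abs(residual(a, t))) for a in [0.0, 1.0, 2.5])
ic_value = float(x_sol(0.0, 1.0))  # initial condition x(0) = 0
```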
These simple examples show clearly that there may be difficulties even
with seemingly simple differential equations. Existence and uniqueness can
be guaranteed by requiring that the function F be Lipschitz continuous: for
some fixed c ∈ R,

  ‖F(x) − F(y)‖ ≤ c ‖x − y‖ for all x and y.
3.2 Qualitative Analysis

Phase Portraits
A convenient way to understand the behavior of dynamical systems with
state x ∈ R2 is to plot the phase portrait of the system, briefly introduced
in Chapter 2. We start by introducing the concept of a vector field. For a
system of ordinary differential equations
ẋ = F (x),
Figure 3.4: Vector field plot (a) and phase portrait (b) for a damped oscillator.
These plots were produced using the phaseplot command in MATLAB.
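A vector field plot like Figure 3.4a is produced by evaluating F(x) on a grid of states and drawing an arrow at each grid point. A minimal sketch in Python (ours, with illustrative parameter values; the arrows themselves would be drawn with, e.g., matplotlib's quiver):

```python
import numpy as np

# Damped oscillator, x1' = x2, x2' = -k x1 - b x2, with illustrative
# parameter values (not specified in the text).
k, b = 1.0, 1.0

def F(x1, x2):
    return x2, -k * x1 - b * x2

# Evaluate the vector field on a grid over [-1, 1] x [-1, 1].  Drawing
# an arrow (u, v) at each grid point gives a plot like Figure 3.4a.
x1, x2 = np.meshgrid(np.linspace(-1.0, 1.0, 11), np.linspace(-1.0, 1.0, 11))
u, v = F(x1, x2)
```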
Equilibrium Points
An equilibrium point of a dynamical system represents a stationary condition
for the dynamics. We say that a state xe is an equilibrium point for a
dynamical system

  ẋ = F(x)

if F(xe) = 0. If a dynamical system has an initial condition x(0) = xe, then
it will stay at the equilibrium point: x(t) = xe for all t > 0.¹

[Figure 3.5: Inverted pendulum with mass m and pendulum angle θ.]
Equilibrium points are one of the most important features of a dynami-
cal system since they define the states corresponding to constant operating
conditions. A dynamical system can have zero, one or more equilibrium
points.
Example 3.5 (Inverted pendulum). Consider the inverted pendulum in Figure 3.5.
The state variables are the angle θ = x1 and the angular velocity
dθ/dt = x2 , the control variable is the acceleration u of the pivot, and the
output is the angle θ.
Newton’s law of conservation of angular momentum becomes
  J d²θ/dt² = mgl sin θ + mul cos θ.
Introducing x1 = θ and x2 = dθ/dt the state equations become
  dx/dt = ( x2 , (mgl/J) sin x1 + (ml/J) u cos x1 )
  y = x1.
For simplicity, we assume mgl/J = ml/J = 1, so that our equations become
  dx/dt = ( x2 , sin x1 + u cos x1 )        (3.5)
  y = x1.
¹We take t0 = 0 from here on.
Figure 3.6: Phase portrait for a simple pendulum. The equilibrium points
are marked by solid dots along the x2 = 0 line.
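For the pendulum model (3.5) with u = 0, the equilibria satisfy x2 = 0 and sin x1 = 0, i.e. x1 = nπ, which is what the solid dots in Figure 3.6 mark. A quick check (our sketch):

```python
import numpy as np

def pendulum(x, u=0.0):
    # Right hand side of eq. (3.5).
    x1, x2 = x
    return np.array([x2, np.sin(x1) + u * np.cos(x1)])

# Candidate equilibria: x2 = 0 and x1 an integer multiple of pi.
candidates = [np.array([n * np.pi, 0.0]) for n in range(-2, 3)]
max_norm = max(np.linalg.norm(pendulum(xe)) for xe in candidates)
```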
Limit Cycles
Nonlinear systems can exhibit very rich behavior. Consider the differential
equation

  dx1/dt = −x2 + x1 (1 − x1² − x2²)
  dx2/dt =  x1 + x2 (1 − x1² − x2²).        (3.6)
The phase portrait and time domain solutions are given in Figure 3.7. The
figure shows that the solutions in the phase plane converge to an orbit which
is a circle. In the time domain this corresponds to an oscillatory solution.
Figure 3.7: Phase portrait and time domain simulation for a system with a
limit cycle.
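This behavior is easy to confirm numerically (our sketch, with the signs of (3.6) chosen so that the radial dynamics are ṙ = r(1 − r²), making the unit circle attracting): starting well inside the circle, the radius converges to 1:

```python
import numpy as np
from scipy.integrate import solve_ivp

def limit_cycle(t, x):
    x1, x2 = x
    r2 = x1**2 + x2**2
    # Radial dynamics: r' = r (1 - r^2), so trajectories approach r = 1.
    return [-x2 + x1 * (1.0 - r2), x1 + x2 * (1.0 - r2)]

# Start well inside the unit circle and integrate long enough to converge.
sol = solve_ivp(limit_cycle, (0.0, 30.0), [0.1, 0.0], rtol=1e-9, atol=1e-12)
r_final = np.hypot(sol.y[0, -1], sol.y[1, -1])
```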
In the first equation, rR represents the growth rate of the rabbits, K repre-
sents the maximum population of rabbits (in the absence of foxes), a repre-
sents the interaction term that describes how the rabbits are diminished as
a function of the fox population, and Th is a time constant for prey
consumption. In the second equation, rF represents the growth rate of the
foxes and k represents the fraction of rabbits versus foxes at equilibrium.
The equilibrium points for this system can be determined by setting the
right hand side of the above equations to zero. Letting Re and Fe represent
the equilibrium state, from the second equation we have
Fe = kRe .
Figure 3.8: Phase portrait and time domain simulation for the predator prey
system.
Figure 3.9: Phase portrait and time domain simulation for the system
ẋ1 = x2, ẋ2 = −x1, which has a single stable equilibrium point.
3.3 Stability
The stability of an equilibrium point determines whether or not solutions
nearby the equilibrium point remain nearby, get closer, or get further away.
Definitions
An equilibrium point is stable if initial conditions that start near an equilibrium
point stay near that equilibrium point. Formally, we say that an
equilibrium point xe is stable if for all ε > 0, there exists a δ > 0 such that

  ‖x(0) − xe‖ < δ   implies   ‖x(t) − xe‖ < ε for all t > 0.

Note that this definition does not imply that x(t) gets closer to xe as time
increases, but just that it stays nearby. Furthermore, the value of δ may
depend on ε, so that if we wish to stay very close to the equilibrium point,
we may have to start very, very close (δ ≪ ε). This type of stability is
sometimes called stability “in the sense of Lyapunov” (isL for short).
An example of a stable equilibrium point is shown in Figure 3.9. From
the phase portrait, we see that if we start near the equilibrium then we stay
near the equilibrium. Indeed, for this example, given any ε that defines the
range of possible initial conditions, we can simply choose δ = ε to satisfy
the definition of stability.
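For the system in Figure 3.9 (ẋ1 = x2, ẋ2 = −x1), trajectories are circles, so ‖x(t)‖ remains equal to ‖x(0)‖ for all time. A numerical check (ours):

```python
import numpy as np
from scipy.integrate import solve_ivp

def center(t, x):
    # System of Figure 3.9: x1' = x2, x2' = -x1.
    return [x[1], -x[0]]

sol = solve_ivp(center, (0.0, 20.0), [0.3, 0.4],
                rtol=1e-10, atol=1e-12, max_step=0.1)
radii = np.hypot(sol.y[0], sol.y[1])
max_drift = np.max(np.abs(radii - 0.5))  # initial radius is 0.5
```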
An equilibrium point xe is (locally) asymptotically stable if it is stable in
the sense of Lyapunov and also x(t) → xe as t → ∞ for x(t) sufficiently close
to xe . This corresponds to the case where all nearby trajectories converge
Figure 3.10: Phase portrait and time domain simulation for the system
ẋ1 = x2, ẋ2 = −x1 − x2, which has a single asymptotically stable equilibrium
point.
to the equilibrium point for large time. Figure 3.10 shows an example of an
asymptotically stable equilibrium point. Note from the phase portraits that
not only do all trajectories stay near the equilibrium point at the origin, but
they all approach the origin as t gets large (the directions of the arrows on
the phase plot show the direction in which the trajectories move).
An equilibrium point is unstable if it is not stable. More specifically, we
say that an equilibrium point is unstable if for some ε > 0 there does not
exist a δ > 0 such that if ‖x(0) − xe‖ < δ then ‖x(t) − xe‖ < ε for all t. An
example of an unstable equilibrium point is shown in Figure 3.11.
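For linear systems ẋ = Ax, these three cases can be read off from the real parts of the eigenvalues of A: zero real parts (with no repeated imaginary eigenvalues) give stability, strictly negative real parts give asymptotic stability, and any strictly positive real part gives instability. Checking the systems of Figures 3.9–3.11 (our sketch):

```python
import numpy as np

A_center = np.array([[0.0, 1.0], [-1.0, 0.0]])   # Figure 3.9: stable
A_asympt = np.array([[0.0, 1.0], [-1.0, -1.0]])  # Figure 3.10: asymptotically stable
A_unstab = np.array([[2.0, -1.0], [-1.0, 2.0]])  # Figure 3.11: unstable

re_center = np.linalg.eigvals(A_center).real
re_asympt = np.linalg.eigvals(A_asympt).real
re_unstab = np.linalg.eigvals(A_unstab).real
```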
The definitions above are given without careful description of their do-
main of applicability. More formally, we define an equilibrium point to be
locally stable (or asymptotically stable) if it is stable for all initial conditions
x ∈ Br(xe), where

  Br(xe) = {x : ‖x − xe‖ < r}

is a ball of radius r around the equilibrium point.
Figure 3.11: Phase portrait and time domain simulation for the system
ẋ1 = 2x1 − x2, ẋ2 = −x1 + 2x2, which has a single unstable equilibrium
point.
An equilibrium point which is stable but not asymptotically stable (such as
the one in Figure 3.9) is called a center.
If we assume that the angle x1 = θ remains small, then we can replace sin θ
with θ and cos θ with 1, which gives the approximate system
  dx/dt = ( x2 , x1 + u )
  y = x1.
We see that this system is linear and it can be shown that when x1 is small,
it gives an excellent approximation to the original dynamics.
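The quality of the approximation can be checked numerically (our sketch, with u = 0): for a small initial angle, the solutions of (3.5) and of the linear approximation stay close over a short horizon:

```python
import numpy as np
from scipy.integrate import solve_ivp

def nonlinear(t, x):
    # Pendulum dynamics (3.5) with u = 0.
    return [x[1], np.sin(x[0])]

def linearized(t, x):
    # Small-angle approximation: sin x1 ~ x1.
    return [x[1], x[0]]

t = np.linspace(0.0, 1.0, 101)
x0 = [0.01, 0.0]  # small initial angle
xn = solve_ivp(nonlinear, (0.0, 1.0), x0, t_eval=t,
               rtol=1e-10, atol=1e-12).y
xl = solve_ivp(linearized, (0.0, 1.0), x0, t_eval=t,
               rtol=1e-10, atol=1e-12).y
max_err = np.max(np.abs(xn - xl))
```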
Lyapunov Functions
A powerful tool for determining stability is the use of Lyapunov functions.
A Lyapunov function V : Rn → R is an energy-like function that can be
used to determine stability of a system. Roughly speaking, if we can find a
non-negative function that always decreases along trajectories of the system,
we can conclude that the minimum of the function is a stable equilibrium
point (locally).
To describe this more formally, we start with a few definitions. We say
that a continuous function V(x) is positive definite if V(x) > 0 for all x ≠ 0
and V(0) = 0. We will often write this as V(x) ≻ 0. Similarly, a function
is negative definite if V(x) < 0 for all x ≠ 0 and V(0) = 0. We say that
a function V(x) is positive semidefinite if V(x) ≥ 0 everywhere but V(x)
can be zero at points other than x = 0. We write this as V(x) ⪰ 0
and define negative semidefinite functions analogously.
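As a numerical illustration (our own choice of functions): V1(x) = x1² + x2² is positive definite, while V2(x) = x1² is only positive semidefinite, since it vanishes everywhere on the x2 axis:

```python
import numpy as np

def V1(x):
    # Positive definite: zero only at the origin.
    return x[0]**2 + x[1]**2

def V2(x):
    # Positive semidefinite: zero along the entire x2 axis.
    return x[0]**2

rng = np.random.default_rng(0)
samples = rng.normal(size=(1000, 2))
V1_positive = all(V1(x) > 0 for x in samples)  # positive off the origin
V2_on_axis = V2(np.array([0.0, 1.0]))          # zero away from the origin
```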
To see the difference between a positive definite function and a positive
semi-definite function, suppose that x ∈ R2 and let
ẋ = F (x) x ∈ Rn .
[Figure: phase portrait in the (q, q̇) plane.]
Lasalle’s Invariance Principle
Lasalle’s theorem enables one to conclude asymptotic stability of an equi-
librium point even when one can’t find a V (x) such that V̇ (x, t) is locally
negative definite. However, it applies only to time-invariant or periodic sys-
tems. We will deal with the time-invariant case and begin by introducing
a few more definitions. We denote the solution trajectories of the time-
invariant system
ẋ = F (x) (3.7)
as s(t, x0 , t0 ), which is the solution of equation (3.7) at time t starting from
x0 at t0 .
Definition 3.1. The set S ⊂ Rn is the ω limit set of a trajectory s( · , x0 , t0 )
if for every y ∈ S, there exists a strictly increasing sequence of times tn such
that
s(tn , x0 , t0 ) → y
as tn → ∞.
Definition 3.2. The set M ⊂ Rn is said to be a (positively) invariant set
if for all y ∈ M and t0 ≥ 0, we have
s(t, y, t0 ) ∈ M ∀t ≥ t0 .
It may be proved that the ω limit set of every trajectory is closed and
invariant. We may now state Lasalle’s principle.
Theorem 3.2 (Lasalle’s principle). Let V : Rn → R be a locally positive
definite function such that on the compact set Ωc = {x ∈ Rn : V (x) ≤ c} we
have V̇ (x) ≤ 0. Define
S = {x ∈ Ωc : V̇ (x) = 0}.
As t → ∞, the trajectory tends to the largest invariant set inside S; i.e., its
ω limit set is contained inside the largest invariant set in S. In particular,
if S contains no invariant sets other than x = 0, then 0 is asymptotically
stable.

²Fortunately, there are systematic tools available for searching for special classes of
Lyapunov functions, such as sums of squares.
A global version of the preceding theorem may also be stated. An
application of Lasalle’s principle is as follows:
Example 3.9 (Nonlinear spring mass system with damper). Consider a non-
linear, damped spring mass system with dynamics
ẋ1 = x2
ẋ2 = −f (x2 ) − g(x1 )
Here f and g are smooth functions modeling the friction in the damper and
restoring force of the spring, respectively. We will assume that f, g are both
passive; that is,
σf (σ) ≥ 0 ∀σ ∈ [−σ0 , σ0 ]
σg(σ) ≥ 0 ∀σ ∈ [−σ0 , σ0 ]
and equality is only achieved when σ = 0.
Consider the Lyapunov function candidate
  V(x) = x2²/2 + ∫₀^x1 g(σ) dσ,
g(x10 ) = 0 =⇒ x10 = 0.
Thus, the largest invariant set inside Ωc ∩ {x1 , x2 : V̇ = 0} is the origin and,
by Lasalle’s principle, the origin is locally asymptotically stable.
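A simulation illustrates the conclusion (our sketch, with the hypothetical passive choices f(σ) = σ and g(σ) = σ + σ³): along the trajectory V is nonincreasing, since V̇ = −x2 f(x2) ≤ 0, and the state converges to the origin:

```python
import numpy as np
from scipy.integrate import solve_ivp

def f(s):
    # Damper characteristic: passive, sigma * f(sigma) >= 0.
    return s

def g(s):
    # Spring characteristic: passive, sigma * g(sigma) >= 0.
    return s + s**3

def spring_mass(t, x):
    return [x[1], -f(x[1]) - g(x[0])]

def V(x1, x2):
    # V = x2^2/2 + integral of g from 0 to x1 = x2^2/2 + x1^2/2 + x1^4/4.
    return 0.5 * x2**2 + 0.5 * x1**2 + 0.25 * x1**4

sol = solve_ivp(spring_mass, (0.0, 40.0), [1.0, 0.0],
                rtol=1e-9, atol=1e-12, max_step=0.05)
Vvals = V(sol.y[0], sol.y[1])
V_increase = np.max(np.diff(Vvals))  # should be (numerically) <= 0
final_norm = np.hypot(sol.y[0, -1], sol.y[1, -1])
```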
3.4 Shaping Dynamic Behavior
We see that we can control the equilibrium point along a one-dimensional
curve given by the solution to this equation as a function of Kd > 0.
u = K(xe − x) (3.8)
Step Response
A particularly common form of input is a step input, which represents an
abrupt change in input from one value to another. A unit step is defined as
  u(t) = 0 if t = 0,   u(t) = 1 if t > 0.
The step response of the system (3.10) is defined as the output y(t) starting
from zero initial condition (or the appropriate equilibrium point) and given
a step input. We note that the step input is discontinuous and hence is not
physically implementable. However, it is a convenient abstraction that is
widely used in studying input/output systems.
A sample step response is shown in Figure 3.14. Several terms are used
when referring to a step response:
Steady state value The steady state value of a step response is the final
level of the output, assuming it converges.
Rise time The rise time is the amount of time required for the signal to
go from 5% of its final value to 95% of its final value. It is possible
to define other limits as well, but in this book we shall use these
percentages unless otherwise indicated.
Overshoot The overshoot is the percentage of the final value by which the
    signal initially rises above the final value. This usually assumes that
    later peaks do not exceed the final value by more than this initial
    transient; otherwise the term can be ambiguous.
Settling time The settling time is the amount of time required for the signal
to stay within 5% of its final value for all future times.
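These quantities are straightforward to compute from simulation data. A sketch (ours, for a hypothetical second-order system ÿ + 2ζω ẏ + ω²y = ω²u with ζ = 0.5 and ω = 1, using the 5%/95% conventions above):

```python
import numpy as np
from scipy.integrate import solve_ivp

zeta, omega = 0.5, 1.0

def step_system(t, x):
    # Second-order system driven by a unit step, u = 1 for t > 0.
    y, yd = x
    return [yd, -2 * zeta * omega * yd - omega**2 * y + omega**2]

t = np.linspace(0.0, 30.0, 3001)
sol = solve_ivp(step_system, (0.0, 30.0), [0.0, 0.0], t_eval=t,
                rtol=1e-9, atol=1e-12)
y = sol.y[0]

steady = y[-1]                                    # steady state value
overshoot = 100 * (np.max(y) - steady) / steady   # percent overshoot
# Rise time: from 5% to 95% of the final value.
t05 = t[np.argmax(y >= 0.05 * steady)]
t95 = t[np.argmax(y >= 0.95 * steady)]
rise_time = t95 - t05
# Settling time: after this time the signal stays within 5% of its
# final value.
outside = np.nonzero(np.abs(y - steady) > 0.05 * steady)[0]
settling_time = t[outside[-1]] if outside.size else 0.0
```

For this lightly damped system the overshoot is roughly 16%, consistent with the classical formula exp(−πζ/√(1 − ζ²)).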
Frequency Response
The frequency response of an input/output system measures the way in
which the system responds to a sinusoidal excitation on one of its inputs. As
we have already seen (and will see in more detail later), for linear systems the
particular solution associated with a sinusoidal excitation is itself a sinusoid
at the same frequency. Hence we can compare the magnitude and phase of
the output sinusoid to those of the input. More generally, if a system
has a sinusoidal output response at the same frequency as the input forcing,
we can speak of the frequency response.
Frequency response is typically measured in terms of gain and phase at a
given forcing frequency, as illustrated in Figure 3.15. The gain of the system at
a given frequency is given by the ratio of the amplitude of the output to that
of the input. The phase is given by the fraction of a period by which the
output differs from the input. Thus, if we have an input u = Au sin(ωt + ψ)
and output y = Ay sin(ωt + φ), we write

  gain(ω) = Ay/Au ,   phase(ω) = φ − ψ.
If the phase is positive, we say that the output “leads” the input, otherwise
we say it “lags” the input.
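As a numerical illustration (our sketch, for the hypothetical first-order system ẏ = −y + u): drive the system with u = sin(ωt), discard the transient, and fit the steady-state output to a sinusoid; the fitted gain and phase agree with the analytic values 1/√(1 + ω²) and −arctan ω:

```python
import numpy as np
from scipy.integrate import solve_ivp

omega = 2.0  # forcing frequency

def forced(t, y):
    # First-order system y' = -y + u with u = Au sin(omega t),
    # Au = 1 and psi = 0.
    return [-y[0] + np.sin(omega * t)]

t = np.linspace(0.0, 50.0, 5001)
sol = solve_ivp(forced, (0.0, 50.0), [0.0], t_eval=t,
                rtol=1e-10, atol=1e-12)
y = sol.y[0]

# Fit y ~ a sin(omega t) + b cos(omega t) on the steady-state tail.
tail = t > 25.0
A = np.column_stack([np.sin(omega * t[tail]), np.cos(omega * t[tail])])
(a, b), *_ = np.linalg.lstsq(A, y[tail], rcond=None)

gain = np.hypot(a, b)     # Ay / Au, since Au = 1
phase = np.arctan2(b, a)  # phi - psi, since psi = 0
```

Here the output lags the input (negative phase), as expected for a stable first-order system.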
3.6 Exercises