Excerpted from "A Mathematical Introduction to Robotic Manipulation"
by R. M. Murray, Z. Li and S. S. Sastry
4  Lyapunov Stability Theory
In this section we review the tools of Lyapunov stability theory. These
tools will be used in the next section to analyze the stability properties
of a robot controller. We present a survey of the results that we shall
need in the sequel, with no proofs. The interested reader should consult
a standard text, such as Vidyasagar [?] or Khalil [?], for details.
4.1  Basic definitions
Consider a dynamical system which satisfies
ẋ = f(x, t),    x(t0) = x0,    x ∈ Rn.    (4.31)
We will assume that f (x, t) satisfies the standard conditions for the existence and uniqueness of solutions. Such conditions are, for instance, that
f (x, t) is Lipschitz continuous with respect to x, uniformly in t, and piecewise continuous in t. A point x∗ ∈ Rn is an equilibrium point of (4.31) if
f (x∗ , t) ≡ 0. Intuitively and somewhat crudely speaking, we say an equilibrium point is locally stable if all solutions which start near x∗ (meaning
that the initial conditions are in a neighborhood of x∗ ) remain near x∗
for all time. The equilibrium point x∗ is said to be locally asymptotically
stable if x∗ is locally stable and, furthermore, all solutions starting near
x∗ tend towards x∗ as t → ∞. We say “somewhat crudely” because the
time-varying nature of equation (4.31) introduces all kinds of additional
subtleties. Nonetheless, it is intuitive that a pendulum has a locally stable equilibrium point when the pendulum is hanging straight down and
an unstable equilibrium point when it is pointing straight up. If the pendulum is damped, the stable equilibrium point is locally asymptotically
stable.
By shifting the origin of the system, we may assume that the equilibrium point of interest occurs at x∗ = 0. If multiple equilibrium points
exist, we will need to study the stability of each by appropriately shifting
the origin.
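As a worked illustration (not part of the original text): if x∗ ≠ 0 is an equilibrium point of (4.31), the change of coordinates z = x − x∗ produces a system whose equilibrium of interest sits at the origin,
\[
z = x - x^*, \qquad \dot z = f(z + x^*, t) =: \tilde f(z, t), \qquad \tilde f(0, t) = f(x^*, t) \equiv 0 ,
\]
so there is no loss of generality in studying the origin.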
Definition 4.1. Stability in the sense of Lyapunov
The equilibrium point x∗ = 0 of (4.31) is stable (in the sense of Lyapunov)
at t = t0 if for any ǫ > 0 there exists a δ(t0 , ǫ) > 0 such that
‖x(t0)‖ < δ   =⇒   ‖x(t)‖ < ǫ,   ∀t ≥ t0.    (4.32)
Lyapunov stability is a very mild requirement on equilibrium points.
In particular, it does not require that trajectories starting close to the
origin tend to the origin asymptotically. Also, stability is defined at a
time instant t0 . Uniform stability is a concept which guarantees that the
equilibrium point is not losing stability. We insist that for a uniformly
stable equilibrium point x∗, the δ in Definition 4.1 not be a function of
t0 , so that equation (4.32) may hold for all t0 . Asymptotic stability is
made precise in the following definition:
Definition 4.2. Asymptotic stability
An equilibrium point x∗ = 0 of (4.31) is asymptotically stable at t = t0 if
1. x∗ = 0 is stable, and
2. x∗ = 0 is locally attractive; i.e., there exists δ(t0 ) such that
‖x(t0)‖ < δ   =⇒   lim_{t→∞} x(t) = 0.    (4.33)
As in the previous definition, asymptotic stability is defined at t0 .
Uniform asymptotic stability requires:
1. x∗ = 0 is uniformly stable, and
2. x∗ = 0 is uniformly locally attractive; i.e., there exists δ independent of t0 for which equation (4.33) holds. Further, it is required
that the convergence in equation (4.33) is uniform.
Finally, we say that an equilibrium point is unstable if it is not stable.
This is less of a tautology than it sounds and the reader should be sure he
or she can negate the definition of stability in the sense of Lyapunov to get
a definition of instability. In robotics, we are almost always interested in
uniformly asymptotically stable equilibria. If we wish to move the robot
to a point, we would like to actually converge to that point, not merely
remain nearby. Figure 4.7 illustrates the difference between stability in
the sense of Lyapunov and asymptotic stability.
Definitions 4.1 and 4.2 are local definitions; they describe the behavior
of a system near an equilibrium point. We say an equilibrium point x∗
is globally stable if it is stable for all initial conditions x0 ∈ Rn . Global
stability is very desirable, but in many applications it can be difficult
to achieve. We will concentrate on local stability theorems and indicate
where it is possible to extend the results to the global case.
Figure 4.7: Phase portraits (x versus ẋ) for stable and unstable equilibrium
points: (a) stable in the sense of Lyapunov, (b) asymptotically stable,
(c) unstable (saddle).
Notions of uniformity are only important for time-varying systems. Thus, for
time-invariant systems, stability implies uniform stability and asymptotic
stability implies uniform asymptotic stability.
It is important to note that the definitions of asymptotic stability do
not quantify the rate of convergence. There is a strong form of stability
which demands an exponential rate of convergence:
Definition 4.3. Exponential stability, rate of convergence
The equilibrium point x∗ = 0 is an exponentially stable equilibrium point
of (4.31) if there exist constants m, α > 0 and ǫ > 0 such that
‖x(t)‖ ≤ m e^{−α(t−t0)} ‖x(t0)‖    (4.34)
for all ‖x(t0)‖ ≤ ǫ and t ≥ t0. The largest constant α which may be
utilized in (4.34) is called the rate of convergence.
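As a simple illustration (not part of the original text), the scalar system ẋ = −a x with a > 0 is exponentially stable, and its explicit solution exhibits the bound (4.34) with m = 1 and rate of convergence α = a:
\[
x(t) = e^{-a (t - t_0)}\, x(t_0) \quad\Longrightarrow\quad \|x(t)\| = e^{-a (t - t_0)} \|x(t_0)\| .
\]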
Exponential stability is a strong form of stability; in particular, it implies uniform asymptotic stability. Exponential convergence is important
in applications because it can be shown to be robust to perturbations and
is essential for the consideration of more advanced control algorithms,
such as adaptive ones. A system is globally exponentially stable if the
bound in equation (4.34) holds for all x0 ∈ Rn . Whenever possible, we
shall strive to prove global, exponential stability.
4.2  The direct method of Lyapunov
Lyapunov’s direct method (also called the second method of Lyapunov)
allows us to determine the stability of a system without explicitly integrating the differential equation (4.31). The method is a generalization
of the idea that if there is some “measure of energy” in a system, then
we can study the rate of change of the energy of the system to ascertain
stability. To make this precise, we need to define exactly what one means
by a “measure of energy.” Let Bǫ be a ball of size ǫ around the origin,
Bǫ = {x ∈ Rn : ‖x‖ < ǫ}.
Definition 4.4. Locally positive definite functions (lpdf )
A continuous function V : Rn ×R+ → R is a locally positive definite function if for some ǫ > 0 and some continuous, strictly increasing function
α : R+ → R,
V(0, t) = 0  and  V(x, t) ≥ α(‖x‖)   ∀x ∈ Bǫ, ∀t ≥ 0.    (4.35)
A locally positive definite function is locally like an energy function.
Functions which are globally like energy functions are called positive definite functions:
Definition 4.5. Positive definite functions (pdf )
A continuous function V : Rn × R+ → R is a positive definite function if
it satisfies the conditions of Definition 4.4 and, additionally, α(p) → ∞
as p → ∞.
To bound the energy function from above, we define decrescence as
follows:
Definition 4.6. Decrescent functions
A continuous function V : Rn × R+ → R is decrescent if for some ǫ > 0
and some continuous, strictly increasing function β : R+ → R,
V(x, t) ≤ β(‖x‖)   ∀x ∈ Bǫ, ∀t ≥ 0.    (4.36)
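As a simple illustration (not part of the original text), the function below is positive definite and decrescent, with bounding functions α and β as indicated:
\[
V(x, t) = (1 + \sin^2 t)\, \|x\|^2, \qquad \alpha(p) = p^2 \;\le\; V(x,t) \;\le\; 2 p^2 = \beta(p) \quad \text{for } p = \|x\| .
\]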
Using these definitions, the following theorem allows us to determine stability for a system by studying an appropriate energy function.
Roughly, this theorem states that when V (x, t) is a locally positive definite function and V̇ (x, t) ≤ 0 then we can conclude stability of the equilibrium point. The time derivative of V is taken along the trajectories of
the system:
V̇ |_{ẋ=f(x,t)} = ∂V/∂t + (∂V/∂x) f.
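As a computational aside (not part of the original text), this derivative can be formed symbolically. The sketch below uses Python with sympy for the assumed toy system ẋ1 = x2, ẋ2 = −x1 − x2 and the candidate V = (x1² + x2²)/2; the example system and all names are illustrative choices only.

    import sympy as sp

    t = sp.symbols("t")
    x1, x2 = sp.Function("x1")(t), sp.Function("x2")(t)

    # Assumed toy system xdot = f(x): x1' = x2, x2' = -x1 - x2.
    f = sp.Matrix([x2, -x1 - x2])

    # Candidate Lyapunov function V(x) = (x1^2 + x2^2)/2.
    V = (x1**2 + x2**2) / 2

    # Vdot along trajectories: differentiate V in t, then substitute the dynamics.
    Vdot = V.diff(t).subs({x1.diff(t): f[0], x2.diff(t): f[1]})
    print(sp.simplify(Vdot))   # prints -x2(t)**2, which is <= 0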
Table 4.1: Summary of the basic theorem of Lyapunov.

       Conditions on V (x, t)    Conditions on −V̇ (x, t)    Conclusions
   1   lpdf                      ≥ 0 locally                 Stable
   2   lpdf, decrescent          ≥ 0 locally                 Uniformly stable
   3   lpdf, decrescent          lpdf                        Uniformly asymptotically stable
   4   pdf, decrescent           pdf                         Globally uniformly asymptotically stable
In what follows, by V̇ we will mean V̇ |_{ẋ=f(x,t)}.
Theorem 4.4. Basic theorem of Lyapunov
Let V (x, t) be a non-negative function with derivative V̇ along the trajectories of the system.
1. If V (x, t) is locally positive definite and V̇ (x, t) ≤ 0 locally in x and
for all t, then the origin of the system is locally stable (in the sense
of Lyapunov).
2. If V (x, t) is locally positive definite and decrescent, and V̇ (x, t) ≤ 0
locally in x and for all t, then the origin of the system is uniformly
locally stable (in the sense of Lyapunov).
3. If V (x, t) is locally positive definite and decrescent, and −V̇ (x, t) is
locally positive definite, then the origin of the system is uniformly
locally asymptotically stable.
4. If V (x, t) is positive definite and decrescent, and −V̇ (x, t) is positive definite, then the origin of the system is globally uniformly
asymptotically stable.
The conditions in the theorem are summarized in Table 4.1.
Theorem 4.4 gives sufficient conditions for the stability of the origin
of a system. It does not, however, give a prescription for determining
the Lyapunov function V (x, t). Since the theorem only gives sufficient
conditions, the search for a Lyapunov function establishing stability of
an equilibrium point could be arduous. However, it is a remarkable fact
that the converse of Theorem 4.4 also exists: if an equilibrium point is
stable, then there exists a function V (x, t) satisfying the conditions of
the theorem. However, the utility of this and other converse theorems is
limited by the lack of a computable technique for generating Lyapunov
functions.
Theorem 4.4 also stops short of giving explicit rates of convergence of
solutions to the equilibrium. It may be modified to do so in the case of
exponentially stable equilibria.
Theorem 4.5. Exponential stability theorem
x∗ = 0 is an exponentially stable equilibrium point of ẋ = f (x, t) if and
only if there exists an ǫ > 0 and a function V (x, t) which satisfies
α1 ‖x‖² ≤ V(x, t) ≤ α2 ‖x‖²
V̇ |_{ẋ=f(x,t)} ≤ −α3 ‖x‖²
‖(∂V/∂x)(x, t)‖ ≤ α4 ‖x‖
for some positive constants α1, α2, α3, α4, and ‖x‖ ≤ ǫ.
The rate of convergence for a system satisfying the conditions of Theorem 4.5 can be determined from the proof of the theorem [?]. It can be
shown that
m ≤ (α2/α1)^{1/2},    α ≥ α3/(2 α2)
are bounds in equation (4.34). The equilibrium point x∗ = 0 is globally
exponentially stable if the bounds in Theorem 4.5 hold for all x.
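As an illustration (not part of the original text): for a linear time-invariant system ẋ = Ax with A Hurwitz, the quadratic function V(x) = xᵀPx with AᵀP + PA = −Q satisfies the conditions of Theorem 4.5 with α1 = λmin(P), α2 = λmax(P), α3 = λmin(Q), and α4 = 2 λmax(P). The Python sketch below checks this numerically for an assumed A and reports the resulting bounds on m and α; the matrices and numbers are illustrative only.

    import numpy as np
    from scipy.linalg import solve_continuous_lyapunov

    # Assumed stable example system xdot = A x.
    A = np.array([[0.0, 1.0],
                  [-2.0, -3.0]])
    Q = np.eye(2)

    # Solve A^T P + P A = -Q (scipy solves A X + X A^H = Q, so pass A^T and -Q).
    P = solve_continuous_lyapunov(A.T, -Q)

    a1, a2 = np.linalg.eigvalsh(P).min(), np.linalg.eigvalsh(P).max()
    a3 = np.linalg.eigvalsh(Q).min()
    a4 = 2 * a2   # since dV/dx = 2 x^T P, so ||dV/dx|| <= 2 lambda_max(P) ||x||

    # Bounds from the proof of Theorem 4.5: m <= sqrt(a2/a1), alpha >= a3/(2 a2).
    print("m <=", np.sqrt(a2 / a1), "  alpha >=", a3 / (2 * a2))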
4.3  The indirect method of Lyapunov
The indirect method of Lyapunov uses the linearization of a system to
determine the local stability of the original system. Consider the system
ẋ = f (x, t)
(4.37)
with f (0, t) = 0 for all t ≥ 0. Define
A(t) = (∂f/∂x)(x, t) |_{x=0}    (4.38)
to be the Jacobian matrix of f (x, t) with respect to x, evaluated at the
origin. It follows that for each fixed t, the remainder
f1 (x, t) = f (x, t) − A(t)x
approaches zero as x approaches zero. However, the remainder may not
approach zero uniformly. For this to be true, we require the stronger
condition that
lim_{‖x‖→0} sup_{t≥0}  ‖f1(x, t)‖ / ‖x‖ = 0.    (4.39)
If equation (4.39) holds, then the system
ż = A(t)z
(4.40)
is referred to as the (uniform) linearization of equation (4.31) about the
origin. When the linearization exists, its stability determines the local
stability of the original nonlinear equation.
Theorem 4.6. Stability by linearization
Consider the system (4.37) and assume
lim_{‖x‖→0} sup_{t≥0}  ‖f1(x, t)‖ / ‖x‖ = 0.
Further, let A(·) defined in equation (4.38) be bounded. If 0 is a uniformly
asymptotically stable equilibrium point of (4.40) then it is a locally uniformly asymptotically stable equilibrium point of (4.37).
The preceding theorem requires uniform asymptotic stability of the
linearized system to prove uniform asymptotic stability of the nonlinear
system. Counterexamples to the theorem exist if the linearized system is
not uniformly asymptotically stable.
If the system (4.37) is time-invariant, then the indirect method says
that if the eigenvalues of
A = (∂f/∂x)(x) |_{x=0}
are in the open left half complex plane, then the origin is asymptotically
stable.
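A minimal computational sketch of this eigenvalue test (my own illustrative example, not from the original text), using a damped-pendulum-like system ẋ1 = x2, ẋ2 = −sin x1 − x2:

    import sympy as sp

    x1, x2 = sp.symbols("x1 x2")

    # Assumed time-invariant example system.
    f = sp.Matrix([x2, -sp.sin(x1) - x2])

    # Jacobian of f with respect to x, evaluated at the origin.
    A = f.jacobian([x1, x2]).subs({x1: 0, x2: 0})

    eigs = list(A.eigenvals())
    print(eigs)                                   # eigenvalues of the linearization
    print(all(sp.re(lam) < 0 for lam in eigs))    # True => origin asymptotically stable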
Theorem 4.6 shows that global uniform asymptotic stability of the
linearization implies local uniform asymptotic stability of the original
nonlinear system. The estimates provided by the proof of the theorem
can be used to give a (conservative) bound on the domain of attraction
of the origin. The systematic estimation of bounds on the regions of
attraction of equilibrium points of nonlinear systems is an important area
of research and involves searching for the “best” Lyapunov functions.
4.4  Examples
We now illustrate the use of the stability theorems given above on a few
examples.
Example 4.5. Linear harmonic oscillator
Consider a damped harmonic oscillator, as shown in Figure 4.8. The
dynamics of the system are given by the equation
M q̈ + B q̇ + Kq = 0,
(4.41)
where M , B, and K are all positive quantities. As a state space equation
we rewrite equation (4.41) as
d/dt (q, q̇) = (q̇, −(K/M)q − (B/M)q̇).    (4.42)
Figure 4.8: Damped harmonic oscillator: a mass M attached to a spring with
stiffness K and a damper with coefficient B; q denotes the displacement of
the mass.
Define x = (q, q̇) as the state of the system.
Since this system is a linear system, we can determine stability by
examining the poles of the system. The Jacobian matrix for the system
is
A = [ 0  1 ; −K/M  −B/M ],
which has a characteristic equation
λ2 + (B/M )λ + (K/M ) = 0.
The solutions of the characteristic equation are
λ = (−B ± √(B² − 4KM)) / (2M),
which always have negative real parts, and hence the system is (globally)
exponentially stable.
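This conclusion is easy to check numerically; the sketch below uses the assumed illustrative values M = 1, B = 0.5, K = 2 (these numbers are not from the text).

    import numpy as np

    M, B, K = 1.0, 0.5, 2.0   # assumed illustrative parameters

    A = np.array([[0.0, 1.0],
                  [-K / M, -B / M]])

    eigs = np.linalg.eigvals(A)
    print(eigs)                      # roots of lambda^2 + (B/M) lambda + K/M = 0
    print(np.all(eigs.real < 0))     # True: both poles lie in the open left half plane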
We now try to apply Lyapunov’s direct method to determine exponential stability. The “obvious” Lyapunov function to use in this context
is the energy of the system,
V(x, t) = (1/2) M q̇² + (1/2) K q².    (4.43)
Taking the derivative of V along trajectories of the system (4.41) gives
V̇ = M q̇ q̈ + K q q̇ = −B q̇².    (4.44)
The function −V̇ is quadratic but not locally positive definite, since it
does not depend on q, and hence we cannot conclude exponential stability. It is still possible to conclude asymptotic stability using Lasalle’s
invariance principle (described in the next section), but this is obviously
conservative since we already know that the system is exponentially stable.
Figure 4.9: Flow of the damped harmonic oscillator in the (q, q̇) plane. The
dashed lines are the level sets of the Lyapunov function defined by (a) the
total energy and (b) a skewed modification of the energy.
The reason that Lyapunov’s direct method fails is illustrated in Figure 4.9a, which shows the flow of the system superimposed with the level
sets of the Lyapunov function. The level sets of the Lyapunov function
become tangent to the flow when q̇ = 0, and hence it is not a valid
Lyapunov function for determining exponential stability.
To fix this problem, we skew the level sets slightly, so that the flow of
the system crosses the level surfaces transversely. Define
V(x, t) = (1/2) (q, q̇) [ K  ǫM ; ǫM  M ] (q, q̇)ᵀ = (1/2) M q̇² + (1/2) K q² + ǫ M q q̇,
where ǫ is a small positive constant such that V is still positive definite.
The derivative of the Lyapunov function becomes
V̇ = M q̇ q̈ + K q q̇ + ǫ M q̇² + ǫ M q q̈
   = (−B + ǫM) q̇² + ǫ(−K q² − B q q̇)
   = −(q, q̇) [ ǫK  ǫB/2 ; ǫB/2  B − ǫM ] (q, q̇)ᵀ.
The function V̇ can be made negative definite for ǫ chosen sufficiently
small (see Exercise 11) and hence we can conclude exponential stability.
The level sets of this Lyapunov function are shown in Figure 4.9b.
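To get a feel for how small ǫ must be, one can check positive definiteness of the matrix appearing in −V̇ numerically. The sketch below (not from the original text, and using the same assumed values M = 1, B = 0.5, K = 2 as before) scans a few values of ǫ; it is only an illustration, not the proof requested in Exercise 11.

    import numpy as np

    M, B, K = 1.0, 0.5, 2.0   # assumed illustrative parameters

    def q_matrix(eps):
        # -Vdot = (q, qdot) Q(eps) (q, qdot)^T with Q(eps) as derived above.
        return np.array([[eps * K, eps * B / 2],
                         [eps * B / 2, B - eps * M]])

    for eps in [0.1, 0.4, 0.5]:
        pos_def = np.all(np.linalg.eigvalsh(q_matrix(eps)) > 0)
        print(f"eps = {eps}: Vdot negative definite? {pos_def}")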
This same technique is used in the stability proofs for the robot control
laws contained in the next section.
Example 4.6. Nonlinear spring mass system with damper
Consider a mechanical system consisting of a unit mass attached to a
nonlinear spring with a velocity-dependent damper. If x1 stands for the
position of the mass and x2 its velocity, then the equations describing the
system are:
ẋ1 = x2
ẋ2 = −f (x2 ) − g(x1 ).
(4.45)
Here f and g are smooth functions modeling the friction in the damper
and restoring force of the spring, respectively. We will assume that f, g
are both passive; that is,
σf (σ) ≥ 0 ∀σ ∈ [−σ0 , σ0 ]
σg(σ) ≥ 0 ∀σ ∈ [−σ0 , σ0 ]
and equality is only achieved when σ = 0. The candidate for the Lyapunov function is
V(x) = x2²/2 + ∫_0^{x1} g(σ) dσ.
The passivity of g guarantees that V (x) is a locally positive definite function. A short calculation verifies that
V̇(x) = −x2 f(x2) ≤ 0   when |x2| ≤ σ0.
This establishes the stability, but not the asymptotic stability of the origin. Actually, the origin is asymptotically stable, but this needs Lasalle’s
principle, which is discussed in the next section.
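A quick numerical check of this conclusion (not part of the original text) is sketched below with the assumed passive choices f(σ) = σ + σ³ and g(σ) = σ + σ³; the check should report that V never increases along the simulated trajectory.

    import numpy as np
    from scipy.integrate import solve_ivp

    f = lambda s: s + s**3    # assumed passive damper characteristic
    g = lambda s: s + s**3    # assumed passive spring characteristic

    def dynamics(t, x):
        x1, x2 = x
        return [x2, -f(x2) - g(x1)]

    def V(x1, x2):
        # V = x2^2/2 + integral_0^x1 g(s) ds = x2^2/2 + x1^2/2 + x1^4/4
        return x2**2 / 2 + x1**2 / 2 + x1**4 / 4

    sol = solve_ivp(dynamics, (0.0, 20.0), [1.0, 0.0],
                    max_step=0.01, rtol=1e-9, atol=1e-12)
    values = V(sol.y[0], sol.y[1])
    print(np.all(np.diff(values) <= 1e-9))   # True: V is non-increasing along the flow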
4.5  Lasalle’s invariance principle
Lasalle’s theorem enables one to conclude asymptotic stability of an equilibrium point even when −V̇ (x, t) is not locally positive definite. However,
it applies only to autonomous or periodic systems. We will deal with the
autonomous case and begin by introducing a few more definitions. We
denote the solution trajectories of the autonomous system
ẋ = f (x)
(4.46)
as s(t, x0 , t0 ), which is the solution of equation (4.46) at time t starting
from x0 at t0 .
Definition 4.7. ω limit set
The set S ⊂ Rn is the ω limit set of a trajectory s(·, x0 , t0 ) if for every
y ∈ S, there exists a strictly increasing sequence of times tn such that
s(tn , x0 , t0 ) → y
as tn → ∞.
Definition 4.8. Invariant set
The set M ⊂ Rn is said to be a (positively) invariant set if for all y ∈ M
and t0 ≥ 0, we have
s(t, y, t0 ) ∈ M
∀t ≥ t0 .
It may be proved that the ω limit set of every trajectory is closed and
invariant. We may now state Lasalle’s principle.
Theorem 4.7. Lasalle’s principle
Let V : Rn → R be a locally positive definite function such that on the
compact set Ωc = {x ∈ Rn : V (x) ≤ c} we have V̇ (x) ≤ 0. Define
S = {x ∈ Ωc : V̇ (x) = 0}.
As t → ∞, the trajectory tends to the largest invariant set inside S;
i.e., its ω limit set is contained inside the largest invariant set in S. In
particular, if S contains no invariant sets other than x = 0, then 0 is
asymptotically stable.
A global version of the preceding theorem may also be stated. An
application of Lasalle’s principle is as follows:
Example 4.7. Nonlinear spring mass system with damper
Consider the same example as in equation (4.45), where we saw that with
V(x) = x2²/2 + ∫_0^{x1} g(σ) dσ,
we obtained
V̇ (x) = −x2 f (x2 ).
Choosing c = min(V (−σ0 , 0), V (σ0 , 0)) so as to apply Lasalle’s principle,
we see that
V̇ (x) ≤ 0 for x ∈ Ωc := {x : V (x) ≤ c}.
As a consequence of Lasalle’s principle, the trajectory enters the largest
invariant set in Ωc ∩ {(x1, x2) : V̇ = 0} = Ωc ∩ {(x1, 0)}. To obtain the largest
invariant set in this region, note that
x2(t) ≡ 0   =⇒   x1(t) ≡ x10   =⇒   ẋ2(t) = 0 = −f(0) − g(x10),
where x10 is some constant. Consequently, we have that
g(x10) = 0   =⇒   x10 = 0.
Thus, the largest invariant set inside Ωc ∩ {(x1, x2) : V̇ = 0} is the origin
and, by Lasalle’s principle, the origin is locally asymptotically stable.
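As a numerical illustration of this conclusion (not part of the original text, and again using the assumed passive choices f(σ) = g(σ) = σ + σ³), a simulated trajectory does approach the origin:

    import numpy as np
    from scipy.integrate import solve_ivp

    f = lambda s: s + s**3    # assumed passive damper characteristic
    g = lambda s: s + s**3    # assumed passive spring characteristic

    sol = solve_ivp(lambda t, x: [x[1], -f(x[1]) - g(x[0])],
                    (0.0, 40.0), [1.0, 0.0], max_step=0.05)

    # Lasalle's conclusion: the trajectory tends to the largest invariant
    # set where Vdot = 0, which here is the origin.
    print(np.linalg.norm(sol.y[:, -1]))   # final state norm, close to zero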
There is a version of Lasalle’s theorem which holds for periodic systems as well. However, there are no significant generalizations for nonperiodic systems and this restricts the utility of Lasalle’s principle in
applications.