Appendix A
Theoretical Background of Nonlinear System Stability and Control
One of the tasks of the control engineer very often consists in the study of stability, whether for the system considered free of any control, or for the same system augmented with a particular control structure. At this stage, it might be useful, or even essential, to ask what stability is. How do we define it? How do we conceptualize and formalize it? What are the criteria upon which one can conclude on the stability of a system?
Consider the autonomous system described by
ẋ = f(x) (A.1)
with the origin as an equilibrium point:
f(0) = 0 ∀t ≥ 0 (A.2)
Note that, by a change of coordinates, one can always bring the equilibrium point to
the origin.
One of the applications that was contemplated at that time was the study of librations in astronomy.¹,²
The emphasis was placed on ordinary stability (i.e., stable but not asymptotically stable), which can be viewed as a robustness with respect to initial conditions; asymptotic stability was only addressed in a corollary manner.
The automatic control community having inverted this preference, we concentrate here on the concept of asymptotic stability rather than mere stability.
Note that many more complete presentations of Lyapunov stability exist, for example [37, 49, 55, 62, 65, 66, 70, 87], which constitute the main references of this part; this list does not claim to be exhaustive.
Roughly speaking, we say that a system is stable if, when displaced slightly from its equilibrium position, it tends to come back to it. On the other hand, it is unstable if it tends to move away from its equilibrium position (see Fig. A.1).
Mathematically speaking, this is translated into the following definitions. The equilibrium point x = 0 of (A.1) is said to be:
• stable, if for every ε > 0 there exists η > 0 such that for every solution x(t) of (A.1) we have
‖x(0)‖ < η ⇒ ‖x(t)‖ < ε ∀t ≥ 0
• unstable, if it is not stable, that is, if there exists ε > 0 such that for every η > 0 some solution x(t) of (A.1) satisfies
‖x(0)‖ < η and ‖x(t)‖ ≥ ε for some t ≥ 0
• attractive, if there exists r > 0 such that for every solution x(t) of (A.1) we have
‖x(0)‖ < r ⇒ lim_{t→∞} x(t) = 0
The basin of attraction of the origin is the set B of initial conditions x0 such that the solution of (A.1) starting at x0 converges to 0 as t → ∞.
1 In astronomy, the librations are small oscillations of celestial bodies around their orbits.
2 The father of Alexander Michael Lyapunov was an astronomer.
• asymptotically stable, if it is stable and attractive, and globally asymptotically stable (GAS) if, in addition, its basin of attraction is all of ℝⁿ;
• globally exponentially stable (GES), if there exist M > 0 and α > 0 such that for every solution x(t) of (A.1) we have
‖x(t)‖ ≤ M‖x(0)‖e⁻ᵅᵗ for all t ≥ 0
Remark A.1
1. The difference between stable and asymptotically stable is that a small perturbation of the initial state around a stable equilibrium point x̄ might lead to small sustained oscillations, whereas these oscillations are damped over time in the case of an asymptotically stable equilibrium point; see Fig. A.2 (U1 is the ball of center 0 and radius ε and U2 is the ball of center 0 and radius η [46]).
2. For a linear system, all these definitions are equivalent (except for stable and asymptotically stable). For a nonlinear system, however, stable does not imply attractive, attractive does not imply stable, and asymptotically stable does not imply exponentially stable, whereas exponentially stable implies asymptotically stable.
When systems are represented by nonlinear differential equations, the verification of stability is not trivial. For linear systems, on the contrary, the verification of stability is systematic and proceeds as follows. Let
A = (∂f/∂x)(x̄)
be the Jacobian matrix of f evaluated at the equilibrium point x = x̄. The obtained system is of the form (A.3) and is called the linearization (or linear approximation) of the nonlinear system (A.1).
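For illustration, this check is easy to script. In the following minimal sketch, the damped pendulum ẋ1 = x2, ẋ2 = −sin x1 − x2 is an illustrative example (not a system taken from the text); the Jacobian is evaluated at the origin and its eigenvalues are inspected:

```python
# Hypothetical example: local stability of the damped pendulum
#   x1' = x2,  x2' = -sin(x1) - x2
# via the eigenvalues of its linearization at the origin.
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
f = sp.Matrix([x2, -sp.sin(x1) - x2])            # vector field f(x)
A = f.jacobian([x1, x2]).subs({x1: 0, x2: 0})    # A = (df/dx)(x_bar)

# All eigenvalues in the open left half-plane => locally asymptotically stable.
print(A.eigenvals())   # {-1/2 - sqrt(3)*I/2: 1, -1/2 + sqrt(3)*I/2: 1}
```

Both eigenvalues have negative real part, so the origin of the nonlinear pendulum is locally asymptotically stable.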
Another criterion that allows one to conclude on the stable behavior of a system, valid for both linear and nonlinear systems, is described next.
Remark A.2
1. Since V̇ depends only on x, it is sometimes called the derivative of V along the system.
2. This derivative is also called the Lie derivative and is denoted by Lf V.
3. To calculate V̇, we do not require knowledge of the solution x(t), but only of ẋ, that is, of f(x). Hence, for the same function V(x), V̇ differs from one system to another.
4. For every solution x(t) of (A.4), we have d/dt V(x(t)) = V̇(x(t)); consequently, if V̇ is negative, V decreases along the solutions of (A.4), so that the trajectories converge towards the minimum of V.
5. When V(x) → ∞ whenever ‖x‖ → ∞, V(x) is said to be radially unbounded.
6. V (x) is often a function that represents the energy or a certain form of energy of
the system.
7. From a geometric point of view, a Lyapunov function can be seen as a bowl whose minimum coincides with the equilibrium point. If this point is stable, the velocity vector ẋ (or f), tangent to every trajectory, points towards the interior of the bowl; see Fig. A.3 [46].
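As a small illustration of points 1–4, the following hedged sketch computes V̇ = Lf V symbolically for the damped pendulum used above, with the energy-like candidate V = (1 − cos x1) + x2²/2 (an illustrative choice):

```python
# Computing Vdot = L_f V for the damped pendulum, without solving the ODE.
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
f = sp.Matrix([x2, -sp.sin(x1) - x2])
V = (1 - sp.cos(x1)) + x2**2 / 2                 # energy-like candidate

# Vdot(x) = (dV/dx) f(x): only f is needed, not the solution x(t).
Vdot = sp.simplify(sp.Matrix([V]).jacobian([x1, x2]) * f)
print(Vdot)   # Matrix([[-x2**2]])
```

Here V̇ = −x2² ≤ 0, so V decreases along the trajectories, in line with point 4.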
Remark A.3
1. The criteria for stability and asymptotic stability presented in Theorems A.3, A.4
and A.5 are easy to utilize. However, they do not give any information on how
to construct the Lyapunov function. In reality, there does not exist any general
method for the construction of Lyapunov functions except for some particular
classes of systems (namely for the class of linear systems).
2. The theorems given previously give sufficient conditions in the sense that if for a
certain Lyapunov function V , the conditions on V̇ are not satisfied, this does not
imply that the considered system is unstable (maybe with another function one
can demonstrate the stability of the system).
3. Contrary to Lyapunov functions, which guarantee the stability of equilibrium points, there are functions, called Chetaev functions, that guarantee their instability. Note that it is more difficult to demonstrate instability than stability (refer to [46] for more details).
In this book we have employed some controllers for this class of systems. Consequently, in what follows we present the stability criteria for these systems; the corresponding controller designs are presented at a later stage.
σ is called the switching signal and can depend either on time or on the state or
both. Such systems are said to have variable structures or are called multi-models.
They represent a particularly simple class of hybrid systems [10, 81, 85].
Here, we shall assume that the origin is an equilibrium point common to all the individual subsystems, fp(0) = 0. We shall also assume that the switching occurs without state jumps and does not occur infinitely fast, so that the Zeno phenomenon is avoided. The reader interested in these properties can refer to [6, 53, 63, 64].
The class of systems most often considered in the literature is that for which the individual systems are linear:
ẋ = Ap x (A.6)
Just to mention a few, we cite the following references: [11, 26, 51, 50, 52, 54, 68, 75, 74, 90, 95, 96, 97]. On the other hand, there are only a few works in the literature on the class of nonlinear switching systems [9, 12, 16, 19, 47, 53, 98, 99].
At this stage one might ask the following question: given a switching system, why do we need a theory of stability different from Lyapunov's?
The main reason is that the stability of a switching system depends not only on the dynamics of the individual subsystems but also on the transition law that governs the switchings. Indeed, there are cases where two subsystems are exponentially stable while the switching between them drives the trajectories to infinity, as the sketch below illustrates.
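This destabilization is easy to reproduce numerically. In the following sketch, the two matrices and the switching rule are illustrative choices (not taken from the text); each Ai is Hurwitz, yet the state norm grows by orders of magnitude:

```python
# Two exponentially stable subsystems destabilized by switching.
import numpy as np

eps = 0.05
A1 = np.array([[-eps, 1.0], [-10.0, -eps]])   # eigenvalues -eps +/- j*sqrt(10)
A2 = np.array([[-eps, 10.0], [-1.0, -eps]])   # same spectrum: also stable

x = np.array([1.0, 0.0])
dt = 1e-4
norms = []
for _ in range(int(3.0 / dt)):
    A = A1 if x[0] * x[1] <= 0 else A2        # state-dependent switching law
    x = x + dt * (A @ x)                       # explicit Euler step
    norms.append(np.linalg.norm(x))

print(norms[0], norms[-1])   # the norm grows by orders of magnitude
```

The switching law deliberately activates, in each quadrant, the subsystem whose elongated elliptic orbits carry the state outward; either subsystem alone would converge.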
In fact, it was shown in [12, 19, 47] that a necessary condition for the stability of switching systems subjected to an arbitrary transition law is that all the individual subsystems be asymptotically stable; this condition is, however, not sufficient.
Nevertheless, it appears that when the switching between the subsystems is sufficiently slow (so as to allow the transients to settle and each subsystem to approach its steady state), the global system is likely to be stable.
It is clear that when the family of systems (A.5) possesses a common Lyapunov function V(x) such that ∇V(x)fp(x) < 0 for all x ≠ 0 and all p ∈ P, the switching system is asymptotically stable for any transition signal σ [47]. Hence, one possibility for demonstrating the stability of a switching system consists in finding a common Lyapunov function for all the individual subsystems of (A.5).
However, finding a Lyapunov function for even a single nonlinear system is not simple. If, in addition, we allow switchings between several subsystems, the determination of such a function becomes much more difficult. This is also the reason why a non-classical theory of stability is necessary.
In the case where a common Lyapunov function cannot be determined, the idea is to demonstrate stability through several Lyapunov functions. One of the first results of this type was developed by Peleties in [58, 59], then by Liberzon [47], for switching systems of the form (A.6).
Consider N dynamical systems Σ1, …, ΣN and N pseudo Lyapunov functions (Lyapunov-like functions) V1, …, VN.
Definition A.4 [19] A pseudo Lyapunov function for system (A.5) is a function Vi(x) with continuous partial derivatives, defined on a domain Ωi ⊂ ℝⁿ and satisfying the following conditions:
• Vi is positive definite: Vi(x) > 0 for all x ≠ 0, and Vi(0) = 0.
• V̇i is negative semidefinite: for x ∈ Ωi,
V̇i(x) = (∂Vi(x)/∂x) fi(x) ≤ 0 (A.7)
where Ωi is the set on which (A.7) holds.
Theorem A.6 [19] Suppose that ⋃i Ωi = ℝⁿ. For i < j, let ti < tj be transition instants for which σ(ti) = σ(tj), and suppose that there exists γ > 0 such that
Vσ(tj)(x(t_{j+1})) − Vσ(ti)(x(t_{i+1})) ≤ −γ ‖x(t_{i+1})‖². (A.8)
Then the system (A.6), with fσ(t)(x) = Aσ(t)x and transition function σ(t), is GAS.
Theorem A.7 [9, 10] Given N switching systems of the form (A.5) and N pseudo Lyapunov functions Vi defined on the regions Ωi associated with each subsystem, suppose that ⋃i Ωi = ℝⁿ, and let σ(t) be the transition sequence that takes the value i when x(t) ∈ Ωi. If, in addition,
Vi(x(t_{i,k})) ≤ Vi(x(t_{i,k−1})) (A.9)
where t_{i,k} is the kth time at which fi becomes active, that is, σ(t_{i,k}⁻) ≠ σ(t_{i,k}⁺) = i, then (A.5) is stable in the sense of Lyapunov.
Figure A.5 illustrates condition (A.9) (in dotted lines) [19]. A more general result due to Ye [91, 92] concerns the use of weak Lyapunov functions, for which condition (A.9) is replaced by
Vi(x(t)) ≤ h(Vi(x(tj))), t ∈ (tj, t_{j+1}) (A.10)
with h a continuous function satisfying h(0) = 0.
If, for each p, the value of Vp at the end of each interval on which subsystem p is active exceeds its value at the end of the next such interval (see Fig. A.6), then the system (A.5) is asymptotically stable.
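As a numerical companion to these conditions, the following hedged sketch samples Vi each time subsystem i becomes active, in the spirit of condition (A.9); the subsystems, the dwell time, and the initial state are illustrative choices:

```python
# Monitoring multiple Lyapunov functions at the activation instants.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = [np.array([[-1.0, 2.0], [0.0, -1.0]]),
     np.array([[-1.0, 0.0], [2.0, -1.0]])]
# P_i solves A_i' P_i + P_i A_i = -I, so V_i = x' P_i x decreases while i is active.
P = [solve_continuous_lyapunov(Ai.T, -np.eye(2)) for Ai in A]

x = np.array([1.0, 1.0]); dt = 1e-3; dwell = 1.0
samples = {0: [], 1: []}
for k in range(int(10.0 / dt)):
    i = int(k * dt / dwell) % 2               # periodic switching signal
    if k % int(dwell / dt) == 0:              # instant t_{i,k}: f_i switches on
        samples[i].append(x @ P[i] @ x)
    x = x + dt * (A[i] @ x)                    # explicit Euler step

print(samples[0])   # sampled V_0: non-increasing here, as in (A.9)
print(samples[1])
```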
Remark A.4
1. When N = 1, we recover the classical stability results. However, when N = ∞ the previous theorems are no longer valid.
2. These theorems remain valid even when the fp vary with time.
3. These results can be extended by relaxing certain hypotheses; for example, the individual subsystems can have different equilibrium points [53], or the state may jump during a switch [64].
Note that all the stability results using multiple Lyapunov functions are concerned with the decrease of these functions either at the beginning or at the end of the successive intervals on which the same subsystem is active. Zhai showed in [98] that certain Lyapunov functions may fail to decrease at the beginning or at the end of these intervals and yet decrease globally. His demonstration, which establishes a stability condition complementary to the existing ones, is based on the evaluation of the average value of the Lyapunov functions over the intervals on which the same subsystem is active.
Evidently, when the subsystems are GAS, the result is practically equivalent to the previous ones. However, his conditions are expressed in terms of the decrease of the averages of the Lyapunov functions over these intervals, see Fig. A.7.
Theorem A.8 [98] Suppose that the N subsystems of (A.5), associated with N radially unbounded Lyapunov functions, are GAS. Define the average value of the Lyapunov function over the jth activation interval of subsystem i as
V̄i(x(Ti^j)) ≜ (1/(t_i^{2j} − t_i^{2j−1})) ∫_{t_i^{2j−1}}^{t_i^{2j}} Vi(x(τ)) dτ,  t_i^{2j−1} ≤ Ti^j ≤ t_i^{2j} (A.11)
Then the switched system is GAS in the sense of Lyapunov if, for all i,
V̄i(x(Ti^{j+1})) − V̄i(x(Ti^j)) ≤ −Wi(x(Ti^j)) (A.12)
for some positive definite functions Wi.
Additionally, this result is extended to the case where the subsystems are not stable, under the condition that the Lyapunov functions remain bounded. In this case, if the average value of the Lyapunov functions decreases over the set of intervals associated with a subsystem i, then the switching system (A.5) is asymptotically stable, see Fig. A.8 [98].
Remark A.5 More recently, a result similar to the above, using the average value of the derivative of the Lyapunov functions rather than the average value of the functions themselves, was given by Michel in [54] for the stability analysis of linear switching systems.
Recall that stability is the first step in the study of a system's performance. In fact, if a system is not stable (or not stable enough), it is important to stabilize it before attempting to satisfy other performance objectives such as trajectory tracking, precision, control effort, perturbation rejection, robustness, etc.
Remark A.6
1. The problem of trajectory tracking consists in maintaining the solution of the system along a desired trajectory yd(t), t ≥ 0. The objective here is to find a control law such that, for every initial condition in a region D, the error between the output and the desired output, y(t) − yd(t), tends to zero as t → ∞; the regulation problem corresponds to the particular case of a constant reference yd(t) = y*, t ≥ 0.
The control design techniques for constructing stabilizing control laws are numerous and varied. In what follows, we present those that are most useful for the control of underactuated mechanical systems. The main references from which most of the results of the next section are borrowed are [32, 37, 41, 43, 44, 66, 67].
Given a physical system that we want to control and the behavior we want to obtain, designing a control amounts to constructing control laws such that the system subjected to these laws (the closed-loop system) exhibits the desired behavior.
Linearizability is a property that renders systems easier to control. In addition, the control design techniques for linear systems are well established and extensively developed; we can cite pole placement, optimal control, and frequency-domain approaches, to mention a few. For more details on these subjects the interested reader can refer to [2, 17, 24, 35, 42, 56, 93]; this list is obviously far from complete.
Thus, it is useful to exploit this linearizability property for nonlinear systems too. In what follows, the most widely employed and well-known procedures are briefly recalled.
In the presence of the control input u, the linear approximation around the equilibrium point is given by
ẋ = Ax + Bu (A.14)
where the matrices A and B are defined by
A = (∂f/∂x)(0, 0), B = (∂f/∂u)(0, 0).
The obtained form (A.14) justifies the utilization of linear control techniques men-
tioned above.
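In practice, A and B can also be estimated numerically. The sketch below uses central finite differences around (0, 0); the dynamics are an illustrative example, not a system from the text:

```python
# Numerical linearization of x' = f(x, u) about the equilibrium (0, 0).
import numpy as np

def f(x, u):                          # illustrative dynamics
    return np.array([x[1], -np.sin(x[0]) - 0.2 * x[1] + u])

n, eps = 2, 1e-6
x0, u0 = np.zeros(n), 0.0
A = np.zeros((n, n))
for j in range(n):
    e = np.zeros(n); e[j] = eps
    A[:, j] = (f(x0 + e, u0) - f(x0 - e, u0)) / (2 * eps)   # df/dx columns
B = ((f(x0, u0 + eps) - f(x0, u0 - eps)) / (2 * eps)).reshape(n, 1)

print(A)   # ~ [[0, 1], [-1, -0.2]]
print(B)   # ~ [[0], [1]]
```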
Unfortunately, the resulting linearized model is typically valid only around the considered point, so that the associated controller is valid only in a neighborhood of this point. This yields a local control only. In addition, determining the domain of validity is not obvious.
In Appendix B, the reader will find more details of the limits of linearization and
the underlying dangers of destabilization.
Hence, even though this method is simple and practical, it is necessary to proceed
differently in order to increase the validity domain of the synthesized controllers.
To further benefit from the theory of linear control, there exists a control design technique, based on a change of coordinates and a state feedback, that renders the nonlinear dynamics equivalent to linear dynamics: this is the so-called feedback linearization.
When we transform a system via a change of coordinates, some of its properties remain unchanged: if a system is unstable, the transformed system is also unstable; if a system is controllable, the transformed system is also controllable. On the other hand, some systems may look nonlinear in certain coordinates while becoming linear in other coordinates and under a suitable feedback.
Thus, it is interesting, whenever possible, to analyze the dynamics of a system in
a transformed form that is easier to study.
In Appendix C, some results and concepts of differential geometry necessary for
the presentation of this approach are recalled.
Two procedures of linearization by feedback are possible: input–state linearization and input–output linearization.
The aim here is to transform the system of the form (A.13) via a diffeomorphism
z = ϕ(x) into a system of the form
ż1 = z2
ż2 = z3
⋮ (A.15)
żn−1 = zn
żn = a(ϕ⁻¹(z)) + b(ϕ⁻¹(z))u
This form is similar to the canonical form of Brunovsky or the canonical form of
controllability of linear systems.
If such a transformation is possible then, for b(ϕ⁻¹(z)) ≠ 0, the control
u = (1/b(ϕ⁻¹(z))) (v − a(ϕ⁻¹(z))) (A.16)
linearizes the system, which becomes a chain of n integrators: żi = z_{i+1} for i = 1, …, n − 1, and żn = v.
One can then ask the following questions: Is it always possible to linearize a
system by feedback? When this is the case, how do we obtain the transformation
z = ϕ(x)?
The answer to these questions lies in the following theorem:
With regard to the diffeomorphism, when the conditions for linearization are satisfied, several algorithms are available to construct it [14, 37, 65].
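The following sympy sketch works through these steps on a hypothetical system (not one from the text), ẋ1 = x2 + x1², ẋ2 = u, with the diffeomorphism z = ϕ(x) = (x1, x2 + x1²):

```python
# Input-state linearization of the illustrative system x1' = x2 + x1**2, x2' = u.
import sympy as sp

x1, x2, u, v = sp.symbols('x1 x2 u v')
f = sp.Matrix([x2 + x1**2, 0])
g = sp.Matrix([0, 1])

z2 = x2 + x1**2                                   # z = phi(x) = (x1, x2 + x1**2)
z2dot = (sp.Matrix([z2]).jacobian([x1, x2]) * (f + g * u))[0]
a = z2dot.subs(u, 0)                              # a = 2*x1*(x2 + x1**2) = 2*z1*z2
b = z2dot.diff(u)                                 # b = 1

u_lin = (v - a) / b                               # control (A.16)
print(sp.simplify(z2dot.subs(u, u_lin)))          # -> v: z1' = z2, z2' = v
```

In the z coordinates the closed loop is a double integrator, and v can then be chosen by any linear technique.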
We now turn to input–output linearization. Consider the single-output system
ẋ = f(x) + g(x)u, x ∈ ℝⁿ, u, y ∈ ℝ
(A.17)
y = h(x)
The idea is to generate linear equations between the output y and a certain input v through a diffeomorphism z = φ(x) constituted of the output and its time derivatives up to order n − 1, when the relative degree r associated with this system is equal to n:
φ1(x) = h(x)
φ2(x) = Lf h(x)
⋮ (A.18)
φn(x) = Lf^{n−1} h(x)
In the z coordinates the system becomes
ż1 = z2
ż2 = z3
⋮ (A.19)
żn−1 = zn
żn = a(φ⁻¹(z)) + b(φ⁻¹(z))u.
By choosing u of the form (A.16) and assuming that b(φ⁻¹(z)) ≠ 0 for all z ∈ ℝⁿ, the system becomes a linear chain of integrators from the new input v to the output y.
Note that in this case, the form (A.19) is the same as in (A.15). In fact, when r = n
the two linearizations are equivalent. Hence, the conditions for applying the second
linearization will be the same as for the first.
For more details on these two linearizations, and for some useful examples, the
reader can refer to [32, 33, 37, 66].
Obviously, for a relative degree r < n, the system is no longer completely feed-
back linearizable. In this case, one can talk of a partial feedback linearization.
When r < n, it is only possible to partially linearize a system of the form (A.17), through the diffeomorphism constituted partly by the output h(x) and its successive derivatives up to order r − 1, zi = φi(x) for 1 ≤ i ≤ r, and completed, by using the theorem of Frobenius, with n − r other functions, η = φi(x) for r + 1 ≤ i ≤ n, chosen in such a way that Lg φi = 0 for r + 1 ≤ i ≤ n. In the coordinates (z, η) the equations of the system are given by
equations of the system are given by
ż1 = z2
ż2 = z3
⋮
żr−1 = zr (A.20)
żr = a(z, η) + b(z, η)u
η̇ = q(z, η)
y = z1
Applying a control of the form (A.16), with a(z, η) and b(z, η) in place of a and b, then yields
ż1 = z2
ż2 = z3
⋮
żr−1 = zr (A.21)
żr = v
η̇ = q(z, η)
Remark A.7
1. When η̇ = q(0, η) is (locally) asymptotically stable, the associated system is said to be (locally) minimum phase at the equilibrium point x̄.
2. When η̇ = q(0, η) is unstable, the associated system is said to be non-minimum phase.
Even though the methods of linearization are useful for simplifying the study and the control of nonlinear systems, they present certain limitations: a lack of robustness in the presence of modeling errors, and the need to verify conditions, such as involutivity, that many systems fail to satisfy, even within the class of control-affine nonlinear systems; this is the case for UMSs. In addition, the state must be fully measured and accessible. Hence, the use of such techniques is confined to certain classes of systems only.
Therefore, one must find other linearization techniques that apply to a wide range of systems without demanding conditions as restrictive and rigorous as those required by exact linearization. For example, approximate linearization techniques linearize the system up to a certain order, neglecting higher-order nonlinear dynamics. Among the authors who have investigated this technique, we cite [5, 28, 31, 36, 39, 73].
For certain systems, the computation of the relative degree presents singularities in the neighborhood of the equilibrium point. For other systems, the relative degree is smaller than the order of the system, in which case the condition of involutivity is not verified. The key idea of approximate linearization is to find an output function such that the system approximately verifies the former condition.
Several linearization algorithms are available; one can cite, for example, linearization by the Jacobian, pseudo-linearization, the algorithm of Krener [39], that of Hunt and Turi, the algorithm of Krener based on the theory of Poincaré [40], and that of Hauser, Sastry and Kokotović [28]. A comparative study of these algorithms applied to some examples can be found in [45].
In what follows, we briefly recall the algorithm of Hauser et al. [28]. Consider the system of the form (A.17):
ẋ = f (x) + g(x)u
y = h(x)
Suppose that the relative degree associated with this system equals r < n. Consequently, the system is not exactly feedback linearizable: some terms in f and g prevent the linearization from taking place, in the sense that the relative degree in their presence is smaller than n. The idea is to neglect these terms so as to obtain a full relative degree, called the robust relative degree.
Definition A.5 [88] The robust relative degree at 0 of a regular output associated with system (A.17) is the integer γ such that
Lg h(0) = Lg Lf h(0) = ⋯ = Lg Lf^{γ−2} h(0) = 0
Lg Lf^{γ−1} h(0) ≠ 0
In this case, we say that the system (A.17) is approximately feedback linearizable around the origin if there exists a regular output y = h(x) for which γ = n. This transformation is carried out via the following diffeomorphism z = φ(x):
z1 = h(x)
z2 = Lf h(x)
⋮
zn = Lf^{n−1} h(x)
In these coordinates, the dynamics read
ż1 = z2
ż2 = z3
⋮ (A.22)
żn−1 = zn
żn = Lf^n h(φ⁻¹(z)) + Lg Lf^{n−1} h(φ⁻¹(z)) u
This method is satisfactory in many cases, but naturally the control engineer will always try to improve it in order to increase its performance and its domain of validity. This is how the theory of higher-order approximations was introduced by Krener [39] and Hauser [27].
Theorem A.10 [39] The nonlinear system (A.13) can be approximately linearized by state feedback around an equilibrium point if and only if the distribution Δn−2(f, g) is involutive up to order³ p − 2 on E.
This means that there is a change of coordinates z = Ψ (x) such that, in the new
coordinates z, the dynamics (A.13) is given by
ż_i = z_{i+1} + O_E^p(x) + O_E^{p−1}(x)u, i = 1, …, n − 1 (A.23)
żn = a(z) + b(z)u
In other words, for the system (A.13), in a higher-order approximation the terms Lg h(x), Lg Lf h(x), …, Lg Lf^{γ−2} h(x) are no longer assumed to be zero but are taken into account in the model, and consequently in the expression of the control law.
The obtained model (A.23) is no longer fully linearizable, but it is at least easier to control than the initial system (A.13).
Apart from these methods of linearization, there exist several other approaches
that are different from one another for the synthesis of control. The utilization of
one method over another will depend on the class of systems considered. Among
these methods, we shall be interested in three of them, namely: passivity approach,
backstepping, and sliding mode control.
The notion of passivity is essentially linked to the energy accumulated in the considered system and the energy brought to it by external sources [57, 67, 86]. The principal reference on the use of this concept in automatic control is due to Popov [61]. Dissipativity, which is an extension of this concept, was developed in the works of Willems [89].
Even though the concept of passivity is applicable to a large class of nonlinear systems, we will restrict our attention to dynamics modeled by system (A.17).
A dissipative system is then defined as follows:
Definition A.6 [67] The system (A.17) is said to be dissipative if there exist a positive function S(x) with S(0) = 0 and a function w(u, y), locally integrable for every u, such that the following condition holds:
S(x(t)) − S(x0) ≤ ∫_0^t w(u(τ), y(τ)) dτ (A.24)
This inequality expresses the fact that the energy stored in the system, S(x), is at most equal to the sum of the energy initially stored and the energy externally supplied; there is no creation of internal energy, only dissipation. If S(x) is differentiable, the expression (A.24) can be written as Ṡ(x(t)) ≤ w(u(t), y(t)).
Definition A.7 [67] The system (A.17) is said to be passive if it is dissipative and if the function w is bilinear in the input and output: w(u, y) = uᵀy.
Remark A.8 Note that the definitions of dissipativity and passivity do not require that S(x) > 0 (it suffices that S(x) ≥ 0). Hence, in the presence of an unobservable part, x = 0 can be unstable while the system is passive. For passivity to imply stability, such cases must be excluded; that is, one must verify that the unobservable part is asymptotically stable. The reader may refer to [67] for a complete review of the stability of passive systems and for some results on positive semidefinite Lyapunov functions.
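As a quick numerical illustration of Definition A.6 on an illustrative system (a damped mass m v̇ = −cv + u with output y = v and storage S = mv²/2, not an example from the text), the dissipation inequality (A.24) can be checked along a simulated trajectory:

```python
# Checking S(x(t)) - S(x0) <= int_0^t u*y dtau for the damped mass m*v' = -c*v + u.
import numpy as np

m, c, dt = 1.0, 0.5, 1e-3
v = 2.0
S0 = 0.5 * m * v**2
supplied = 0.0
for k in range(int(10.0 / dt)):
    u = np.sin(0.003 * k)              # arbitrary input signal
    y = v                              # passive output: the velocity
    supplied += u * y * dt             # integral of w(u, y) = u*y
    v += dt * (-c * v + u) / m
print(0.5 * m * v**2 - S0, "<=", supplied)   # holds: the damper only dissipates
```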
The backstepping technique applies to systems in the following strict feedback (triangular) form:
ẋ1 = x2 + f1(x1)
ẋ2 = x3 + f2(x1, x2)
⋮
ẋi = xi+1 + fi(x1, x2, …, xi) (A.26)
⋮
ẋn = fn(x1, x2, …, xn) + u
The idea behind the backstepping technique is to consider the state x2 as a “virtual control” for x1. If it were possible to impose x2 = −x1 − f1(x1), the state x1 would be stabilized, as can be verified with the Lyapunov function V1 = (1/2) x1². However, since x2 is not the actual control for x1, we make the following change of variables:
z1 = x1
z2 = x2 − α1(x1)
with α1(x1) = −x1 − f1(x1). Introducing the Lyapunov function V1(z1) = (1/2) z1², we obtain
ż1 = −z1 + z2
ż2 = x3 + f2(x1, x2) − (∂α1/∂x1)(x2 + f1(x1)) := x3 + f̄2(z1, z2)
V̇1 = −z1² + z1z2
The cross term z1z2 will be dealt with at the next step, where we set
z3 = x3 − α2(z1, z2)
V2 = V1 + (1/2) z2²
In order to determine the expression of α2(z1, z2), observe that the choice α2 = −z1 − z2 − f̄2(z1, z2) yields
ż1 = −z1 + z2
ż2 = −z1 − z2 + z3
V̇2 = −z1² − z2² + z2z3
Iterating the procedure with the Lyapunov functions
Vi = (1/2) Σ_{k=1}^{i} zk²
we obtain, at step n,
żn = f̄n(z1, …, zn) + u
Choosing
u = αn(z1, …, zn) = −zn−1 − zn − f̄n(z1, …, zn)
and the Lyapunov function
Vn = (1/2) Σ_{k=1}^{n} zk²
we obtain
żn = −zn−1 − zn
V̇n = −Σ_{k=1}^{n} zk²
The stability of the system is thus proven using simple quadratic Lyapunov functions; note also that the dynamics obtained in the z coordinates are linear. The advantage of the backstepping technique is its flexibility in the choice of the stabilizing functions αi, which are simply chosen to cancel the nonlinearities so as to render V̇i negative.
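A minimal simulation sketch of the procedure, for the illustrative two-state system ẋ1 = x2 + x1², ẋ2 = u (hypothetical, not from the text), is given below; the control cancels the nonlinearities so that ż1 = −z1 + z2 and ż2 = −z1 − z2:

```python
# Backstepping for x1' = x2 + x1**2 (f1 = x1**2), x2' = u.
import numpy as np

dt, x1, x2 = 1e-3, 1.5, -0.5
for _ in range(int(10.0 / dt)):
    alpha1 = -x1 - x1**2                  # virtual control stabilizing x1
    z1, z2 = x1, x2 - alpha1              # change of variables
    dalpha1 = -1.0 - 2.0 * x1             # d(alpha1)/dx1
    # u = -z1 - z2 - f2bar with f2bar = -dalpha1*(x2 + x1**2)
    u = -z1 - z2 + dalpha1 * (x2 + x1**2)
    x1 += dt * (x2 + x1**2)
    x2 += dt * u
print(x1, x2)   # both states settle near the origin
```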
The theory of variable structure systems has been the subject of numerous studies over the last 50 years. Initial works on this type of systems are those of Anosov [1], Tsypkin [82], and Emel'yanov [21]. This line of work saw a significant revival in the late 1970s when Utkin introduced the theory of sliding modes [83]. This control and observation technique has received increasing interest because of its relative ease of design, its robustness vis-à-vis certain parametric uncertainties and perturbations, and the wide range of its applications in fields as varied as robotics, mechanics, and power systems.
The principle of this technique is to force the system to reach, and then remain on, a given surface called the sliding or switching surface (representing a set of static relationships between the state variables). The resulting dynamic behavior, called the ideal sliding regime (or mode), is completely determined by the parameters and equations defining the surface. The advantage of obtaining such behavior is twofold: on one hand, the order of the system is reduced; on the other, the sliding mode is insensitive to disturbances entering in the same direction as the inputs.
The realization is done in two stages: a surface is determined so that the sliding mode has the desired properties (not necessarily present in the original system), and then a discontinuous control law is synthesized in order to make the surface invariant (at least locally) and attractive. However, the introduction of this discontinuous action, acting on the first time derivative of the sliding variable, does not generate an ideal sliding mode. On average, the controlled variables can be considered as ideally moving on the surface; in reality, the motion is characterized by high-frequency oscillations in the vicinity of the surface. This phenomenon is known as chattering and is one of the major drawbacks of this technique. Furthermore, it may excite unmodeled dynamics and lead to instability [23].
The presentation of this theory and its applications would easily constitute another book in itself. Therefore, in what follows, we present this technique briefly and refer the reader to [8, 22, 23, 60, 72, 83] for an excellent presentation of first-order sliding modes, of the Filippov theory for differential equations with a discontinuous right-hand side, and of the equivalent vector method of Utkin.
Even though the theory of sliding modes applies to a large class of systems of the form ẋ = f(x, u) [69], we shall restrict our attention to the class of single-output control-affine systems
ẋ = f(x) + g(x)u (A.27)
together with a sliding (switching) surface S = {x : s(t, x) = 0}. (A.28)
Remark A.9 The systems studied here are governed by differential equations involving discontinuous terms. The classical theory does not describe the behavior of the solutions in this case; one must therefore employ the theory of differential inclusions [3] and solutions in the sense of Filippov [22].
Definition A.8 [84] We say that there exists an ideal sliding mode on S if there
exists a finite time ts such that the solution of (A.27) satisfies s(t, x) = 0 for all
t ≥ ts .
The existence of the sliding mode is guaranteed by sufficient conditions: the sliding surface must be locally attractive, which can be translated mathematically as
lim_{s→0⁺} (∂s/∂x)(f + gu) < 0 and lim_{s→0⁻} (∂s/∂x)(f + gu) > 0 (A.29)
This condition expresses the fact that, in a neighborhood of the sliding surface, the velocity vectors of the trajectories of the system must point towards this surface, see Fig. A.9 [8].
Hence, once the surface is reached, the trajectories stay in an ε-neighborhood of S, and the sliding mode is said to be ideal if exactly s(t, x) = 0.
The two limits in (A.29) are commonly condensed into the attractivity condition
s ṡ < 0 (A.30)
which is enforced by a control that switches across the surface,
u = u⁺(x) if s(t, x) > 0, u = u⁻(x) if s(t, x) < 0 (A.31)
with u⁺ and u⁻ being continuous functions. It must be noted that it is this discontinuous character of the control law that permits convergence to the surface in finite time, as well as the robustness properties with respect to certain perturbations.
An example of a classical sliding mode control ensuring convergence towards the surface s = 0 in finite time is as follows: for the nonlinear system (A.13) of relative degree r, assume that |Lg Lf^{r−1} s| > K > 0 and |Lf^r s| < M < ∞. When, on one hand, the control is chosen of the form (A.32), or simply of the form (A.33), and, on the other hand, the preceding boundedness conditions are verified, then convergence in finite time is ensured. We demonstrate this result through an example.
Consider the scalar system
ẋ = b + u
u = −λ sign(x − xd) (A.34)
with xd the desired state and s = x − xd the sliding variable. If λ > |b| + sup|ẋd|, then x converges to xd in finite time and remains on the surface x = xd.
Proof Let
s = x − xd
ṡ = b − λ sign(s) − ẋd
Consider the Lyapunov function V = s²/2. Then
V̇ = s(b − λ sign(s) − ẋd) ≤ −K1|s| = −K1(2V)^{1/2}
with K1 = λ − |b| − sup|ẋd| > 0. Separating variables, V^{−1/2} dV ≤ −√2 K1 dt, so that
V^{1/2}(t) ≤ V0^{1/2} − (K1/√2) t
and V(t) vanishes no later than ts = √(2V0)/K1 = |s(0)|/K1, which is finite. □
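The example is easy to simulate; in the hedged sketch below, b, λ, and xd are illustrative choices satisfying λ > |b| + sup|ẋd|:

```python
# Finite-time convergence of x' = b + u with u = -lambda*sign(x - xd).
import numpy as np

dt, b, lam = 1e-4, 0.7, 2.5          # lam > |b| + sup|xd'| = 1.7
t = np.arange(0.0, 4.0, dt)
xd = np.sin(t)                        # desired trajectory, |xd'| <= 1
x, hit = 2.0, None
for k in range(len(t)):
    s = x - xd[k]
    if hit is None and abs(s) < 1e-3:
        hit = t[k]                    # first time the surface is reached
    x += dt * (b - lam * np.sign(s))
print("reached s = 0 at t =", hit)    # finite, roughly |s(0)|/K1
```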
In practice, an ideal sliding mode does not exist, since it would require the control to switch at infinite frequency. The chattering problem then occurs: we no longer have s(t, x) = 0 but |s(t, x)| < Δ for t > t0, where t0 is the convergence time and Δ is a constant representing the maximum deviation from the ideal trajectory s = 0.
This maximum depends on the “slew rate” of the components through which the input u is injected into the system, on wear, and on actuator noise in the case of an analog control, all of which limit the switching speed between u⁺ and u⁻, see Fig. A.10. In the discrete-time case, the switching speed is limited by the data measurement, which is in turn constrained by the sampling period and the computation time [8].
This phenomenon constitutes a non-negligible disadvantage: even if the output of the process can be filtered, chattering is liable to excite high-frequency modes that were not taken into account in the model of the system, which can degrade performance and even lead to instability [29].
Chattering also places heavy mechanical demands at the actuator level, causing rapid wear and tear of the actuators as well as non-negligible energy losses in the power circuits. Numerous studies have been carried out to reduce this phenomenon. One of them consists in replacing the sign function by a saturation function (Fig. A.11), or by sigmoid functions such as tanh(s) or arctan(s) (Fig. A.12) [8].
Nevertheless, it has been shown that the best way to overcome the chattering phenomenon is to use higher-order sliding modes, such as the twisting and super-twisting algorithms [25, 60].
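For completeness, here is a hedged sketch of the super-twisting algorithm on the same scalar example; the gains k1 and k2 are illustrative choices sized for a perturbation with a Lipschitz-bounded derivative, and the control is continuous in s, which attenuates chattering:

```python
# Super-twisting (second-order sliding mode) on x' = b + u, s = x - xd.
import numpy as np

dt, k1, k2 = 1e-4, 2.0, 1.5
t = np.arange(0.0, 6.0, dt)
xd = np.sin(t)
x, w = 2.0, 0.0
for k in range(len(t)):
    s = x - xd[k]
    u = -k1 * np.sqrt(abs(s)) * np.sign(s) + w   # continuous in s
    w += dt * (-k2 * np.sign(s))                  # discontinuity only in w'
    x += dt * (0.7 + u)                           # plant with drift b = 0.7
print(abs(x - xd[-1]))   # s and s' reach zero in finite time
```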
Control design techniques based on switching between several controllers have seen intensive application in recent years. The importance of such methods comes from the existence of systems that cannot be stabilized by a single controller. Indeed, a large range of dynamical systems is modeled by a family of continuous subsystems and a logic rule orchestrating the switching between these subsystems, see Fig. A.13.
From this standpoint, switching systems appear as a rigorous framework for studying complex systems, even if their theoretical properties are still the subject of intensive research.
A.3 Summary
This appendix has been devoted to the theoretical aspects of the stability and control of nonlinear and switching systems. There is no general methodology for the design of controllers for nonlinear systems, as opposed to controller design for linear systems; depending on the class of nonlinear systems under study, some approaches are better suited than others. We have attempted to explain in a simple way the principles of some nonlinear control design techniques that fall within the scope of this book, with the aim of using some of them for the stabilization of underactuated mechanical systems.
Appendix B
Limits of Linearization and Dangers of Destabilization
Equation (B.3) indicates that for any initial condition x0, the solution exponentially converges towards the equilibrium point. However, according to (B.2), the nonlinear system possesses a second equilibrium point at x = 1.
The impact of neglecting this point can be seen by computing the solution of the nonlinear system:
x(t) = x0 e⁻ᵗ / (1 + x0(e⁻ᵗ − 1)) (B.4)
For x0 > 1, the denominator vanishes in finite time and the solution escapes to infinity, a behavior entirely missed by the linearization (B.3).
Clearly, (B.5) is controllable, while the linearized system around the point x3(t) = 0, given by
ẋ1 = u1
ẋ2 = 0 (B.6)
ẋ3 = u2
is not controllable in x2(t)!
On the other hand, the use of a linear controller can sometimes lead to destabilization; for example, the peaking phenomenon in a linear subsystem can destabilize the coupled nonlinear system [78, 80].
To illustrate this concept, consider the partially linear coupled system described by the dynamics
ẋ = f(x, y)
ẏ = Ay + Bu (B.7)
From (B.8), one can verify that the assumption (b3) is satisfied.
By designing a linear controller as follows:
u = −a 2 y1 − 2ay2 (B.9)
y2 = −a² t e⁻ᵃᵗ (B.10)
and from this solution it appears that |y2(t)| rises to a peak before converging exponentially to 0. A short computation shows that the peak occurs at t = 1/a. From (B.10), we can conclude that for large values of a, y converges faster towards 0. Hence, from assumption (b3), it seems that large values of a allow a fast stabilization of the nonlinear system.
Nevertheless, it was shown in [38] that this is not true. Indeed, substituting (B.10) into (B.8) and integrating, the resulting expression is given by
x²(t) = x0² / (1 + x0²(t + (1 + at)e⁻ᵃᵗ − 1)) (B.11)
For x0² > 1 and a sufficiently large, the denominator vanishes in finite time, so that x escapes to infinity: the stronger the peak, the sooner the escape.
Other examples and discussion of this phenomenon are in [38, 78, 80].
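A numerical sketch of the phenomenon is given below; the nonlinear subsystem ẋ = −(1 + y2)x³/2 is an assumption reconstructed from (B.10) and (B.11), consistent with the classical example of [80]:

```python
# Peaking: larger gains a make y2 peak higher and can cause finite-time escape.
import numpy as np

def escape_time(a, x0=2.0, dt=1e-5, tmax=3.0):
    x, t = x0, 0.0
    while t < tmax:
        y2 = -a**2 * t * np.exp(-a * t)       # the peaking signal (B.10)
        x += dt * (-0.5 * (1.0 + y2) * x**3)  # assumed subsystem from (B.11)
        t += dt
        if abs(x) > 1e6:
            return t                           # finite-time escape detected
    return None                                # no escape: x converges

for a in (1.0, 10.0, 20.0):
    print(a, escape_time(a))
```

For small a the state converges, while for larger a the deeper peak of y2 makes 1 + y2 strongly negative and x escapes in finite time, sooner as a grows.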
Appendix C
A Little Differential Geometry
This section is devoted to the definition of some concepts and basic tools of differential geometry introduced into nonlinear control theory, since the early 1970s, by Elliott, Lobry, Hermann, Krener, Brockett, and others.
Lie Derivative and Bracket Let f and g be two vector fields on an open set Ω of ℝⁿ with continuous partial derivatives, and denote by ∂f/∂x and ∂g/∂x their Jacobian matrices.
The Lie derivative of g along f is the vector field
Lf g = (∂g/∂x) f.
The Lie bracket of f and g is the vector field
[f, g] = Lf g − Lg f.
Iterated brackets are defined by
ad_f g = [f, g]
ad_f^k g = [f, ad_f^{k−1} g], k = 2, 3, …
• A set of vector fields {g1, …, gm} spanning a distribution Δ is involutive if there exist scalar functions α_{ij}^k such that
[gi, gj] = Σ_{k=1}^{m} α_{ij}^k gk,
that is, if for all f and g in Δ, [f, g] belongs to Δ (Δ is closed under the Lie bracket).
• A set of linearly independent vector fields {g1, …, gm} is completely integrable if the system of partial differential equations
(∂h/∂x) g1 = 0, …, (∂h/∂x) gm = 0
admits n − m independent solutions h1, …, h_{n−m}.
Consider now the control-affine system
ẋ = f(x) + g(x)u
y = h(x) (C.1)
defined for all x ∈ Ω.
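These definitions are straightforward to evaluate symbolically; the vector fields in the following sketch are illustrative:

```python
# Lie derivative and Lie bracket of two illustrative vector fields on R^2.
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
X = sp.Matrix([x1, x2])
f = sp.Matrix([x2, sp.sin(x1)])
g = sp.Matrix([0, 1])

Lf_g = g.jacobian(X) * f            # L_f g = (dg/dx) f
Lg_f = f.jacobian(X) * g            # L_g f = (df/dx) g
bracket = sp.simplify(Lf_g - Lg_f)  # [f, g] = L_f g - L_g f
print(bracket.T)                    # -> Matrix([[-1, 0]])
```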
Appendix D
Controllability of Continuous Systems
One of the main goals of automatic control is to establish control laws so that a system evolves according to a predetermined objective. This requires controllability of the system. Intuitively, controllability means that we can bring a system from one state to another by means of a control; conversely, non-controllability implies that some states are unreachable, whatever the control.
ẋ = Ax + Bu (D.1)
y = Cx (D.2)
where A (n × n) is the state matrix, x ∈ ℝⁿ the state vector, B (n × m) the control matrix, u the control belonging to a set of admissible controls U, C (p × n) the output matrix, and y ∈ ℝᵖ the system output.
Definition D.1 [35] The system (D.1) is controllable if for each couple (x0 , xd ) of
Rn there exist a finite time T and a control u defined on [0, T ] that brings the system
from an initial state x(0) = x0 to a desired state x(T ) = xd .
Theorem D.1 The linear system (D.1) is controllable if and only if its controllability matrix
C = [B AB … A^{n−1}B] (D.3)
has rank n (Kalman criterion).
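A minimal sketch of this test for a double integrator (an illustrative pair (A, B)):

```python
# Kalman rank test: rank [B, AB, ..., A^{n-1}B] = n.
import numpy as np

A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
n = A.shape[0]

C = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
print(np.linalg.matrix_rank(C) == n)   # True: (A, B) is controllable
```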
Pole Placement Design When a system is controllable, the pole placement principle consists of determining a control law u = −Kx such that σ(A − BK) = σd, where σ(A − BK) is the spectrum of A − BK and σd is the desired spectrum.
The difficulty of this approach lies in the choice of the desired spectrum, for which there is no general methodology. The method offers the possibility of placing the closed-loop poles anywhere in the left half-plane, regardless of the open-loop pole locations; as a result, the response time can be controlled. However, if the poles are placed too far into the left half-plane, the entries of the gain K become very large, which can cause saturation problems and lead to instability.
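A minimal sketch using scipy, with illustrative desired poles:

```python
# Pole placement u = -K x with sigma(A - B K) = {-2, -3}.
import numpy as np
from scipy.signal import place_poles

A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
K = place_poles(A, B, [-2.0, -3.0]).gain_matrix
print(np.linalg.eigvals(A - B @ K))     # -> approximately [-2, -3]
```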
Remark D.1 The control law u is designed assuming that the state vector x is available. This assumption is not always true; when some states are not measured, an observer must be used to reconstruct them.
The notion of controllability, which seems simple and intuitive for linear systems, is rather involved for nonlinear systems, for which several definitions coexist. The first results on nonlinear controllability are due to Sussmann and Jurdjevic [79], Lobry [48], Hermann and Krener [30], and Sussmann [76, 77]; for a nice presentation see also Nijmeijer and van der Schaft [55].
A nonlinear system is generally represented by
ẋ = f (x, u)
(D.5)
y = h(x)
Definition D.4 The system (D.5) is said to be locally controllable at x0 if, for every neighborhood U of x0, AU(x0) is a neighborhood of x0, where AU(x0) denotes the set of states reachable from x0 along trajectories remaining in U.
Remark D.2 WAU is the smallest equivalence relation containing the pairs related by U-accessibility (that is, x WAU x̃ if and only if there exist x⁰, …, xᵏ such that x⁰ = x, xᵏ = x̃, and either xⁱ AU xⁱ⁻¹ or xⁱ⁻¹ AU xⁱ for i = 1, …, k).
The concept of weak controllability is a global concept that does not reflect the behavior of a system in the neighborhood of a point. It is therefore necessary to introduce the concept of weak local controllability.
ẋ = f (x) + g(x)u
(D.8)
y = h(x)
Definition D.7 System (D.8) satisfies the rank condition if the nonlinear controllability matrix
Cfg = [g(x) ad_f g(x) ad_f² g(x) … ad_f^{n−1} g(x)] (D.9)
has rank n.
Theorem D.2 [30] If the system (D.8) satisfies the rank condition, then it is locally weakly controllable.
References
1. D.V. Anosov, On stability of equilibrium points of relay systems. J. Autom. Remote Control
2, 135–149 (1959)
2. P.J. Antsaklis, A.N. Michel, Linear Systems (McGraw-Hill, New York, 1997)
3. J.-P. Aubin, H. Frankowska, Set-Valued Analysis (Birkhäuser, Basel, 1990)
4. J.P. Barbot, Systèmes à structure variable. Technical report, École Nationale Supérieure
d’Electronique et de Ses Applications, ENSEA, France, 2009
5. N. Bedrossian, Approximate feedback linearisation: the cart pole example, in Proc. IEEE Int.
Conf. on Robotics and Automation (1992), pp. 1987–1992
6. I. Belgacem, Automates hybrides Zénon: Exécution-simulation. Master at Aboubekr Belkaid
University, Tlemcen, Algeria, 2009
7. A.M. Bloch, M. Reyhanoglu, N.H. McClamroch, Control and stabilization of nonholonomic
dynamic systems. IEEE Trans. Autom. Control 37(11),1746–1757 (1992)
8. T. Boukhobza, Contribution aux formes d’observabilité pour les observateurs à modes glis-
sants. Ph.D. thesis, Paris Sud Centre d’Orsay University, France, 1997
9. M.S. Branicky, Stability of switched and hybrid systems, in Proc. 33rd IEEE Conf. on Decision
and Control, USA (1994), pp. 3498–3503
10. M.S. Branicky, Studies in hybrid systems: modeling, analysis, and control. Ph.D. thesis, Department of Electrical and Computer Engineering, Massachusetts Institute of Technology, 1995
11. M.S. Branicky, Stability of hybrid systems: State of the art, in Proc. 36th IEEE Conf. on
Decision and Control, USA (1997), pp. 120–125
12. M.S. Branicky, Multiple Lyapunov functions and other analysis tools for switched and hybrid
systems. IEEE Trans. Autom. Control 43(4), 475–482 (1998)
13. R.W. Brockett, Asymptotic Stability and Feedback Stabilization (Birkhäuser, Basel, 1983)
14. K. Busawon, Lecture notes in control systems engineering. Technical report, Northumbria
University, Newcastle, United Kingdom, 2008
15. S. Canudas, G. Bastin, Theory of Robot Control (Springer, Berlin, 1996)
16. G. Chesi, Y.S. Hung, Robust commutation time for switching polynomial systems, in Proc.
46th IEEE Conf. on Decision and Control, USA (2007)
17. B. D’Andréa Novel, M.C. De-Lara, Commande Linéaire des Systèmes Dynamiques (École des
Mines Press, 2000)
18. H. Dang Vu, C. Delcarte, Bifurcation et Chaos (Ellipses edition, 2000)
19. R. DeCarlo, M. Branicky, S. Petterssond, B. Lennartson, Perspectives and results on the sta-
bility and stabilizability of hybrid systems. Proc. IEEE 88, 1069–1082 (2000)
20. W.E. Dixon, A. Behal, D. Dawson, S. Nagarkatti, Nonlinear Control of Engineering Systems
(Birkhäuser, Basel, 2003)
21. S.V. Emel’yanov, Variable Structure Control Systems (Nauka, Moscow, 1967)
22. A.F. Filippov, Differential Equations with Discontinuous Right-Hand Sides. Mathematics and Its Applications (Kluwer Academic, Dordrecht, 1983)
23. T. Floquet, Contributions à la commande par modes glissants d’ordre supérieur. Ph.D. thesis,
University of Sciences and Technology of Lille, France, 2000
24. G.F. Franklin, J.D. Powell, A. Emami-Naeini, Feedback Control of Dynamic Systems
(Prentice-Hall, Englewood Cliffs, 2002)
25. L. Fridman, A. Levant, Sliding Modes of Higher Order as a Natural Phenomenon in Control
Theory (Springer, Berlin, 1996)
26. L. Gurvits, R. Shorten, O. Mason, On the stability of switched positive linear systems. IEEE
Trans. Autom. Control 52, 1099–1103 (2007)
27. J. Hauser, Nonlinear control via uniform system approximation. Syst. Control Lett. 17, 145–
154 (1991)
28. J. Hauser, S. Sastry, P. Kokotović, Nonlinear control via approximate input-output lineariza-
tion. IEEE Trans. Autom. Control 37(3), 392–398 (1992)
29. B. Heck, Sliding mode control for singularly perturbed systems. Int. J. Control 53(4) (1991)
30. R. Hermann, A.J. Krener, Nonlinear controllability and observability. IEEE Trans. Autom.
Control 22, 728–740 (1977)
31. L.R. Hunt, R. Su, G. Meyer, Global transformations of nonlinear systems. IEEE Trans. Autom. Control 28(1), 24–31 (1983)
32. A. Isidori, Nonlinear Control Systems (Springer, Berlin, 1995)
33. B. Jakubczyk, W. Respondek, On linearisation of control systems. Rep. Sci. Pol. Acad. 28(9),
517–522 (1980)
34. J. Jouffroy, Stabilité et systèmes non linéaire: réflexion sur l’analyse de contraction. Ph.D.
thesis, Savoie University, France, 2002
35. T. Kailath, Linear Systems (Prentice-Hall, Englewood Cliffs, 1981)
36. W. Kang, Approximate linearisation of nonlinear control systems, in Proc. IEEE Conf. on
Decision and Control (1993), pp. 2766–2771
37. H.K. Khalil, Nonlinear Systems (Prentice-Hall, Englewood Cliffs, 2002)
38. P. Kokotović, The joy of feedback: Nonlinear and adaptive. IEEE Control Syst. Mag. 12(3),
7–17 (1992)
39. A.J. Krener, Approximate linearisation by state feedback and coordinate change. Syst. Control
Lett. 5, 181–185 (1984)
40. A.J. Krener, M. Hubbard, S. Karaban, B. Maag, Poincaré’s Linearisation Method Applied to
the Design of Nonlinear Compensators (Springer, Berlin, 1991)
41. M. Krstić, P. Kokotović, Nonlinear and Adaptive Control Design (Wiley, New York, 1995)
42. H. Kwakernaak, R. Sivan, Linear Optimal Control Systems (Library of Congress, 1972)
43. H.G. Kwatny, G.L. Blankenship, Nonlinear Control and Analytical Mechanics: A Computa-
tional Approach (Birkhäuser, Basel, 2000)
44. F. Lamnabhi Lagarrique, P. Rouchon, Commandes Non Linéaires (Lavoisier, Paris, 2003)
45. M. Latfaoui, Linéarisation approximative par feedback. Master at Aboubekr Belkaid University, Tlemcen, Algeria, 2002
46. J. Lévine, Analyse et commande des systèmes non linéaires. Technical report, École des Mines
de Paris, France, 2004
47. D. Liberzon, A.S. Morse, Basic problems in stability and design of switched systems. IEEE
Control Syst. Mag. 19(5), 59–70 (1999)
48. C. Lobry, Contrôlabilité des Systèmes Non Linéaires (CNRS, Paris, 1981)
49. D.G. Luenberger, Introduction to Dynamic Systems (Wiley, New York, 1979)
50. N. Manamanni, Aperçu rapide sur les systèmes hybrides continus. Technical report, University
of Reims, France, 2007
51. M. Margaliot, M.S. Branicky, Nice reachability for planar bilinear control systems with appli-
cations to planar linear switched systems. IEEE Trans. Autom. Control (May 2008)
52. O. Mason, R. Shorten, A conjecture on the existence of common quadratic Lyapunov functions
for positive linear systems, in Proc. American Control Conf., USA (2003), pp. 4469–4470
53. S. Mastellone, D.M. Stipanović, M.W. Spong, Stability and convergence for systems with
switching equilibria, in Proc. 46th IEEE Conf. on Decision and Control, USA (2007),
pp. 4013–4020
54. A.N. Michel, L. Hou, Stability results involving time-averaged Lyapunov function derivatives.
J. Nonlinear Anal., Hybrid Syst. 3(1), 51–64 (2009)
55. H. Nijmeijer, A. van der Schaft, Nonlinear Dynamical Control Systems (Springer, Berlin,
1990)
56. K. Ogata, Modern Control Engineering (Prentice-Hall, Englewood Cliffs, 1997)
57. R. Ortega, A. Loria, P. Nicklasson, H. Sira-Ramirez, Passivity-Based Control of Euler La-
grange Systems (Springer, Berlin, 1998)
58. P. Peleties, Modeling and design of interacting continuous-time/discrete event systems. Ph.D.
thesis, Electrical Engineering, Purdue Univ., West Lafayette, IN, 1992
59. P. Peleties, R.A. DeCarlo, Asymptotic stability of m-switched systems using Lyapunov-like
functions, in Proc. American Control Conf., USA (1991), pp. 1679–1684
60. W. Perruquetti, J.P. Barbot, Sliding Mode Control in Engineering (Taylor and Francis, London,
2002)
61. V.M. Popov, Absolute stability of nonlinear control systems of automatic control. J. Autom.
Remote Control 22, 857–875 (1962)
62. J.P. Richard, Mathématiques pour les Systèmes Dynamiques (Lavoisier, Paris, 2002)
63. H. Saadaoui, Contribution à la synthèse d’observateurs non linéaires pour des classes de sys-
tèmes dynamiques hybrides. Ph.D. thesis, Cergy Pontoise University, France, 2007
64. R.G. Sanfelice, A.R. Teel, R. Sepulchre, A hybrid systems approach to trajectory tracking con-
trol for juggling systems, in Proc. 46th IEEE Conf. on Decision and Control, USA (2007), pp.
5282–5287
65. T. Sari, C. Lobry, Contrôle Non Linéaire et Applications (Hermann, Paris, 2005)
66. S.S. Sastry, Nonlinear Systems: Analysis, Stability and Control (Springer, Berlin, 1999)
67. R. Sepulchre, M. Janković, P. Kokotović, Constructive Nonlinear Control (Springer, Berlin,
1997)
68. C. Shen, Q. Wei, F. Shumin, On exponential stability of switched systems with delay: multiple
Lyapunov functions approach, in Proc. Chinese Control Conf., China (2007), pp. 664–668
69. H. Sira Ramirez, Sliding regimes in general nonlinear systems: a relative degree approach. Int.
J. Control 50(4), 1487–1506 (1989)
70. J. Slotine, W. Li, Nonlinear Systems Analysis (Prentice-Hall, Englewood Cliffs, 1993)
71. E.D. Sontag, Y. Wang, On characterizations of the input-to-state stability property. Syst. Con-
trol Lett. 24, 351–359 (1995)
72. S.K. Spurgeon, C. Edwards, Sliding Mode Control: Theory and Applications (Taylor and Fran-
cis, London, 1983)
73. T. Sugie, K. Fujimoto, Control of inverted pendulum systems based on approximate linearisation, in Proc. IEEE Conf. on Decision and Control (1994), pp. 1647–1649
74. Z. Sun, S.S. Ge, Analysis and synthesis of switched linear control systems. Automatica 41,
181–195 (2005)
75. Z. Sun, S.S. Ge, Switched Linear Systems (Springer, Berlin, 2005)
76. H.J. Sussmann, Lie Brackets, Real Analyticity and Geometric Control (Birkhäuser, Basel,
1983)
77. H.J. Sussmann, A general theorem on local controllability. SIAM J. Control Optim. 25(1),
158–194 (1987)
78. H.J. Sussmann, Limitations on stabilizability of globally minimum phase systems. IEEE
Trans. Autom. Control 35(1), 117–119 (1990)
79. H.J. Sussmann, V. Jurdjevic, Controllability of nonlinear systems. J. Differ. Equ. 12(2), 95–116 (1972)
80. H.J. Sussmann, P. Kokotović, The peaking phenomenon and the global stabilization of nonlin-
ear systems. IEEE Trans. Autom. Control 36(4), 424–440 (1991)
81. C.J. Tomlin, J. Lygeros, S. Sastry, Introduction to Dynamic Systems (Springer, Berlin, 2003)
82. Ya.Z. Tsypkin, Theory of Control of Relay Systems (Gostekhizdat, Moscow, 1955)
83. V.I. Utkin, Variable structure systems with sliding mode. IEEE Trans. Autom. Control 22(2),
212–222 (1977)
84. V.I. Utkin, Sliding Modes in Control Optimization (Springer, Berlin, 1992)
85. A. van der Schaft, H. Schumacher, An Introduction to Hybrid Dynamical Systems (Springer,
Berlin, 2002)
86. A. van der Schaft, L2 Gain and Passivity Techniques in Nonlinear Control (Springer, Berlin,
2000)