
EL2450 - Hybrid and Embedded Control Systems

Exercises

[Cover figure: block diagram of a computer-controlled system (Plant — Sample/A-D — Computer — D-A/Hold) with two software tasks running on the computer.]
Automatic Control Lab, School of Electrical Engineering


Royal Institute of Technology (KTH), Stockholm, Sweden
CONTENTS

Exercises 6

I Time-triggered control 6
1 Review exercises: aliasing, z-transform, matrix exponential . . . . . . . . . . . . . . . . . . . 7
2 Models of sampled systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
3 Analysis of sampled systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
4 Computer realization of controllers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
5 Implementation aspects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19

II Event-triggered control 22
6 Event-based control and Real-time systems . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
7 Real-time scheduling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
8 Models of computation I: Discrete-event systems . . . . . . . . . . . . . . . . . . . . . . . . 29
9 Models of computation II: Transition systems . . . . . . . . . . . . . . . . . . . . . . . . . . 31

III Hybrid control 35


10 Modeling of hybrid systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
11 Stability of hybrid systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
12 Simulation and bisimulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
13 Reachability, timed automata and rectangular automata . . . . . . . . . . . . . . . . . . . . . 44

Solutions 51

I Time-triggered control 51
Review exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
Models of sampled systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
Analysis of sampled systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
Computer realization of controllers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
Implementation aspects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88

II Event-triggered control 93
Event-based control and Real-time systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94

Real-time scheduling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
Models of computation I: Discrete-event systems . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
Models of computation II: Transition systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117

III Hybrid control 120


Modeling of hybrid systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
Stability of hybrid systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124
Simulation and bisimulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
Reachability, timed automata and rectangular automata . . . . . . . . . . . . . . . . . . . . . . . . 138

Bibliography 143

Exercises

Part I

Time-triggered control

1 Review exercises: aliasing, z-transform, matrix exponential
E XERCISE 1.1 (Ex. 7.3 in [14])
Consider a sampling and reconstruction system as in Figure 1.1. The input signal is x(t) = cos(ω0 t). The
Fourier transform of the signal is

X(jω) = π [δ(ω − ω0 ) + δ(ω + ω0 )]

and the reconstruction (low-pass) filter has the transfer function


F (jω) = h for −ωs /2 < ω < ωs /2, and F (jω) = 0 otherwise,

where ωs = 2π/h is the sampling frequency. Find the reconstructed output signal xr (t) for the following input
frequencies
(a) ω0 = ωs /6

(b) ω0 = 2ωs /6

(c) ω0 = 4ωs /6

(d) ω0 = 5ωs /6


Figure 1.1: Sampling and reconstruction of a band-limited signal.

E XERCISE 1.2
Let the matrix A be

A = [0 1; −1 0].

Compute the matrix exponential e^A.

E XERCISE 1.3
Compute the z-transform of
x(kh) = e^{−kh/T},   T > 0.

E XERCISE 1.4
Compute the z-transform of
x(kh) = sin(wkh)

E XERCISE 1.5
Consider the system described by the following difference equation

y(k + 2) − 1.5y(k + 1) + 0.5y(k) = u(k + 1)

with initial conditions y(0) = 0.5 and y(1) = 1.25, determine the output when the input u(k) is a unit step.
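
A quick numerical cross-check for this kind of problem is to iterate the recursion directly. The sketch below (Python, not part of the original compendium; the number of steps is arbitrary) simulates the difference equation for the given initial conditions and a unit step input:

# Simulate y(k+2) - 1.5 y(k+1) + 0.5 y(k) = u(k+1) for a unit step input.
def simulate(n_steps=15):
    y = [0.5, 1.25]                    # initial conditions y(0), y(1)
    u = lambda k: 1.0 if k >= 0 else 0.0
    for k in range(n_steps - 2):
        # y(k+2) = 1.5 y(k+1) - 0.5 y(k) + u(k+1)
        y.append(1.5 * y[k + 1] - 0.5 * y[k] + u(k + 1))
    return y

if __name__ == "__main__":
    print(simulate())   # the output keeps growing: the system has a pole at z = 1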

2 Models of sampled systems


E XERCISE 2.1 (Ex. 2.1 in [2])
Consider the scalar system
dx
= −ax + bu
dt
y = cx.

Let the input be constant over periods of length h. Sample the system and discuss how the poles of the discrete-
time system vary with the sampling frequency.

E XERCISE 2.2
Consider the following continuous-time transfer function
G(s) = 1/((s + 1)(s + 2)).
The system is sampled with sampling period h = 1.

(a) Derive a state-space representation of the sampled system.

(b) Find the pulse-transfer function corresponding to the system in (a).

E XERCISE 2.3 (Ex. 2.2 in [2])
Derive the discrete-time system corresponding to the following continuous-time systems when a zero order-
hold circuit is used
(a)
  
0 1 0
ẋ = x+ u
−1 0 1

y= 1 0 x

(b)
d2 y dy du
2
+ 3 + 2y = + 3u
dt dt dt
(c)
d3 y
=u
dt3

E XERCISE 2.4 (Ex. 2.3 in [2])


The following difference equations are assumed to describe continuous-time systems sampled using a zero-
order-hold circuit and the sampling period h. Determine, if possible, the corresponding continuous-time sys-
tems.
(a)
y(kh) − 0.5y(kh − h) = 6u(kh − h)
(b)
   
−0.5 1 0.5
x(kh + h) = x(kh) + u(kh)
0 −0.3 0.7

y(kh) = 1 1 x(kh)

(c)
y(kh) + 0.5y(kh − h) = 6u(kh − h)

E XERCISE 2.5 (Ex. 2.11 in [2])


The transfer function of a motor can be written as
G(s) = 1/(s(s + 1)).
Determine:
(a) the sampled system
(b) the pulse-transfer function
(c) the pulse response
(d) a difference equation relating input and output
(e) the variation of the poles and zeros of the pulse-transfer function with the sampling period

E XERCISE 2.6 (Ex. 2.12 in [2])
A continuous-time system with transfer function
G(s) = (1/s) e^{−sτ}
is sampled with sampling period h = 1, where τ = 0.5.

(a) Determine a state-space representation of the sampled system. What is the order of the sampled-system?

(b) Determine the pulse-transfer function and the pulse response of the sampled system

(c) Determine the poles and zeros of the sampled system.

E XERCISE 2.7 (Ex. 2.13 in [2])


Solve Problem 2.6 with
G(s) = e^{−sτ}/(s + 1)
and h = 1 and τ = 1.5.

E XERCISE 2.8 (Ex. 2.15 in [2])


Determine the polynomials A(q), B(q), A∗ (q −1 ) and B ∗ (q −1 ) so that the systems

A(q)y(k) = B(q)u(k)

and
A∗ (q −1 )y(k) = B ∗ (q −1 )u(k − d)
represent the system
y(k) − 0.5y(k − 1) = u(k − 9) + 0.2u(k − 10).
What is d? What is the order of the system?

E XERCISE 2.9 (Ex. 2.17 in [2])


Use the z-transform to determine the output sequence of the difference equation

y(k + 2) − 1.5y(k + 1) + 0.5y(k) = u(k + 1)

when u(k) is a step at k = 0 and when y(0) = 0.5 and y(−1) = 1.

E XERCISE 2.10
Consider the following continuous time controller
U (s) = −((s0 s + s1)/(s + r1)) Y (s) + ((t0 s + t1)/(s + r1)) R(s)
where s0 , s1 , t0 , t1 and r1 are parameters that are chosen to obtain the desired closed-loop performance. Dis-
cretize the controller using exact sampling by means of sampled control theory. Assume that the sampling
interval is h, and write the sampled controller on the form u(kh) = −Hy (q)y(kh) + Hr (q)r(kh).

E XERCISE 2.11 (Ex. 2.21 in [2])


If β < α, then (s + β)/(s + α) is called a lead filter (i.e. it gives a phase advance). Consider the discrete-time system (z + b)/(z + a).
(a) Determine when it is a lead filter

(b) Simulate the step response for different pole and zero locations

3 Analysis of sampled systems


E XERCISE 3.1 (Ex. 3.2 in [2])
Consider the system in Figure 3.1 and let
H(z) = K/(z(z − 0.2)(z − 0.4)),   K > 0

Determine the values of K for which the closed-loop system is stable.


Figure 3.1: Closed-loop system for Problem 3.1.
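A numerical way to cross-check the answer is to sweep K and test whether all roots of the characteristic equation z(z − 0.2)(z − 0.4) + K = 0 lie inside the unit circle. A sketch with numpy (not from the original text; the grid of K values is arbitrary):

import numpy as np

def closed_loop_stable(K):
    # Characteristic equation: z(z - 0.2)(z - 0.4) + K = z^3 - 0.6 z^2 + 0.08 z + K = 0
    poles = np.roots([1.0, -0.6, 0.08, K])
    return np.all(np.abs(poles) < 1.0)

if __name__ == "__main__":
    Ks = np.linspace(0.01, 2.0, 400)
    stable = [K for K in Ks if closed_loop_stable(K)]
    print("stable up to about K =", max(stable) if stable else None)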

E XERCISE 3.2 (Ex. 3.3 in [2])
Consider the system in Figure 3.2. Assume the sampling is periodic with period h, and that the D-A converter
holds the control signal constant over a sampling interval. The control algorithm is assumed to be

u(kh) = K(r(kh − τ ) − y(kh − τ ))

where K > 0 and τ is the computation time. The transfer function of the process is
G(s) = 1/s.
(a) How large are the values of the regulator gain, K, for which the closed-loop system is stable when τ = 0
and τ = h?

(b) Compare this system with the corresponding continuous-time systems, that is, when there is a continuous-
time proportional controller and a time delay in the process.

[Figure: feedback loop with controller C(q), D-A converter, process G(s) and A-D converter; the sampled process seen by the controller is denoted H(q).]

Figure 3.2: Closed-loop system for Problem 3.2.

E XERCISE 3.3 (Ex. 3.6 in [2])


Is the following system (a) observable, (b) reachable?

x(k + 1) = [0.5 −0.5; 0 0.25] x(k) + [6; 4] u(k)
y(k) = [2 −4] x(k)

E XERCISE 3.4 (Ex. 3.7 in [2])


Is the following system reachable?

x(k + 1) = [1 0; 0 0.5] x(k) + [1 1; 1 0] u(k).

Assume that a scalar input v(k) such that

u(k) = [1; −1] v(k)

is introduced. Is the system reachable from v(k)?

E XERCISE 3.5 (Ex. 3.11 in [2])
Determine the stability and the stationary value of the output for the system described by Figure 3.2 with
H(q) = 1/(q(q − 0.5))

where r is a step function and C(q) = K (proportional controller), K > 0.

E XERCISE 3.6 (Ex. 3.12 in [2])


Consider the Problem 3.5. Determine the steady-state error between the reference signal r and the output y,
when r is a unit ramp, that is r(k) = k. Assume C(q) to be a proportional controller.

E XERCISE 3.7 (Ex. 3.18 in [2])


Consider a continuous-time (CT) system
ẋ(t) = Ax(t) + Bu(t)
y(t) = Cx(t).
The zero-order hold sampling of CT gives the discrete-time (DT) system
x(kh + h) = Φx(kh) + Γu(kh)
y(kh) = Cx(kh).
Consider the following statements:
(a) CT stable ⇒ DT stable
(b) CT unstable ⇒ DT unstable
(c) CT controllable ⇒ DT controllable
(d) CT observable ⇒ DT observable.
Which statements are true and which are false (explain why) in the following cases:
(i) For all sampling intervals h > 0
(ii) For all h > 0 except for isolated values
(iii) Neither (i) nor (ii).

E XERCISE 3.8 (Ex. 3.20 in [2])


Given the system
(q 2 + 0.4q)y(k) = u(k),
(a) for which values of K in the proportional controller

u(k) = K(r(k) − y(k))
is the closed-loop system stable?
(b) Determine the stationary error r − y when r is a step and K = 0.5 in the controller (a).

E XERCISE 3.9 (Ex. 4.1 in [2])
A general second-order discrete-time system can be written as

x(k + 1) = [a11 a12; a21 a22] x(k) + [b1; b2] u(k)
y(k) = [c1 c2] x(k).

Determine a state-feedback controller in the form

u(k) = −Lx(k)

such that the characteristic equation of the closed-loop system is

z 2 + p1 z + p2 = 0.

Use the previous result to compute the deadbeat controller for the double integrator.

E XERCISE 3.10 (Ex. 4.2 in [2])


Given the system

x(k + 1) = [1 0.1; 0.5 0.1] x(k) + [1; 0] u(k)
y(k) = [1 1] x(k).

Determine a linear state-feedback controller

u(k) = −Lx(k)

such that the poles of the closed-loop system are placed in 0.1 and 0.25.

E XERCISE 3.11 (Ex. 4.5 in [2])


The system

x(k + 1) = [0.78 0; 0.22 1] x(k) + [0.22; 0.03] u(k)
y(k) = [0 1] x(k)

represents the normalized motor for the sampling interval of h = 0.25. Determine observers for the state based
on the output by using each of the following:

(a) Direct calculation.

(b) A full-state observer.

(c) The reduced-order observer.

E XERCISE 3.12 (Ex. 4.8 in [2])
Given the discrete-time system

x(k + 1) = [0.5 1; 0.5 0.7] x(k) + [0.2; 0.1] u(k) + [1; 0] v(k)
y(k) = [1 0] x(k),

where v is a constant disturbance. Determine a controller such that the influence of v can be eliminated in steady
state in each of the following cases:
(a) The state and v can be measured.

(b) The state can be measured.

(c) Only the output can be measured.

E XERCISE 3.13 (Ex. 4.6 in [2])


Figure 3.13 shows a system with two tanks, where the input signal is the flow to the first tank and the output is
the level of water in the second tank. The continuous-time model of the system is

ẋ = [−0.0197 0; 0.0178 −0.0129] x + [0.0263; 0] u
y = [0 1] x.


Figure 3.13: Closed-loop system for Problem 3.13.

(a) Sample the system with h = 12.

(b) Verify that the pulse-transfer operator for the system is


H(q) = (0.030q + 0.026)/(q² − 1.65q + 0.68)

(c) Determine a full-state observer. Choose the gain such that the observer is twice as fast as the open-loop
system.

E XERCISE 3.14
Consider the following scalar linear system

ẋ(t) = −5x(t) + u(t)


y(t) = x(t).

(a) Sample the system with sampling period h = 1,

(b) Show, using a Lyapunov result, that the sampled system is stable when the input u(kh) = 0 for k ≥ 0.

E XERCISE 3.15
Consider the following linear system

ẋ(t) = [−1 0; 0 −2] x(t) + [1; 1] u(t)
y(t) = x(t).

(a) Sample the system with sampling period h = 1

(b) Design a controller that places the poles at 0.1 and 0.2.

(c) Show, using a Lyapunov result, that the closed-loop sampled system is stable.

4 Computer realization of controllers


E XERCISE 4.1
Consider the following pulse-transfer function

H(z) = (z − 1)/((z − 0.5)(z − 2))
(a) Design a digital PI controller
Hc (z) = ((K + Ki)z − K)/(z − 1)

that places the poles of the closed-loop system at the origin.

(b) Find a state-space representation of the digital controller in (a).

E XERCISE 4.2 (Ex. 8.2 in [2])


Use different methods to make an approximation of the transfer function
G(s) = a/(s + a)

(a) Euler’s method

(b) Tustin’s approximation

(c) Tustin’s approximation with pre-warping using ω1 = a as warping frequency

E XERCISE 4.3 (Ex. 8.3 in [2])


The lead network with transfer function

Gℓ(s) = 4(s + 1)/(s + 2)

gives a phase advance of about 20° at ωc = 1.6 rad/s. Approximate the network for h = 0.25 using

(a) Euler’s method

(b) Backward differences

(c) Tustin’s approximation

(d) Tustin’s approximation with pre-warping using ω1 = ωc as warping frequency

E XERCISE 4.4 (Ex. 8.7 in [2])


Consider the tank system in Problem 3.13. Assume the following specifications:

1. The steady-state error after a step in the reference value is zero

2. The crossover frequency of the compensated system is 0.025 rad/s

3. The phase margin is about 50◦ .

(a) Design a PI-controller such that the specifications are fulfilled.

(b) Determine the poles and the zeros of the closed-loop system. What is the damping corresponding to the
complex poles?

(c) Choose a suitable sampling interval and approximate the continuous-time controller using Tustin’s method
with pre-warping. Use the crossover frequency as warping frequency.

E XERCISE 4.5 (Ex. 8.4 in [2])


The choice of sampling period depends on many factors. One way to determine the sampling frequency is to use
continuous-time arguments. Approximate the sampled system as the hold circuit followed by the continuous-
time system. Assuming that the phase margin can be decreased by 5◦ to 15◦ , verify that a rule of thumb in
selecting the sampling frequency is
hωc ≈ 0.15 to 0.5
where ωc is the crossover frequency of the continuous-time system.

E XERCISE 4.6 (Ex. 8.12 in [2])
Consider the continuous-time double integrator described by

ẋ = [0 1; 0 0] x + [0; 1] u
y = [1 0] x.

Assume that a time-continuous design has been made giving the controller

u(t) = 2r(t) − [1 2] x̂(t)
dx̂(t)/dt = Ax̂(t) + Bu(t) + K(y(t) − C x̂(t))

with K^T = (1, 1).

(a) Assume that the controller should be implemented using a computer. Modify the controller (not the
observer part) for the sampling interval h = 0.2 using the approximation for state models.

(b) Approximate the observer using a backward-difference approximation

E XERCISE 4.7
Consider the following continuous time controller
U (s) = −((s0 s + s1)/(s + r1)) Y (s) + ((t0 s + t1)/(s + r1)) R(s)
where s0 , s1 , t0 , t1 and r1 are parameters that are chosen to obtain the desired closed-loop performance.

(a) Discretize the controller using forward difference approximation. Assume that the sampling interval is
h, and write the sampled controller on the form u(kh) = −Hy (q)y(kh) + Hr (q)r(kh).

(b) Assume the following numerical values of the coefficients: r1 = 10, s0 = 1, s1 = 2, t0 = 0.5 and
t1 = 10. Compare the discretizations obtained in part (a) for the sampling intervals h = 0.01, h = 0.1
and h = 1. Which of those sampling intervals should be used for the forward difference approximation?

E XERCISE 4.8
Consider the following continuous-time controller in state-space form

ẋ = Ax + Be
u = Cx + De

(a) Derive the backward-difference approximation in state-space form of the controller, i.e. derive Φc , Γc ,
H and J for a system

w(k + 1) = Φc w(k) + Γc e(k)


u(k) = Hw(k) + Je(k)

(b) Prove that the Tustin’s approximation of the controller is given by
  
Ac h Ac h −1
Φc = I + I−
2 2
 −1
Ac h Bc h
Γc = I −
2 2
 −1
Ac h
H = Cc I −
2
 
Ac h −1 Bc h
J = D c + Cc I − .
2 2
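
The formulas in (b) can be sanity-checked against a numerical bilinear (Tustin) discretization, assuming scipy's 'bilinear' method implements the same transformation. The controller matrices below are made up for the test:

import numpy as np
import scipy.signal as sig

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])
h = 0.1

I = np.eye(2)
M = np.linalg.inv(I - A * h / 2)
Phi = (I + A * h / 2) @ M
Gam = M @ B * h
H = C @ M
J = D + C @ M @ B * h / 2

Ad, Bd, Cd, Dd, _ = sig.cont2discrete((A, B, C, D), h, method="bilinear")
print(np.allclose(Phi, Ad), np.allclose(Gam, Bd), np.allclose(H, Cd), np.allclose(J, Dd))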

5 Implementation aspects
E XERCISE 5.1
Consider the discrete-time controller characterized by the pulse-transfer function
H(z) = 1/((z − 1)(z − 1/2)(z² + z/2 + 1/4)).

Implement the controller in parallel form.

E XERCISE 5.2
(a) Given the system in Figure 5.2, find the controller Cs (s) such that the closed loop transfer function from
r to y becomes
Hcℓ(s) = (C(s)P (s))/(1 + C(s)P (s)) · e^{−sτ}

(b) Let

P (s) = 1/(s + 1),
Hcℓ(s) = 8/(s² + 4s + 8) · e^{−sτ};

find the expression for the Smith predictor Cs (s).


Figure 5.2: System of Problem 5.2.

E XERCISE 5.3
A process with transfer function
P (z) = z/(z − 0.5)
is controlled by the PI-controller
C(z) = Kp + Ki z/(z − 1)
where Kp = 0.2 and Ki = 0.1. The control is performed over a wireless network, as shown in Figure 5.3. Due
to retransmission of dropped packets, the network induces time-varying delays. How large can the maximum
delay be, so that the closed loop system is stable?

[Figure: the plant P (z) (zero-order hold, G(s) and sampler) is controlled by C(z), with the loop closed over a wireless network.]

Figure 5.3: Closed loop system for Problem 5.3.

E XERCISE 5.4 (Inspired by Ex. 9.15 in [2])


Two different algorithms for a PI-controller are listed. Use the linear model for roundoff to analyze the sensi-
tivity of the algorithms to roundoff in multiplications and divisions. Assume that the multiplications happen in
the order as they appear in the formula and that they are executed before the division.

Algorithm 1:
repeat
e:=r-y
u:=k*(e+i)
i:=i+e*h/ti
forever

Algorithm 2:
repeat
e:=r-y
u:=i+k*e
i:=k*i+k*h*e/ti
forever
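
A plain-Python transcription of the two listings (the gains k, h, ti and the signals are placeholder values; only the order of operations matters for the round-off analysis):

def pi_algorithm_1(r, y, i, k=1.0, h=0.1, ti=1.0):
    # Algorithm 1 as listed above
    e = r - y
    u = k * (e + i)
    i = i + e * h / ti
    return u, i

def pi_algorithm_2(r, y, i, k=1.0, h=0.1, ti=1.0):
    # Algorithm 2 as listed above
    e = r - y
    u = i + k * e
    i = k * i + k * h * e / ti
    return u, i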

E XERCISE 5.5
Consider a first-order system with the discrete transfer function
H(z) = 1/(1 − a z^{−1}),   a = 1/8.
Assume the controller is implemented using fixed point arithmetic with 8 bits word length and h = 1 second.
Determine the system’s unit step response for sufficient number of samples to reach steady-state. Assume that
the data representation consists of

• 1 bit for sign

• 2 bits for the integer part

• 5 bits for the fraction part

and consider the cases of truncation and round-off.
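
One way to explore the question numerically is to quantize every arithmetic result to the stated fixed-point format and iterate y(k) = a y(k−1) + u(k). A sketch assuming a two's-complement format with 5 fractional bits (not from the original text):

import math

FRAC_BITS = 5
SCALE = 2 ** FRAC_BITS               # 5 fractional bits -> resolution 1/32

def quantize(x, mode="trunc"):
    # Truncate or round to the nearest representable value, then saturate to
    # the 1-sign + 2-integer-bit range.
    v = math.floor(x * SCALE) if mode == "trunc" else round(x * SCALE)
    v = max(-4 * SCALE, min(4 * SCALE - 1, v))
    return v / SCALE

def step_response(mode, a=1.0 / 8.0, n=20):
    y, out = 0.0, []
    for _ in range(n):
        y = quantize(a * y + 1.0, mode)      # unit step input u(k) = 1
        out.append(y)
    return out

if __name__ == "__main__":
    print("truncation:", step_response("trunc"))
    print("round-off: ", step_response("round"))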

Part II

Event-triggered control

6 Event-based control and Real-time systems
E XERCISE 6.1
Consider the following first-order system:
ẋ = x + u
where x, u ∈ R are the state and control input, respectively. Moreover, x = 5 when t = 0. The feedback
control law is given by:
u = −2 x

(a) Show that under this control law, the closed-loop system asymptotically converges to the origin, using a
Lyapunov function.

(b) Suppose x(t) is sampled periodically and the controller is followed by a Zero Order Hold, namely

u(t) = −2 x(kh), t ∈ [kh, kh + h),

where k ∈ N, h > 0 is the sampling period. What is the maximal value hmax of h before the closed-loop
system becomes unstable?

(c) Now x(t) is sampled aperiodically at the sequence {tk }, k ∈ N and the controller is followed by a Zero
Order Hold, namely

u(t) = −2 x(tk ), t ∈ [tk , tk+1 ).

How should the sequence {tk }, k ∈ N be designed such that the closed-loop system still converges to the origin? What is the maximal sampling interval maxk (tk+1 − tk ) in this case?

(d) Describe how the event-triggered controller could be implemented on a digital platform and compare it with the periodic controller in (b).
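
For part (b), a candidate sampling period can be checked numerically by computing the closed-loop transition factor of the sampled system. A sketch (the grid of h values is arbitrary):

import numpy as np

def closed_loop_pole(h):
    # ZOH sampling of xdot = x + u with u = -2 x(kh):
    # x(kh+h) = (Phi - 2*Gamma) x(kh), Phi = e^h, Gamma = int_0^h e^s ds.
    phi = np.exp(h)
    gamma = np.exp(h) - 1.0
    return phi - 2.0 * gamma

if __name__ == "__main__":
    for h in np.arange(0.1, 1.5, 0.1):
        p = closed_loop_pole(h)
        print(f"h = {h:.1f}  pole = {p:+.3f}  stable = {abs(p) < 1}")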

E XERCISE 6.2
Consider the following unstable second-order system

ẋ(t) = A x(t) + B u(t),   A = [2 1; 1 −2],   B = [1; 1].

(a) Show that a linear feedback control law u(t) = Kx(t) with

K = [−3 −1],

renders the closed-loop system asymptotically stable.

(b) In order to implement the controller on a digital platform, the state of the system is sampled aperiodically
at a sequence of time instants {tk }, k ∈ N, and the control signal is now given by

u(t) = Kx(tk ), t ∈ [tk , tk+1 ).

Find the closed loop equation of the system in terms of the state x(t) and the state error e(t), where

e(t) = x(tk ) − x(t), t ∈ [tk , tk+1 ).

(c) Using the quadratic Lyapunov function

V (x) = (1/2) x^T [3 0; 0 1] x,

design the sequence {tk }, k ∈ N, such that the origin is still a stable equilibrium point of the closed-loop
system.

E XERCISE 6.3
In an embedded control system the control algorithm is implemented as a task in a CPU. The control task Jc
can compute the new control action only after the acquisition task Ja has acquired new sensor measurements.
The two tasks are independent and they share the same CPU. Suppose the sampling period is h = 0.4 seconds
and the tasks have the following specifications

Ci
Jc 0.1
Ja 0.2
We assume that the period Ti and the deadline Di are the same for the two tasks, and the release time is 0 for
both tasks.

(a) Is it possible to schedule the two tasks Jc and Ja ? Determine the schedule length and draw the schedule.

(b) Suppose that a third task is running in the CPU. The specifications for the task are

Ci Ti = D i ri
Jx 0.2 0.8 0.3

and we assume that the task Jx has higher priority than the tasks Jc and Ja . We also assume the CPU can
handle preemption. Are the three tasks schedulable? Draw the schedule and determine the worst-case response
time for the control task Jc .

E XERCISE 6.4
A digital PID controller is used to control the plant, which sampled with period h = 2 has the following transfer
function
P (z) = (1/100) · (z − 0.1)/(z − 0.5).

The control law is

C(z) = 15 (1 + z/(z − 1)).
Assume that the control task Jc is implemented on a computer and has Cc = 1 as the worst case computation
time. Assume that a higher priority interrupt occurs at time t = 2 which has a worst case computation time CI .
Determine the largest value of CI such that the closed loop system is stable.

E XERCISE 6.5

A robot has been designed with three different tasks JA , JB , JC , with increasing priority. The task JA is a low priority thread which implements the DC-motor controller; the task JB periodically sends a "ping" through the wireless network card so that it is possible to know if the system is running. Finally the task JC , with highest priority, is responsible for checking the status of the data bus between two I/O ports, as shown in Figure 6.5. The
control task is at low priority since the robot is moving very slowly in a cluttered environment. Since the data
bus is a shared resource there is a semaphore that regulates the access to the bus. The tasks have the following
characteristics
Ti Ci
JA 8 4
JB 5 2
JC 1 0.1
Assuming the kernel can handle preemption, analyze the following possible working condition:
• at time t = 0, the task JA is running and acquires the bus in order to send a new control input to the
DC-motors,

• at time t = 2 the task JC needs to access the bus while the control task JA is setting the new control signal,

• at the same time t, JB is ready to be executed to send the "ping" signal.

(a) Show graphically which tasks are running. What happens to the high priority task JC ? Compute the
response time of JC in this situation.

(b) Suggest a possible way to overcome the problem in (a).

[Figure: the CPU is connected to the DC-motors, to the network card used for the "ping", and to two I/O ports that share a data bus.]

Figure 6.5: System architecture for Problem 6.5.

E XERCISE 6.6 (Jackson’s algorithm, page 52 in [4])

All the tasks consist of a single job, have synchronous arrival times but have different computation times and
deadlines. They are assumed to be independent. Each task can be characterized by two parameters, deadline di
and computation time Ci
J = {Ji |Ji = Ji (Ci , di ), i = 1, . . . , n}.
We consider here the Jackson’s algorithm to schedule a set J of n aperiodic tasks minimizing a quantity called
maximum lateness and defined as 
Lmax := max fi − di .
i∈J
The algorithm, also called Earliest Due Date (EDD), can be expressed by the following rule
Theorem 1. Given a set of n independent tasks, any algorithm that executes the tasks in order of nondecreasing
deadlines is optimal with respect to minimizing the maximum lateness.

(a) Consider a set of 5 independent tasks simultaneously activated at time t = 0. The parameters are indi-
cated in the following table

J1 J2 J3 J4 J5
Ci 1 1 1 3 2
di 3 10 7 8 5

Determine the maximum lateness under the EDD scheduling algorithm.
(b) Prove the optimality of the algorithm.
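
A small script that applies the EDD rule to the table in (a): sort by deadline, execute the jobs back to back from t = 0, and record fi − di. The task data are copied from the table above:

tasks = {"J1": (1, 3), "J2": (1, 10), "J3": (1, 7), "J4": (3, 8), "J5": (2, 5)}  # (Ci, di)

def edd_max_lateness(tasks):
    t, lateness = 0, {}
    for name, (c, d) in sorted(tasks.items(), key=lambda kv: kv[1][1]):
        t += c                      # finishing time fi (non-preemptive, back to back)
        lateness[name] = t - d      # Li = fi - di
    return lateness, max(lateness.values())

if __name__ == "__main__":
    print(edd_max_lateness(tasks))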

7 Real-time scheduling
E XERCISE 7.1 (Ex. 4.3 in [4])
Verify the schedulability and construct the schedule according to the rate monotonic algorithm for the following
set of periodic tasks
Ci Ti
J1 1 4
J2 2 6
J3 3 10
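
A first schedulability test is the Liu-Layland utilization bound; the sketch below computes it for the task set above. Note that the bound is only sufficient, so failing it does not by itself mean the set is unschedulable:

tasks = [(1, 4), (2, 6), (3, 10)]            # (Ci, Ti)

U = sum(c / t for c, t in tasks)             # processor utilization
n = len(tasks)
bound = n * (2 ** (1 / n) - 1)               # Liu-Layland bound for rate monotonic

print(f"U = {U:.3f}, bound = {bound:.3f}, sufficient test passed: {U <= bound}")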

E XERCISE 7.2 (Ex. 4.4 in [4])


Verify the schedulability under EDF of the task set given in Exercise 7.1 and then construct the corresponding
schedule.

E XERCISE 7.3
Consider the following set of tasks
Ci Ti Di
J1 1 3 3
J2 2 4 4
J3 1 7 7

Are the tasks schedulable with rate monotonic algorithm? Are the tasks schedulable with earliest deadline first
algorithm?

E XERCISE 7.4
Consider the following set of tasks
Ci Ti Di
J1 1 4 4
J2 2 5 5
J3 3 10 10
Assume that task J1 is a control task. Every time that a measurement is acquired, task J1 is released. When
executing, it computes an updated control signal and outputs it.
(a) Which scheduling of RM or EDF is preferable if we want to minimize the delay between the acquisition
and control output?

(b) Suppose that J2 is also a control task and that we want its maximum delay between acquisition and
control output to be two time steps. Suggest a schedule which guarantees a delay of maximally two time
steps, and prove that all tasks will meet their deadlines.

E XERCISE 7.5
Consider the two tasks J1 and J2 with computation times, periods and deadlines as defined by the following
table:
Ci Ti Di
J1 1 3 3
J2 1 4 4

(a) Suppose the tasks are scheduled using the rate monotonic algorithm. Will J1 and J2 meet their deadlines
according to the schedulability condition on the utilization factor? What is the schedule length, i.e., the
shortest time interval that is necessary to consider in order to describe the whole time evolution of the
scheduler? Plot the time evolution of the scheduler when the release time for both tasks is at t = 0.

(b) If the two tasks implement a controller it is important to know what is the worst-case delay between the
time the controller is ready to sample and the time a new input u(kh) is ready to be released. Find the
worst-case response time for J1 and J2 . Compare with the result in (a).
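
The worst-case response times asked for in (b) can also be obtained with the standard fixed-point recurrence R = Ci + Σ_{j∈hp(i)} ⌈R/Tj⌉ Cj. A sketch for fixed-priority tasks, assuming priorities ordered by rate:

import math

def response_time(task_index, tasks):
    # tasks: list of (C, T) sorted by decreasing priority; deadline assumed equal to T
    C, T = tasks[task_index]
    higher = tasks[:task_index]
    R = C
    while True:
        R_next = C + sum(math.ceil(R / T_j) * C_j for C_j, T_j in higher)
        if R_next == R:
            return R
        if R_next > T:
            return None              # deadline missed, task not schedulable
        R = R_next

if __name__ == "__main__":
    tasks = [(1, 3), (1, 4)]         # J1, J2 from the table above (C, T)
    for i, _ in enumerate(tasks):
        print(f"J{i + 1}: R = {response_time(i, tasks)}")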

E XERCISE 7.6
Consider the set of periodic tasks given in the table below:

Ci Ti Oi
J1 1 3 1
J2 2 5 1
J3 1 6 0

where for task i, Ci denotes the worst-case execution time, Ti the period, and Oi the offset for the respective tasks. Assume that the deadlines coincide with the periods. The offset denotes the relative release time of the
first task instance for each task. Assume that all tasks are released at time 0 with their respective offset Oi .
(a) Determine the schedule length.

Determine the worst-case response time for task J2 for each of the following three scheduling policies:
(b) Rate-monotonic scheduling
(c) Deadline-monotonic scheduling
(d) Earliest-deadline-first scheduling

E XERCISE 7.7
A control task Jc is scheduled in a computer together with two other tasks J1 and J2 . Assume that the three
tasks are scheduled using a rate monotonic algorithm. Assume that the release times for all tasks are zero and
that the tasks have the following characteristics

Ci Ti Di
J1 1 4 4
J2 1 6 6
Jc 2 5 5
(a) Is the set of tasks schedulable with rate monotonic scheduling? Determine the worst-case response time
for the control task Jc .
(b) Suppose the control task implements a sampled version of the continuous-time controller with delay
ẋ(t) = Ax(t) + By(t − τ )
u(t) = Cx(t)
where we let τ be the worst-case response time Rc of the task Jc . Suppose that the sampling period of the
controller is h = 2 and Rc = 3. Derive a state-space representation for the sampled controller. Suggest
also an implementation of the controller by specifying a few lines of computer code.
(c) In order to improve performance the rate monotonic scheduling is substituted by a new scheduling al-
gorithm that give highest priority to the control task and intermediate and lowest to the task J1 and J2 ,
respectively. Are the tasks schedulable in this case?

E XERCISE 7.8
Compute the maximum processor utilization that can be assigned to a polling server to guarantee the following
periodic tasks will meet their deadlines
Ci Ti
J1 1 5
J2 2 8

E XERCISE 7.9
Together with the periodic tasks
Ci Ti
J1 1 4
J2 1 8
we want to schedule the following aperiodic tasks with a polling server having Ts = 5 and Cs = 2. The
aperiodic tasks are
ai Ci
a1 2 3
a2 7 2
a3 9 1
a4 29 4

E XERCISE 7.10
Consider the set of tasks in Problem 7.5, assuming that an aperiodic task could ask for CPU time. In order
to handle the aperiodic task we run a polling server Js with computation time Cs = 3 and period Ts = 6.
Assume that the aperiodic task has computation time Ca = 3 and asks for the CPU at time t = 3. Plot the time
evolution when a polling server is used together with the two tasks J1 and J2 scheduled as in Problem 7.5 part
(a). Describe the scheduling activity illustrated in the plots.

8 Models of computation I: Discrete-event systems


E XERCISE 8.1
Consider the problem of controlling a gate which is lowered when a train is approaching and it is raised when
the train has passed. We assume that the railway is unidirectional and that a train can be detected 1500m before
the gate and 1000m after the gate. The sensors give binary outputs i.e., they give a ’0’ when the train is not over
the sensor and a ’1’ when the train is over the sensor. The gate has a sensor which gives a binary information
and in particular gives ’0’ if the gate is (fully) closed and ’1’ if the gate is (fully) opened. Figure 8.1 shows
a schema of the system. The gate needs to be lowered as soon as a train is approaching, and raised when the
train has passed. Model the system as a discrete-event system. Assume that, for safety reasons, trains are far apart from each other, so that no train approaches before the previous train has left.

E XERCISE 8.2
A vending machine dispenses soda for $0.45. It accepts only dimes ($0.10) and quarters ($0.25). It does not
give change in return if your money is not correct. The soda is dispensed only if the exact amount of money
is inserted. Model the vending machine using a discrete-event system. Is it possible that the machine does not
dispense soda? Prove it formally.

[Figure: the track with the gate, the sensors S1, S2 and S3 (the train is detected 1500 m before and 1000 m after the gate), and the controller.]

Figure 8.1: Control of a gate. Problem 8.1.

E XERCISE 8.3
Consider the automaton describing some discrete-event system shown in Figure 8.3. Describe formally the DES. Compute the marked language Lm and the generated language L.

Figure 8.3: Automaton A of Problem 8.3.

E XERCISE 8.4 (Example 2.13 in [6])


Consider the automaton A of Figure 8.4. Compute the language marked by the automaton A, Lm (A) and the
language generated by the automaton, L(A).


Figure 8.4: Automaton A of Problem 8.4.

E XERCISE 8.5 (Example 3.8 in [6])
Consider the automaton A of Figure 8.5. Determine the minimum state automaton.
Figure 8.5: Automaton A of Problem 8.5.

E XERCISE 8.6 (Example 2.5 in [6])


Let the automaton
A = ({q0 , q1 }, {0, 1}, δ, q0 , {q1 })
be a nondeterministic automaton where

δ(q0 , 0) = {q0 , q1 } δ(q0 , 1) = {q1 } δ(q1 , 0) = δ(q1 , 1) = {q0 , q1 }.

Construct a deterministic automaton A′ which accepts the same Lm .

9 Models of computation II: Transition systems


E XERCISE 9.1 [3]
Consider the circuit diagram of the sequential circuit with input variable x, output variable y, and register r, cf.
Figure 9.1. The control function for output variable y is given by

λy = ¬(x ⊕ r)

where ⊕ stands for exclusive or (XOR, the parity function). The register evaluation changes according to the circuit
function
δr = x ∨ r
where ∨ stands for disjunction (OR). Initially, the register evaluation is [r = 0]. Model the circuit behavior by
a finite transition system.

Figure 9.1: Diagram for a simple hardware circuit.

E XERCISE 9.2 [5]


Consider an automaton as shown in Figure. 9.2, where E = {a1 , a2 , b} is the set of events; X = {0, 1, 2, 3, 4, 5}
is the set of states; note that state 0 is the initial state and state 4 is the marked state. Try to understand the
automaton and explain the set of languages it marks.

Figure 9.2: This automaton can be used as a model of a “status check” system when a machine is turned on, if
its states are given the following interpretation: 0 = “Initializing”, 1 = “I am ON”, 2 = “My status is OK”, 3 =
“I have a problem”, 4 = “Status report done”, and 5 = “Error”.

E XERCISE 9.3 [5]


Consider a simple T-intersection of the traffic system, as shown in Fig. 9.3. There are four types of vehicles:

• (1, 2)-type vehicles coming from point 1 and turning right towards 2;

• (1, 3)-type vehicles coming from point 1 and turning right towards 3;

• (2, 3)-type vehicles going straight from 2 to 3;

• (3, 2)-type vehicles going straight from 3 to 2.

The traffic light is set so that it either turns red for (1, 2) and (1, 3) vehicles (green for (2, 3) and (3, 2) vehicles),
or it turns green for (1, 2) and (1, 3) vehicles (red for (2, 3) and (3, 2) vehicles). Model the traffic system as a
transition system T = (S, Act, →, I) to describe the changes to the number of the vehicles of the individual
types waiting at the intersection over time. Model arrivals of new cars and departures of the waiting cars and
changes of the traffic light colors. Assume that initially, there is no vehicle at the intersection.

Figure 9.3: A simple T-intersection with four types of vehicles.

E XERCISE 9.4 [5]


Queuing systems arise in many application domains such as computer networks, manufacturing, logistics and transportation. A queuing system is composed of three basic elements: 1) the entities, generally referred to as customers, that do the waiting in their request for resources, 2) the resources for which the waiting is done, which are referred to as servers, and 3) the space where the waiting is done, which is defined as the queue. Typical
examples of servers are communications channels, which have a finite capacity to transmit information. In
such a case, the customers are the unit of information and the queue is the amount of unit of information that is
waiting to be transmitted over the channel.
A basic queue system is reported in Figure 9.4. The circle represents a server; the open box is a queue preceding the server. The slots in the queue are waiting customers. The arrival rate of customers in the queue is denoted by a, whereas the departure rate of customers is denoted by b.
Model the queue system of Figure 9.4 by a transition system. How many states does the system have?
Figure 9.4: A basic queue system.

E XERCISE 9.5 [15]


Consider the transition system T = {S, Σ, →, SS }, where the cardinality of S is finite. The reachability
algorithm is

Initialization:  Reach−1 = ∅;
                 Reach0 = SS ;
                 i = 0;
Loop:  while Reachi ≠ Reachi−1 do
                 Reachi+1 = Reachi ∪ {s′ ∈ S : ∃ s ∈ Reachi , ∃ σ ∈ Σ such that s →σ s′ };
                 i = i + 1;

Prove formally that

• the reachability algorithm finishes in a finite number of steps;

• upon exiting the algorithm, Reachi = ReachT (SS ).
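
A direct transcription of the algorithm above for a finite transition system, with the transition relation stored as a set of triples (s, σ, s′). The example data at the bottom are illustrative and not the system of Figure 9.6:

def reach(transitions, start_states):
    # transitions: set of (s, sigma, s_next); start_states: the set S_S
    reach_prev, reach_cur = set(), set(start_states)
    while reach_cur != reach_prev:
        reach_prev = reach_cur
        step = {s2 for (s1, _, s2) in transitions if s1 in reach_cur}
        reach_cur = reach_cur | step
    return reach_cur

if __name__ == "__main__":
    T = {(1, "a", 2), (2, "b", 3), (3, "a", 3), (2, "a", 4)}
    print(reach(T, {1}))    # -> {1, 2, 3, 4}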

E XERCISE 9.6 [15]


Given the transition system T = {S, Σ, →, SS } reported in Figure 9.6, describe the reach set when SS = {3} and when SS = {2} by using the reachability algorithm.


Figure 9.6: A Transition System.

Part III

Hybrid control

10 Modeling of hybrid systems
E XERCISE 10.1
A water level in a tank is controlled through a relay controller, which senses continuously the water level and
turns a pump on or off. When the pump is off the water level decreases by 2 cm/s and when it is on, the water
level increases by 1 cm/s. It takes 2 s for the control signal to reach the pump. It is required to keep the water
level between 5 and 12 cm.

(a) Assuming that the controller starts the pump when the level reaches some threshold and turns it off when
it reaches some other threshold, model the closed-loop system as a hybrid automaton.

(b) What thresholds should be used to fulfill the specifications?
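
Before formalizing the automaton it can help to simulate the closed loop, including the 2 s actuation delay, with a simple time-stepped model. A sketch (the thresholds x_on and x_off are placeholders for the values to be chosen in (b)):

def simulate_tank(x_on, x_off, x0=10.0, dt=0.01, t_end=60.0, delay=2.0):
    # Relay control of the water level: pump off -> level falls 2 cm/s,
    # pump on -> level rises 1 cm/s; the pump sees the command issued
    # `delay` seconds ago.
    n_delay = int(round(delay / dt))
    command, queue = False, [False] * n_delay
    x, levels = x0, []
    for _ in range(int(t_end / dt)):
        if x <= x_on:                 # hysteresis relay on the measured level
            command = True
        elif x >= x_off:
            command = False
        queue.append(command)
        pump_on = queue.pop(0)        # command issued `delay` seconds ago
        x += (1.0 if pump_on else -2.0) * dt
        levels.append(x)
    return min(levels), max(levels)

if __name__ == "__main__":
    print(simulate_tank(x_on=9.0, x_off=10.0))  # inspect the excursion for one candidate pair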

E XERCISE 10.2
Consider the quantized control system in Figure 10.2. Such a system can be modeled as a hybrid automaton
with continuous dynamics corresponding to P (s)C(s) and discrete states corresponding to the levels of the
quantizer. Suppose that each level of the quantizer can be encoded by a binary word of k bits. Then, how
many discrete states N should the hybrid automaton have? Describe when discrete transitions in the hybrid
automaton should take place.


Figure 10.2: Quantized system in Problem 10.2.

E XERCISE 10.3
A system to cool a nuclear reactor is composed of two independently moving rods. Initially the coolant temper-
ature x is 510 degrees and both rods are outside the reactor core. The temperature inside the reactor increases
according to the following (linearized) system

ẋ = 0.1x − 50.

When the temperature reaches 550 degrees the reactor must be cooled down using the rods. Three things can
happen

• the first rod is put into the reactor core

• the second rod is put into the reactor core

• none of the rods can be put into the reactor

For mechanical reasons a rod can be placed in the core if it has not been there for at least 20 seconds. If no
rod is available the reactor should be shut down. The two rods can refrigerate the coolant according to the two
following ODEs

rod 1: ẋ = 0.1x − 56
rod 2: ẋ = 0.1x − 60

When the temperature is decreased to 510 degrees the rods are removed from the reactor core. Model the
system, including controller, as a hybrid system.

Figure 10.3: Nuclear reactor core with the two control rods

E XERCISE 10.4
Consider the classical sampled control system, shown in Figure 10.4. Model the system with a hybrid automa-
ton. Suppose that the sampling period is k and that the hold circuit is a zero-order hold.

E XERCISE 10.5
Consider a hybrid system with two discrete states q1 and q2 . In state q1 the dynamics are described by the linear
system ẋ = A1 x with A1 = [−1 0; p −1], and in state q2 by ẋ = A2 x with A2 = [−1 p; 0 −1].
Assume the system is

Figure 10.4: Sampled data control system of Problem 10.4.

in state q1 if 2k ≤ t < 2k + 1 and

in state q2 if 2k + 1 ≤ t < 2k + 2,

where k = 0, 1, 2, . . . .

(a) Formally define a hybrid system with initial state q1 , which operates in the way described above.

(b) Starting from x(0) = x0 , specify the evolution of the state x(t) in the interval t ∈ [0, 3) as a function of
x0 .

E XERCISE 10.6
Consider the hybrid system of Figure 10.6:

(a) Describe it as a hybrid automaton, H = (Q, X, Init, f, D, E, G, R)

(b) Find all the domains D(q3 ) such that the hybrid system is live.

(c) Plot the solution of the hybrid system.

[Figure: three modes q1 (ẋ = 2, domain x < 5, initial condition x(0) := 0), q2 (ẋ = −1, domain x > 3) and q3 (ẋ = x + 2, domain D(q3 ) = ?); transition q1 → q2 with guard x ≥ 5, and q2 → q3 with guard x ≤ 3 and reset x := −2.]

Figure 10.6: Hybrid system for Problem 10.6.

11 Stability of hybrid systems
E XERCISE 11.1
Consider three balls with unit mass, velocities v1 , v2 , v3 , and suppose that they are touching at time t = τ0 , see
Figure 11.1. The initial velocity of Ball 1 is v1 (τ0 ) = 1 and Balls 2 and 3 are at rest, i.e., v2 (τ0 ) = v3 (τ0 ) = 0.

v1

Ball 1 Ball 2 Ball 3

Figure 11.1: Three balls system. The Ball 1 has velocity v1 at time t = 0.

Assume that the impact is a sequence of simple inelastic impacts occurring at τ0′ = τ1′ = τ2′ = . . . (using
notation from hybrid time trajectory). The first inelastic collision occurs at τ0′ between balls 1 and 2, resulting
in v1 (τ1 ) = v2 (τ1 ) = 1/2 and v3 (τ1 ) = 0. Since v2 (τ1′ ) > v3 (τ1′ ), Ball 2 hits Ball 3 instantaneously giving
v1 (τ2 ) = 1/2, and v2 (τ2 ) = v3 (τ2 ) = 1/4. Now v1 (τ2′ ) > v2 (τ2′ ), so Ball 1 hits Ball 2 again resulting in a new
inelastic collision. This leads to an infinite sequence of collisions.
(a) Model the inelastic collisions of the three-ball system described above as a hybrid automaton H =
(Q, X, Init, f, D, E, G, R) with one discrete variable Q = {q} and three continuous variables X =
{v1 , v2 , v3 }.
(b) Is the execution described above a Zeno execution? Motivate.
(c) What is the accumulation point of the infinite series of hits described above? Make a physical interpreta-
tion.

E XERCISE 11.2
Consider the following system

ẋ = [−1 0 2; 0 −1 3; −2 −3 −2] x.
Show, using a Lyapunov function, that the system is asymptotically stable.
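
Instead of guessing a Lyapunov function one can solve the Lyapunov equation A^T P + P A = −Q numerically, e.g. for Q = I, and check that P is positive definite. A sketch using scipy, with the system matrix from the exercise:

import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[-1.0, 0.0, 2.0],
              [0.0, -1.0, 3.0],
              [-2.0, -3.0, -2.0]])
Q = np.eye(3)

# Solve A^T P + P A = -Q  (scipy solves  M X + X M^H = Y  for X)
P = solve_continuous_lyapunov(A.T, -Q)
print(np.linalg.eigvals(P))   # all eigenvalues positive => P > 0 => asymptotic stability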

E XERCISE 11.3
Consider the following theorem:
Theorem 2. A linear system
ẋ = Ax
is asymptotically stable if and only if for any positive definite symmetric matrix Q the equation
A^T P + P A = −Q
in the unknown P ∈ Rn×n has a solution which is positive definite and symmetric.
Show the necessary part of the previous theorem (i.e., the if part).

E XERCISE 11.4
Consider the following system

ẋ1 = −x1 + g(x2 )


ẋ2 = −x2 + h(x1 )

where the functions g and h are such that

|g(z)| ≤ |z|/2 |h(z)| ≤ |z|/2

Show that the system is asymptotically stable.

E XERCISE 11.5
Consider the following discontinuous differential equations

ẋ1 = −sgn(x1 ) + 2sgn(x2 )


ẋ2 = −2sgn(x1 ) − sgn(x2 ).

where sgn(z) = +1 if z ≥ 0 and −1 if z < 0.
Assume x(0) ≠ 0,

(a) define a hybrid automaton that models the discontinuous system

(b) does the hybrid automaton exhibit Zeno executions for every initial state?

E XERCISE 11.6
Consider the following switching system

ẋ = aq x, aq < 0 ∀q

where q ∈ {1, 2} and

Ω1 = {x ∈ R|x ∈ [2k, 2k + 1), k = 0, 1, 2, . . . }


Ω2 = {x ∈ R|x ∈ [2k + 1, 2k + 2), k = 0, 1, 2, . . . }

Show that the system is asymptotically stable.

E XERCISE 11.7
Consider the following switching system
ẋ = Aq x
where q ∈ {1, 2} and

A1 = [−1 0; 0 −2],   A2 = [−3 0; 0 −5].

Let Ωq be such that

Ω1 = {x ∈ R2 |x1 ≥ 0}
Ω2 = {x ∈ R2 |x1 < 0}

Show that the system is asymptotically stable.

E XERCISE 11.8
Consider the following switching system
ẋ = Aq x
where q ∈ {1, 2} and

A1 = [−a1 b1; 0 −c1],   A2 = [−a2 b2; 0 −c2].

Assume that ai , bi and ci , i = 1, 2 are real numbers and that ai , ci > 0. Show that the switched system is
asymptotically stable.

E XERCISE 11.9
Consider a system that follows the dynamics
ẋ = A1 x
for a time ǫ/2 and then switches to the system

ẋ = A2 x

for a time ǫ/2. It then switches back to the first system, and so on.
(a) Model the system as a switched system

(b) Model the system as a hybrid automaton

(c) Let t0 be a time instance at which the system begins a period in mode 1 (the first system) with initial
condition x0 . Determine the state at t0 + ǫ/2 and t0 + ǫ.

(d) Let ǫ tend to zero (very fast switching). Determine the solution the hybrid system will tend to.

E XERCISE 11.10
Consider the following hybrid system
ẋ = Aq x
where

A1 = [−1 0; 0 −2],   A2 = [−3 0; 0 −5].
Let Ωq be such that

Ω1 = {x ∈ R2 |x1 ≥ 0}
Ω2 = {x ∈ R2 |x1 < 0}

show that the switched system is asymptotically stable using a common Lyapunov function.

E XERCISE 11.11 (Example 2.1.5 page 18-19 in [13])


Consider the following switched system with q ∈ {1, 2}

ẋ = Aq x

where

A1 = [−1 −1; 1 −1],   A2 = [−1 −10; 0.1 −1].
(a) Show that it is impossible to find a common quadratic Lyapunov function.
(b) Show that the origin is asymptotically stable for any switching sequence.

E XERCISE 11.12
Consider the following two-dimensional state-dependent switched system
ẋ = A1 x if x1 ≤ 0, and ẋ = A2 x if x1 > 0,

where A1 = [−5 −4; −1 −2] and A2 = [−2 −4; 20 −2].
(a) Prove that there is no common quadratic Lyapunov function suitable to prove stability of the system.
(b) Prove that the switched system is asymptotically stable using the multiple Lyapunov approach.

12 Simulation and bisimulation
E XERCISE 12.1
Consider the following two bisimilar transition systems T = (S, Σ, →, S0 , SF ) and T ′ = (S ′ , Σ, →′ , S0′ , SF′ )
illustrated in the figure below. Find the bisimulation relation ∼ ⊆ S × S ′ .

Figure 12.1: The transition systems T (left) and T ′ (right).

E XERCISE 12.2
Find the minimal quotient transition system that is bisimilar to the transition system T .


Figure 12.2: The transition systems T .

E XERCISE 12.3
Find two transition systems T = (S, Σ, →, S0 , SF ) and T ′ = (S ′ , Σ, →′ , S0′ , SF′ ), with the property that there
exist two bisimulations ∼1 ⊆ S × S ′ and ∼2 ⊆ S × S ′ , such that ∼1 ≠ ∼2 .
Hint: First find a transition system T that is not equal to its coarsest bisimulation quotient. Then look for an
appropriate T ′ that satisfies the desired condition.

E XERCISE 12.4
Consider the infinite transition system T = (S, Σ = {a, b, c, d}, →, S0 , SF ) and finite transition systems
T1 = (Q, Σ, →1 , Q0 , QF ), T2 = (R, Σ, →2 , R0 , RF ), T3 = (U, Σ, →3 , U0 , UF ), T4 = (V, Σ, →4 , V0 , VF ).

a) Which of T1 , T2 , T3 , T4 are bisimilar with T ? Which of them are not and why?

b) Find all simulation relations between the given transition systems.


Figure 12.4: The transition systems T (top) and T1 − T4 (bottom, from left to right).

E XERCISE 12.5
Consider the following hybrid automaton H, with the set of initial states Init = {(q1 , x = 0, y = 0)} and view
it as a transition system with the set of generators Σ = {turn_right, turn_left, t}, where t represents the time,
and turn_right, turn_left are the events. Is it bisimilar to any of the transition systems T1 , T2 , T3 , or T4 ?

13 Reachability, timed automata and rectangular automata


E XERCISE 13.1
Consider the following two tank system in Fig. 13.1 modeled as a hybrid automaton A in Fig. 13.1.

a) Compute Post(qoff,off , x = 175, y ∈ [100, 200]).

[Figure: hybrid automaton H with modes q1 (ẋ = 1, ẏ = −1) and q2 (ẋ = −5, ẏ = 1); the event turn_right with guard x ≥ 10 and reset y := 0 switches from q1 to q2, and the event turn_left with guard x ≤ 0 switches back to q1.]

Figure 12.5: Hybrid automaton H.


Figure 12.5: Transition systems T1 (top left), T2 (top right), T3 (bottom left) and T4 (bottom right).

b) Compute Post(qoff,off , x = 200, y = 200).

c) Compute Pre(qon,on , x = 150, y = 200).

Let the initial state be (qoff,off , x = 190, y = 200).

d) Verify that the level of water in the second water tank always remains within safe bounds, i.e., that y(t) ∈ [150, 350] for each t ≥ 0.
Hint: Compute the reach set from the initial state through repeated computation of Post.

e) Is there a state (qoff,off , x, y) with x = y reachable from the initial state?

Figure 13.1: Two tank system, 1: x, 2: y, 3: α1 ; α2 (on or off), 4: α3 (on or off), 5: α4 .

[Figure: hybrid automaton A with four modes:
qoff,off : ẋ = α1 = 5, ẏ = −α4 = −18, invariant x ∈ [175, 200], transition to qoff,on when x ≥ 200;
qoff,on : ẋ = α1 − α3 = −25, ẏ = α3 − α4 = 12, invariant x ∈ [150, 200], transition to qon,on when x ≤ 150;
qon,on : ẋ = α1 + α2 − α3 = −5, ẏ = α3 − α4 = 12, invariant x ∈ [100, 150], transition to qon,off when x ≤ 100;
qon,off : ẋ = α1 + α2 = 25, ẏ = −α4 = −18, invariant x ∈ [100, 175], transition to qoff,off when x ≥ 175.]

Figure 13.1: Two tanks hybrid automaton A.

E XERCISE 13.2
Consider the following timed automaton A of a time code lock. Find the corresponding region automaton
Reg(A)?

E XERCISE 13.3
Is the following rectangular automaton A initialized? If yes, find a bisimilar timed automaton A′ .

[Figure: timed automaton with locations qε, qk, qke, qkey and q#; the events k, e, y advance qε → qk → qke → qkey, each transition guarded by x ≤ 1 and resetting x := 0; the event # with guard x ≤ 1 leads to q#, which has invariant x ≤ 3 and a reset transition (guard x = 3, reset x := 0) back to the initial location.]
Figure 13.2: Timed automaton A.

[Figure: rectangular automaton with three locations whose rates are ẋ ∈ [2, 3], ẏ = 1; ẋ ∈ [3, 5], ẏ = 1; and ẋ ∈ [2, 3], ẏ = 2; one location has invariant 0 ≤ x ≤ 120, and the transitions carry guards y ≥ 100, x ≥ 60 ∨ y ≥ 50 and x ≥ 150 with resets x := 0 and y := 0.]

Figure 13.3: Rectangular automaton A.

E XERCISE 13.4
Consider the following linear system

ẋ = [−1 0; 0 −5] x.
Assume that the initial condition is defined in the following set

x0 ∈ {(x1 , x2 ) ∈ R2 |x1 = x2 , −10 ≤ x1 ≤ 10}

We want to verify that no trajectories enter a Bad set defined as

Bad = {(x1 , x2 ) ∈ R² | −8 ≤ x1 ≤ 0 ∧ 2 ≤ x2 ≤ 6}.

E XERCISE 13.5

Consider the following linear system

ẋ = [−5 −5; 0 −1] x.
Assume that the initial condition lies in the following set

x0 ∈ {(x1 , x2 ) ∈ R2 | − 2 ≤ x1 ≤ 0 ∧ 2 ≤ x2 ≤ 3}.

Describe the system as a transition system and verify that no trajectories enter a Bad set defined as the triangle
with vertices v1 = (−3, 2), v2 = (−3, −3) and v3 = (−1, 0).

E XERCISE 13.6
Consider the following controlled switched system
 
  0 1  
ẋ1 x
=  1  1 + B1 u if kxk < 1
ẋ2 5 x2
3 
  0 1    
ẋ1 x1 0
= 1 1 + u if 1 ≤ kxk ≤ 3
ẋ2 − x 2 1
3 3 
 
  0 1    
ẋ1 x −1 0 
= −  1  1 + u otherwise
ẋ2 5 x 2 1
3
Assume that the initial conditions are x0 ∈ {x ∈ R2 |kxk > 3},

(a) Determine a control strategy such that Reachq ∩ Ω1 ≠ ∅, i.e. Ω1 can be reached from any initial condition
when B1 = 0.

Suppose in the following B1 = (0, 1)T ,

(b) Is it possible to determine a linear control input such that (0, 0) is globally asymptotically stable?

(c) Construct a piecewise linear system such that (0, 0) is globally asymptotically stable.

(d) Suppose now that we do not want the solution of the system to enter the Bad set Ω1 . Determine a controller such that Reachq ∩ Ω1 = ∅.

E XERCISE 13.7
A system to cool a nuclear reactor is composed of two independently moving rods. Initially the coolant temperature x is 510 degrees and both rods are outside the reactor core. The temperature inside the reactor increases according to the following (linearized) system

ẋ = 0.1x − 50.

When the temperature reaches 550 degrees the reactor must be cooled down using the rods. Three things can
happen
• the first rod is put into the reactor core

• the second rod is put into the reactor core

• none of the rods can be put into the reactor


For mechanical reasons a rod can be placed in the core if it has not been there for at least 20 seconds. The two rods can refrigerate the coolant according to the two following ODEs

rod 1: ẋ = 0.1x − 56
rod 2: ẋ = 0.1x − 60

When the temperature is decreased to 510 degrees the rods are removed from the reactor core.
(a) Model the system as a hybrid system.

(b) If the temperature goes above 550 degrees, but there is no rod available to put down into the reactor, there will be a meltdown. Determine if this Bad state can be reached.

Figure 13.7: Nuclear reactor core with the two control rods

Solutions

Part I

Time-triggered control

Solutions to review exercises
S OLUTION 1.1
Before solving the exercise we review some concepts on sampling and aliasing

Shannon sampling theorem


Let x(t) be a band-limited signal, that is, X(jω) = 0 for |ω| > ωm . Then x(t) is uniquely determined by its
samples x(kh), k = 0, ±1, ±2, . . . if
ωs > 2ωm
where ωs = 2π/h is the sampling frequency and h the sampling period. The frequency ωs /2 is called the Nyquist frequency.

Reconstruction
Let x(t) be the signal to be sampled. The sampled signal xs (t) is obtained by multiplying the input signal x(t) by a periodic impulse train p(t), see Figure 1.1. We have that

xs (t) = x(t)p(t),   p(t) = Σ_{k=−∞}^{∞} δ(t − kh).

Thus the sampled signal is

xs (t) = Σ_{k=−∞}^{∞} x(kh) δ(t − kh).
If we let the signal xs (t) pass through an ideal low-pass filter (see Figure 1.1.1) with impulse response

f (t) = sinc(ωs t/2)

and frequency response

F (jω) = h for −ωs /2 < ω < ωs /2, and 0 otherwise,
as shown in Figure 1.1.1. The output signal is
xr (t) = xs (t) ∗ f (t) = ∫_{−∞}^{∞} xs (t − τ ) f (τ ) dτ
       = ∫_{−∞}^{∞} ( Σ_{k=−∞}^{∞} x(kh) δ(t − τ − kh) ) f (τ ) dτ
       = Σ_{k=−∞}^{∞} x(kh) sinc( ωs (t − kh)/2 )
       = Σ_{k=−∞}^{∞} x(kh) sinc( π (t − kh)/h ).

Notice that perfect reconstruction requires an infinite number of samples.


Returning to the solution of the exercise we have that the Fourier transform of the sampled signal is given by

Xs (ω) = (1/h) Σ_{k=−∞}^{∞} X(ω + kωs ).

(a) The reconstructed signal is xr (t) = cos(ω0 t) since ωs = 6ω0 > 2ω0 .


Figure 1.1.1: Impulse and frequency response of an ideal low-pass filter.


Figure 1.1.2: Frequency response of the signal with ω0 = ωs /6

(b) The reconstructed signal is xr (t) = cos(ω0 t) since ωs = 6ω0 /2 > 2ω0 .

Figure 1.1.3: Frequency response of the signal with ω0 = 2ωs /6


(c) The reconstructed signal is xr (t) = cos((−ω0 + ωs )t) = cos((ω0 /2)t) since ωs = 6ω0 /4 < 2ω0 .

(d) The reconstructed signal is xr (t) = cos((−ω0 + ωs )t) = cos((ω0 /5)t) since ωs = 6ω0 /5 < 2ω0 .

Figure 1.1.4: Frequency response of the signal with ω0 = 4ωs /6

Figure 1.1.5: Frequency response of the signal with ω0 = 5ωs /6

S OLUTION 1.2
We have that
e^{Ah} = α0 Ah + α1 I.
The eigenvalues of Ah are ±ih, thus we need to solve

e^{ih} = α0 ih + α1 ,   e^{−ih} = −α0 ih + α1 .
This gives

α0 = (e^{ih} − e^{−ih})/(2ih) = (sin h)/h
α1 = (e^{ih} + e^{−ih})/2 = cos h.
Thus

e^{Ah} = (sin h/h) · h [0 1; −1 0] + cos h · [1 0; 0 1] = [cos h sin h; −sin h cos h].
We recall here some useful ways of computing the matrix exponential of a matrix A ∈ R^{n×n}. Depending on the form of the matrix A we can compute the exponential in different ways:
• If A is diagonal, A = diag(a11 , a22 , . . . , ann ), then e^A = diag(e^{a11}, e^{a22}, . . . , e^{ann}).

• If A is nilpotent of order m, then A^m = 0 and A^{m+i} = 0 for i = 1, 2, . . . , and it is possible to use the finite series expansion

e^A = I + A + A²/2! + · · · + A^{m−1}/(m − 1)!.

• Using the inverse Laplace transform we have e^{At} = L^{−1}{(sI − A)^{−1}}.

• In general it is possible to compute the exponential of a matrix (or any continuous matrix function f (A)) using the Cayley-Hamilton theorem. For every function f there is a polynomial p of degree less than n such that

f (A) = p(A) = α0 A^{n−1} + α1 A^{n−2} + · · · + αn−1 I.

If the matrix A has distinct eigenvalues, the n coefficients α0 , . . . , αn−1 are computed by solving the system of n equations

f (λi ) = p(λi ),   i = 1, . . . , n.

If there is a multiple eigenvalue with multiplicity m, then the additional conditions

f^{(1)}(λi ) = p^{(1)}(λi ), . . . , f^{(m−1)}(λi ) = p^{(m−1)}(λi )

hold, where f^{(i)} is the ith derivative with respect to λ.
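
The closed-form expression above is easy to confirm numerically, for instance with an arbitrary value of h:

import numpy as np
from scipy.linalg import expm

h = 0.7
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
expected = np.array([[np.cos(h), np.sin(h)], [-np.sin(h), np.cos(h)]])
print(np.allclose(expm(A * h), expected))    # True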

S OLUTION 1.3
We recall here what the z-transform of a signal is. Consider a discrete-time signal x(kh), k = 0, 1, . . . . The z-transform of x(kh) is defined as

Z{x(kh)} = X(z) = Σ_{k=0}^{∞} x(kh) z^{−k}

where z is a complex variable.

Using the definition,

X(z) = Σ_{k=0}^{∞} e^{−kh/T} z^{−k} = Σ_{k=0}^{∞} (e^{−h/T} z^{−1})^k .

If |e^{−h/T} z^{−1}| < 1 then

X(z) = 1/(1 − e^{−h/T} z^{−1}) = z/(z − e^{−h/T}).

S OLUTION 1.4
Using the definition,

X(z) = Σ_{k=0}^{∞} sin(ωkh) z^{−k} .

Since

sin(ωkh) = (e^{jωkh} − e^{−jωkh})/(2j),

then

X(z) = (1/(2j)) Σ_{k=0}^{∞} (e^{jωh} z^{−1})^k − (1/(2j)) Σ_{k=0}^{∞} (e^{−jωh} z^{−1})^k .

If |e^{±jωh} z^{−1}| < 1 then

X(z) = (1/(2j)) ( z/(z − e^{jωh}) − z/(z − e^{−jωh}) ) = · · · = (z sin ωh)/(z² − 2z cos ωh + 1).

S OLUTION 1.5
For a discrete-time signal x(k), we have the following:

Z(xk ) = X(z) = Σ_{k=0}^{∞} x(k) z^{−k} = x(0) + x(1)z^{−1} + . . .

Z(xk+1 ) = x(1) + x(2)z^{−1} + . . .
         = z x(0) − z x(0) + z (x(1)z^{−1} + x(2)z^{−2} + . . . )
         = z (x(0) + x(1)z^{−1} + x(2)z^{−2} + . . . ) − z x(0)
         = z X(z) − z x(0),

and similarly,

Z(xk+2 ) = z² X(z) − z² x(0) − z x(1).

The discrete step is the following function: u(k) = 0 if k < 0, and u(k) = 1 if k ≥ 0. Thus

U (z) = z/(z − 1).
Using the previous z-transform we have

z 2 Y (z) − z 2 y(0) − z y(1) − 1.5z Y (z) + 1.5z y(0) + 0.5 Y (z) = z U (z) − z u(0).

Collecting Y (z) and substituting the initial conditions we get

Y (z) = (0.5z² − 0.5z)/(z² − 1.5z + 0.5) + z U (z)/(z² − 1.5z + 0.5).

Since U (z) = z/(z − 1) then

Y (z) = 0.5z/(z − 0.5) + z²/((z − 1)²(z − 0.5)).
Inverse transform gives
 
0.5(k + 1) − 1 0.5k+1
y(k) = 0.5k+1 + + u(k − 1)
0.52 0.52
 
k+1 k−1 k−1
= 0.5 + + 0.5 u(k − 1)
0.5

Solutions to models of sampled systems


S OLUTION 2.1
The sampled system is given by

x(kh + h) = Φx(kh) + Γu(kh)


y(kh) = Cx(kh)

where

    Φ = e^{−ah}
    Γ = ∫_0^h e^{−as} ds · b = (b/a)(1 − e^{−ah}).

Thus the sampled system is

    x(kh + h) = e^{−ah} x(kh) + (b/a)(1 − e^{−ah}) u(kh)
    y(kh) = c x(kh).

The poles of the sampled system are the eigenvalues of Φ. Thus there is a real pole at e−ah . If h is small
e−ah ≈ 1. If a > 0 then the pole moves towards the origin as h increases, if a < 0 it moves along the positive
real axis, as shown in Figure 2.1.1.

Figure 2.1.1: Pole of the sampled system in Problem 2.1 as a function of the sampling period h: for a > 0 it moves from 1 towards the origin, for a < 0 it moves along the positive real axis (figure omitted).
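
As a sanity check of the sampling formulas in Solution 2.1, the discrete-time matrices can also be obtained numerically. The Python sketch below (with illustrative values a = 1, b = 1, h = 0.5, chosen only for the example) compares the closed-form Φ and Γ with scipy's zero-order-hold discretization:

import numpy as np
from scipy.signal import cont2discrete

a, b, c, h = 1.0, 1.0, 1.0, 0.5          # illustrative parameter values

# Closed-form ZOH sampling of dx/dt = -a x + b u
Phi = np.exp(-a * h)
Gamma = (b / a) * (1.0 - np.exp(-a * h))

# Numerical ZOH discretization of the same system
(Phi_d, Gamma_d, _, _, _) = cont2discrete(([[-a]], [[b]], [[c]], [[0.0]]), h, method='zoh')

print(np.allclose(Phi, Phi_d), np.allclose(Gamma, Gamma_d))   # expected: True True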

S OLUTION 2.2
(a) The transfer function can be written as

    G(s) = 1 / ((s + 1)(s + 2)) = α/(s + 1) + β/(s + 2) = 1/(s + 1) − 1/(s + 2).

A state-space representation (in diagonal form) is then

    ẋ = [−1 0; 0 −2] x + [1; 1] u =: A x + B u
    y = [1 −1] x =: C x.

The state-space representation of the sampled system is

x(k + 1) = Φx(k) + Γu(k)


y(k) = Cx(k)

where

    Φ = e^{Ah} = diag(e^{−1}, e^{−2})
    Γ = ∫_0^1 e^{As} ds B = [1 − e^{−1}; (1 − e^{−2})/2]

since A is diagonal.

(b) The pulse-transfer function is given by

    H(z) = C(zI − Φ)^{−1} Γ
         = [1 −1] diag( 1/(z − e^{−1}), 1/(z − e^{−2}) ) [1 − e^{−1}; (1 − e^{−2})/2]
         = (1 − e^{−1})/(z − e^{−1}) − (1 − e^{−2})/(2(z − e^{−2}))
         = ( (1/2 − e^{−1} + e^{−2}/2) z + (e^{−1}/2 − e^{−2} + e^{−3}/2) ) / ( (z − e^{−1})(z − e^{−2}) )

S OLUTION 2.3
(a) The sampled system is

x(kh + h) = Φx(kh) + Γu(kh)


y(kh) = Cx(kh)

where
Z h
Ah
Φ=e Γ= eAh Bds.
0

To compute eAh we use that


  
Ah −1 −1
 −1 1 s 1
e =L (sI − A) =L .
2
s +1 −1 s

Since
 
s
L−1 = cos h
s2 + 1
 
−1 1
L = sin h
s2 + 1

then  
cos h sin h
eAh = .
− sin h cos h

Equivalently we can compute eAh using Cayley-Hamilton’s theorem. The matrix eAh can be written as

eAh = a0 Ah + a1 I

where the constants a0 and a1 are computed solving the characteristic equation

eλ k = a 0 λi + a 1 k = 1, . . . , n

where n is the dimension of the matrix A and λk are distinct eigenvalues of the matrix Ah. In this
example the eigenvalues of Ah are ±hi. Thus we need to solve the following system of equations

eih = a0 ih + a1
e−ih = −a0 ih + a1

which gives

1  ih  sin h
a0 = e − e−ih =
2hi h
1  ih 
a1 = e + e−ih = cos h.
2
Finally we have
     
Ah sin h 0 1 1 0 cos h sin h
e = h + cos h = .
h −1 0 0 1 − sin h cos h

(b) Using Laplace transform we obtain

s2 Y (s) + 3sY (s) + Y (s) = sU (s) + 3U (s).

Thus the system has the transfer function


s+3 2 1
G(s) = = − .
s2 + 3s + 2 s+1 s+2
One state-space realization of the system with transfer function G(s) is
   
−1 0 1
ẋ = x+ u
0 −2 1
| {z } |{z}
A B

y = 2 −1 x.
| {z }
C

Thus the sampled system is

x(kh + h) = Φx(kh) + Γu(kh)


y(kh) = Cx(kh)

with
 
Ah e−h 0
Φ=e =
0 e−2h
Z h Z h  −s   
As e 1 − e−h
Γ= e Bds = ds = 1−e−2h
0 0 e−2s 2

(c) One state-space realization of the system is


   
0
0 0 1
ẋ = 10 0 x + 0 u
0
1 0 0
|{z } | {z }
A B

y = 0 0 1 x.
| {z }
C

We need to compute Φ and Γ. In this case we can use the series expansion of eAh

A2 h2
eAh = I + Ah + + ...
2
since A3 = 0, and thus all the successive powers of A. Thus in this case
 
1 0 0
Φ = eAh =  h 1 0
2
h /2 h 1
Z h Z h
As
T T
Γ= e Bds = 1 s s2 /2 ds = h h2 /2 h3 /6
0 0

S OLUTION 2.4
We will use in this exercise the following relation
ln Φ
Φ = eAh ⇒ A =
h
(a)
y(kh) − 0.5y(kh − h) = 6u(kh − h) ⇒ y(kh) − 0.5q −1 y(kh) = 6q −1 u(kh)
which can be transformed in state-space as

x(kh + h) = Φx(kh) + Γu(kh) = 0.5x(kh) + 6u(kh)


y(kh) = x(kh).

The continuous time system is then

ẋ(t) = ax(t) + bu(t)


y(t) = x(t),

where, in this case since Φ and Γ are scalars, we have


ln Φ ln 2
a= =−
h h
Z h
12 ln 2
b = Γ/ eas ds =
0 h

(b)
   
−0.5 1 0.5
x(kh + h) = x(kh) + u(kh)
0 −0.3 0.7
| {z }
Φ

y(kh) = 1 1 x(kh).

We compute the eigenvalues of Φ


 
s + 0.5 −1
det(sI − Φ) = = 0 ⇔ (s + 0.5)(s + 0.3) = 0
0 s + 0.3
λ1 = −0.5, λ2 = −0.3.

Both eigenvalues of Φ are on the negative real axis, thus no corresponding continuous system exists.

(c) We can proceed as in (a). In this case Φ = −0.5 which means that the sampled system has a pole on the
negative real axis. Thus, as in (b), no corresponding continuous system exists.

S OLUTION 2.5
(Ex. 2.11 in [2])

(a) A state space representation of the transfer function is


   
0 0 1
ẋ = x+ u
0 −1 1

y = 1 −1 x.

In this case
 
Ah 1 0
Φ=e =
0 e−h
Z h  
As h
Γ= e Bds =
0 1 − e−h

(b) The pulse-transfer function is given by


 −1  
−1
 z−1 0 h
H(z) = C(zI − Φ) Γ = 1 −1
0 z − e−h 1 − e−h
(h + e−h − 1)z + (1 − e−h − he−h )
=
(z − 1)(z − e−h )

(c) The pulse response is 


0, k = 0;
h(k) =
CΦk−1 Γ, k ≥ 1.
Since  k
Φk = eAh = eAkh

then we have
  
k−1
 1 0 h
h(k) = CΦ Γ = 1 −1
0 e−(k−1)h 1 − e−h
= h − e−(kh−h) + e−hk

(d) A difference equation relating input and output is obtained from H(q).

(h + e−h − 1)q + (1 − e−h − he−h )


y(kh) = H(q)u(kh) = u(kh)
(q − 1)(q − e−h )

which gives

y(kh + 2h) − (1 + e−h )y(kh + h) + e−h y(kh) = (h + e−h − 1)u(kh + h) + (1 − e−h − he−h )u(kh)

(e) The poles are in z = 1 and z = e−h . The second pole moves from 1 to 0 as h goes from 0 to ∞. There
is a zero in
1 − e−h − he−h
z=− .
h + e−h − 1
The zero moves from -1 to 0 as h increases, see Figure 2.5.1

S OLUTION 2.6
(a) We notice that τ < h. The continuous-time system in state-space is

0 ·x + |{z}
ẋ = |{z} 1 ·u(t − τ )
A B
1 ·x
y = |{z}
C

Figure 2.5.1: The zero of Problem 2.5 as a function of h; it moves from −1 towards 0 as h increases (plot omitted).

Sampling the continuous-time system with sampling period h = 1 we get

x(k + 1) = Φx(k) + Γ0 u(k) + Γ1 u(k − 1)


y(k) = x(k),

where

Φ = eAh = e0 = 1
Z h−τ
Γ0 = eAs dsB = 0.5
0
Z τ
A(h−τ )
Γ1 = e eAs dsB = 0.5.
0

The system in state space is


      
x(k + 1) 1 0.5 x(k) 0.5
= + u(k)
u(k) 0 0 u(k − 1) 1
 
 x(k)
y(k) = 1 0 .
u(k − 1)

The system is of second order.

(b) The pulse-transfer function is

0.5(z + 1)
H(z) = C(zI − Φ)−1 Γ = · · · = .
z(z − 1)

To determine the pulse-response we inverse-transform H(z).


 
0.5(z + 1) z z
H(z) = = 0.5 z −1 + z −2 .
z(z − 1) z−1 z−1

The inverse transform of z/(z − 1) is a step. Thus we have the sum of two steps delayed of 1 and 2
time-steps, thus the pulse-response is

 0, k = 0;
h(kh) = 0.5, k = 1;

1, k > 1.

(c) We can consider H(z) computed in (b). There are two poles, one in z = 0 and another in z = 1. There is
a zero in z = −1.

S OLUTION 2.7
In this case the time delay is longer than the sampling period, τ > h. The sampled system with h = 1 is

x(k + 1) = Φx(k) + Γ0 u k − (d − 1) + Γ1 u(k − d)
y(k) = x(k),

where we compute d as the integer such that

τ = (d − 1)h + τ ′ , 0 < τ′ ≤ h

and where Γ0 and Γ1 are computed as in the solution of exercise 2.6, where τ is replaced by τ ′ . In this example
d = 2 and τ ′ = 0.5, and where

Φ = e−1
Γ0 = 1 − e−0.5
Γ1 = e−0.5 − e−1 .

A state representation is then


      
x(k + 1) Φ Γ0 Γ1 x(k) 0
u(k − 1) =  0 0 1  u(k − 2) + 0 u(k)
u(k) 0 0 0 u(k − 1) 1
 
 x(k)
y(k) = 1 0 0 u(k − 2) .

u(k − 1)

We still have a finite dimensional system (third order).
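
The delayed-input sampling used in Solutions 2.6 and 2.7 is easy to verify numerically. The Python sketch below (a hedged illustration for the scalar system dx/dt = −x + u(t − τ) with h = 1 and τ = 1.5, as in Solution 2.7) computes Φ, Γ0 and Γ1 from the formulas above:

import numpy as np

a, h, tau = 1.0, 1.0, 1.5                 # system dx/dt = -a x + u(t - tau)

d = int(np.ceil(tau / h))                 # number of whole periods of delay (here d = 2)
tau_p = tau - (d - 1) * h                 # fractional delay tau', 0 < tau' <= h

Phi = np.exp(-a * h)
Gamma0 = (1.0 - np.exp(-a * (h - tau_p))) / a                       # integral of e^{-a s} over [0, h - tau']
Gamma1 = np.exp(-a * (h - tau_p)) * (1.0 - np.exp(-a * tau_p)) / a  # e^{A(h-tau')} times integral over [0, tau']

print(Phi, Gamma0, Gamma1)
# expected (a = 1): e^{-1}, 1 - e^{-0.5}, e^{-0.5} - e^{-1}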

S OLUTION 2.8
The system
y(k) − 0.5y(k − 1) = u(k − 9) + 0.2u(k − 10)
can be shifted in time so that

y(k + 10) − 0.5y(k + 9) = u(k + 1) + 0.2u(k)

which can be written as


(q 10 − 0.5q 9 )y(k) = (q + 0.2)u(k).
Thus

A(q) = q 10 − 0.5q 9
B(q) = q + 0.2

and the system order is degA(q) = 10. We can rewrite the given system as

(1 − 0.5q −1 )y(k) = (1 + 0.2q −1 )u(k − 9).

where

A∗ (q −1 ) = 1 − 0.5q −1
B ∗ (q −1 ) = 1 + 0.2q −1

with d = 9. Notice that


B(q) q + 0.2 −1 B ∗ (q −1 )
−9 1 + 0.2q
= 10 = q =
A(q) q − 0.5q 9 1 − 0.5q −1 A∗ (q −1 )
S OLUTION 2.9
We can rewrite the system
y(k + 2) − 1.5y(k + 1) + 0.5y(k) = u(k + 1)
as
q 2 y(k) − 1.5qy(k) + 0.5y(k) = qu(k).
We use the z-transform to find the output sequence when the input is a step, namely

0, k < 0;
u(k) =
1, k ≥ 0.

when y(0) = 0.5 and y(−1) = 1. We have


 
z 2 (Y (z) − y(0) − y(1)z −1 ) − 1.5z Y (z) − y(0) + 0.5Y (z) = z U − u(0) .

We need to compute y(1). From the given difference equation we have

y(1) = 1.5y(0) − 0.5y(−1) + u(0) = 1.25

Thus substituting in the z-transform and rearranging the terms, we get

(z 2 − 1.5z + 0.5)Y (z) − 0.5z 2 − 1.25z + 0.75z = zU (z) − z.

Thus we have
0.5z(z − 1) z
Y (z) = + U (z).
(z − 1)(z − 0.5) (z − 1)(z − 0.5)
Now U (z) = z/(z − 1) this we obtain
0.5z 2 1
Y (z) = + 2
+ .
z − 0.5 (z − 1) z − 0.5
Using the following inverse z-transforms
 
−1 z
Z = e−k/T , e−1/T = 0.5 ⇒ T = 1/ ln 2
z − e1/T
 
z
Z −1
z −1
= e−(k−1) ln 2
z − 0.5
   
−1 1 −1 −1 z
Z =Z z =k−1
(z − 1)2 (z − 1)2
we get
y(k) = 0.5e−k ln 2 + 2(k − 1) + e−(k−1) ln 2

S OLUTION 2.10
The controller can be written as

U (s) = −Uy (s) + Ur (s) = −Gy (s)Y (s) + Gr (s)R(s)

where the transfer functions Gy (s) and Gr (s) are given


s0 s + s1 s 1 − s 0 r1
Gy (s) = = s0 +
s + r1 s + r1
t0 s + t1 t1 − t 0 r1
Gr (s) = = t0 + .
s + r1 s + r1
We need to transform this two transfer functions in state-space form. We have

ẋy (t) = −r1 xy (t) + (s1 − s0 r1 )y(t)


uy (t) = xy (t) + s0 y(t)

and

ẋr (t) = −r1 xr (t) + (t1 − t0 r1 )r(t)


ur (t) = xr (t) + t0 r(t).

The sampled systems corresponding to the previous continuous time systems, when the sampling interval in h,
are

xy (kh + h) = Φxy (kh) + γy y(kh)


uy (kh) = xy (kh) + s0 y(kh)

and

xr (kh + h) = Φxr (kh) + γr r(kh)


ur (kh) = xr (kh) + t0 r(kh)

where

Φ = e−r1 h
Z h
s 1 − s 0 r1
γy = e−r1 s ds (s1 − s0 r1 ) = −(e−r1 h − 1)
0 r1
Z h
t1 − t 0 r1
γr = e−r1 s ds (t1 − t0 r1 ) = −(e−r1 h − 1) .
0 r1
From the state representation we can compute the pulse transfer function as
 
γy
uy (kh) = + s0 y(kh)
q − φy
 
γr
ur (kh) = + t0 r(kh).
q − φr
Thus the sampled controller in the form asked in the problem is
s0 q + γ y − s0 φy t0 q + γ r − t0 φ r
u(kh) = − y(kh) + r(kh).
q − φy q − φr
| {z } | {z }
Hy (q) Hr (q)

S OLUTION 2.11
(Ex. 2.21 in [2]) Consider the discrete time filter
z+b
z+a
(a)
   
eiωh + b cos ωh + b + i sin ωh
arg = arg
eiωh + a cos ωh + b + i sin ωh
   
sin ωh sin ωh
= arctan − arctan .
b + cos ωh a + cos ωh
We have a phase lead if
   
sin ωh sin ωh
arctan > arctan , 0 < ωh < π
b + cos ωh a + cos ωh
sin ωh sin ωh
> .
b + cos ωh a + cos ωh
Thus we have lead if b < a.

Solutions to analysis of sampled systems


S OLUTION 3.1
The characteristic equation of the closed loop system is

z(z − 0.2)(z − 0.4) + K = 0 K > 0.

The stability can be determined using the root locus. The starting points are z = 0, z = 0.2 and z = 0.4. The
asymptotes have the directions ±π/3 and −π. The crossing of the asymptotes is 0.2. To find the value of K
such that the root locus intersects the unit circle we let z = a + ib with a2 + b2 = 1. This gives

(a + ib)(a + ib − 0.2)(a + ib − 0.4) = −K.

Multiplying by a − ib and since a2 + b2 = 1 we obtain

a2 − 0.6a − b2 + 0.08 + i(2ab − 0.6b) = −K(a − ib).

Equating real parts and imaginary parts we obtain

a2 − 0.6a − b2 + 0.08 = −Ka


b(2a − 0.6) = Kb.

If b ≠ 0 then

a^2 − 0.6a − (1 − a^2) + 0.08 = −a(2a − 0.6)


4a2 − 1.2a − 0.92 = 0.

Solving with respect to a we get



    a = 0.15 ± √(0.0225 + 0.23) = 0.652 or −0.352

Figure 3.1.1: Root locus for the system in Problem 3.1 (plot omitted). The locus crosses the unit circle at approximately z = 0.652 ± 0.758i, corresponding to a gain of about K = 0.705.

This gives K = 0.70 and K = −1.30. The root locus may also cross the unit circle if b = 0 for a = ±1. A
root at z = −1 is obtained when

−1(−1 − 0.2)(−1 − 0.4) + K = 0

namely when K = 1.68. There is a root at z = 1 when K = −0.48. The closed loop system is thus stable for

0 < K < 0.70
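
The stability range for K can also be checked numerically by sweeping the gain and computing the closed-loop roots. A small Python sketch (illustrative; the grid of gains is an arbitrary choice):

import numpy as np

# Characteristic polynomial: z(z - 0.2)(z - 0.4) + K = z^3 - 0.6 z^2 + 0.08 z + K
def stable(K):
    roots = np.roots([1.0, -0.6, 0.08, K])
    return np.all(np.abs(roots) < 1.0)

gains = np.arange(0.01, 2.0, 0.01)
stable_gains = [K for K in gains if stable(K)]
print(max(stable_gains))        # expected: close to 0.70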

S OLUTION 3.2
We sample the system G(s). In order to do this we derive a state-space realization of the given system

ẋ = u
y=x

which gives the following matrices of the sampled system

Φ = e−h 0 = 1
Z h
Γ= ds = h.
0

The pulse transfer operator is


h
H(q) = C(qI − Φ)−1 Γ = .
q−1
(a) When τ = 0 the regulator is
u(kh) = Ke(kh)
and the characteristic equation of the closed loop system becomes

1 + C(z)H(z) = Kh + z − 1 = 0.

The system is stable if


|1 − Kh| < 1 ⇒ 0 < K < 2/h.

When there is a delay of one sample, τ = h then the characteristic equation becomes

z 2 − z + Kh = 0.

The roots of the characteristic equation (the poles of the system) must be inside the unit circle for guar-
anteeing stability and thus |z1 | < 1 and |z2 | < 1. Thus |z1 ||z2 | = |z1 z2 | < 1. Since z1 z2 = Kh we
have
K < 1/h.

(b) Consider the continuous-time system G(s) in series with a time delay of τ seconds. The transfer function
is then
K
Ḡ(s) = e−sτ .
s
The phase of the system as function of the frequency is
π
argḠ(jω) = − − ωτ
2
and the gain is
K
|Ḡ(jω)| = .
ω
The system is stable if the gain is less than 1 at the cross over frequency, which satisfies
π π
− − ωc τ = π ⇒ ωc =
2 2τ
The system is stable if
K
|Ḡ(jωc )| = <1
ωc
which yields (
π ∞ τ =0
K< = π
2τ 2h τ =π
The continuous-time system will be stable for all values of K if τ = 0 and for K < π/2h when τ = h.
This value is about 50% larger than the value obtained for the sampled system in (a).

S OLUTION 3.3
(a) The observability matrix is

    Wo = [C; CΦ] = [2 −4; 1 −2].
The system is not observable since rank(Wo ) = 1, or detWo = 0.

(b) The controllability matrix is

    Wc = [Γ ΦΓ] = [6 1; 4 1]
which has full rank. Thus the system is reachable.

S OLUTION 3.4
The controllability matrix is  
1 1 1 1 
Wc = Γ ΦΓ =
1 0 0.5 0
which has full rank (check the first two rows of Wc ), thus the system is reachable. From the input u we get the
system
   
1 0 0
x(k + 1) = x(k) + v(k).
0 0.5 1

In this case  
0 0
Wc =
1 0.5
which has rank one, and thus the system is not reachable from v.

S OLUTION 3.5
The closed loop system is
C(q)H(q)
y(k) = Hcℓ (q)r(k) = r(k).
1 + C(q)H(q)
(a) With C(q) = K, K > 0 we get
K
y(k) = r(k).
q2 − 0.5q + K
The characteristic polynomial of the closed loop system is

z 2 − 0.5z + K = 0,

and stability is guaranteed if the roots of the characteristic polynomial are inside the unit circle. In general for
a second order polynomial
z 2 + a1 z + a2 = 0
all the roots are inside the unit circle if1

a2 < 1
a2 > −1 + a1
a2 > −1 − a1 .

If we apply this result to the characteristic polynomial of the given system Hcℓ we get

K<1
K > −1.5
K > −0.5

which together with the hypothesis that K > 0 gives 0 < K < 1. The steady state gain is given by
K
lim Hcℓ (z) = .
z→1 K + 0.5

1
The conditions come from the Jury’s stability criterion applied to a second order polynomial. For details see pag. 81 in [2].

S OLUTION 3.6
The z-transform of a ramp is given in Table 2, pag. 22 in [1] and we get
z
R(z) = .
(z − 1)2

Using the pulse transfer function from Problem 3.5 and the final value theorem we obtain
 
z − 1 
lim e(k) = lim (r(k) − y(k)) = lim  Hcℓ (z)R(z)
k→∞ k→∞ z→1 z | {z }
F (z)

if (1−z −1 )F (z) does not have any root on or outside the unit circle. For the Hcℓ (z) as in this case the condition
is not fulfilled. Let us consider the steady state of the first derivative of the error signal e(k)

z − 1 z 2 − 0.5z z 0.5
lim = lim =
k→∞ z→1 z z 2 − 0.5z + K (z − 1)2 K + 0.5

which is positive, meaning that in steady state the reference and the output diverge.

S OLUTION 3.7
(a) (i) - Poles are mapped as z = esh . This mapping maps the left half plane on the unit circle

(b) (i) - The right half plane is mapped outside the unit circle

(c) (ii) - Consider the harmonic oscillator:


   
0 1 0
ẋ = x+ u
−1 0 1

y= 1 0 x

which is reachable since  


0 1
Wc =
1 0
has full rank. If we sampled with sampling period h we get
   
cos ωh sin ωh 1 − cos ωh
x(kh + h) = x(kh) + u(kh)
− sin ωh cos ωh sin ωh
| {z }
Γ

y(kh) = 1 0 x(kh)

If we choose h = 2π/ω then Γ is the 0 vector and clearly the system is not controllable.

(d) (ii) - as in (c) we can find example sampling periods that make the system not observable.

S OLUTION 3.8
The open loop system has pulse transfer operator
1
H0 =
q2 + 0.4q

and the controller is proportional, thus C(q) = K.

(a) The closed loop system has pulse transfer operator
KH0 K
Hcℓ = = 2 .
1 + KH0 q + 0.4q + K
From the solution of Problem 3.5 we know that the poles are inside the unit circle if
K<1
K > −1 + 0.4
K > −1 − 0.4 ⇒ −0.6 < K < 1

(b) Let e(k) = r(k) − y(k) then


E(z) = (1 − Hcℓ ) R(z).
If K is chosen such that the closed loop system is stable, the final-value theorem can be used and
z−1 1 z − 1 z 2 + 0.4z z 1.4
lim e(k) = lim R(z) = 2
=
k→∞ z→1 z 1 + KH0 z z + 0.4z + K z − 1 1.4 + K
If for example we choose K = 0.5 then limk→∞ e(k) = 0.74.

S OLUTION 3.9
The closed loop system has the following characteristic equation
det (zI − (Φ − ΓL)) = z 2 −(a11 +a22 −b2 ℓ2 −b1 ℓ1 )z+a11 a22 −a12 a21 +(a12 b2 −a22 b1 )ℓ1 +(a21 b1 −a11 b2 )ℓ2 .
This must be equal to the given characteristic equations, thus
    
b1 b2 ℓ1 p1 + tr Φ
=
a12 b2 − a22 b1 a21 b1 − a11 b2 ℓ2 p2 − det Φ
where tr Φ = a11 + a22 and det Φ = a11 a22 − a12 a21 . The solution is
    
ℓ1 1 a21 b1 − a11 b2 −b2 p1 + tr Φ
=
ℓ2 ∆ −a12 b2 + a22 b1 b1 p2 − det Φ

where ∆ = a21 b21 − a12 b22 + b1 b2 (a22 − a11 ). To check when ∆ = 0 we consider the controllability matrix of
the system  
 b1 a11 b1 + a12 b2
Wc = Γ ΦΓ =
b2 a21 b1 + a22 b2
and we find that ∆ = det Wc . There exists a solution to the system of equation above if the system is control-
lable, since then the rank of the controllability matrix is full and det Wc ≠ 0.
For the double integrator a11 = a12 = a22 = b2 = 1, a21 = 0 and b1 = 0.5. In order to design a dead-beat
controller we need to place the poles in zero, thus p1 = p2 = 0. This yields
      
    [ℓ1; ℓ2] = (1/∆) [ a21 b1 − a11 b2   −b2 ;  −a12 b2 + a22 b1   b1 ] [ p1 + tr Φ ; p2 − det Φ ]
             = −1 · [ −1  −1 ; −0.5  0.5 ] [ 2 ; −1 ] = [ 1 ; 1.5 ].
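
The pole placement can be verified numerically. A small Python sketch (using the double-integrator matrices with h = 1, as above) checks that Φ − ΓL has both eigenvalues at the origin:

import numpy as np

# Double integrator sampled with h = 1 (matrices as in Problem 3.9)
Phi = np.array([[1.0, 1.0],
                [0.0, 1.0]])
Gamma = np.array([[0.5],
                  [1.0]])
L = np.array([[1.0, 1.5]])          # dead-beat feedback gain computed above

eig_cl = np.linalg.eigvals(Phi - Gamma @ L)
print(eig_cl)                        # expected: both eigenvalues (numerically) zero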

S OLUTION 3.10
In this case the characteristic equation is
(z − 0.1)(z − 0.25) = z 2 − 0.35z + 0.025.
Using the result from Problem 3.9 we find that ∆ = 0.5 and L is obtained from
      
T ℓ1 1 0.5 0 0.75 0.75
L = = = .
ℓ2 ∆ 0.1 1 −0.025 0.1

S OLUTION 3.11
(a) The static observer gives
 
y(k − n + 1)
x̂(k) = Φn−1 Wo−1 
 .. 
. 
y(k)
 
0 0 ... 0  
 CΓ 0 ... 0  u(k − n + 1)
 
+ Φn−2 Γ Φn−3 Γ . . .

Γ − Φn−1 Wo−1 
 CΦΓ CΓ ... 0  .. 
 . .
 .. .. .. ..
 . . . . u(k − 1)
CΦn−2 Γ CΦn−3 Γ . . . CΓ
| {z }
Wu
| {z }
Ψ

In this case we have


    
C 0 1 −4.55 4.55
Wo = = Wo−1 =
CΦ 0.22 1 1 0
   
0 0
Wu = =
CΓ 0.03
Ψ = Γ − ΦWo−1 Wu
       
0.22 0.78 0 −4.55 4.55 0 0.114
= − = .
0.03 0.22 1 1 0 0.03 0

Thus we get
 
y(k − 1)
x̂(k) = ΦWo−1 + Ψu(k − 1)
y(k)
    
−3.55 3.55 y(k − 1) 0.114
= + u(k − 1).
0 1 y(k) 0

(b) The dynamic observer has the form

x̂(k + 1|k) = (Φ − KC)x̂(k|k − 1) + Γu(k) + Ky(k).

We choose K such that the the eigenvalues of Φ − KC are in the origin (very fast observer). Using the
results in Problem 3.9 where we use ΦT and C T instead of Φ and Γ, we obtain
 
2.77
K= .
1.78

(c) The reduced observer has the form

x̂(k|k) = (I − KC) [Φx̂(k − 1|k − 1) + Γu(k − 1)] + Ky(k).

In this case we want to find K such that

– CK = 1
– (I − KC)Φ has the eigenvalues in the origin.

The first condition imposes k2 = 1. Since
 
0.78 − 0.22k1 −k1
(I − KC)Φ =
0 0
in order to have eigenvalues in the origin we need to choose k1 = 0.78/0.22 = 3.55. The reduced
observer is then
     
0 −3.55 0.114 3.55
x̂(k|k) = x̂(k − 1|k − 1) + u(k − 1) + y(k).
0 0 0 1
Since x̂2 (k|k) = y(k) the we get
    
−3.55 3.55 y(k − 1) 0.114
x̂(k|k) = + u(k − 1)
0 1 y(k) 0
which is the same as the static observer computed in (a).

S OLUTION 3.12
The constant disturbance v(k), which typically has high energy at low frequencies, can be described by the
dynamical system
w(k + 1) = Aw w(k)
v(k) = Cw w(k),
where the matrix Aw typically has eigenvalues at the origin or on the imaginary axis. We consider the aug-
mented state vector  
x
z= .
w
The augmented system ca be described by
      
ẋ A Cw x B
= + u
ẇ 0 Aw w 0
 
 x
y= C 0
w
Sampling the system gives the following discrete-time system
      
x(k + 1) Φ Φxw x(k) Γ
= + u(k)
w(k + 1) 0 Φw w(k) 0
 
 x(k)
y(k) = C 0
w(k)
where
w(k + 1) = Φw w(k)
v(k) = Cw w(k),
and Φxw relates x(k + 1) and w(k). In this exercise the disturbance can be modeled by the system
w(k + 1) = w(k)
v(k) = w(k),
and the process is described by
Φw = 1
 
1
Φxw = .
0

(a) If the state x and the disturbance v can be measured then we can use the controller

u(k) = −Lx(k) − Lw w(k).

This gives the closed loop system

x(k + 1) = Φx(k) + Φxw w(k) − ΓLx(k) − ΓLw w(k)


y(k) = Cx(k)

In general it is not possible to eliminate completely the influence of w(k). This is possible only if
Φxw − ΓLw = 0. We will therefore consider the situation at the output in steady state

y(∞) = C (I − (Φ − ΓL))−1 (Φxw − ΓLw )w(∞) = Hw (1)w(∞).

The influence of w (or v) can be zero in steady state if

Hw (1) = 0.

Let φij the (i, j) element of Φ − ΓL and γi the ith element of Γ. Then

1 − Lw γ1 − φ22 + φ22 Lw γ1 − φ12 Lw γ2


C (I − (Φ − ΓL))−1 (Φxw − ΓLw ) = − =0
−1 + φ22 + φ11 − φ11 φ22 + φ12 φ21

yields
−1 + φ22
Lw = .
−γ1 + φ22 γ1 − φ12 γ2
If L is the feedback matrix that gives a dead beat controller, that is

L = 3.21 5.57

such that  
−0.142 −0.114
Φ − ΓL =
0.179 0.142
then we have Lw = 5.356.

(b) In this case the state is measurable, but not the disturbance. The disturbance can be calculated from the
state equation
Φxw w(k − 1) = x(k) − Φx(k − 1) − Γu(k − 1).

The first element of this vector gives

w(k − 1) = (1 0) (x(k) − Φx(k − 1) − Γu(k − 1)) .

Since w(k) is constant and x(k) is measurable, it is possible to calculate ŵ(k) = w(k − 1) . The
following control law can then be used

u(k) = −Lx(k) − Lw ŵ(k)

where L and Lw are the same as in (a). Compared with the controller in (a) there is a delay in the
detection of the disturbance.

(c) If only the output is measurable then the state and the disturbance can be estimated by using the following
observer:
        
x̂(k + 1) Φ Φxw x̂(k) Γ K
= + u(k) + ǫ(k)
ŵ(k + 1) 0 1 ŵ(k) 0 Kw
ǫ(k) = y(k) − C x̂(k).
The gains K and Kw can be determined so that the error goes to zero, since the augmented system is
observable. Let x̃(k) = x̂(k) − x(k) and similarly w̃(k) = ŵ(k) − w(k). Then
    
x̃(k + 1) Φ − KC Φxw x̃(k)
= .
w̃(k + 1) −Kw C 1 w̃(k)
The characteristic equation of the system matrix for the error is
z 3 + (k1 − 2.2)z 2 + (1.05 − 1.7k1 + k2 + kw )z + 0.7k1 + 0.15 − 0.7kw − k2 = 0.
The eigenvalues can be placed at the origin if
 
2.2
K= Kw = 3.33.
−0.64
The controller is
u(k) = −Lx̂(k) − Lw ŵ(k)
where L and Lw are the same as (a).

S OLUTION 3.13

(a) The eigenvalues of the matrix A are


λ1 = −0.0197
λ2 = −0.0129.
The matrices of the sampled system are
 
Ah 0.790 0
Φ=e =
0.176 0.857
 
0.281
Γ=
0.0296
(b) The pulse transfer operator is given by
 −1
−1
 q − 0.790 0  0.030q + 0.026
H(q) = C(qI − Φ) Γ = 0 1 0.281 0.0297 = 2
−0176 q − 0.857 q − 1.65q + 0.68
(c) The poles of the continuous-time system are at -0.0197 and -0.0129. The observer should be twice as fast as
the fastest mode of the open-loop system, thus we choose the poles of the observer in
z = e−0.0192·2·12 = 0.62.
The desired characteristic equation of Φ − KC is then
z 2 − 1.24z + 0.38 = 0.
Using the results from Problem 3.9 we obtain

K = 0.139 0.407 .

S OLUTION 3.14
(a) The sampled system is

x(k + 1) = Φx(k) + Γu(k)


y(k) = x(k)

where

Φ = e−5
1 − e−5
Γ= .
5

(b) In order to to prove stability with Lyapunov argument we need to choose a Lyapunov function V . Let
V = |x|. The increment of the Lyapunov function is then

∆V = |x+ | − |x| = |e−5 x| − |x| = (e−5 − 1)|x| ≤ 0.

since u(k) = 0. Since ∆V ≤ 0, then the system is stable. We actually know that the system is
asymptotically stable, since the pole is inside the unit circle. We can conclude on asymptotical stability
using the Lyapunov argument, noticing that ∆V < 0 if x ≠ 0. Thus the system is asymptotically stable.

S OLUTION 3.15
(a) The sampled system is

x(k + 1) = Φx(k) + Γu(k)


y(k) = x(k)

where
 
e−1 0
Φ=
0 e−2
 
1 − e−1
Γ =  1 − e−2  .
2

(b) The characteristic polynomial is

(z − 0.1)(z − 0.2) = z 2 − 0.3z + 0.02.

Using the result from Problem 3.9 we obtain


 −2

−2 
  1 − e 1 − e   
ℓ1 1 −e −1 − −0.3 + e−1 + e−2 0.3059
=   =
ℓ2 −0.0636 −e−2 (1 −2e−1 ) 1 − e2−1 0.02 − e−1 e−2 0.0227

(c) The closed loop system is  


0.1745 −0.0143
x(k + 1) = x(k).
−0.1323 0.1255
| {z }
Φc

We consider the following Lyapunov function

V (x) = xT x

then
∆V (x) = xT ΦTc Φc x − xT x = xT (ΦTc Φc − I) x.
| {z }
Ω, symmetric

Since the eigenvalues of the symmetric matrix Ω are negative then ∆V ≤ 0. Thus the closed loop
system is stable. Notice that ∆V < 0 if x ≠ 0, thus the closed loop system is asymptotically stable (the
eigenvalues are placed in 0.1 and 0.2).

Solutions to computer realization of controllers


S OLUTION 4.1
(a) The characteristic polynomial of the closed loop system is equal to the numerator of 1 + HHc , that is,

z 2 + (K + Ki − 2.5)z − K + 1.

The poles of the closed-loop system are in the origin if

K + Ki − 2.5 = 0
−K + 1 = 0

which yields K = 1 and Ki = 1.5.

(b) We can use the following partial fraction expansion for Hc (z)

N
Hc (z) = M + .
z−1
With simple calculations we obtain M = 2.5 and N = 1.5. Thus the state-space representation of the
controller is

x(k + 1) = x(k) + e(k)


u(k) = 1.5x(k) + 2.5e(k)

S OLUTION 4.2
(a) Using Euler’s method (Forward difference) we get

a ah
H(z) = = .
(z − 1)/h + a z − 1 + ah

This corresponds to the difference equation

y(kh + h) + (ah − 1)y(kh) = a h u(kh).

The difference equation is stable if

|ah − 1| < 1 ⇒ 0 < h < 2/a.

The approximation may be poor even if the difference equation is stable.

(b) Tustin’s approximation gives

a (z + 1ah/2)
H(z) = =
2z − 1 (1 + ah/2)z + (ah/2 − 1)
+a
hz + 1
ah/2 z+1
= .
1 + ah/2 ah/2 − 1
z+
ah/2 + 1

The pole of the discrete-time system will vary from 1 to - 1 when h varies from 0 to ∞. The discrete-time
approximation is always stable if a > 0.

(c) Tustin’s approximation with prewarping gives

a a/α z+1
H(z) = =
z−1 1 + a/α a/α − 1
α +a z+
z+1 a/α − 1

where
a
α= .
tan(ah/2)

S OLUTION 4.3
(a) Euler’s method gives

(z − 1)/h + 1 z−1+h z − 0.75


H(z) = 4 =4 =4
(z − 1)/h + 2 z − 1 + 2h z − 0.5

(b) Backward differences give

(z − 1)/(zh) + 1 z(1 + h) − 1 z − 0.80


H(z) = 4 =4 = 3.33
(z − 1)/(zh) + 2 z(1 + 2h) + 1 z − 0.667

(c) Tustin’s approximation gives

2z − 1
+1 z(1 + h/2) − (1 − h/2) z − 0.778
H(z) = 4 h z + 1 =4 = 3.6
2z − 1 z(1 + h) − (1 − h) z − 0.6
+2
hz + 1

(d) Tustin’s approximation with pre-warping

z−1
α +1 z(1 + 1/α) − (1 − 1α) z − 0.775
H(z) = 4 z + 1 =4 = 3.596
z−1 z(1 + 2/α) − (1 − 2/α) z − 0.596
α +2
z+1
All the four approximation has the form

z+a
H(z) = K .
z+b

The gain and phase at ω = 1.6rad/s are obtained from

eiωh + a (eiωh + a)(e−iωh + b)


H(eiωh ) = K = K
eiωh + b (eiωh + b)(e−iωh + b)
1 + ab + (a + b) cos(ωh) + i(b − a) sin(ωh)
=K
1 + b2 + 2b cos(ωh)

(b − a) sin ωh
argH(eiωh ) = arctan
1 + ab + (a + b) cos(ωh)

s
1 + a2 + 2a cos(ωh)
|H(eiωh )| = K .
1 + b2 + 2b cos(ωh)

The four different approximations give at ω = 1.6rad/s the following results

|H(.)| argH(.) Rel. err. |.| Rel. err. arg(.)


Continuous-time (no approx.) 2.946680646 0.3374560692
Euler 2.966414263 0.4105225474 0.67% 21.65%
Backward 2.922378065 0.2772636846 -0.82% -17.83%
Tustin 2.959732059 0.3369122161 0.44% -0.16%
Tustin with prewarping 2.946680646 0.3374560692 0.0% 0.0%
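
The numbers in the table can be reproduced with a few lines of Python. The sketch below (illustrative; it uses the sampling period h = 0.25 of this problem) evaluates each approximation H(z) = K(z + a)/(z + b) at z = e^{iωh} for ω = 1.6 rad/s and compares with the continuous-time filter 4(s + 1)/(s + 2):

import numpy as np

h, w = 0.25, 1.6
z = np.exp(1j * w * h)

# (K, a, b) for each discrete approximation H(z) = K (z + a) / (z + b), as derived above
filters = {
    "euler":     (4.0,   -0.75,  -0.5),
    "backward":  (3.33,  -0.80,  -0.667),
    "tustin":    (3.6,   -0.778, -0.6),
}

H_cont = 4.0 * (1j * w + 1.0) / (1j * w + 2.0)
print("continuous:", abs(H_cont), np.angle(H_cont))

for name, (K, a, b) in filters.items():
    H = K * (z + a) / (z + b)
    print(name, abs(H), np.angle(H))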

S OLUTION 4.4
The tank process in Problem 2.13 has the transfer function
0.000468
G(s) =
(s + 0.0197)(s + 0.0129)
(a) At the desired cross-over frequency we have

|G(iωc )| = 0.525
argG(iωc ) = −115◦ .

We use a PI controller in the form


K(T s + 1)
F (s) =
Ts
and we have the following specifications atωc

– gain 1/0.525
– phase -15◦ .

This gives K = 1.85 and T = 149.

(b) The characteristic equation of the closed loop system is

s2 + 0.0326s2 + 0.00112s + 5.91 10−6 = 0

which has roots s1,2 = −0.0135 ± i0.0281 and s3 = −0.006. The complex poles have a damping of
ζ = 0.43. The zero of the closed loop system is −0.0062.

(c) Tustin’s approximation with given warping is
!
z−1
1.85 α + 0.0067  
z+1 1.85(α + 0.0067) 0.0134
Hc (z) = = 1+ .
z−1 α (α + 0.0067)(z − 1)
α
z+1
The rule of thumb for the selection of the sampling period gives

h ≈ 6 − 20sec.

The choice of h = 12 seems to be reasonable. This gives α = 0.165 and


 
0.0778
Hc (z) = 1.925 1 + .
z−1

S OLUTION 4.5
The sample and zero-order hold circuit can be approximate by a delay of h/2 seconds. Indeed the output of the
zero-order hold is
uh (t) = u(kh), kh < t < kh + h

Figure 4.5.1: The zero-order hold: the sampled value u(kh) is held constant over [kh, kh + h) (figure omitted).

If u(kh) = δ(kh) then uh (t) is the impulse response of the ZOH filter. In this case the output, let us call it,
uδh (t) is a pulse of height 1 and duration h i.e.,

uδh (t) = (1(t) − 1(t − h)) .

The Laplace transform of the impulse response is the transfer function of the ZOH filter, which is
Z ∞
δ 1
ZOH(s) = L{uh (t)} = (1(t) − 1(t − h)) e−st dt = (1 − e−sh )/s.
0 h
When we sample a signal we have a scaling factor of 1/h, as

1 X
Xs (jω) = X(j(ω + kωs )).
h
k=−∞

Thus for small h we have
1 (1 − e−sh ) 1 − 1 + sh − (sh)2 /2 + . . . sh
SAM P LE(s) · ZOH(s) = = =1− + · · · ≈ e−sh/2
h s sh 2
which is approximately a delay of h/2. If we assume a decrease of the phase margin of 5◦ − 15◦ , then
180◦ ωc h ωc h
∆φZOH = ωc h/2 = = = 5◦ − 15◦ ,
2π 0.035
which then gives
ωc h = 0.17 − 0.52
or
ωc h ≈ 0.15 − 0.5.

S OLUTION 4.6
Consider the general problem of controlling a continuous-time system

ẋ(t) = Ax(t) + Bu(t)


y(t) = Cx(t)

with the continuous-time controller

u(t) = M r(t) − Lx(t).

The closed-loop system is

ẋ(t) = (A − BL)x(t) + BM r(t) = Ac x(t) + BM r(t)


y(t) = Cx(t).

If r(t) is constant over one sampling period, then the previous equation can be sampled, giving

x(kh + h) = Φc x(kh) + Γc M r(kh)


y(kh) = Cx(kh), (0.0)

where

Φc = eAc h
Z h
Γc = eAc s dsB.
0

Let us assume that the controller


u(kh) = M̃ r(kh) − L̃x(kh)
is used to control the sampled system

x(kh + h) = Φx(kh) + Γu(kh)


y(kh) = Cx(kh),

where

Φ = eAh
Z h
Γ= eAh dsB.
0

In this case the closed-loop system is

x(kh + h) = (Φ − ΓL̃)x(kh) + ΓM̃ r(kh)


y(kh) = Cx(kh). (0.0)

It is in general not possible to choose L̃ such that

Φc = Φ − ΓL̃.

However, we can make a series expansion and equate terms. Assume

L̃ = L0 + L1 h/2

then 
Φc ≈ I − (A − BL)h + A2 − BLA − ABL − (BL)2 h2 /2 + . . .
and 
Φ − ΓL̃ ≈ I + (A − BL0 )h + A2 − ABL0 − (BL1 )2 h2 /2 + . . .
The two sampled closed-loop systems above have the same poles up to and including terms of order h^2 if

L̃ = L (I + (A − BL)h/2) .

The modification on M̃ is determined by requiring that the steady-state values of the two closed-loop systems are the same. Let
the reference be constant and assume that the steady-state value of the state is x0 . This gives the relations

(I − Φc )x0 = Γc M r

and  
I − (Φ − ΓL̃) x0 = ΓM̃ r

The series expansion of the left-hand sides of these two relations are equal for power of h up to and including h2 .
Then, we need to determine M̃ so that the expansions of the right-hands are the same for h and h2 . Assuming

M̃ = M0 + M1 h/2

then
Γc M ≈ BM h + (A − BL)BM h2 /2 + . . .
and
ΓM̃ ≈ BM0 h + (BM1 − ABM0 )h2 /2 + . . . ,
which gives
M̃ = (I − LBh/2)M.
(a) We need to compute L̃ and M̃ , which are
   
1 h/2  1 h/2 
L̃ = L = 1 2 = 0.8 1.7
−h/2 1 − h −h/2 1 − h
M̃ = 2 − 2h = 1.6

(b) The backward difference approximation gives

1 − q −1
x̂(kh) = (A − KC)x̂(kh) + Bu(kh) + Ky(kh)
h
(I − Ah − KCh)x̂(kh) = q −1 x̂(kh) + Bu(kh) + Ky(kh).

Let  
−1 1 1 h
Φ0 = (I − Ah − KCh) = 2 .
h +h+1 −h 1 + h
This gives,

x̂(kh) = (Φ0 x̂(kh − h) + Φ0 Bu(kh) + Φ0 Ky(kh)


     
0.81 0.16 0.03 0.19
= x̂(kh − h) + u(kh) + y(kh).
−0.16 0.97 0.19 0.16

(c) Simulate the continuous-time controller and the discrete-time approximation. Let x(0) = (1, 1)T and
x̂(t) = (0, 0)T .

S OLUTION 4.7
(a) To obtain the forward difference approximation we substitute s = q−1h in the continuous-time expression.
We obtain
s0 q − s0 + hs1 t0 q − t0 + ht1
u(kh) = − y(kh) + r(kh)
q + r1 h − 1 q + r1 h − 1
(b) In order to compare the discretizations we examine the location of the controller pole. We use the graph
available in order to compute the poles for the exact discretization. We have the following results:

Discretization h = 0.01 h = 0.1 h=1


Exact 0.9048 0.3679 0.00004540
Forward difference 0.9 0 -9

We notice that the forward difference approximation yields an unstable system for h = 1 (pole in -9).
For h = 0.1, the forward difference approximation gives a pole at the origin, which corresponds to a one
step delay. Thus this approximation of the continuous-time controller cannot be considered satisfactory.
For h = 0.01 the pole of the forward difference approximation is very close to that of the exact discretization,
which means that this is a better choice.
We could also consider how the sampling interval relates to the dominating time constant of the system.
We know that the relation between this two parameters is given by the following rule-of-thumb
Tr
Nr = ≈ 4 − 10.
h
In this case only the controller dynamics are available, but using the time constant of the controller that
is Tc = 1/r1 = 0.1 we see that only for h = 0.01 the rule-of-thumb is fulfilled.

S OLUTION 4.8
(a) The backward differences approximation substitutes s with (z − 1)/zh. We can consider the Laplace
transform of the controller’s equation yields

sX(s) = AX(s) + BE(s)


U (s) = CX(s) + DE(s)

Substituting s with (z − 1)/zh we get the following equations

x(k + 1) − x(k) = hAx(k + 1) + hBe(k + 1)


u(k) = Cx(k) + De(k).

We consider first the state update equation. Dividing the signals at time k + 1 and k we get the following
x(k + 1) − hAx(k + 1) − hBe(k + 1) = x(k) =: w(k + 1).
Solving this equation for x in terms of w and e we get
x(k + 1) = (I − hA)−1 w(k + 1) + (I − hA)−1 Bhe(k + 1)
thus, since w(k + 1) = x(k) we get
w(k + 1) = (I − hA)−1 w(k) + (I − hA)−1 Bhe(k).
This gives that
Φc = (I − hA)−1
Γc = (I − hA)−1 Bh.
From the output equation substituting x(k) we get directly
u(k) = C(I − hA)−1 + {D + C(I − hA)−1 Bh}e(k).
Thus
H = C(I − hA)−1
J = D + C(I − hA)−1 Bh.

(b) To compute the Tustin’s approximation we proceed in the same way, and we substitute s with 2(z −
1)/(h(z + 1)). The state equation then becomes
Ac h  Bc h
x(k + 1) − x(k) = x(k + 1) − x(k) + (e(k + 1) + e(k)).
2 2
Again collecting the k + 1 terms on one side we get
Ah Bh Ah Bh
x(k + 1) − x(k) − e(k + 1) = x(k) + x(k) + e(k)
2 2 2 2
=: w(k + 1).
Thus we can derive x(k + 1) from the previous equation as function of w(k + 1) and e(k + 1)
Ah −1 Ah −1 Bh
x(k + 1) = (I − ) w(k + 1) − (I − ) e(k + 1).
2 2 2
The state equation becomes
Ah Ah −1 Ah Ah −1  Bh
w(k + 1) = (I + )(I − ) w(k) + (I + )(I − ) +I e(k).
| 2 {z 2 } | 2 {z2 2}
Φc Γc

The output equation becomes


Ah −1 Ah −1 Bh
u(k) = C(I − ) w(k) + {D + C(I − ) } e(k).
| {z 2 } | {z 2 2 }
Hc Jc

Notice that is possible to write Γc in a more compact way as follows


Ah Ah −1  Bh Ah Ah  Ah −1 Bh
(I + )(I − ) +I = (I + ) + (I − ) (I − )
2 2 2 2 2 2 2
Ah −1 Bh
= (I − ) .
2 2

Solutions to implementation aspects
S OLUTION 5.1
The poles of the controller are

1 3
z1 = 1; z2 = ; z3,4 = −1/4 ± i
2 4
We can represent the controller in Jordan form as

Σ1 : s1 (k + 1) = 1 s1 (k) + 2.286y(k)
1
Σ2 : s2 (k + 1) = s2 (k) − 3.073y(k)
2 √ !
  −1 3
  
s3 (k + 1) 4 4 s3 (k) 1.756
Σ3 : = √ + y(k)
s4 (k + 1) − 3 −1 s4 (k) 1.521
4 4

The control input u(k) is then

u(k) = 0.5s1 (k) + 0.8677s2 (k) + 0.8677s3 (k)

Another way to get the parallel form is the following. We write the given transfer function as

Figure 5.1.1: Parallel form for the controller of Problem 5.1: the outputs of the three subsystems Σ1, Σ2 and Σ3, all driven by y(k), are summed to form u(k) (block diagram omitted).

1 A B Cz + D
H(z) = 2
= + + 2
(z − 1)(z − 1/2)(z + 1/2z + 1/4) z − 1 z − 1/2 z + 1/2z + 1/4

where the constants A, B, C, D are to be determined. It easy to find that the two polynomial in z are equal if
and only if
A = 1.14, B = −2.67, D = 0.95, C = 1.52.
Then the system is the parallel of
1.14
Σ1 : HΣ1 =
z−1
−2.67
Σ2 : HΣ2 =
z − 1/2
1.52z + 0.95
Σ3 : HΣ3 = 2 .
z + 1/2z + 1/4

Figure 5.3.1: Closed-loop system for Problem 5.3: the plant P(z) in feedback with the controller C(z), with the network delay acting on the measurement y(kh) (block diagram omitted).

S OLUTION 5.2
(a) The closed loop system from r to y shown in Figure has the following transfer function
Cs (s)P (s)e−sτ
H(s) = .
1 + Cs (s)P (s)e−sτ
In order to find Cs (s) we need to solve H(s) = Hcℓ (s) which gives
Cs (s)P (s)e−sτ C(s)P (s) −sτ
−sτ
= e
1 + Cs (s)P (s)e 1 + C(s)P (s)
 
Cs (s) 1 + C(s)P (s) = C(s) 1 + Cs (s)P (s)e−sτ
C(s)
Cs (s) =
1 + C(s)P (s) − C(s)P (s)e−sτ

(b) We first find C(s):


1
C(s) 8 8s + 8
s + 1 e−sτ = ⇒ C(s) = .
1 2
s + 4s + 8 s2 + 4s
1 + C(s)
s+1
Using the result in (a) we get
8(s + 1)
Cs (s) =
s2 + 4s + 8(1 − e−sτ )
S OLUTION 5.3
The control problem over wireless network can be represented as shown Figure 5.3.1. The delay ∆ is such that
∆(y(kh)) = y(kh − d(k)) d(k) ∈ {0, . . . , N }
The closed loop system is stable if
P (eiω )C(eiω ) 1
< ω ∈ [0, 2π]
1 + P (eiω )C(eiω ) N |eiω − 1|
where N is the number of samples that the control signal is delayed. Notice that the previous result is valid if
the closed loop transfer function is stable. In this case the closed loop transfer function is stable with poles
z1 = 1, z2 = 0.861, z3 = 0.5, z4 = 0.447.
If we plot the bode diagram of the closed loop system without delay versus the function 1/(N |eiω − 1|) for
different values of N we obtain the results shown in Figure 5.3.2. It can be seen that the closed loop system is
stable if N ≤ 3. Thus the maximum delay is 3 samples. Notice that the result is only a sufficient condition.
This means that it might be possible that the system is stable for larger delays than 3 samples.

Figure 5.3.2: Bode magnitude of the closed-loop system of Problem 5.3 compared with the function 1/(N|e^{iω} − 1|) for
different values of N (plot omitted).
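
The graphical test above can be automated. The sketch below is a generic Python helper (not tied to the specific transfer functions of this problem, which are not restated here): it checks the sufficient condition |T(e^{iω})| < 1/(N|e^{iω} − 1|) on a frequency grid for increasing N and returns the largest N for which it holds.

import numpy as np

def max_delay_samples(T_of_z, N_max=20, n_grid=2000):
    """Largest N such that |T(e^{jw})| < 1/(N |e^{jw} - 1|) for all w in (0, pi]."""
    w = np.linspace(1e-3, np.pi, n_grid)
    z = np.exp(1j * w)
    Tmag = np.abs(T_of_z(z))
    best = 0
    for N in range(1, N_max + 1):
        if np.all(Tmag < 1.0 / (N * np.abs(z - 1.0))):
            best = N
    return best

# Example usage with some closed-loop transfer function T(z) (placeholder only):
# print(max_delay_samples(lambda z: 0.5 / (z - 0.5)))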

S OLUTION 5.4
The linear model for multiplication with roundoff is shown in Figure 5.4.1. The signal ǫ is a stochastic variable
representing the roundoff, and it is uniformly distributed in the interval (−δ/2, δ/2). It thus has variance δ 2 /12.
Before computing the variance of u as function of the variance of i we show that the two algorithms compute
Figure 5.4.1: Linear model for multiplication with roundoff: the quantizer is replaced by an additive noise source ǫ (block diagram omitted).

the same u. For Algorithm 1 we have


i+ =i+eh/ti
u = k(e + i) =⇒ u = ke + k(i − 1) + keh/ti

whereas for algorithm 2 we have


i+ =ki+keh/ti
u = i + ke =⇒ u = ke + k(i − 1) + keh/ti .

Alg 1: We have that


   
1 1 δ2
V ar{i+ } = V ar{i} + V ar (eh + ǫ) + ǫ = V ar{i} + 1 + .
ti ti 12

Using the approximation of roundoff we have that u ≈ k(e + i) + ǫ and thus

δ2
V ar{u} = kV ar{i} + .
12

Alg 2: We have that
   
δ2 1 h 1 δ2
V ar{i+ } = kV ar{i} + + V ar ((ke + ǫ) h + ǫ) + ǫ = kV ar{i} + 2 + + .
12 ti ti ti 12
Using the approximation of roundoff we have that u ≈ i + ke + ǫ and thus
δ2
V ar{u} = V ar{i} + .
12
If we consider
V ar{i+ } = αV ar{i} + f (δ 2 )
then after n iteration we have that !
n−1
X
i
V ar{i}n = α f (δ 2 ) ,
i=0
where we have considered V ar{i}0 = 0. Thus we have that
Alg 1: In this case  
1 δ2
V ar{i}n = n 1 +
ti 12
and thus   2
kn δ
V ar{u}n = kn + +1
ti 12
Alg 2: In this case !
n−1  2
X
i h 1 δ
V ar{i}n = k + +2
ti ti 12
i=0
and thus    2
kn − 1 h 1 δ δ2
V ar{u}n = + +2 +
k−1 ti ti 12 12
In order to compare the two algorithms, we assume h = ti = 1. (This is just for comparison, since multiplica-
tion with 1 normally does not add any roundoff error.) We then have for the two algorithms, that
δ2
V ar{u}(1)
n = (2kn + 1)
 n 12 
k −1 δ2
V ar{u}(2)
n = 4 +1 .
k−1 12
If we assume k < 1 and n large then
kn − 1 4
4 +1≈ +1
k−1 1−k
and thus
4
2kn + 1 > +1
1−k
which allows us to conclude that the first algorithm is worse than the second. If k > 1 and n large then
kn − 1 kn
4 +1≈
k−1 k−1
and thus
2kn(k − 1) + 1 < k n
which allows us to conclude that in this case the second algorithm is worse than the first.

S OLUTION 5.5
For a = 1/8 the transfer function is
1 y(z)
H(z) = =
1 r(z)
1 − z −1
8
thus in time we have
1
y(k) = y(k − 1) + r(k).
8
For a step r(k) = 1 for k ≥ 0 thus
1
y(k) = y(k − 1) + 1 k ≥ 0.
8
The data representation is the following

± 2 1 1/2 1/4 1/8 1/16 1/32


   •     
Sign Integer ←− Fraction −→
The steady state value is

z−1 1 z 1
lim = = 1.14285714.
z→1 z 1 z−1 1 − 0.125
1 − z −1
8
If we compute the steady-state value when truncation and roundoff are considered, and we have limited word-
length we obtain
iteration y(k) with truncation y(k) with roundoff exact
0 1 1 1.0
1 001.00100b 001.00100b 1 + 81
2 001.00100b 001.00101b 1
1 + 8 (1 + 8 ) = 1 + 18 + 64
1 1
=1+ 9
64
1 1 1 73
3 001.00100b 001.00101b 1 + 8 + 64 + 512 = 1 + 512
.. .. .. ..
. . . .
∞ 001.00100b = 1.125 001.00101b = 1.15625 1.14285714
which then gives the following steady-state values
    yss^T = 1.125        (truncation)
    yss^R = 1.15625      (roundoff)
    yss^E = 1.14285714   (exact).

If we compute the relative errors for truncation and roundoff we have

• Truncation: (yss^T − yss^E)/yss^E ≈ −1.6%

• Roundoff: (yss^R − yss^E)/yss^E ≈ +1.2%
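
The steady-state values above can be reproduced with a short simulation. The Python sketch below is illustrative: it quantizes to the 5 fractional bits assumed in the data format (resolution 1/32) and iterates y(k) = y(k − 1)/8 + 1 with truncation and with rounding (ties rounded upwards).

import math

LSB = 1.0 / 32.0                       # 5 fractional bits

def truncate(v):
    return math.floor(v / LSB) * LSB   # chop the fractional bits below the LSB

def roundoff(v):
    return math.floor(v / LSB + 0.5) * LSB   # round to nearest representable value, ties up

y_t = y_r = 1.0
for _ in range(50):                    # long enough to reach steady state
    y_t = truncate(y_t / 8.0) + 1.0
    y_r = roundoff(y_r / 8.0) + 1.0

print(y_t, y_r, 1.0 / (1.0 - 0.125))   # expected: 1.125, 1.15625, 1.142857...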

Part II

Event-triggered control

Solutions to event-based control and real-time systems
S OLUTION 6.1
(a) The closed-loop system is given by
ẋ = x + u = x − 2x = −x.
Consider the Lyapunov function candidate V (x) = x2 . Clearly V (x) ≥ 0 and its time derivative along
the closed-loop system is
    V̇ = 2x ẋ = 2x(−x) = −2x^2 ≤ 0,

which means that V̇ < 0 when x ≠ 0 and V̇ = 0 when x = 0. By Lyapunov's stability theorem, the
closed-loop system converges to the origin asymptotically.
(b) Given the periodic control input, for t ∈ [kh, kh + h), system evolves by
Z t−kh
x(t) = et−kh x(kh) + ( es ds) u(kh)
0
= et−kh x(kh) + (et−kh − 1) u(kh)
When t = kh + h, we get
x(kh + h) = eh x(kh) + (eh − 1) u(kh)
= eh x(kh) + (eh − 1)(−2 x(kh))
= (2 − eh ) x(kh).

Thus x(kh) = (2 − eh )k x(0), k ∈ N. It means that for the closed-loop system to converge to the origin,
the following condition should hold
|2 − eh | < 1 .
Thus −1 < 2 − eh < 1 ⇒ 1 < eh < 3 ⇒ 0 < h < ln 3, i.e., hmax = ln 3 ≈ 1.1s.
(c) Denote by e(t) = x(tk ) − x(t), t ∈ [tk , tk+1 ). Then u(t) = −2(x(t) + e(t)), ∀t ≥ 0. Consider the same
Lyapunov function candidate as before V (x) = x2 . Its time derivative along the closed-loop system is
V̇ = 2x ẋ
= 2x (−2(x + e)) = −4(x2 + x e).

Since x, e ∈ R, x2 = |x|2 and xe ≥ −|x||e|, it gives that


V̇ ≤ −4 (|x|2 − |x||e|)
= −4|x| (|x| − |e|).

By enforcing |e(t)| < |x(t)|, ∀t > 0, V̇ ≤ 0 and V̇ = 0 only when x = 0. The same convergence result
can be obtained as in Problem 1.
For t ∈ [tk , tk+1 ) the closed-loop system evolves by
Z t−tk
t−tk
x(t) = e x(tk ) + ( es ds) u(tk )
0
= et−tk x(tk ) + (e t−tk
− 1) u(tk )
t−tk t−tk
=e x(tk ) + (e − 1) (−2x(tk ))
t−tk
= (2 − e )x(tk ).

Thus for t ∈ [tk , tk+1 ), the state error is given by

e(t) = x(tk ) − x(t)


1
= x(t) − x(t)
2 − et−tk
et−tk − 1
= x(t).
2 − et−tk

To enforce |e(t)| < |x(t)|, ∀t > 0, we get ∀t ∈ [tk , tk+1 )

et−tk − 1
|e(t)| = | x(t)| < |x(t)|
2 − et−tk
et−tk − 1
⇒| |<1
2 − et−tk
⇒ |et−tk | < 1.5
⇒ |etk+1 −tk | < 1.5
⇒ tk+1 − tk < ln 1.5.

That is to say, the maximal sampling interval is upper-bounded by ln 1.5 ≈ 0.4s.

(d) In order to implement the controller with event-based sampling and ZOH, the following steps should be
followed:

(a) Monitor: keep monitoring x(t), and check if |e(t)| < |x(t)| holds, where e(t) = x(t) − x(tk ). If
so, go to ’sample’ step.
(b) Sample: sample x(t) to obtain x(tk ). Save x(tk ), go to ’update’ step.
(c) Update: set u = −2x(tk ), go back to ’Monitor’.
(d) ZOH: hold u.

While to implement the periodic controller, only time t needs to be monitored and the following steps
should be followed:

(a) Monitor: keep monitoring t, and check if t = kh holds. If so, go to ’sample’ step.
(b) Sample: sample x(t) to obtain x(kh), go to ’update’ step.
(c) Update: set u = −2x(kh), go back to ’Monitor’.
(d) ZOH: hold u.
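
To make the event-triggered scheme of Solution 6.1 concrete, here is a minimal Python simulation sketch (hedged: forward-Euler integration with a small step, and the triggering test |e(t)| ≥ σ|x(t)| with σ = 0.9 chosen here only to stay strictly inside the bound |e| < |x|):

import numpy as np

dt, T, sigma = 1e-3, 5.0, 0.9
x, x_k = 1.0, 1.0                 # state and last sampled state
u = -2.0 * x_k
samples = [0.0]

for step in range(int(T / dt)):
    x += dt * (x + u)             # plant: dx/dt = x + u, with ZOH input
    e = x_k - x                   # state error since the last sample
    if abs(e) >= sigma * abs(x):  # event condition about to be violated -> resample
        x_k = x
        u = -2.0 * x_k
        samples.append((step + 1) * dt)

print("final state:", x, "number of events:", len(samples))
print("max inter-event time:", max(np.diff(samples)))   # expected below ln(1.5) ~ 0.405 s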

S OLUTION 6.2
(a) The closed-loop system is given by

ẋ(t) = (A + BK) x(t)


 
−1 0
= x(t).
−2 −3
| {z }
Acl

The eigenvalues of the matrix Acl are λ1 = −1 and λ2 = −3, which implies that the closed-loop system
is asymptotically stable.

(b) According to the definition of the state error e(t), the control signal can be rewritten as

u(t) = K x(t) + e(t) ,

so the closed loop equation of the system becomes

ẋ(t) = (A + BK) x(t) + BKe(t)


   
−1 0 −3 −1
= x(t) + e(t).
−2 −3 −3 −1

(c) The time-derivative of the candidate Lyapunov function along the closed-loop system is given by

V̇ = 3x1 x˙1 + x2 x˙2


= −3x21 − 2x1 x2 − 3x22 − 9x1 e1 − 3x1 e2 − 3x2 e1 − x2 e2
≤ −2(x21 + x22 ) + 9|x1 ||e1 | + 3|x1 ||e2 | + 3|x2 ||e1 | + |x2 ||e2 |
≤ −2kxk2 + 16kxkkek.

By choosing the event triggering condition

1
kek < kxk,
8

we have V̇ < 0, which implies that the closed-loop system still converges to the origin.

S OLUTION 6.3

(a) The two tasks are not independent in this particular case however. We first compute the CPU utilization
n
X Ci Cc Ca
U= = +
Ti Tc Ta
i=1

where the period Tc = Ta = h. The two tasks need to be finished within the new sampling interval. Thus we
have
0.1 0.2
U= + = 0.75
0.4 0.4
thus the CPU is utilized 75% of the time. We still do not know if it is possible to schedule the tasks so that
they meet their deadlines. In order to do this we calculate the schedule length and we draw the schedule. The
schedule length is
lcm{Ta , Tc } = 0.4

Ja Jc

1
0
0
1

0 0.1 0.2 0.3 0.4


Figure 6.3.1: Schedule for the two tasks Ja and Jc of Problem 6.3.

Figure 6.3.2: Schedule for the two tasks Ja, Jc and Jx of Problem 6.3 over one hyperperiod of 0.8 s (diagram omitted).

(b) The utilization factor in this case is


0.1 0.2 0.2
U= + + =1
0.4 0.4 0.8
Thus the CPU will be fully occupied. In order to see if the tasks are schedulable we draw the schedule. In this
case the schedule length is
lcm{Ta , Tc , Tx } = 0.8.
Notice that Jx starts after 3 time steps since the release time is rx = 0.3. The worst case response time for the
control task Jc is 0.4, as can be seen from Figure 6.3.2.

S OLUTION 6.4
In a closed loop with a delay we know that the stability is guaranteed if

P (eiω )C(eiω ) 1
iω iω
< iω
ω ∈ [0, π]
1 + P (e )C(e ) N |e − 1|

where N = ⌈Rc /h⌉, with Rc the worst case response time of the control task Jc caused by the high priority
task.
From the previous equation we have immediately that

|1 + P (eiω )C(eiω )|
N<
|eiω − 1||P (eiω )C(eiω )|

and this should hold for all ω ∈ [0, π]. We have that the magnitude of the closed loop transfer function is

P (eiω )C(eiω ) |10eiω − 1|


= 3
1 + P (eiω )C(eiω ) |130eiω − 103|
and thus we have that
1 |130eiω − 103|
N<
3 |(eiω − 1)(10eiω − 1)|
for all ω ∈ [0, π]. This means that

1 |130eiω − 103|
N < min
ω∈[0,π] 3 |(eiω − 1)(10eiω − 1)|

Since the right hand side of the previous inequality can be regarded as the magnitude of a transfer function with
one zero and two poles on the positive real axis, it is clear the minimum is at ω = π.
Thus we have that
1 |130eiω − 103|
N< ≈ 3.53 .
3 |(eiω − 1)(10eiω − 1)| ω=π
Since N is an integer then we have that N = 3. Thus we have that Rc = 2N = 6.
We know that the worst case computation time for the control task is Cc = 1 with period Tc = h = 2. and
that the high priority task is not released until time t = 2. The worst case response time for the control task is
given by
Rc = C c + C I
which than gives CI = 5. The schedule in this case looks as in Figure 6.4.1. As we can notice the control task

Figure 6.4.1: Schedule for the control task Jc and the task handling the interrupt in Problem 6.4 (diagram omitted).

misses its deadlines at t = 4 and t = 6. The control task is then delayed and the worst case response time is
exactly Rc = 6, which represents the maximum delay of the control task.

S OLUTION 6.5
(a) The situation is depicted in Figure 6.5.1. Let us consider the single instances.

Figure 6.5.1: Priority inversion in Problem 6.5: time lines of the tasks JA, JB and JC with the instants 0 to 5 discussed below (diagram omitted).

0) The task JA is released and its starts its computations. The bus is locked.
1) The task JC requires to be executed. Since it has high priority the CPU preempts the task JA .

2) Since JC needs to access the bus, and this resource is occupied by the task JA the task JC is stopped.
At the same time task JB asks for CPU time, and since JC is stopped and it has higher priority of
JA , JB can be executed. The execution of JB prevents JA to release the bus.
3) The task JB with medium priority has finished to be executed and the CPU is given to the task JA .
4) Task JA finishes to write on the bus, and it releases it.
5) Task JC finally can use the bus and it is scheduled by the CPU.

What we can see from this particular situation is that a high priority task as JC is blocked by a low
priority task. We have a situation called priority inversion. The response time for the high priority task
JC in this case is RC = 4.1.

(b) A possible way to overcome the problem is to use the priority inheritance protocol. In this case the task
JA inherits the priority of the task which has higher priority and needs to use the blocked resource. The
task JA , thus acquires high priority and is able to release the resource as soon as possible, so that the high
priority task JC can be served as soon as possible. The priorities are set to the default one, once the bus
is released.

S OLUTION 6.6
(a) Using the EDD algorithm the tasks are scheduled considering the deadlines. Those that have early
deadline are scheduled first. From the table given in the problem we see that the order of scheduling
is
J1 → J5 → J3 → J4 → J2 .
The schedule is shown in Figure 6.6.1. The maximum lateness is equal to −1 due to task J4 . The fact

Figure 6.6.1: Jackson's (EDD) schedule: the tasks execute in the order J1, J5, J3, J4, J2, each finishing before its deadline (diagram omitted).

that the lateness is negative means that all the tasks meet their deadlines.

(b) Let σ be a schedule produced by any algorithm A. If A is different from EDD, then there exist two tasks
Ja and Jb with da < db with Jb scheduled before the task Ja . Now, let σ ′ be a schedule obtained from σ
by exchanging Ja with Jb in the schedule σ, so that Ja is scheduled before Jb in σ ′ . If we exchange Ja
with Jb in σ the maximum lateness cannot increase. In fact for the schedule σ we have that the maximum
lateness is
Lmaxab = fa − da
as can be seen from Figure 6.6.2.

In σ ′ we have that
L′maxab = max(L′a , L′b ).
We can have two possible cases

– L′a ≥ L′b , then L′maxab = L′a = fa′ − da , and since fa′ < fa then we have L′maxab < Lmaxab ,

Figure 6.6.2: Exchange argument for Jackson's algorithm: in σ task Jb precedes Ja, in σ′ they are interchanged, with fa′ < fa and fb′ = fa (diagram omitted).

– L′a ≤ L′b , then L′maxab = L′b = fb′ −db = fa −db , and since da < db we have that L′maxab < Lmaxab .

Since in both cases L′maxab < Lmaxab , then we can conclude that interchanging the tasks Ja and Jb in
σ we cannot increase the lateness of a set of tasks. By a finite number of such transpositions, σ can be
transformed in σEDD . Since at each step the lateness cannot increase, σEDD is optimal.
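
Jackson's EDD rule is straightforward to implement. A minimal Python sketch follows (the task set below only mirrors the structure of the problem; the numbers are illustrative since the original task table is not reproduced here):

def edd_schedule(tasks):
    """tasks: list of (name, computation_time, deadline); returns execution order and max lateness."""
    order = sorted(tasks, key=lambda t: t[2])      # earliest due date first
    t, max_lateness = 0.0, float("-inf")
    for name, C, d in order:
        t += C                                     # finishing time f_i
        max_lateness = max(max_lateness, t - d)    # lateness L_i = f_i - d_i
    return [name for name, _, _ in order], max_lateness

# Illustrative task set (name, C_i, d_i)
tasks = [("J1", 1, 2), ("J2", 2, 10), ("J3", 2, 6), ("J4", 2, 8), ("J5", 2, 4)]
print(edd_schedule(tasks))   # order J1, J5, J3, J4, J2; negative lateness means all deadlines are met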

Solutions to real-time scheduling
S OLUTION 7.1
Notice that the four assumptions under which the schedulability test can be use are satisfied. We can remember
here the assumptions
A1 the instances of the task Ji are regularly activated at constant rate,

A2 all instances of the periodic task Ji have the same worst case execution time Ci ,

A3 all instances of a periodic task Ji have the same relative deadline, which is equal to the period,

A4 all tasks are independent i.e., there are no precedence relations and no resource constraints.
The schedulability test gives

    U = 1/4 + 2/6 + 3/10 = 0.8833 > 3(2^{1/3} − 1) = 0.7798,
thus we cannot conclude from the utilization bound alone whether the three tasks are schedulable. We therefore use the
exact response-time analysis, which ensures that the tasks are schedulable if each worst-case response time is less than the corresponding deadline. The
priority is assigned so that J1 has highest priority, J2 medium priority and J3 lowest priority. The analysis
yields

R1 = C1 = 1 ≤ D1 = 4

for task J2 we have

R20 = C2 = 2
 0
1 R2
R2 = C 2 + C1 = 2 + 1 = 3
T1
 1
2 R2
R2 = C 2 + C1 = 2 + 1 = 3 ≤ D 2 = 6
T1

for task J3 we have

R30 = C3 = 3
 0  0
1 R3 R3
R3 = C 3 + C1 + C2 =3+2+1=6
T1 T2
 1  1
2 R3 R3
R3 = C 3 + C1 + C2 =3+2+2=7
T1 T2
 2  2
3 R3 R3
R3 = C 3 + C1 + C2 =3+2+2=9
T1 T2
 3  3
R3 R3
R34 = C3 + C1 + C2 = 3 + 3 + 4 = 10
T1 T2
 4  4
5 R3 R3
R4 = C 3 + C1 + C2 = 3 + 3 + 4 = 10 ≤ D3 = 10
T1 T2
Thus the three tasks are schedulable with rate monotonic. The schedule length is given by

lcm(T1 , T2 , T,3 ) = 60

The schedule is shown in figure 7.1.1.

Figure 7.1.1: Rate monotonic scheduling of the tasks J1, J2 and J3 in Problem 7.1 over the hyperperiod of 60 time units (diagram omitted).
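
The response-time computations above follow a generic fixed-point recursion, R_i = C_i + Σ_{j with higher priority} ⌈R_i/T_j⌉ C_j, which is easy to automate. A small Python sketch (task parameters taken from Problem 7.1: (C, T) = (1, 4), (2, 6), (3, 10), listed in priority order):

import math

def response_time(C, T, i):
    """Worst-case response time of task i given higher-priority tasks 0..i-1."""
    R = C[i]
    while True:
        R_next = C[i] + sum(math.ceil(R / T[j]) * C[j] for j in range(i))
        if R_next == R:
            return R
        R = R_next

C, T = [1, 2, 3], [4, 6, 10]          # tasks J1, J2, J3 in rate monotonic priority order
print([response_time(C, T, i) for i in range(3)])   # expected: [1, 3, 10], all within the deadlines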

S OLUTION 7.2
The CPU utilization is
1 2 3
U= + + = 0.8833 ≤ 1
4 6 10
thus the schedulability test for EDF ensures that the three tasks are schedulable. The schedule length, as in
Problem 7.1, is
lcm(T1 , T2 , T,3 ) = 60.
The schedule is shown in figure 7.2.1.

Figure 7.2.1: EDF scheduling of the tasks J1, J2 and J3 in Problem 7.2 over the hyperperiod of 60 time units (diagram omitted).

S OLUTION 7.3
The CPU utilization is

    U = Σ_{i=1}^{3} Ci/Ti = C1/T1 + C2/T2 + C3/T3 = 1/3 + 2/4 + 1/7 = 0.9762 > 3(2^{1/3} − 1) = 0.7798

Let us consider the first 10 steps of the RM schedule. The RM algorithm assigns to J1 the highest priority, to
J2 middle priority and to J3 the lowest. The schedule length is

lcm(3, 4, 7) = 84

but we draw here only the first 9 steps, as shown in Figure 7.3.1. We can notice that the task J3 misses its
deadline at 7 and it is scheduled in the next time slot. Thus the set of tasks is not schedulable with RM.

Figure 7.3.1: RM scheduling of the tasks J1, J2 and J3 in Problem 7.3 over the first 9 time units; task J3 misses its deadline at t = 7 (diagram omitted).

Since the utilization factor U is less than one, then we can use the EDF algorithm to schedule the task. The
schedule is shown in Figure 7.3.2.

S OLUTION 7.4
(a) The CPU utilization is
1 2 3
U= + + = 0.95
4 5 10
The RM algorithm assigns the highest priority to J1 , middle to J2 and the lowest to J3 . The schedule
length is given by
lcm(4, 5, 10) = 20.
The RM scheduling algorithm assigns the highest priority to the control task J1 which then has worst-
case response time of R1 = 1. In the case of the EDF we need to draw the schedule in order to compute
R1 . The schedule is shown in Figure 7.4.1. The EDF gives a worst-case response time of R1 = 2 as
is shown in Figure 7.4.1. With the EDF algorithm is possible to schedule tasks that are not schedulable
with RM, but we do not have anymore "control" on the response time of some tasks.

Figure 7.3.2: EDF scheduling of the tasks J1, J2 and J3 in Problem 7.3 over the hyperperiod of 84 time units; all tasks meet their deadlines (diagram omitted).

Figure 7.4.1: EDF scheduling of the tasks J1, J2 and J3 in Problem 7.4; the worst-case response time of J1 is R1 = 2 (diagram omitted).

(b) In order to have R2 = 1 we need to assign the highest priority to the task 2. We then give to J1 middle
priority and J3 the lowest. The tasks meet their deadline if the worst-case response time is less than the
deadline. In this case we have

R2 = C 2 = 2 ≤ D 2 = 5

for task J1 we have

R10 = C1 = 1
 0
1 R1
R1 = C 1 + C2 = 1 + 2 = 3
T2
 1
2 R1
R1 = C 1 + C2 = 1 + 2 = 3 ≤ D 1 = 4
T2

for task J3 we have

R30 = C3 = 3
 0  0
1 R3 R3
R3 = C 3 + C1 + C2 =3+1+2=6
T1 T2
 1  1
R3 R3
R32 = C3 + C1 + C2 =3+2+4=9
T1 T2
 2  2
3 R3 R3
R3 = C 3 + C1 + C2 = 3 + 3 + 4 = 10
T1 T2
 3  3
4 R3 R3
R3 = C 3 + C1 + C2 = 3 + 3 + 4 = 10 ≤ D3 = 10
T1 T2

Thus the three tasks are schedulable. The schedule is shown in Figure 7.4.2

Figure 7.4.2: Fixed-priority scheduling of the tasks J1, J2 and J3 in Problem 7.4, with J2 given the highest priority (diagram omitted).

S OLUTION 7.5

(a) The tasks will meet their deadlines since the schedulability condition for RM scheduling is fulfilled:

    U = Σ_{i=1}^{2} Ci/Ti = C1/T1 + C2/T2 = 7/12 < 2(2^{1/2} − 1) ≈ 0.8

The schedule length is equal to lcm(T 1, T 2) = 12. The time evolution is shown in Figure 7.5.1.

(b) The worst-case response time of the highest-priority task is given by its worst-case computation time, so R1 = C1 = 1. For the second task the worst-case response time is obtained from

$$R_2 = C_2 + \left\lceil \frac{R_2}{T_1} \right\rceil C_1.$$

Figure 7.5.1: Rate monotonic scheduling of the tasks J1 and J2 in Problem 7.5 over the schedule length 0–12 (timeline plot not reproduced).

In order to solve this equation we use an iterative procedure. Let $R_2^0 = 0$; then

$$R_2^1 = C_2 + \left\lceil \frac{R_2^0}{T_1} \right\rceil C_1 = C_2 = 1$$
$$R_2^2 = C_2 + \left\lceil \frac{R_2^1}{T_1} \right\rceil C_1 = C_2 + C_1 = 2$$
$$R_2^3 = C_2 + \left\lceil \frac{R_2^2}{T_1} \right\rceil C_1 = C_2 + C_1 = 2.$$

Thus R2 = 2. This agrees with the time evolution plotted in (a).

S OLUTION 7.6
Since the tasks have offsets Oi we cannot use the response-time formula for rate-monotonic scheduling, since it assumes that all tasks are released simultaneously at time 0. We therefore solve the problem by drawing the schedule.
(a) In order to draw the schedule for the three tasks we need to compute the schedule length. The schedule
length is the least common multiple (l.c.m.) of the task periods plus the offset. The l.c.m. is 30, thus the
schedule length is 30+1=31.
(b) Rate-monotonic
The rate-monotonic assigns the priority so that J1 has the highest priority, J2 medium priority and J3 the
lowest.

[Schedule timelines for J1, J2 and J3 under rate-monotonic priorities over 0–31; plot not reproduced.]
Thus the worst-case response time for task J2 is R2 = 3.


(c) Deadline-monotonic
The deadlines agree by assumption with the periods, so deadline-monotonic scheduling is identical to
rate-monotonic scheduling. Thus the worst-case response time for task J2 is R2 = 3.

(d) Earliest-deadline-first
The schedule is shown below.

[EDF schedule timelines for J1, J2 and J3 over 0–31; plot not reproduced.]

The worst-case response time for J2 is R2 = 3.

S OLUTION 7.7
(a) The utilization factor is

$$U = \sum_{i} \frac{C_i}{T_i} = \frac{1}{4} + \frac{2}{5} + \frac{1}{6} \approx 0.82 > 3\left(2^{1/3} - 1\right) \approx 0.76,$$

so we derive the worst-case response times for all tasks. The priority assignment is such that J1 has the
highest priority, Jc intermediate priority and J2 the lowest priority. Thus the analysis yields

$$R_1 = C_1 = 1$$
$$R_c^0 = C_c = 2$$
$$R_c^1 = C_c + \left\lceil \frac{R_c^0}{T_1} \right\rceil C_1 = 3$$
$$R_c^2 = C_c + \left\lceil \frac{R_c^1}{T_1} \right\rceil C_1 = 3 \;\Rightarrow\; R_c = 3 \le D_c = 5$$
$$R_2^0 = C_2 = 1$$
$$R_2^1 = C_2 + \left\lceil \frac{R_2^0}{T_1} \right\rceil C_1 + \left\lceil \frac{R_2^0}{T_c} \right\rceil C_c = 4$$
$$R_2^2 = C_2 + \left\lceil \frac{R_2^1}{T_1} \right\rceil C_1 + \left\lceil \frac{R_2^1}{T_c} \right\rceil C_c = 4 \;\Rightarrow\; R_2 = 4 \le D_2 = 6.$$

Thus the set of tasks is schedulable with the rate monotonic algorithm, and the worst-case response time of the control task is Rc = 3.

(b) Since τ = Rc = 3 > h = 2, the sampled controller in state-space form is

x(kh + h) = Φx(kh) + Γ0 y(kh − (d − 1)h) + Γ1 y(kh − dh)


u(kh) = Cx(kh)

where τ = (d − 1)h + τ ′ with τ ′ ≤ h and d ∈ N. Thus if we choose d = 2 we get τ ′ = 1. The state
space description becomes

$$\begin{pmatrix} x(kh+h) \\ y(kh-h) \\ y(kh) \end{pmatrix} = \begin{pmatrix} \Phi & \Gamma_1 & \Gamma_0 \\ 0 & 0 & I \\ 0 & 0 & 0 \end{pmatrix} \begin{pmatrix} x(kh) \\ y(kh-2h) \\ y(kh-h) \end{pmatrix} + \begin{pmatrix} 0 \\ 0 \\ I \end{pmatrix} y(kh)$$

$$u(kh) = \begin{pmatrix} C & 0 & 0 \end{pmatrix} \begin{pmatrix} x(kh) \\ y(kh-2h) \\ y(kh-h) \end{pmatrix}$$

A sketch of a computer program is given below. Notice that we assume that initial values of x(0),
y(−2h), y(−h) are known.

Computer Code
y = array{y(-1), y(-2)};            // buffer with the most recent (delayed) measurements
nexttime = getCurrentTime();
k = 0;
while true do
    new_y = AD_conversion();        // sample y(kh)
    x(k) = Φx(k − 1) + Γ1 y(k − 3) + Γ0 y(k − 2);   // update using the delayed measurements
    u(k) = Cx(k);
    DA_conversion(u(k));            // output the control signal
    y(k) = new_y;                   // store the new measurement for later use
    k = k + 1;
    nexttime = nexttime + h;
    sleepUntil(nexttime);           // wait for the next sampling instant
end while

(c) The set of tasks is schedulable if the worst-case response time Ri does not exceed the deadline Di for each i ∈ {1, 2, c}. In this case the control task has the highest priority, J1 intermediate priority and J2 the lowest priority.
The analysis yields

⇒ Rc = C c = 2
R10 = C1 = 1
 0
1 R1
R1 = C 1 + Cc = 3
Tc
 1
2 R1
R1 = C 1 + Cc = 3
Tc
⇒ R1 = 3 ≤ D 1 = 4
R20 = C2 = 1
 0  0
1 R2 R2
R2 = C 2 + Cc + C1 = 4
Tc T1
 1  1
2 R2 R2
R2 = C 2 + Cc + C1 = 4
Tc T1
⇒ R2 = 4 ≤ D2 = 6.

Thus the tasks are schedulable even when the control task Jc is given the highest priority, and the delay is then reduced to τ = Rc = 2.

S OLUTION 7.8
The schedulability of periodic tasks can be guaranteed by evaluating the interference by the polling server on
periodic execution. In the worst case, such an interference is the same as the one introduced by an equivalent
periodic task having a period equal to Ts and computation time equal to Cs . Independently of the number of
aperiodic tasks handled by the server, a maximum time equal to Cs is dedicated to aperiodic requests at each
server period. The utilization factor of the polling server is Us = Cs /Ts and hence the schedulability of a
periodic set with n tasks and utilization Up can be guaranteed if

$$U_p + U_s \le (n+1)\left(2^{1/(n+1)} - 1\right)$$

Thus we need

$$U_s \le (n+1)\left(2^{1/(n+1)} - 1\right) - U_p = (n+1)\left(2^{1/(n+1)} - 1\right) - \sum_{i=1}^{n} \frac{C_i}{T_i}$$

which in this case is

$$U_s \le 3\left(2^{1/3} - 1\right) - \frac{1}{5} - \frac{2}{8} \approx 0.3298 \;\Rightarrow\; U_s^{\max} = 0.3298$$

S OLUTION 7.9
The tasks and the polling server can be scheduled, since the total utilization satisfies

$$U = \sum_{i=1}^{2} \frac{C_i}{T_i} + \frac{C_s}{T_s} = \frac{1}{4} + \frac{2}{8} + \frac{1}{5} = 0.70 \le 3\left(2^{1/3} - 1\right) \approx 0.78.$$

The schedule is shown in Figure 7.9.1.

S OLUTION 7.10
The polling server has period Ts = 6 and thus lower priority than J1 and J2. The time evolution is given in Figure 7.10.1.
Note that the server is preempted by both J1 and J2, so the aperiodic task finishes being served only after one server period, i.e., Ts = 6. The server has the lowest priority, so it runs only after J1 and J2 have terminated. At time t = 0 the server is ready to execute, but since no aperiodic tasks are waiting, the server is suspended. At t = 3 the aperiodic task requests the CPU, but it has to wait until the polling server is reactivated, i.e., until t = Ts = 6. At that time the server is ready to handle the request of the aperiodic task, but J1 and J2, which have higher priority, preempt the server.

Solutions to models of computation I: Discrete-event systems


S OLUTION 8.1
Let us define the following event alphabet

E = {s1, s̄1, s2, s̄2, s3, s̄3}

where

si is triggered when Si becomes 1


s̄i is triggered when Si becomes 0 i = 1, 2, 3.

Figure 7.9.1: Polling server schedule of Problem 7.9, showing the periodic tasks J1 and J2, the aperiodic requests a1–a4 and the polling server Js with capacity Cs over 0–40 (timeline plot not reproduced).

We consider the following four possible states for the gate


Q = {O, C, R, L}
where O stands for 'opened', C for 'closed', R for 'raising' and L for 'lowering'. The transition function is defined as follows:
δ(O, s1 ) = L
δ(O, s̄1 ) = O
δ(L, s̄2 ) = C
δ(L, s2 ) = L
δ(C, s̄3 ) = C
δ(C, s3 ) = R
δ(R, s2 ) = O
δ(R, s̄2 ) = R
δ(R, s1 ) = L.
We defined as initial state the state O, and as set of final states
Qm = {O}.

Figure 7.10.1: Polling server schedule of Problem 7.10, showing the tasks J1 and J2, the aperiodic request and the server capacity Cs over 0–12 (timeline plot not reproduced).
Thus the automaton is

$$A = (Q, E, \delta, O, \{O\})$$

where Q, E and δ are defined as above. The automaton is shown in Figure 8.1.1.

Figure 8.1.1: Control of a gate (Problem 8.1); state diagram not reproduced.
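As an illustration (not part of the original solution), the gate automaton can be encoded as a dictionary-based DFA in Python and simulated; here "ns1" stands for the event s̄1, and similarly for the other negated events:

delta = {
    ("O", "s1"): "L", ("O", "ns1"): "O",
    ("L", "ns2"): "C", ("L", "s2"): "L",
    ("C", "ns3"): "C", ("C", "s3"): "R",
    ("R", "s2"): "O", ("R", "ns2"): "R", ("R", "s1"): "L",
}

def run(word, q0="O", marked=("O",)):
    """Return the final state and whether the word is marked (accepted)."""
    q = q0
    for e in word:
        if (q, e) not in delta:
            return q, False          # undefined event: the word is not generated
        q = delta[(q, e)]
    return q, q in marked

print(run(["s1", "ns2", "s3", "s2"]))   # ('O', True): lower the gate, then raise it again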

S OLUTION 8.2
We consider the following automaton as model of the vending machine

A = (Q, E, δ, q0 , Qm )

where we have the state-space


Q = {0, 10, 20, 30, 40, 25, 35, 45, O}

where the state O denotes 'overfull'. The event alphabet is
E = {D, Q}.
The initial state is q0 = 0 and the set of final states is Qm = {45}. The transition map is
δ(0, D) = 10
δ(0, Q) = 25
δ(10, D) = 20
δ(10, Q) = 35
δ(20, D) = 30
δ(20, Q) = 45
δ(25, D) = 35
δ(25, Q) = 0
δ(30, D) = 40
δ(30, Q) = 0
δ(35, D) = 45
δ(35, Q) = 0
δ(40, Q) = 0
δ(40, D) = 0.
The automaton is shown in Figure 8.2.1. We can compute the marked language by inspection
Figure 8.2.1: Automaton A of Problem 8.2 (state diagram not reproduced).

Lm (A) = {DDQ, DQD, QDD}.


We notice directly from Figure 8.2.1 that the machine will not dispense a soda if the wrong amount of money is inserted. In that case we reach the states 30, 40 or O, from which it is not possible to reach the state 45; similarly, if we reach state 25 or 35 and overpay, we will not get a soda. Notice that we can keep inserting coins and the soda will never be dispensed, i.e., we have a livelock.
In order to prove this formally we notice that $DDD \in L(A)$ but clearly $DDD \notin \overline{L_m(A)}$, since DDD is not a prefix of any marked string. Thus the automaton blocks.

S OLUTION 8.3
In order to compute the marked language and the generated language we use the following notation. Let $R_{ij}^k$ be the set of all strings $x$ such that $\delta(q_i, x) = q_j$ and such that, for every proper nonempty prefix $y$ of $x$, $\delta(q_i, y) = q_\ell$ implies $\ell \le k$. That is, $R_{ij}^k$ is the set of all strings that take the automaton from state $q_i$ to state $q_j$ without going through any state numbered higher than $k$; by "going through a state" we mean both entering and leaving it, so $i$ or $j$ may themselves be greater than $k$. The sets $R_{ij}^k$ can be defined recursively by

$$R_{ij}^k = R_{ik}^{k-1} \left( R_{kk}^{k-1} \right)^* R_{kj}^{k-1} \cup R_{ij}^{k-1}$$

with the base case

$$R_{ij}^0 = \begin{cases} \{a \mid \delta(q_i, a) = q_i\} \cup \{\epsilon\} & \text{if } i = j \\ \{a \mid \delta(q_i, a) = q_j\} & \text{if } i \neq j. \end{cases}$$

We then have

$$L_m(A) = \bigcup_{q_j \in Q_m} R_{0j}^n \qquad \text{and} \qquad L(A) = \bigcup_{q_i \in Q} R_{0i}^n.$$

A set of strings $R_{ij}^k$ can in general be represented by a regular expression $r_{ij}^k$, which allows a more compact notation.
The automaton is formally defined by the following five-tuple

A = (Q, E, δ, q0 , Qm )

where

Q = {q1 , q2 }
E = {0, 1}
δ(q1 , 0) = q2
δ(q1 , 1) = q1
δ(q2 , 0) = q2
δ(q2 , 1) = q1
q0 = q1
Qm = {q2 }
The first column of the table below, $r_{ij}^0$, is computed by inspection:

         k = 0    k = 1
r11^k    ε        ε
r12^k    0        0
r21^k    1        1
r22^k    ε        10

The second column is computed using the recursive formula introduced above. In particular we have

$$r_{11}^1 = r_{11}^0 (r_{11}^0)^* r_{11}^0 + r_{11}^0 = \epsilon(\epsilon)^*\epsilon + \epsilon = \epsilon$$
$$r_{12}^1 = r_{11}^0 (r_{11}^0)^* r_{12}^0 + r_{12}^0 = \epsilon(\epsilon)^* 0 + 0 = 0$$
$$r_{21}^1 = r_{21}^0 (r_{11}^0)^* r_{11}^0 + r_{21}^0 = 1(\epsilon)^*\epsilon + 1 = 1$$
$$r_{22}^1 = r_{21}^0 (r_{11}^0)^* r_{12}^0 + r_{22}^0 = 1(\epsilon)^* 0 + \epsilon = 10$$

The marked language is then

$$L_m(A) = r_{12}^2 = r_{12}^1 (r_{22}^1)^* r_{22}^1 + r_{12}^1 = 0(10)^* 10 + 0 = 0(10)^*$$

and the generated language is

$$L(A) = r_{11}^2 + r_{12}^2 = 0(10)^* 1 + \epsilon + 0(10)^*.$$

The prefix closure $\overline{L_m(A)}$ is

$$\overline{L_m(A)} = \{ s \in E^* \mid \exists t \in E^*,\; st \in L_m(A) \}.$$

We prove that $\overline{L_m(A)} = L(A)$.

$\overline{L_m(A)} \subseteq L(A)$: if $x \in \overline{L_m(A)}$ then it is trivial to see that $x \in L(A)$.

$\overline{L_m(A)} \supseteq L(A)$: let $x \in L(A) = \{\epsilon, 0, 01, 010, 0101, 01010, \dots\}$. The strings in $L_m(A)$ are $0, 010, 01010, \dots$, whose prefixes form the set $\{\epsilon, 0, 01, 010, 0101, 01010, \dots\}$, so $x \in \overline{L_m(A)}$.

We can conclude that $\overline{L_m(A)} = L(A)$, thus the DES is nonblocking.
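The recursive construction of the expressions $r_{ij}^k$ can also be carried out mechanically. The following Python sketch (an illustration, not part of the original solution) applies the recursion to the base table given above for Problem 8.3; it performs no algebraic simplification, so its output is equivalent to, but less compact than, 0(10)*:

def union(a, b):
    if a == "0/":            # "0/" denotes the empty language
        return b
    if b == "0/":
        return a
    return f"({a}+{b})"

def concat(a, b):
    if a == "0/" or b == "0/":
        return "0/"
    if a == "e":             # "e" denotes epsilon
        return b
    if b == "e":
        return a
    return f"{a}{b}"

def star(a):
    return "e" if a in ("e", "0/") else f"({a})*"

# Base case r_ij^0, taken from the table in the solution.
r = {(1, 1): "e", (1, 2): "0", (2, 1): "1", (2, 2): "e"}
for k in (1, 2):
    r = {(i, j): union(concat(r[(i, k)], concat(star(r[(k, k)]), r[(k, j)])), r[(i, j)])
         for i in (1, 2) for j in (1, 2)}

print("Lm(A) =", r[(1, 2)])   # an unsimplified expression equivalent to 0(10)*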

S OLUTION 8.4
In this case the automaton is
A = (Q, E, δ, q0 , Qm )
where

Q = {q1 , q2 , q3 }
E = {0, 1}
δ(q1 , 0) = q2
δ(q1 , 1) = q3
δ(q2 , 0) = q1
δ(q2 , 1) = q3
δ(q3 , 0) = q2
δ(q3 , 1) = q2
q0 = q1
Qm = {q2 , q3 }

We apply the recursive formula introduced in the solution of Problem 8.3 to compute the marked language and
the generated language.

         k = 0    k = 1       k = 2
r11^k    ε        ε           (00)*
r12^k    0        0           0(00)*
r13^k    1        1           0*1
r21^k    0        0           0(00)*
r22^k    ε        ε + 00      (00)*
r23^k    1        1 + 01      0*1
r31^k    ∅        ∅           (0+1)(00)*0
r32^k    0+1      0+1         (0+1)(00)*
r33^k    ε        ε           ε + (0+1)0*1
Certain equivalences among regular expressions have been used to simplify the entries. For example

$$r_{22}^1 = r_{21}^0 (r_{11}^0)^* r_{12}^0 + r_{22}^0 = 0(\epsilon)^* 0 + \epsilon = \epsilon + 00$$

or, for $r_{13}^2$,

$$r_{13}^2 = r_{12}^1 (r_{22}^1)^* r_{23}^1 + r_{13}^1 = 0(\epsilon + 00)^*(1 + 01) + 1.$$

Since $(\epsilon + 00)^* = (00)^*$ and $(1 + 01) = (\epsilon + 0)1$, the expression becomes

$$r_{13}^2 = 0(00)^*(\epsilon + 0)1 + 1$$

and since $(00)^*(\epsilon + 0) = 0^*$ the expression reduces to

$$r_{13}^2 = 00^*1 + 1 = 0^*1.$$

We have that

$$L_m(A) = r_{12}^3 + r_{13}^3$$

where

$$r_{12}^3 = r_{13}^2 (r_{33}^2)^* r_{32}^2 + r_{12}^2 = 0^*1\left(\epsilon + (0+1)0^*1\right)^* (0+1)(00)^* + 0(00)^* = 0^*1\left((0+1)0^*1\right)^* (0+1)(00)^* + 0(00)^*$$

and

$$r_{13}^3 = r_{13}^2 (r_{33}^2)^* r_{33}^2 + r_{13}^2 = 0^*1\left(\epsilon + (0+1)0^*1\right)^* \left(\epsilon + (0+1)0^*1\right) + 0^*1 = 0^*1\left((0+1)0^*1\right)^*.$$

Thus we have

$$L_m(A) = r_{12}^3 + r_{13}^3 = 0^*1\left((0+1)0^*1\right)^* \left(\epsilon + (0+1)(00)^*\right) + 0(00)^*.$$

The generated language L(A) is given by

$$L(A) = r_{11}^3 + r_{12}^3 + r_{13}^3$$

where

$$r_{11}^3 = r_{13}^2 (r_{33}^2)^* r_{31}^2 + r_{11}^2 = \left(0^*1\left((0+1)0^*1\right)^* (0+1)0 + \epsilon\right)(00)^*.$$

S OLUTION 8.5
Two states p and q are said to be equivalent, p ≡ q, if and only if for each input string x, δ(p, x) is a marked state if and only if δ(q, x) is a marked state. Two states p and q are said to be distinguishable if there exists an x such that δ(p, x) is in Qm and δ(q, x) is not.
We build a table as follows

b X
c X X
d X X X
e X X X
f X X X X
g X X X X X X
h X X X X X X
a b c d e f g
where an 'X' is placed in the table each time a pair of states cannot be equivalent. Initially an 'X' is placed in each entry corresponding to one marked state and one non-marked state, since a marked state and a non-marked state can never be equivalent. In the example, we place an 'X' in the entries (a, c), (b, c), (c, d), (c, e), (c, f), (c, g) and (c, h). Next, for each pair of states p and q that are not already known to be distinguishable, we consider the pair of states r = δ(p, a) and s = δ(q, a) for each input symbol a. If the states r and s have been shown to be distinguishable by some string x, then p and q are distinguishable by the string ax. Thus if the entry (r, s) in the table has an 'X', an 'X' is also placed at the entry (p, q). If the entry (r, s) does not yet have an 'X', the pair (p, q) is placed on a list associated with the (r, s)-entry. If at some future time the (r, s)-entry receives an 'X', then each pair on the list associated with it also receives an 'X'.
In the example, we place an 'X' in the entry (a, b), since the entry (δ(b, 1), δ(a, 1)) = (c, f) already has an 'X'. Similarly, the (a, d)-entry receives an 'X' since the entry (δ(a, 0), δ(d, 0)) = (b, c) has an 'X'. Consideration of the (a, e)-entry on input 0 results in the pair (a, e) being placed on the list associated with (b, h). Observe that on input 1 both a and e go to the same state f, so no string starting with 1 can distinguish a from e. Because of the 0-input, the pair (a, g) is placed on the list associated with (b, g). When the (b, g)-entry is considered, it receives an 'X' on account of a 1-input, and (a, g) then receives an 'X': the string 01 distinguishes a from g.
At the end of the algorithm a ≡ e, b ≡ h and d ≡ f. The minimum automaton is shown in Figure 8.5.1.
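The table-filling procedure described above is easy to implement. The Python sketch below (not part of the original solution) runs it on a small hypothetical DFA, not on the automaton of Problem 8.5, whose transition table is given in the problem statement:

from itertools import combinations

states = ["a", "b", "c", "d"]
alphabet = ["0", "1"]
delta = {("a", "0"): "b", ("a", "1"): "c", ("b", "0"): "b", ("b", "1"): "d",
         ("c", "0"): "b", ("c", "1"): "c", ("d", "0"): "b", ("d", "1"): "d"}
marked = {"d"}

# Initially mark (as distinguishable) every pair of one marked and one non-marked state.
dist = {frozenset(p) for p in combinations(states, 2)
        if (p[0] in marked) != (p[1] in marked)}

changed = True
while changed:                       # propagate: (p, q) is distinguishable if some
    changed = False                  # successor pair already is
    for p, q in combinations(states, 2):
        if frozenset((p, q)) in dist:
            continue
        if any(frozenset((delta[(p, a)], delta[(q, a)])) in dist
               for a in alphabet if delta[(p, a)] != delta[(q, a)]):
            dist.add(frozenset((p, q)))
            changed = True

equiv = [set(pair) for pair in (frozenset(p) for p in combinations(states, 2))
         if pair not in dist]
print("equivalent pairs:", equiv)    # here: [{'a', 'c'}]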

S OLUTION 8.6
We can construct a deterministic automaton

$$A' = (Q', \{0, 1\}, \delta', [q_0], Q'_m)$$

accepting $L_m(A)$ as follows. The set $Q'$ consists of all subsets of $Q$; we denote them $[q_0]$, $[q_1]$, $[q_0, q_1]$ and $[\varnothing]$. Since $\delta(q_0, 0) = \{q_0, q_1\}$ we have

$$\delta'([q_0], 0) = [q_0, q_1]$$

and similarly

$$\delta'([q_0], 1) = [q_1], \qquad \delta'([q_1], 0) = \delta'([q_1], 1) = [q_0, q_1].$$

Naturally, $\delta'([\varnothing], 0) = \delta'([\varnothing], 1) = [\varnothing]$. Finally

$$\delta'([q_0, q_1], 0) = [q_0, q_1]$$

since

$$\delta(\{q_0, q_1\}, 0) = \delta(q_0, 0) \cup \delta(q_1, 0) = \{q_0, q_1\},$$

Figure 8.5.1: Minimum automaton of Problem 8.5, with states [a,e], [b,h], [c], [d,f] and [g] (state diagram not reproduced).

and similarly we get $\delta'([q_0, q_1], 1) = [q_0, q_1]$. The final states are $Q'_m = \{[q_1], [q_0, q_1]\}$.
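The subset construction used here can be sketched as follows in Python. The NFA below is a hypothetical example with the same state names; the actual transition relation of Problem 8.6 is given in the problem statement:

from itertools import chain

nfa_delta = {("q0", "0"): {"q0", "q1"}, ("q0", "1"): {"q1"},
             ("q1", "0"): set(),        ("q1", "1"): {"q0", "q1"}}
alphabet = ["0", "1"]
start, marked = frozenset({"q0"}), {"q1"}

dfa_delta, todo, seen = {}, [start], {start}
while todo:
    S = todo.pop()
    for a in alphabet:
        # the DFA successor is the union of the NFA successors of all states in S
        S2 = frozenset(chain.from_iterable(nfa_delta[(q, a)] for q in S))
        dfa_delta[(S, a)] = S2
        if S2 not in seen:
            seen.add(S2)
            todo.append(S2)

dfa_marked = {S for S in seen if S & marked}
print(len(seen), "DFA states;", len(dfa_marked), "of them marked")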

Solutions to models of computation II: Transition systems


S OLUTION 9.1
The FTS is defined by T = (S, Act, →, I), where the set of states is S = {⟨x=0, r=0⟩, ⟨x=1, r=0⟩, ⟨x=0, r=1⟩, ⟨x=1, r=1⟩}; the set of actions is irrelevant and omitted here; the transition relation follows directly from the evaluation function δr. For example, ⟨x=0, r=1⟩ → ⟨x=0, r=1⟩ if the next input bit x is 0, and ⟨x=0, r=1⟩ → ⟨x=1, r=1⟩ if the next input bit x is 1. The set of initial states is {⟨x=0, r=1⟩, ⟨x=0, r=0⟩}. You can also model y as the output of the FTS, as shown in Figure 9.1.1.

Figure 9.1.1: FTS for a simple hardware circuit.

S OLUTION 9.2
This automaton recognizes event sequences of length 3 or more that begin with b, contain a1 or a2 in the second position, and have b in the third position. What follows after the third position does not matter in this example.

S OLUTION 9.3
Denote the state by the queue lengths of the four vehicle types together with the state of the traffic light, i.e.,

S = {(x12 , x13 , x23 , x32 , y) : x12 , x13 , x23 , x32 ∈ Z, y ∈ {G, R}},

where G denotes green for (1, 2) and (1, 3) while R denotes red for (1, 2) and (1, 3). The event set is given by

Act = {a12 , a13 , a23 , a32 , d12 , d13 , d23 , d32 , g, r},

where a12 , a13 , a23 , a32 is a vehicle arrival for each of the four types; d12 , d13 , d23 , d32 is a vehicle depar-
ture upon clearing the intersection for each of the four types; g indicates the light turns green for (1, 2) and
(1, 3) type vehicles; r indicates the light turns red for (1, 2) and (1, 3) type vehicles. The initial state can be
arbitrarily set. The transition relation can be derived easily following the rules described above. For example,

$$(x_{12}, x_{13}, x_{23}, x_{32}, R) \xrightarrow{g} (x_{12}, x_{13}, x_{23}, x_{32}, G), \qquad (x_{12}, x_{13}, x_{23}, x_{32}, G) \xrightarrow{d_{12}} (x_{12}-1, x_{13}, x_{23}, x_{32}, G),$$
$$(x_{12}, x_{13}, x_{23}, x_{32}, R) \xrightarrow{a_{13}} (x_{12}, x_{13}+1, x_{23}, x_{32}, R).$$

S OLUTION 9.4
The queuing system of figure 9.4 has a generator set Σ = {a, d}.
A natural state variable is the number of customers in queue, thus the state-space is the set of non-negative
integers S = {0, 1, 2, . . .}.
The transition relation is

f (x, a) = x + 1 ∀x ≥ 0
f (x, d) = x − 1 ∀x > 0 .

The starting state q0 is chosen to be the initial number of customers in the system.
In Figure 9.4.1 the transition system representing the basic queue system is shown. The cardinality of the state space is infinite, but countable.
Figure 9.4.1: A basic queue system (transition diagram not reproduced).

S OLUTION 9.5
The proof is as follows:
1. In each iteration the number of elements in Reachi increases by at least 1. Since it can have, at most,
as many elements as S, there can only be as many iterations as the number of elements in S (minus the
number of elements in SS ).

2. Reachi is the set of states that can be reached in i steps, thus any state that can be reached in a finite
number of steps must be in one of the Reachi .

S OLUTION 9.6

Let SS = {3}. By applying the reachability algorithm we have

SS = 3
Reach0 = {3}
Reach1 = {1, 3, 5, 6}
Reach2 = {1, 2, 3, 5, 6}
Reach3 = S
Reach4 = S
ReachT ({3}) = S

Let SS = {2}. By applying the reachability algorithm we have

SS = 2
Reach0 = {2}
Reach1 = {2, 4, 5}
Reach2 = {2, 4, 5}
ReachT ({2}) = {2, 4, 5}
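The fixed-point iteration of Solution 9.5 is directly implementable. The Python sketch below (not part of the original solution) uses a hypothetical transition relation chosen to be consistent with the Reach_i sets computed above; the actual graph of Problem 9.6 is given in the problem statement:

edges = {1: {2}, 2: {4, 5}, 3: {1, 5, 6}, 4: {2}, 5: {2}, 6: {3}}

def reach(start_set):
    """Iterate Reach_{i+1} = Reach_i U Post(Reach_i) until a fixed point is reached."""
    R = set(start_set)
    while True:
        new = R | set().union(*(edges[s] for s in R))
        if new == R:
            return R
        R = new

print(sorted(reach({3})))   # [1, 2, 3, 4, 5, 6]
print(sorted(reach({2})))   # [2, 4, 5]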

Part III

Hybrid control

Solution to modeling of hybrid systems
S OLUTION 10.1
(a) Let the water level be x ≥ 0 m and use four discrete states. Let the thresholds be xon and xof f . The
suggested model is illustrated in Figure 10.1.1.
Figure 10.1.1: Hybrid model of the tank and relay controller in Problem 10.1. The automaton has four discrete states: q1 "Pump off" (ẋ = −0.02), q2 "Wait for on" (ẋ = −0.02, τ̇ = 1), q3 "Pump on" (ẋ = 0.01), and q4 "Wait for off" (ẋ = 0.01, τ̇ = 1), with transitions q1 → q2 when x ≤ xon (reset τ := 0), q2 → q3 when τ ≥ 2, q3 → q4 when x ≥ xoff (reset τ := 0), and q4 → q1 when τ ≥ 2 (diagram not reproduced).

(The variable τ is included as a “timer”, used to keep track of how long the system stays in the waiting
states.)
(b) The water level decreases 2 · 0.02 m=0.04 m while waiting for the pump to turn on. Thus we need to
choose xon =0.09 to prevent underfilling. Similarly, choose
xof f = 0.12 − 2 · 0.01 = 0.10.

S OLUTION 10.2
We see that v can assume 2k values, called vi , i ∈ {0, . . . , 2k − 1}. So we introduce N = 2k discrete states qi .
In state qi , the system has vi as constant control signal, illustrated in Figure 10.2.1.

Figure 10.2.1: Dynamics of state qi in Problem 10.2 (block diagram with vi, P(s), Q(s) and u; not reproduced).

When do we switch states? We define the edges as


E = {(qi , qi+1 ), (qi+1 , qi )|i = 0, . . . , N − 2} ,
i.e., the system can only switch between adjacent states. The switchings are controlled by guards and domains.
The guards define when switching is allowed, and when the system is outside the domain, it must switch:
G(qi , qi+1 ) = {u ≥ vi + D/2}
G(qi+1 , qi ) = {u ≤ vi+1 − D/2}
D(qi ) = {vi − D/2 ≤ u ≤ vi + D/2}
So the hybrid system can be illustrated as in Figure 10.2.2.

Figure 10.2.2: Control system with quantizer, modeled as a hybrid system with states q0, ..., qN−1 and guards u ≥ vi + D/2, u ≤ vi+1 − D/2 between adjacent states (Problem 10.2; diagram not reproduced).

S OLUTION 10.3
A hybrid system modeling the nuclear reactor is shown in Figure 10.3.1.

Figure 10.3.1: The hybrid system for the nuclear reactor. It has three modes with dynamics ẋ = 0.1x − 56 (rod 1 inserted), ẋ = 0.1x − 50 (no rod) and ẋ = 0.1x − 60 (rod 2 inserted), and clocks ċ1 = ċ2 = 1 in every mode; the initial state is x = 510 with c1 = c2 = 20. From the no-rod mode, rod 1 may be inserted when x ≥ 550 and c1 ≥ 20, and rod 2 when x ≥ 550 and c2 ≥ 20; rod i is removed when x ≤ 510, with the reset ci := 0 (diagram not reproduced).

S OLUTION 10.4
We model the system on linear state space form with a controller u = f (y) and neglect the time for computing
the control signal. (The computing time could otherwise be modeled as an extra waiting state between sampling
y(t) and applying the new u(t).) The suggested hybrid model is illustrated in Figure 10.4.1.
Figure 10.4.1: The sampled control system in Problem 10.4. A single "Execution" state q1 has dynamics ẋ = Ax + Bu, τ̇ = 1; when τ ≥ k a "Computation" transition is taken, applying the resets u := f(Cx) and τ := 0 (diagram not reproduced).

The control law, as well as any quantization effects in the D/A or A/D converters are incorporated into f (y).

S OLUTION 10.5

(a) We need a timer variable t to control the switching, so the full state is (x, t)T . Now the hybrid automaton
can be described as H = (Q, X, Init, f, D, E, G, R), with

Q = {q1, q2}
X = R² × R⁺
Init = q1 × (0, 0)
f(qi, x, t) = (Ai x, 1)ᵀ
D(qi) = {(x, t)ᵀ | t < 1}
E = {(q1, q2), (q2, q1)}
G(q1, q2) = G(q2, q1) = {(x, t)ᵀ | t ≥ 1}
R(q1, q2, x, t) = R(q2, q1, x, t) = (x, 0)ᵀ.

(b) The trajectory evolves as follows:

$$x(t) = \begin{cases} e^{A_1 t} x_0, & t \in [0, 1) \\ e^{A_2 (t-1)} x(1), & t \in [1, 2) \\ e^{A_1 (t-2)} x(2), & t \in [2, 3) \end{cases}$$

In order to express x(t) as a function of x0 we need to derive x(1) and x(2) as functions of x0 . We have

x(1) = eA1 (1−0) x0 = eA1 x0

and

x(2) = eA2 (2−1) x(1) = eA2 x(1).

Combining the two previous equations we have

$$x(t) = \begin{cases} e^{A_1 t} x_0, & t \in [0, 1) \\ e^{A_2 (t-1)} e^{A_1} x_0, & t \in [1, 2) \\ e^{A_1 (t-2)} e^{A_2} e^{A_1} x_0, & t \in [2, 3). \end{cases}$$
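The piecewise expression for x(t) can be evaluated numerically with the matrix exponential. The following Python sketch (not part of the original solution) uses placeholder matrices A1, A2 and an arbitrary x0, since the actual matrices are specified in the problem statement:

import numpy as np
from scipy.linalg import expm

A1 = np.array([[0.0, 1.0], [-1.0, -0.5]])   # placeholder
A2 = np.array([[-0.5, 1.0], [-1.0, 0.0]])   # placeholder
x0 = np.array([1.0, 0.0])

def x_of_t(t):
    """Evaluate the piecewise solution on [0, 3)."""
    if t < 1:
        return expm(A1 * t) @ x0
    if t < 2:
        return expm(A2 * (t - 1)) @ expm(A1) @ x0
    return expm(A1 * (t - 2)) @ expm(A2) @ expm(A1) @ x0

for t in (0.5, 1.5, 2.5):
    print(t, x_of_t(t))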

S OLUTION 10.6

(a) The hybrid automaton is defined as H = (Q, X, Init, f, D, E, G, R) with
Q = {q1 , q2 , q3 }
X=R
Init = q1 × 0
f (q1 , x) = 2
f (q2 , x) = −1
f (q3 , x) = x + 2
D(q1 ) = {x ∈ R|x < 5}
D(q2 ) = {x ∈ R|x > 3}
D(q3 ) to be defined
E = {(q1 , q2 ), (q2 , q3 )}
G(q1 , q2 ) = {x ∈ R|x ≥ 5}
G(q2 , q3 ) = {x ∈ R|x ≤ 3}
R(q1 , q2 , x) = x
R(q2 , q3 , x) = −2

(b) In order for the hybrid automaton to be live we need to ensure that the domain D(q3 ) contains the
trajectory of the system ẋ = x + 2 with initial condition x0 = −2. But
x0 = −2 ⇒ ẋ = −2 + 2 = 0,
so the state will stay constant. Thus any domain satisfying {x = −2} ⊆ D(q3) guarantees liveness of the system.
(c) The state-trajectory of the hybrid system is shown in Figure 10.6.1, where τ0 = 0, τ1 = 2.5 and τ2 = 4.5.

q
q3
q2
q1

x
5
3

τ0 τ0′ τ1′
τ1 τ2
-2

Figure 10.6.1: Trajectory of the hybrid system in Problem 10.6

Solutions to stability of hybrid systems


S OLUTION 11.1

(a) A possible solution is H = (Q, X, Init, f, D, E, G, R) where

Q = {q}
X = R³ with x = (v1, v2, v3)ᵀ
Init = q × (1, 0, 0)
f(q, x) = (0, 0, 0)ᵀ
D(q) = {x ∈ R³ | v1 ≤ v2 ≤ v3}
E = {(q, q)}
G(q, q) = {x ∈ R³ | v1 > v2} ∪ {x ∈ R³ | v2 > v3}

$$R(q, q, x) = \begin{cases} \left( \dfrac{v_1+v_2}{2},\; \dfrac{v_1+v_2}{2},\; v_3 \right) & \text{if } v_1 > v_2 \\[2mm] \left( v_1,\; \dfrac{v_2+v_3}{2},\; \dfrac{v_2+v_3}{2} \right) & \text{if } v_2 > v_3 \end{cases}$$

(b) The collisions result in the following evolution of the continuous state:

$$(1, 0, 0) \to \left(\tfrac{1}{2}, \tfrac{1}{2}, 0\right) \to \left(\tfrac{1}{2}, \tfrac{1}{4}, \tfrac{1}{4}\right) \to \left(\tfrac{3}{8}, \tfrac{3}{8}, \tfrac{1}{4}\right) \to \dots$$

which constitutes an infinite sequence. The solution is Zeno since it has an infinite number of discrete transitions over a finite time interval, which in this case is a zero time interval.

(c) Since $v_1(\tau_i) - v_3(\tau_i) = 2^{-i} \to 0$ as $i \to \infty$ and $v_1(\tau_\infty) = v_2(\tau_\infty) = v_3(\tau_\infty)$, it follows that the accumulation point is (1/3, 1/3, 1/3). The physical interpretation is that after an infinite number of hits the balls have the same velocity, equal to one third of the initial velocity of ball 1.

S OLUTION 11.2
We can consider the following Lyapunov function:
$$V(x) = \tfrac{1}{2} x^T x$$

It is an acceptable candidate, since V(x) ≥ 0 with equality only at x = 0. We then have

$$\dot V = x^T \dot x = x^T A x = \begin{pmatrix} x_1 & x_2 & x_3 \end{pmatrix} \begin{pmatrix} -1 & 0 & 2 \\ 0 & -1 & 3 \\ -2 & -3 & -2 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix}$$

which gives

$$\dot V(x) = -x_1^2 - x_2^2 - 2x_3^2 < 0 \quad \forall x \neq 0.$$
Thus we can conclude that the system is asymptotically stable.

S OLUTION 11.3
Let P be a positive definite symmetric matrix which satisfies

AT P + P A = −Q.

Then the function V(x) = xᵀPx is a positive definite function. Let us compute V̇(x):

$$\dot V(x) = \frac{d}{dt}\left(x^T P x\right) = \dot x^T P x + x^T P \dot x = x^T A^T P x + x^T P A x = x^T \left(A^T P + P A\right) x = -x^T Q x.$$

Thus V̇ (x) is negative definite. For Lyapunov’s stability criteria we can conclude that the system is asymptoti-
cally stable.
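Numerically, the Lyapunov equation can be solved with SciPy and P checked for positive definiteness. The following is a minimal sketch (not part of the original solution); the matrix A below is a placeholder Hurwitz matrix, not taken from the problem:

import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # placeholder Hurwitz matrix
Q = np.eye(2)

# solve_continuous_lyapunov(a, q) solves a X + X a^T = q, so passing A^T and -Q
# yields A^T P + P A = -Q.
P = solve_continuous_lyapunov(A.T, -Q)
print(P)
print("P positive definite:", np.all(np.linalg.eigvals(P) > 0))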

S OLUTION 11.4
Let us consider the following Lyapunov function

$$V(x) = \frac{x^T x}{2} = \frac{x_1^2 + x_2^2}{2}$$

Let us compute the time derivative of V(x). We have

$$\dot V(x) = x_1 \dot x_1 + x_2 \dot x_2 = -x_1^2 - x_2^2 + x_1 g(x_2) + x_2 h(x_1)$$
$$\le -x_1^2 - x_2^2 + \tfrac{1}{2}|x_1||x_2| + \tfrac{1}{2}|x_2||x_1| = -x_1^2 - x_2^2 + |x_1 x_2| \le -\tfrac{1}{2}\left(x_1^2 + x_2^2\right),$$

where we used $(|x_1| - |x_2|)^2 \ge 0$, which gives $|x_1 x_2| \le \tfrac{1}{2}(x_1^2 + x_2^2)$. Thus

$$\dot V < 0 \quad \forall x \neq 0,$$

thus the system is asymptotically stable. Notice that we do not need to know the expressions for g(.) and h(.)
to show asymptotic stability.

S OLUTION 11.5
(a) Depending on the sign of x1 or x2 we have four systems. If x1 > 0 and x2 > 0 then we have

ẋ1 = −1 + 2 = 1
ẋ2 = −2 − 1 = −3

If x1 < 0 and x2 > 0 then we have

ẋ1 = 1 + 2 = 3
ẋ2 = 2 − 1 = 1

If x1 < 0 and x2 < 0 then we have

ẋ1 = 1 − 2 = −1
ẋ2 = 2 + 1 = 3

Figure 11.5.1: A trajectory of the system in Problem 11.5 in the (x1, x2)-plane; the dot represents the initial state (plot not reproduced).

If x1 > 0 and x2 < 0 then we have

ẋ1 = −1 − 2 = −3
ẋ2 = −2 + 1 = −1

A trajectory is represented in Figure 11.5.1. We can then represent the discontinuous system as a hybrid
system with four states. Formally we have H = (Q, X, Init, f, D, E, G, R) with

Q = {q1, q2, q3, q4}
X = R2
Init = q0 × (x10 , x20 ),

where q0 depends on the state x0 .

f (q1 , x) = (1, −3)T


f (q2 , x) = (3, 1)T
f (q3 , x) = (−1, 3)T
f (q4 , x) = (−3, −1)T
D(q1 ) = {x ∈ R2 |x1 ≥ 0, x2 ≥ 0}
D(q2 ) = {x ∈ R2 |x1 < 0, x2 ≥ 0}
D(q3 ) = {x ∈ R2 |x1 < 0, x2 < 0}
D(q4 ) = {x ∈ R2 |x1 ≥ 0, x2 < 0}
R(qi , qj , x) = x

We still need to define the set of edges E and the guards G. Since a trajectory of the given discontin-
uous system is always such that from the 1st quadrant it moves to the 4th and then 3rd and 2nd (see

Figure 11.5.1), the set of edges is then

E = {(q1 , q4 ), (q4 , q3 ), (q3 , q2 ), (q2 , q1 )}.

The guards are

G(q1 , q4 ) = {x ∈ R2 |x2 < 0}


G(q4 , q3 ) = {x ∈ R2 |x1 < 0}
G(q3 , q2 ) = {x ∈ R2 |x2 ≥ 0}
G(q2 , q1 ) = {x ∈ R2 |x1 ≥ 0}.

(b) We can notice that for any initial condition (x(0) 6= 0) a solution of the hybrid system will look like a
’spiral’ which is moving towards the origin. As we approach the origin the number of switches increases,
until we have an infinite number of switches in finite time. Thus we have Zeno behavior.

S OLUTION 11.6
Consider the Lyapunov function candidate
1
V (x) = x2 .
2
For both q = 1 and q = 2 it holds that

V̇ (x) = xẋ = aq x2 ≤ 0,

with equality only for x = 0. Then V (x) is a common Lyapunov function for the switched system, and it is
asymptotically stable.

S OLUTION 11.7
The matrices A1 and A2 commute, i.e.
 
3 0
A1 A2 = = A2 A1 .
0 10

Further, they are both diagonal, which means that the eigenvalues are the diagonal elements. All eigenvalues
are negative, so each matrix is asymptotically stable.
If the system matrices commute and are asymptotically stable, then the switched system is also asymptoti-
cally stable.

S OLUTION 11.8
1st method The second component of the state x satisfies

ẋ2 = −cq x2

where q = {1, 2}. Therefore x2 (t) decays to 0 asymptotically at the rate corresponding to min{c1 , c2 }.
The first component of x satisfies the equation

ẋ1 = −aq x1 + bq x2 .

This can be viewed as the asymptotically stable system ẋ1 = −aq x1 excited by an asymptotically decaying input bq x2. Thus x1 also converges to zero asymptotically.

2nd method A second way to show that the switched system is asymptotically stable is to construct a common Lyapunov function for the family of linear systems Aq. In this case it is possible to find a quadratic common Lyapunov function

$$V(x) = x^T P x$$

with P = diag(d1, d2), where d1, d2 > 0 since P must be positive definite. We then have

$$-\left(A_q^T P + P A_q\right) = \begin{pmatrix} 2 d_1 a_q & -d_1 b_q \\ -d_1 b_q & 2 d_2 c_q \end{pmatrix}, \qquad q = 1, 2.$$

To ensure that this matrix is positive definite we can fix d1 > 0 and then choose d2 > 0 large enough so that

$$4 d_1 d_2 a_q c_q - d_1^2 b_q^2 > 0, \qquad q = 1, 2.$$

This concludes the proof.

S OLUTION 11.9
(a) To model the system as a switching system, we introduce a timer variable τ :

ẋ = Aq x
τ̇ = 1,

where q ∈ {1, 2} and

$$\Omega_1 = \{x, \tau \mid k\epsilon \le \tau < (k + \tfrac{1}{2})\epsilon,\; k = 0, 1, \dots\}$$
$$\Omega_2 = \{x, \tau \mid (k + \tfrac{1}{2})\epsilon \le \tau < (k + 1)\epsilon,\; k = 0, 1, \dots\}.$$

(b) A corresponding hybrid automaton is H = (Q, X, Init, f, D, E, G, R), with

Q = {q1 , q2 }
X = Rn × R+ ( state vector: (x, τ )T )
Init = q1 × (x0 , t0 )T
f (q1 , x, τ ) = (A1 x, 1)T
f (q2 , x, τ ) = (A2 x, 1)T
D(q) = {x, τ |τ < ǫ/2} ∀q ∈ Q
E = {(q1 , q2 ), (q2 , q1 )}
G(e) = {x, τ |τ ≥ ǫ/2} ∀e ∈ E
R(e, x, τ ) = (x, 0)T ∀e ∈ E.

Note that here we let τ be reset at every switching instant, to simplify the system.

(c) If the system starts in q = q1 , with x = x0 at t = t0 , we get

x(t0 + ǫ/2) = eA1 ǫ/2 x0

and
x(t0 + ǫ) = eA2 ǫ/2 eA1 ǫ/2 x0 .

(d) We use the series expansion of the matrix exponential,

$$e^{A\epsilon} = I + A\epsilon + \tfrac{1}{2}A^2\epsilon^2 + O(\epsilon^3).$$

This allows us to write

$$e^{A_2 \epsilon/2} e^{A_1 \epsilon/2} = (I + A_2\epsilon/2 + \dots)(I + A_1\epsilon/2 + \dots) = I + (A_1 + A_2)\epsilon/2 + A_2 A_1 \epsilon^2/4 + \dots \approx e^{\frac{A_1 + A_2}{2}\epsilon} \quad \text{as } \epsilon \to 0.$$

So for fast switching (small ǫ), the system behaves like the average of the two subsystems.
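The approximation can also be checked numerically. The sketch below (not part of the original solution) uses placeholder matrices A1, A2 and compares the switched and averaged flows for decreasing ε:

import numpy as np
from scipy.linalg import expm

A1 = np.array([[0.0, 1.0], [-1.0, -0.2]])    # placeholder
A2 = np.array([[-0.3, 0.5], [-0.5, -0.3]])   # placeholder

for eps in (1.0, 0.1, 0.01):
    switched = expm(A2 * eps / 2) @ expm(A1 * eps / 2)
    averaged = expm((A1 + A2) * eps / 2)
    print(eps, np.linalg.norm(switched - averaged))
# The error shrinks like eps**2, consistent with the expansion above.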

S OLUTION 11.10
We choose Q = I and test whether

$$P = \begin{pmatrix} p_1 & 0 \\ 0 & p_2 \end{pmatrix}$$

solves the Lyapunov equation

$$P A_1 + A_1^T P = -I \;\Leftrightarrow\; \begin{pmatrix} -2p_1 & 0 \\ 0 & -4p_2 \end{pmatrix} = \begin{pmatrix} -1 & 0 \\ 0 & -1 \end{pmatrix}.$$

It does, if we choose p1 = 1/2 and p2 = 1/4. So V(x) = xᵀPx is a Lyapunov function for system 1. Does it also work for system 2?

$$\dot V(x) = \dot x^T P x + x^T P \dot x = x^T \left(A_2^T P + P A_2\right) x = x^T \begin{pmatrix} -3 & 0 \\ 0 & -5/2 \end{pmatrix} x \le 0 \quad \forall x,$$

with equality only for x = 0. So V (x) is a common Lyapunov function for systems 1 and 2 and the switched
system is asymptotically stable.

S OLUTION 11.11
(a) Without loss of generality we can consider a positive definite matrix P of the form

$$P = \begin{pmatrix} 1 & q \\ q & r \end{pmatrix}.$$

Assume that P satisfies

$$A_i^T P + P A_i < 0, \qquad i = 1, 2.$$

For the first system we have

$$-A_1^T P - P A_1 = \begin{pmatrix} 2 - 2q & 2q + 1 - r \\ 2q + 1 - r & 2q + 2r \end{pmatrix}$$

and this matrix is positive definite only if

$$q < 1 \qquad \text{and} \qquad 8q^2 + 1 + r^2 - 6r < 0 \;\Leftrightarrow\; q^2 + \frac{(r-3)^2}{8} < 1. \tag{1}$$

Similarly, for the other matrix we have

$$-A_2^T P - P A_2 = \begin{pmatrix} 2 - \frac{q}{5} & 2q + 10 - \frac{r}{10} \\ 2q + 10 - \frac{r}{10} & 20q + 2r \end{pmatrix}$$

which is positive definite only if

$$q < 10 \qquad \text{and} \qquad 600r - 800q^2 - 10000 - r^2 > 0 \;\Leftrightarrow\; q^2 + \frac{(r-300)^2}{800} < 100. \tag{2}$$
The inequalities (1) and (2) describe the interiors of two ellipses. As can be seen in Figure 11.11.1, the two ellipses do not intersect, so no such common P exists.
Figure 11.11.1: The ellipses corresponding to inequalities (1) and (2): the first is contained in r ≤ 3 + √8 ≈ 6, the second in r ≥ 100(3 − √8) ≈ 17, so they do not intersect (plot not reproduced).

(b) We would like to show that the system is however asymptotically stable. In order to do that, since the
switching is arbitrary, we consider a ’worst-case-switching’, which we define as follows. The vectors
A1 x and A2 x are parallel on two lines which pass through the origin. If such lines define a switching
surface, we then choose to follow the trajectory of the vector field which is pointing outward, so that
we have a ’worst-case-switching’, in the sense that it should move away from the origin. The situation
is shown in Figure 11.11.2. The switching surface can be found solving the equation which describes
collinearity
(A1 x) × (A2 x) = 0
which gives the equation

(−x1 − x2 )(0.1x1 − x2 ) + (x1 − x2 )(x1 + 10x2 ) = 0.

The equations of the two lines are

x2 = 1.18x1
x2 = −0.08x1 .

In order to know which of the two vector fields is pointing outward we can determine where (A1 x) × (A2 x) > 0, which gives that between the lines x2 = −0.08x1 and x2 = 1.18x1 the vectors A2 x point outward. We now

Figure 11.11.2: Worst-case switching in Problem 11.11: the vector fields A1 x and A2 x near the two switching lines (plot not reproduced).

need to show that the trajectory is converging toward the origin. Let us consider, for example, the
following initial condition
x0 = (100, −8)
which is a point on the switching surface x2 = −0.08x1 . The trajectory follows the vector field A1 x and
intersects the next switching surface at the point
 
x̄1
= eA1 t̄ x0
1.18x̄1

which in our case is the point with coordinates (25, 29.5). Then the trajectory is given by the vector field
A2 x and the intersection point with the switching surface is (−86.3, 6.8). The next point is (−21.7, −25).
The next point is the intersection of the trajectory with the switching surface that contained the initial
condition x0 . We have the intersection at (25, −2). Thus after one rotation we have that the distance
of the trajectory to the origin is decreased from 100.1 to 25.1. Thus the system converges to the origin
considering the ’worst-case-switching’.

S OLUTION 11.12
(a) Use the same ideas as in Problem 11.11.

(b) Let us consider the following two positive definite symmetric matrices
   
1 0 10 0
P1 = and P2 =
0 3 0 3

If we consider the following Lyapunov equations

ATq Pq + Pq Aq q = 1, 2

we have

$$A_1^T P_1 + P_1 A_1 = \begin{pmatrix} -10 & -7 \\ -7 & -12 \end{pmatrix} = Q_1, \qquad A_2^T P_2 + P_2 A_2 = \begin{pmatrix} -40 & 20 \\ 20 & -12 \end{pmatrix} = Q_2.$$

Since the matrices Q1 and Q2 are symmetric matrices and the eigenvalues are σ(Q1 ) = {−3.9, −18.1}
and σ(Q2 ) = {−1.6, −50.1}, the two matrices are negative definite. Thus the two linear systems are
asymptotically stable.
Let us consider the two Lyapunov functions

V1 = xT P1 x
V2 = xT P2 x.

In order to prove stability using the multiple-Lyapunov-function approach we need to show that the sequence

$$\{V_q(x(\tau_i^q))\}$$

is non-increasing. In this case the switching occurs when x1 changes sign, and we notice that

$$\lim_{x_1 \to 0^+} V_2(x) = 3x_2^2 \qquad \text{and} \qquad \lim_{x_1 \to 0^-} V_1(x) = 3x_2^2.$$

Thus the two Lyapunov functions patch together into a continuous, non-increasing function along the trajectory; see Figure 11.12.1.

Figure 11.12.1: Multiple Lyapunov functions for Problem 11.12: the values of V1 and V2 along the trajectory, alternating with the active mode q (plot not reproduced).

Solutions to simulation and bisimulation


S OLUTION 12.1
First of all, we notice that as there is only one initial state in T and one in T′, it has to hold that s1 ∼ s′1, s1 ≁ s′i, and s′1 ≁ si, for all i ∈ {2, . . . , 7}.
Because s1 ∼ s′1 and s1 →a s4, there must exist s′i such that s′1 →a s′i, with the property that s4 ∼ s′i. Because s4 is accepting, s′i has to be accepting too. From Posta(s′1) = {s′2} we obtain directly that s4 ∼ s′2. By similar reasoning from s1 ∼ s′1, Postb(s1) = {s2} and Postb(s′1) = {s′3}, we learn that necessarily s2 ∼ s′3.
Furthermore,

• s4 ∼ s′2 , P osta (s4 ) = {s6 }, and P osta (s′2 ) = {s′4 } implies s6 ∼ s′4 ;

• s4 ∼ s′2 , P ostb (s4 ) = {s5 }, and P ostb (s′2 ) = {s′5 } implies s5 ∼ s′5 ;

• s2 ∼ s′3 , P osta (s2 ) = {s5 }, and P osta (s′3 ) = {s′6 } implies s5 ∼ s′6 ;

• s2 ∼ s′3 , P ostb (s2 ) = {s3 }, and P ostb (s′3 ) = {s′7 } implies s3 ∼ s′7 ;

• s6 ∼ s′4 , P ostc (s6 ) = {s7 }, and P ostc (s′4 ) = {s′2 } implies s7 ∼ s′2 ;

• s5 ∼ s′5 , s5 ∼ s′6 , P ostc (s5 ) = {s7 }, P ostc (s′5 ) = {s′3 }, and P ostc (s′6 ) = {s′2 } implies s7 ∼ s′3 , and
s7 ∼ s′2 ;

• s3 ∼ s′7 , P ostc (s3 ) = {s7 }, and P ostc (s′7 ) = {s′3 } implies s7 ∼ s′3 ;

• s7 ∼ s′2 , s7 ∼ s′3 , P osta (s7 ) = P ostb (s7 ) = {s5 }, and P osta (s′2 ) = {s′4 }, P ostb (s′2 ) = {s′5 },
P osta (s′3 ) = {s′6 }, and P ostb (s′3 ) = {s′7 }, we obtain s5 ∼ s′4 , s5 ∼ s′5 , s5 ∼ s′6 , and s5 ∼ s′7 .

• s5 ∼ s′4 , s5 ∼ s′7 , P ostc (s5 ) = {s7 }, P ostc (s′4 ) = {s′2 }, and P ostc (s′7 ) = {s′3 } implies s7 ∼ s′2 , and
s7 ∼ s′3 ;

Altogether, we obtain S ∪ S′/∼ = {{s1, s′1}, {s2, s4, s7, s′2, s′3}, {s3, s5, s6, s′4, s′5, s′6, s′7}}. It is now straightforward to verify that ∼ is indeed a bisimulation.

S OLUTION 12.2
We start with the initial partition S/∼ = {P1 = {s1 , s2 , s3 , s4 , s5 , s6 }, P2 = {s7 , s8 }}.
Following the bisimulation algorithm, we repeatedly and systematically look for σ ∈ Σ and P, P′ ∈ S/∼ such that ∅ ≠ P ∩ Preσ(P′) ≠ P.

• We easily see that for the action a and the sets P = P′ = P1 such a condition holds: Prea(P1) = {s1, s2, s3, s5} and ∅ ≠ P1 ∩ Prea(P1) ≠ P1. Thus P1 is split into P11 = P1 ∩ Prea(P1) = {s1, s2, s3, s5} and P12 = P1 \ Prea(P1) = {s4, s6}. The partition is now S/∼ = {P11 = {s1, s2, s3, s5}, P12 = {s4, s6}, P2 = {s7, s8}}.

• Now, for the action b and the sets P = P′ = P11, we have Preb(P11) = {s1} ≠ P11, and therefore P11 has to be split into P111 = {s1} and P112 = {s2, s3, s5}. The new partition is S/∼ = {P111 = {s1}, P112 = {s2, s3, s5}, P12 = {s4, s6}, P2 = {s7, s8}}.

• For the action a and the sets P = P′ = P112, we have Prea(P112) = {s1, s2} and P112 ∩ Prea(P112) ≠ P112, and therefore P112 has to be split into P1121 = {s2} and P1122 = {s3, s5}. The new partition is S/∼ = {P111 = {s1}, P1121 = {s2}, P1122 = {s3, s5}, P12 = {s4, s6}, P2 = {s7, s8}}. Note that the same splitting would be induced by the action c and the sets P = P112, P′ = P2, because Prec(P2) = {s3, s5} and P112 ∩ Prec(P2) ≠ P112.

• At this point it looks like there is no other action σ ∈ Σ and P, P′ ∈ S/∼ such that ∅ ≠ P ∩ Preσ(P′) ≠ P. Let us verify that this is indeed true:

– P rea (P111 ) = P reb (P111 ) = P rec (P111 ) = P rea (P1121 ) = P reb (P1121 ) = P rec (P1121 ) = ∅

– P rea (P1122 ) = {s1 , s2 } = P111 ∪P1121 , which means that P rea (P1122 )∩P111 = P111 , P rea (P1122 )∩
P1121 = P1121 , and P rea (P1122 ) ∩ P1122 = P rea (P1122 ) ∩ P12 = P rea (P1122 ) ∩ P2 = ∅.
– P reb (P1122 ) = {s1 } = P111 , which means that P reb (P1122 ) ∩ P111 = P111 , and P reb (P1122 ) ∩
P1121 = P reb (P1122 ) ∩ P1122 = P reb (P1122 ) ∩ P12 = P rea (P21 ) ∩ P2 = ∅.
– P rec (P1122 ) = ∅
– P rea (P12 ) = {s3 , s5 } = P1122
– P reb (P12 ) = {s1 , s2 } = P111 ∪ P1121
– P rec (P12 ) = ∅
– P rea (P2 ) = {s7 , s8 } = P2
– P reb (P2 ) = {s3 , s5 } = P1122
– P rec (P2 ) = {s4 , s6 } = P12

Thus, we can conclude that S/∼ = {P111 = {s1 }, P1121 = {s2 }, P1122 = {s3 , s5 }, P12 = {s4 , s6 }, P2 =
{s7 , s8 }} is the final partition of the set of states S and the coarsest quotient transition system
T /∼ = ({P111 , P1121 , P1122 , P12 , P2 }, Σ, → /∼ , {P111 , P1121 }, {P2 }), where → /∼ is defined by the
following figure

Figure 12.2.1: The quotient transition system T/∼, with states P111 = {s1}, P1121 = {s2}, P1122 = {s3, s5}, P12 = {s4, s6} and P2 = {s7, s8} and transitions labelled a, b, c (diagram not reproduced).
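The partition-refinement loop used above can be sketched compactly in Python. The transition system below is a small hypothetical example, not the one of Problem 12.2, whose transition relation is given in the problem statement:

states = {1, 2, 3, 4}
alphabet = {"a", "b"}
trans = {(1, "a"): {2}, (2, "a"): {3}, (3, "b"): {3}, (4, "b"): {4}}
marked = {3, 4}

def pre(P, a):
    """States with at least one a-successor in P."""
    return {s for s in states if trans.get((s, a), set()) & P}

partition = [marked, states - marked]        # initial partition
changed = True
while changed:
    changed = False
    for a in alphabet:
        for P2 in list(partition):
            preP2 = pre(P2, a)
            for P in list(partition):
                inter = P & preP2
                if inter and inter != P:     # split P by Pre_a(P2)
                    partition.remove(P)
                    partition += [inter, P - inter]
                    changed = True

print(partition)    # here the final partition is [{3, 4}, {2}, {1}]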

S OLUTION 12.3
Let us follow the hint and find a transition system T that is not equal to its coarsest bisimulation quotient. For
that, it is actually enough to put in T two different initial states s and s′ , such that P ostσ (s) = P ostσ (s′ ), for
all σ ∈ Σ. A simple example of such system together with its bisimulation quotient is given in the following
figure.
Now, consider the following T′, which is in fact the same as T except for the state names.
The first bisimulation straightforwardly maps the states of T to the states of T′: S ∪ S′/∼1 = {{s1, s′1}, {s2, s′2}, {s3, s′3}}. The second bisimulation builds on the bisimulation quotients of T and T′: S ∪ S′/∼2 = {{s1, s′1, s2, s′2}, {s3, s′3}}.

S OLUTION 12.4

Figure 12.3.1: The transition system T, with initial states s1 and s2 sharing the a-successor s3, together with its coarsest bisimulation quotient T/∼ with blocks P1 = {s1, s2} and P2 = {s3} (diagram not reproduced).

Figure 12.3.2: The transition system T′ (diagram not reproduced).

a) – T is not bisimilar with T1. The proof is led by contradiction: assume that there exists a bisimulation relation ∼ ⊆ S × Q. For each initial state s in T there must exist an initial state q in T1 such that s ∼ q. Furthermore, because Postc(s1) = ∅ and Postc(q2) ≠ ∅, it must be that s1 ∼ q1. Similarly, we obtain that s2 ∼ q2. From these we deduce that s3 ∼ q3 and s4 ∼ q4; however Posta(s3) ≠ ∅ while Posta(q3) = ∅. Thus s3 ≁ q3, which is a contradiction.
– T is bisimilar with T2. The bisimulation relation is defined as follows: S ∪ R/∼ = {{s1, r1}, {s2, r2}, {s2i+1, r3 | i ≥ 1}, {s2i, r4 | i ≥ 2}}.
– T is bisimilar with T3. The bisimulation relation is defined as follows: S ∪ U/∼ = {{s2i+1, u1 | i ≥ 0}, {s2i, u2 | i ≥ 1}}.
– T is not bisimilar with T4. Assume that there exists a bisimulation relation ∼ ⊆ S × V. Necessarily si ∼ v1 for all i ≥ 1. However Postc(s1) = ∅ while Postc(v1) = {v1}, and therefore s1 ≁ v1, a contradiction.

b) From the above, we have that T simulates T2 , T3 and that T2 , T3 simulates T . Intuitively, T1 allows for
strictly less behavior than T , however, T covers all behaviors that are allowed in T1 , meaning that T
simulates T1 . The simulation relation is ∼⊆ Q × S, where q1 ∼ s1 , q2 ∼ s2 , q3 ∼ s3 , and q4 ∼ s4 .
Because T, T2 , T3 are bisimilar, then also T2 , T3 simulate T1 .
Dually, T4 simulates T with simulation relation ∼⊆ S × V , where si ∼ v1 , for all i ≥ 1. T4 thus
simulates also T1 , T2 , and T3 .

S OLUTION 12.5

Figure 12.5.1: Informal illustration of the transition system TH: states of the form (q, x, y) connected by time-driven transitions labelled t (dashed in the original figure) and event-driven transitions turn_left and turn_right (full lines); diagram not reproduced.

The transition system TH = (S = Q × X × Y, Σ, →, S0 ) looks as follows:


It can be seen that while in TH , there has to be a time-driven transition labeled with t between successive
event-driven transitions turn_right and turn_lef t, in T1 , turn_lef t may follow turn_right immediately.
Therefore, they are not bisimilar.
In T2 , we can again find behavior that is not admitted in TH . Namely, in TH , it is not possible to perform
sequence of transitions generated by the sequence t, turn_right, t, turn_right.
T3 is bisimilar with TH. The bisimulation relation is s1 ∼ (q1, x, y) for x < 10, s2 ∼ (q1, x, y) for x ≥ 10, s3 ∼ (q2, x, y) for x > 0, and finally s4 ∼ (q2, x, y) for x ≤ 0.
T4 clearly allows only for behavior that is allowed in TH . However, it does not allow for all of the behavior.
Namely, at least two t-transitions are required before turn_right can happen, whereas in TH only one t-
transition has to happen.

Solutions to reachability, timed automata and rectangular automata
S OLUTION 13.1
a) In state (qoff,off, x = 175, y ∈ [100, 200]), only time-driven transitions are available and no event-driven ones. Furthermore, in state qoff,off the value of x cannot exceed 200. For a fixed reachable value of x, the corresponding range of y is [100 − 18(x − 175)/5, 200 − 18(x − 175)/5]. Thus Post(qoff,off, x = 175, y ∈ [100, 200]) =

{(qoff,off, x, y) | x ∈ [175, 200], y ∈ [100 − 18(x − 175)/5, 200 − 18(x − 175)/5]}.

b) In state (qoff,off , x = 200, y = 200), the only time-driven transition enabled is with time = 0, as x
cannot exceed 200 in qoff,off . There is however an event-driven transition enabled, leading into the state
qoff,on , x = 200, y = 200. Thus, P ost(qoff,off , x = 200, y = 200) =

{(qoff,off , x = 200, y = 200), (qoff,on , x = 200, y = 200)}.

c) The only time-driven transition that leads to (qon,on , x = 150, y = 200) is with t = 0 as x cannot
exceed 150 in qon,on while at the same time the value of x is decreasing in there. Any event-driven
transition from (qoff,on , x = 150, y), where y is arbitrary leads to (qon,on , x = 150, y = 200). Thus,
P re(qon,on , x = 150, y = 200) =

{(qon,on , x = 150, y = 200)} ∪ {(qoff,on , x = 150, y) | y ∈ R}.

d) Let us follow the hint:

Post(qoff,off, x = 190, y = 200) = {(qoff,off, x ∈ [190, 200], y = 200 − 18(x − 190)/5)}. Note that y ∈ [200 − 18·2, 200] = [164, 200], within the safe bounds.
Post(qoff,off, x = 200, y = 164) = {(qoff,on, x = 200, y = 164)}.
Post(qoff,on, x = 200, y = 164) = {(qoff,on, x ∈ [150, 200], y = 164 − 12(x − 200)/25)}. Note that y ∈ [164, 164 + 12·2] = [164, 188], within the safe bounds.
Post(qoff,on, x = 150, y = 188) = {(qon,on, x = 150, y = 188)}.
Post(qon,on, x = 150, y = 188) = {(qon,on, x ∈ [100, 150], y = 188 − 12(x − 150)/5)}. Note that y ∈ [188, 188 + 12·10] = [188, 308], within the safe bounds.
Post(qon,on, x = 100, y = 308) = {(qon,off, x = 100, y = 308)}.
Post(qon,off, x = 100, y = 308) = {(qon,off, x ∈ [100, 175], y = 308 − 18(x − 100)/25)}. Note that y ∈ [308 − 18·3, 308] = [254, 308], within the safe bounds.
Post(qon,off, x = 175, y = 254) = {(qoff,off, x = 175, y = 254)}.
Post(qoff,off, x = 175, y = 254) = {(qoff,off, x ∈ [175, 200], y = 254 − 18(x − 175)/5)}. Note that y ∈ [254 − 18·5, 254] = [164, 254], within the safe bounds. Now we realize that the system has reached once again the state (qoff,off, x = 200, y = 164). As the system is deterministic, we conclude that we have found all reachable states.

e) No, such state is not reachable as we see from the above reachability analysis.

S OLUTION 13.2
For simplicity, we show only the states reachable from the initial state.

S OLUTION 13.3

Figure 13.2.1: Region automaton Reg(A), combining the locations qk, qke, qkey and q# with the clock regions x = 0, x ∈ (0, 1) and x = 1, and with edges labelled by the events (reset, #); diagram not reproduced.

Yes, it is initialized. We first translate it to a multi-rate automaton Â (Figure 13.3.1) and from there we continue with the translation to a timed automaton A′ (Figure 13.3.2).

S OLUTION 13.4
The trajectories of the linear system are shown in Figure 13.4.1. It is easy to verify that no trajectory intersects the Bad set.

Figure 13.3.1: Multi-rate automaton Â, with the variables xlow, xhigh and y evolving at constant rates in each location and with guards and resets such as xlow ≥ 30 ∧ xhigh ≤ 120, y ≥ 100 and xlow ≥ 150 (diagram not reproduced).

Figure 13.3.2: Timed automaton A′ obtained from Â by rescaling the clocks to unit rate, with correspondingly rescaled guards such as xlow ≥ 15 ∧ xhigh ≤ 40, y ≥ 50 and xlow ≥ 75 (diagram not reproduced).

S OLUTION 13.6
The partition of the state-space is shown in Figure 13.6.1.

(a) Let us consider the system in the region Ω3. Since the system is controllable, it is easy to determine a linear state feedback g3(x) = K3 x which places the poles at p1 = p2 = −1. The feedback gain is in this case

$$K_3 = \begin{pmatrix} -\tfrac{4}{3} & -3 \end{pmatrix}.$$

Figure 13.4.1: Trajectories of the linear system in Problem 13.4 (phase plot over [−10, 10] × [−10, 10]; not reproduced).

Figure 13.6.1: State-space partition for the system in Problem 13.6: Ω1 = {‖x‖ ≤ 1}, Ω2 = {1 < ‖x‖ ≤ 3}, Ω3 = {‖x‖ > 3} (plot not reproduced).

Figure 13.6.2: Trajectories of the linear system in Problem 13.6.

Notice that if we use g3 as controller for system 2 we get a stable closed-loop system with poles p1 = −3 and p2 = −1/3. Notice moreover that the closed loop for system 3 has its equilibrium point at (−1/3, 0), since the closed-loop system is

$$\dot x = \begin{pmatrix} 0 & -1 \\ 1 & -2 \end{pmatrix} x + \begin{pmatrix} 0 \\ -\tfrac{1}{3} \end{pmatrix}.$$
This means that all trajectories starting in Ω3 will go towards (−1/3, 0), which is in Ω1 . Hence they will
enter Ω2 , which also is stable with (0, 0) as equilibrium. Thus the trajectory will enter Ω1 . An example
of trajectories is shown in Figure 13.6.2.

(b) If B1 = (0, 1)ᵀ then the dynamics in the set Ω3 = {‖x‖ > 3} is the negative of the dynamics in Ω1 (modulo a bias term that only shifts the equilibrium). Therefore, if we use the linear controller g3 computed in (a), it will not stabilize system 1. Thus we need to use different controllers in the two regions.

(c) A simple choice to stabilize system 1 is a feedback controller that places the poles at p1 = p2 = −1. Such a controller for system 1 is

$$g_1(x_1, x_2) = \begin{pmatrix} -\tfrac{4}{3} & -7 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix}.$$

Notice that the controllers g1 and g3 place the poles of systems 1 and 3, respectively, at p1 = p2 = −1. Both controllers also stabilize the system in Ω2: the closed loop for system 2 with g3 has poles

$$p_1 = -3, \qquad p_2 = -\tfrac{1}{3}$$

and with g1 the poles are

$$p_{1,2} = \frac{-11 \pm 4\sqrt{7}}{3},$$

Figure 13.7.1: The hybrid system for the nuclear reactor, as in Figure 10.3.1 but with initial condition x = 510, c1 = c2 = 0 (diagram not reproduced).

thus a possible control strategy is

$$g(x) = \begin{cases} g_1(x) & \text{if } \|x\| \le \alpha \\ g_3(x) & \text{otherwise} \end{cases}$$

with 1 ≤ α ≤ 3, which means that we can use either g1 or g3 in any subset of the region Ω2.

(d) If the bad region corresponds to Ω1 then we must prevent the trajectory from entering Ω1 at all. One possible solution is to design a control law g1 such that the eigenvalues of the closed-loop system for system 1 are placed at p1,2 = ±i. In this way the trajectories are confined to a circle with radius 1.

S OLUTION 13.7
A hybrid system modeling the nuclear reactor is shown in Figure 13.7.1.

BIBLIOGRAPHY

[1] Wittenmark, B. Åström, K.J. and Årzén K-H., “Computer Control: An Overview", IFAC Professional
Brief, 2003.

[2] Wittenmark, B. and Åström, K.J. , “Computer Controlled Systems”, Prentice Hall, 3rd Edition.

[3] C. Baier and J.-P. Katoen. Principles of Model Checking. MIT Press, 2008.

[4] Buttazzo, G.C., “Hard Real-Time Computing Systems", Kluwer Academic Publishers, 4th edition, 2002.

[5] Cassandras, C.G. and Lafortune, S., “Introduction to Discrete Event Systems", Kluwer Academic Pub-
lisher, 1999.

[6] Hopcroft, J.E. and Ullman, J.D. “Introduction to automata theory, languages, and computation”, Addison-
Wesley, 1979.

[7] Murata, T., “Petri nets: Properties, analysis and applications", Proceedings of the IEEE, Volume: 77,
Issue: 4, 1989

[8] David, R. and Alla, H., “Petri Nets and Grafcet”, Prentice Hall, 1992.

[9] Johansson, K.H. , “Hybrid Control System”, in Control Systems, Robotics, and Automation , from Ency-
clopedia of Life Support Systems (EOLSS)

[10] Johansson, K.H. and Lygeros, J. and Sastry, S., “ Modeling of Hybrid Systems”, in Control Systems,
Robotics and Automation, from Encyclopedia of Life Support Systems (EOLSS)

[11] Branicky, M.S. “Stability of Hybrid Systems”, in Control Systems, Robotics and Automation, from En-
cyclopedia of Life Support Systems (EOLSS)

[12] Schaft, A. van der and Schumacher, H., “An Introduction to Hybrid Dynamical Systems”, Lecture Notes
in Control and Information Sciences, 251, Springer, 2000

[13] Liberzon, D., “Control Using Logic and Switching, Part I: Switching in Systems and Control”, Handout
notes at Workshop CDC 2001.
http://www.ece.ucsb.edu/~hespanha/cdc01/switchingcdc01.htm

[14] Oppenheim, A. V. and Willsky, A. S. and Nawab, S. H. “Signals & Systems”, Prentice Hall, 1997.

[15] Hespanha, João P. Lecture notes on Hybrid Control and Switched Systems, Reachability, University of
California at Santa Barbara, 2005.
