
SpringerBriefs in Applied
Sciences and Technology
SpringerBriefs present concise summaries of cutting-edge research and
practical applications across a wide spectrum of fields. Featuring compact
volumes of 50–125 pages, the series covers a range of content from
professional to academic. Typical publications can be:
• A timely report of state-of-the-art methods
• An introduction to or a manual for the application of mathematical or
computer techniques
• A bridge between new research results, as published in journal articles
• A snapshot of a hot or emerging topic
• An in-depth case study
• A presentation of core concepts that students must understand in order
to make independent contributions
SpringerBriefs are characterized by fast, global electronic dissemination,
standard publishing contracts, standardized manuscript preparation and
formatting guidelines, and expedited production schedules.
On the one hand, SpringerBriefs in Applied Sciences and Technology
are devoted to the publication of fundamentals and applications within the
different classical engineering disciplines as well as in interdisciplinary
fields that recently emerged between these areas. On the other hand, as
the boundary separating fundamental research and applied technology is
increasingly dissolving, this series is particularly open to
trans-disciplinary topics between fundamental science and engineering.
Indexed by EI-Compendex, SCOPUS and SpringerLink.

More information about this series at http://www.springer.com/series/8884


Steven A. Frank

Control Theory Tutorial


Basic Concepts Illustrated by
Software Examples
Steven A. Frank
Department of Ecology and Evolutionary
Biology
University of California, Irvine
Irvine, CA
USA

Mathematica® is a registered trademark of Wolfram Research, Inc., 100 Trade


Center Drive, Champaign, IL 61820-7237, USA, http://www.wolfram.com

and

MATLAB® is a registered trademark of The MathWorks, Inc., 1 Apple Hill Drive,


Natick, MA 01760-2098, USA, http://www.mathworks.com.

Additional material to this book can be downloaded from http://extras.springer.com.

ISSN 2191-530X ISSN 2191-5318 (electronic)


SpringerBriefs in Applied Sciences and Technology
ISBN 978-3-319-91706-1 ISBN 978-3-319-91707-8 (eBook)
https://doi.org/10.1007/978-3-319-91707-8

Library of Congress Control Number: 2018941971

Mathematics Subject Classification (2010): 49-01, 93-01, 93C05, 93C10, 93C40

© The Editor(s) (if applicable) and The Author(s) 2018. This book is an open access
publication. Open Access This book is licensed under the terms of the Creative Commons
Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which
permits use, sharing, adaptation, distribution and reproduction in any medium or format, as
long as you give appropriate credit to the original author(s) and the source, provide a link to
the Creative Commons license and indicate if changes were made.
The images or other third party material in this book are included in the book’s Creative
Commons license, unless indicated otherwise in a credit line to the material. If material is not
included in the book’s Creative Commons license and your intended use is not permitted by
statutory regulation or exceeds the permitted use, you will need to obtain permission directly
from the copyright holder.
The use of general descriptive names, registered names, trademarks, service marks, etc. in
this publication does not imply, even in the absence of a specific statement, that such names
are exempt from the relevant protective laws and regulations and therefore free for general
use. The publisher, the authors and the editors are safe to assume that the advice and
information in this book are believed to be true and accurate at the date of publication. Neither
the publisher nor the authors or the editors give a warranty, express or implied, with respect to
the material contained herein or for any errors or omissions that may have been made. The
publisher remains neutral with regard to jurisdictional claims in published maps and
institutional affiliations.

Printed on acid-free paper

This Springer imprint is published by the registered company Springer International


Publishing AG part of Springer Nature
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland

Précis

This book introduces the basic principles of control theory in a concise


self-study tutorial. The chapters build the foundation of control systems
design based on feedback, robustness, tradeoffs, and optimization. The
approach focuses on how to think clearly about control and why the key
principles are important. Each principle is illustrated with examples and
graphics developed by software coded in Wolfram Mathematica. All of the
software is freely available for download. The software provides the
starting point for further exploration of the concepts and for development
of new theoretical studies and applications.

Preface

I study how natural biological processes shape the design of organisms.


Like many biologists, I have often turned to the rich theory of engineering
feedback control to gain insight into biology.
The task of learning control theory for a biologist or for an outsider from
another scientific field is not easy. I read and reread the classic introductory
texts of control theory. I learned the basic principles and gained the ability
to analyze simple models of control. The core of the engineering theory
shares many features with my own closest interests in design tradeoffs in
biology. How much cost is it worth paying to enhance performance? What
is the most efficient investment in improved design given the inherent
limitations on time, energy, and other resources?
Yet, for all of the conceptual similarities to my own research and for all of
my hours of study with the classic introductory texts, I knew that I had not
mastered the broad principles of engineering control theory design. How
should I think simply and clearly about a basic control theory principle such
as integral control in terms of how a biological system actually builds an
error-correcting feedback loop? What is the relation between various
adaptive engineering control systems and the ways in which organisms
build hard-wired versus flexible control responses? How do the classic
cost-benefit analyses of engineering quadratic control models relate to the
commonly used notions of costs and benefits in models of organismal
design?
After several years of minor raiding around the periphery of engineering
control theory, I decided it was time to settle down and make a carefully
planned attack. I lined up the classic texts, from the basic introductions to
the more advanced treatises on nonlinear control, adaptive control, model
predictive control, modern robust analysis, and the various metrics used to
analyze uncertainty. I could already solve a wide range of problems, but I
had never fully internalized the basic principles that unified the subject in a
simple and natural way.
This book is the tutorial that I developed for myself. This tutorial can
guide you toward broad understanding of the principles of control in a way
that cannot be obtained from the standard introductory books. Those
classic texts are brilliant compilations of knowledge with excellent drills to
improve technical skill. But those texts cannot teach you to understand the
principles of control, how to


internalize the concepts and make them your own. You must ultimately
learn to think simply and clearly about problems of control and how such
problems relate to the broad corpus of existing knowledge.
At every stage of learning, this tutorial provides the next natural step to
move ahead. I present each step in the quickest and most illustrative
manner. If that quick step works for you, then you can move along. If not,
then you should turn to the broad resources provided by the classic texts.
In this way, you can build your understanding rapidly, with emphasis on
how the pieces fit together to make a rich and beautiful conceptual whole.
Throughout your study, you can take advantage of other sources to fill in
technical gaps, practical exercises, and basic principles of applied
mathematics.
You will have to build your own course of study, which can be
challenging. But with this tutorial guide, you can do it with the confidence
that you are working toward the broad conceptual understanding that can
be applied to a wide range of real-world problems. Although the size of this
tutorial guide is small, it will lead you toward the key concepts in standard
first courses plus many of the principles in the next tier of advanced topics.
For scientists outside of engineering, I cannot think of another source that
can guide your study in such a simple and direct way. For engineering
students, this tutorial supplements the usual courses and books to unify the
conceptual understanding of the individual tools and skills that you learn in
your routine studies.
This tutorial is built around an extensive core of software tools and
examples. I designed that software to illustrate fundamental concepts, to
teach you how to do analyses of your own problems, and to provide tools
that can be used to develop your own research projects. I provide all of the
software code used to analyze the examples in the text and to generate
the figures that illustrate the concepts.
The software is written in Wolfram Mathematica. I used Mathematica
rather than the standard MATLAB tools commonly used in engineering
courses. Those two systems are similar for analyzing numerical problems.
However, Mathematica provides much richer tools for symbolic analysis
and for graphic presentation of complex results from numerical analysis.
The symbolic tools are particularly valuable, because the Mathematica
code provides clear documentation of assumptions and mathematical
analysis along with the series of steps used in derivations. The symbolic
analysis also allows easy coupling of mathematical derivations to
numerical examples and graphical illustrations. All of the software code
used in this tutorial is freely available at
http://extras.springer.com/2018/978-3-319-91707-8.
The US National Science Foundation and the Donald Bren Foundation
support my research.

Irvine, USA
March 2018
Steven A. Frank


Contents

1 Introduction
  1.1 Control Systems and Design
  1.2 Overview

Part I Basic Principles

2 Control Theory Dynamics
  2.1 Transfer Functions and State Space
  2.2 Nonlinearity and Other Problems
  2.3 Exponential Decay and Oscillations
  2.4 Frequency, Gain, and Phase
  2.5 Bode Plots of Gain and Phase

3 Basic Control Architecture
  3.1 Open-Loop Control
  3.2 Feedback Control
  3.3 Proportional, Integral, and Derivative Control
  3.4 Sensitivities and Design Tradeoffs

4 PID Design Example
  4.1 Output Response to Step Input
  4.2 Error Response to Noise and Disturbance
  4.3 Output Response to Fluctuating Input
  4.4 Insights from Bode Gain and Phase Plots
  4.5 Sensitivities in Bode Gain Plots

5 Performance and Robustness Measures
  5.1 Performance and Cost: J
  5.2 Performance Metrics: Energy and H2
  5.3 Technical Aspects of Energy and H2 Norms
  5.4 Robustness and Stability: H∞

Part II Design Tradeoffs

6 Regulation
  6.1 Cost Function
  6.2 Optimization Method
  6.3 Resonance Peak Example
  6.4 Frequency Weighting

7 Stabilization
  7.1 Small Gain Theorem
  7.2 Uncertainty: Distance Between Systems
  7.3 Robust Stability and Robust Performance
  7.4 Examples of Distance and Stability
  7.5 Controller Design for Robust Stabilization

8 Tracking
  8.1 Varying Input Frequencies
  8.2 Stability Margins

9 State Feedback
  9.1 Regulation Example
  9.2 Tracking Example

Part III Common Challenges

10 Nonlinearity
  10.1 Linear Approximation
  10.2 Regulation
  10.3 Piecewise Linear Analysis and Gain Scheduling
  10.4 Feedback Linearization

11 Adaptive Control
  11.1 General Model
  11.2 Example of Nonlinear Process Dynamics
  11.3 Unknown Process Dynamics

12 Model Predictive Control
  12.1 Tracking a Chaotic Reference
  12.2 Quick Calculation Heuristics
  12.3 Mixed Feedforward and Feedback
  12.4 Nonlinearity or Unknown Parameters

13 Time Delays
  13.1 Background
  13.2 Sensor Delay
  13.3 Process Delay
  13.4 Delays Destabilize Simple Exponential Decay
  13.5 Smith Predictor
  13.6 Derivation of the Smith Predictor

14 Summary
  14.1 Feedback
  14.2 Robust Control
  14.3 Design Tradeoffs and Optimization
  14.4 Future Directions

References
Index

Chapter 1
Introduction

I introduce the basic principles of control theory in a concise self-study guide. I


wrote this guide because I could not find a simple, brief introduction to the
foundational concepts. I needed to understand those key concepts before I could
read the standard introductory texts on control or read the more advanced
literature. Ultimately, I wanted to achieve sufficient understanding so that I could
develop my own line of research on control in biological systems.
This tutorial does not replicate the many excellent introductory texts on control
theory. Instead, I present each key principle in a simple and natural progression
through the subject.
The principles build on each other to fill out the basic foundation. I leave all the
detail to those excellent texts and instead focus on how to think clearly about
control. I emphasize why the key principles are important, and how to make them
your own to provide a basis on which to develop your own understanding.
I illustrate each principle with examples and graphics that highlight key aspects.
I include, in a freely available file, all of the Wolfram Mathematica software code
that I used to develop the examples and graphics (see Preface). The code provides
the starting point for your own exploration of the concepts and the subsequent
development of your own theoretical studies and applications.

1.1 Control Systems and Design

An incoming gust of wind tips a plane. The plane’s sensors measure orientation.
The measured orientation feeds into the plane’s control systems, which send
signals to the plane’s mechanical components. The mechanics reorient the plane.


An organism’s sensors transform light and temperature into chemical signals.


Those chemical signals become inputs for further chemical reactions. The chain of
chemical reactions feeds into physical systems that regulate motion.
How should components be designed to modulate system response? Different
goals lead to design tradeoffs. For example, a system that responds rapidly to
chang ing input signals may be prone to overshooting design targets. The tradeoff
between performance and stability forms one key dimension of design.
Control theory provides rich insights into the inevitable tradeoffs in design.
Biologists have long recognized the analogies between engineering design and the
analysis of biological systems. Biology is, in essence, the science of reverse
engineering the design of organisms.

1.2 Overview

I emphasize the broad themes of feedback, robustness, design tradeoffs, and
optimization. I weave those themes through the three parts of the presentation.

1.2.1 Part I: Basic Principles

The first part develops the basic principles of dynamics and control. This part
begins with alternative ways in which to study dynamics. A system changes over
time, the standard description of dynamics. One can often describe changes over
time as a combination of the different frequencies at which those changes occur.
The duality between temporal and frequency perspectives sets the classical
perspective in the study of control.
The first part continues by applying the tools of temporal and frequency
analysis to basic control structures. Open-loop control directly alters how a system
transforms inputs into outputs. Prior knowledge of the system’s intrinsic dynamics
allows one to design a control process that modulates the input–output relation to
meet one’s goals.
By contrast, closed-loop feedback control allows a system to correct for lack of
complete knowledge about intrinsic system dynamics and for unpredictable
perturbations to the system. Feedback alters the input to be the error difference between
the system’s output and the system’s desired target output.
By feeding back the error into the system, one can modulate the process to
move in the direction that reduces error. Such self-correction by feedback is the
single greatest principle of design in both human-engineered systems and naturally
evolved biological systems.

I present a full example of feedback control. I emphasize the classic


proportional, integral, derivative (PID) controller. A controller is a designed
component of the system that modulates the system’s intrinsic input–output
response dynamics.
In a PID controller, the proportional component reduces or amplifies an input
signal to improve the way in which feedback drives a system toward its target. The
integral component strengthens error correction when moving toward a fixed target
value. The derivative component anticipates how the target moves, providing a
more rapid system response to changing conditions.
The PID example illustrates how to use the basic tools of control analysis and
design, including the frequency interpretation of dynamics. PID control also
introduces key tradeoffs in design. For example, a more rapid response toward the
target setpoint often makes a system more susceptible to perturbations and more
likely to become unstable.
This first part concludes by introducing essential measures of performance and
robustness. Performance can be measured by how quickly a system moves toward
its target or, over time, how far the system tends to be from its target. The cost of
driving a system toward its target is also a measurable aspect of performance.
Robustness can be measured by how likely it is that a system becomes unstable or
how sensitive a system is to perturbations. With explicit measures of performance
and robustness, one can choose designs that optimally balance tradeoffs.

1.2.2 Part II: Design Tradeoffs

The second part applies measures of performance and robustness to analyze


tradeoffs in various design scenarios.
Regulation concerns how quickly a system moves toward a fixed setpoint. I
present techniques that optimize controllers for regulation. Optimal means the best
balance between design tradeoffs. One finds an optimum by minimizing a cost
function that combines the various quantitative measures of performance and
robustness.
Stabilization considers controller design for robust stability. A robust system
maintains its stability even when the intrinsic system dynamics differ significantly
from that assumed during analysis. Equivalently, the system maintains stability if
the intrinsic dynamics change or if the system experiences various unpredictable
perturbations. Changes in system dynamics or unpredicted perturbations can be
thought of as uncertainties in intrinsic dynamics.
The stabilization chapter presents a measure of system stability when a
controller modulates intrinsic system dynamics. The stability measure provides
insight into the set of uncertainties for which the system will remain stable. The
stability analysis is based on a measure of the distance between dynamical
systems, a powerful way in which to compare performance and robustness
between systems.

Tracking concerns the ability of a system to follow a changing environmental


setpoint. For example, a system may benefit by altering its response as the
environmental temperature changes. How closely can the system track the optimal
response to the changing environmental input? Once again, the analysis of
performance and robustness may be developed by considering explicit measures of
system characteristics. With explicit measures, one can analyze the tradeoffs
between competing goals and how alternative assumptions lead to alternative
optimal designs.
All of these topics build on the essential benefits of feedback control. The
particular information that can be measured and used for feedback plays a key role in
control design.

1.2.3 Part III: Common Challenges

The third part presents challenges in control design. Challenges include


nonlinearity and uncertainty of system dynamics.
Classical control theory assumes linear dynamics, whereas essentially all
processes are nonlinear. One defense of linear theory is that it often works for real
problems. Feedback provides powerful error correction, often compensating for
unknown nonlinearities. Robust linear design methods gracefully handle
uncertainties in system dynamics, including nonlinearities.
One can also consider the nonlinearity explicitly. With assumptions about the
form of nonlinearity, one can develop designs for nonlinear control. Other general
design approaches work well for uncertainties in intrinsic system dynamics,
including nonlinearity. Adaptive control adjusts estimates for the unknown
parameters of intrinsic system dynamics. Feedback gives a measure of error in the
current parameter estimates. That error is used to learn better parameter values.
Adaptive control can often be used to adjust a controller with respect to nonlinear
intrinsic dynamics.
Model predictive control uses the current system state and extrinsic inputs to
calculate an optimal sequence of future control steps. Those future control steps
ideally move the system toward the desired trajectory at the lowest possible cost.
At each control point in time, the first control step in the ideal sequence is applied.
Then, at the next update, the ideal control steps are recalculated, and the first new
step is applied.
By using multiple lines of information and recalculating the optimal response,
the system corrects for perturbations and for uncertainties in system dynamics.
Those uncertainties can include nonlinearities, providing another strong approach
for nonlinear control.

Part I
Basic Principles
Chapter 2
Control Theory Dynamics
The mathematics of classical control theory depends on linear ordinary differential
equations, which commonly arise in all scientific disciplines. Control theory
emphasizes a powerful Laplace transform expression of linear differential
equations. The Laplace expression may be less familiar in particular disciplines,
such as theoretical biology.

2.1 Transfer Functions and State Space

Here, I show how and why control applications use the Laplace form. I
recommend an introductory text on control theory for additional background and
many example applications (e.g., Åström and Murray 2008; Ogata 2009; Dorf and
Bishop 2016).
Suppose we have a process, P, that transforms a command input, u, into an
output, y. Figure 2.1a shows the input–output flow. Typically, we write the process
as a differential equation, for example

$$\ddot{x} + a_1 \dot{x} + a_2 x = \dot{u} + b u, \tag{2.1}$$

in which x(t) is an internal state variable of the process that depends on time, u(t)
is the forcing command input signal, and overdots denote derivatives with respect
to time. Here, for simplicity, we let the output be equivalent to the internal state, y
≡ x.
The dynamics of the input signal, u, may be described by another differential
equation, driven by reference input, r (Fig. 2.1b). Mathematically, there is no
problem cascading sequences of differential equations in this manner. However, the
rapid growth of various symbols and interactions makes such cascades of
differential equations difficult to analyze and impossible to understand intuitively.

[Fig. 2.1 block diagrams: a the open-loop flow U(s) → P(s) → Y(s); b the open-loop cascade r → C(s) → u → P(s) → y; c the closed loop in which the output y is fed back and subtracted from r to form the error e, which drives C(s) → u → P(s) → y. Caption below.]

Fig. 2.1 Basic process and control flow. a The input–output flow in Eq. 2.2. The input, U(s), is
itself a transfer function. However, for convenience in diagramming, lowercase letters are
typically used along pathways to denote inputs and outputs. For example, in a, u can be used in
place of U(s). In b, only lowercase letters are used for inputs and outputs. Panel b illustrates the
input–output flow of Eq. 2.3. These diagrams represent open-loop pathways because no
closed-loop feedback pathway sends a downstream output back as an input to an earlier step. c A
basic closed-loop process and control flow with negative feedback. The circle between r and e
denotes addition of the inputs to produce the output. In this figure, e = r − y

We can use a much simpler way to trace input–output pathways through a


system. If the dynamics of P follow Eq. 2.1, we can transform P from an
expression of temporal dynamics in the variable t to an expression in the complex
Laplace variable
s as

$$P(s) = \frac{Y(s)}{U(s)} = \frac{s + b}{s^2 + a_1 s + a_2}. \tag{2.2}$$
The numerator simply uses the coefficients of the differential equation in u from
the right side of Eq. 2.1 to make a polynomial in s. Similarly, the denominator uses
the coefficients of the differential equation in x from the left side of Eq. 2.1 to
make a polynomial in s. The eigenvalues for the process, P, are the roots of s for
the polynomial in the denominator. Control theory refers to the eigenvalues as the
poles of the system.
From this equation and the matching picture in Fig. 2.1, we may write Y (s) =
U(s)P(s). In words, the output signal, Y (s), is the input signal, U(s), multiplied by
the transformation of the signal by the process, P(s). Because P(s) multiplies the
signal, we may think of P(s) as the signal gain, the ratio of output to input, Y /U.
The signal gain is zero at the roots of the numerator’s polynomial in s. Control
theory refers to those numerator roots as the zeros of the system.
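To make the poles and zeros concrete, here is a minimal sketch in the Wolfram Mathematica language used for this book's software. It is my own illustration, not taken from the book's code files, and the parameter values a1 = 3, a2 = 2, b = 4 are arbitrary choices for demonstration.

    (* Transfer function of Eq. 2.2 with illustrative parameter values *)
    a1 = 3; a2 = 2; b = 4;
    P = TransferFunctionModel[(s + b)/(s^2 + a1 s + a2), s];

    (* Poles: denominator roots, the system eigenvalues (here s = -1 and s = -2) *)
    TransferFunctionPoles[P]

    (* Zeros: numerator roots, where the signal gain vanishes (here s = -4) *)
    TransferFunctionZeros[P]

With these values the denominator factors as (s + 1)(s + 2), so the eigenvalues are −1 and −2, and the single zero sits at s = −b = −4.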

The simple multiplication of the signal by a process means that we can easily
cascade multiple input–output processes. For example, Fig. 2.1b shows a system
with extended input processing. The cascade begins with an initial reference input,
r, which is transformed into the command input, u, by a preprocessing controller,
C, and then finally into the output, y, by the intrinsic process, P. The input–output
calculation for the entire cascade follows easily by noting that C(s) = U(s)/R(s),
yielding

$$Y(s) = R(s)\, C(s)\, P(s) = R(s) \cdot \frac{U(s)}{R(s)} \cdot \frac{Y(s)}{U(s)}. \tag{2.3}$$

These functions of s are called transfer functions.

Each transfer function in a cascade can express any general system of ordinary
linear differential equations for vectors of state variables, x, and inputs, u, with
dynamics given by

$$x^{(n)} + a_1 x^{(n-1)} + \cdots + a_{n-1} x^{(1)} + a_n x = b_0 u^{(m)} + b_1 u^{(m-1)} + \cdots + b_{m-1} u^{(1)} + b_m u, \tag{2.4}$$

in which parenthetical superscripts denote the order of differentiation. By analogy


with Eq. 2.2, the associated general expression for transfer functions is

$$P(s) = \frac{b_0 s^m + b_1 s^{m-1} + \cdots + b_{m-1} s + b_m}{s^n + a_1 s^{n-1} + \cdots + a_{n-1} s + a_n}. \tag{2.5}$$
The actual biological or physical process does not have to include higher-order
derivatives. Instead, the dynamics of Eq. 2.4 and its associated transfer function
can always be expressed by a system of first-order processes of the form
$$\dot{x}_i = \sum_j a_{ij} x_j + \sum_j b_{ij} u_j, \tag{2.6}$$

which allows for multiple inputs, $u_j$. This system describes the first-order rate of
change in the state variables, $\dot{x}_i$, in terms of the current states and inputs. This
state space description for the dynamics is usually written in vector notation as

$$\dot{x} = A x + B u$$
$$y = C x + D u,$$

which potentially has multiple inputs and outputs, u and y.


For example, the single input–output dynamics in Eq. 2.1 translate into the state
space model

$$\dot{x}_1 = -a_2 x_2 + b u$$
$$\dot{x}_2 = x_1 - a_1 x_2 + u$$
$$y = x_2,$$

in which the rates of change in the states depend only on the current states and the
current input.
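As a check on this correspondence, the state-space matrices can be entered directly and converted back to the transfer function of Eq. 2.2. A minimal Mathematica sketch under the same illustrative parameter values as above (my own example, not the book's code):

    a1 = 3; a2 = 2; b = 4;
    (* A, B, C, D for: x1' = -a2 x2 + b u;  x2' = x1 - a1 x2 + u;  y = x2 *)
    ss = StateSpaceModel[{
       {{0, -a2}, {1, -a1}},  (* A *)
       {{b}, {1}},            (* B *)
       {{0, 1}},              (* C *)
       {{0}}                  (* D *)
      }];

    (* Conversion recovers (s + b)/(s^2 + a1 s + a2) *)
    TransferFunctionModel[ss]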

2.2 Nonlinearity and Other Problems

Classical control theory focuses on transfer functions. Those functions apply only
to linear, time-invariant dynamics. By contrast, state-space models can be extended
to any type of nonlinear, time-varying process.
Real systems are typically nonlinear. Nonetheless, four reasons justify the study
of linear theory.
First, linear analysis clarifies fundamental principles of dynamics and control.
For example, feedback often leads to complex, nonintuitive pathways of causation.
Linear analysis has clarified the costs and benefits of feedback in terms of
tradeoffs between performance, stability, and robustness. Those principles carry over to
nonlinear systems, although the quantitative details may differ.
Second, many insights into nonlinear aspects of control come from linear
theory (Isidori 1995; Khalil 2002; Astolfi et al. 2008). In addition to feedback, other
principles include how to filter out disturbances at particular frequencies, how time
delays alter dynamics and the potential for control, how to track external setpoints,
and how to evaluate the costs and benefits of adding sensors to monitor state and
adjust dynamics.
Third, linear theory includes methods to analyze departures from model
assumptions. Those linear methods of robustness often apply to nonlinear departures from
assumed linearity. One can often analyze the bounds on a system’s performance,
stability, and robustness to specific types of nonlinear dynamics.
Fourth, analysis of particular nonlinear systems often comes down to studying
an approximately linearized version of the system. If the system state remains near
an equilibrium point, then the system will be approximately linear near that point.
If the system varies more widely, one can sometimes consider a series of changing
linear models that characterize the system in each region. Alternatively, a rescaling
of a nonlinear system may transform the dynamics into a nearly linear system.
Given a particular nonlinear system, one can always simulate the dynamics
explicitly. The methods one uses to understand and to control a simulated system
arise mostly from the core linear theory and from the ways that particular
nonlinearities depart from that core theory.
2.3 Exponential Decay and Oscillations

Two simple examples illustrate the match between standard models of dynamics
and the transfer function expressions. First, the simplest first-order differential
equation in x(t) forced by the input u(t), with initial condition x(0) = 0, is given by

$$\dot{x} + a x = u, \tag{2.7}$$

which has the solution

$$x(t) = \int_0^t e^{-a\tau}\, u(t - \tau)\, \mathrm{d}\tau. \tag{2.8}$$
This process describes how x accumulates over time, as inputs arrive at each time
point with intensity u, and x decays at rate a.
If the input into this system is the impulse or Dirac delta function, $\int u(t)\, \mathrm{d}t = 1$ at
t = 0 and u(t) = 0 for all other times, then

$$x(t) = e^{-a t}.$$

If the input is the unit step function, u(t) = 1 for t ≥ 0 and u(t) = 0 for t < 0, then

$$x(t) = \frac{1}{a}\left(1 - e^{-a t}\right).$$

Many processes follow the basic exponential decay in Eq. 2.8. For example, a
quantity u of a molecule may arrive in a compartment at each point in time and
then decay at rate a within the compartment. At any time, the total amount of the
molecule in the compartment is the sum of the amounts that arrived at each time in
the past, u(t − τ ), weighted by the fraction that remains after decay, e−aτ .
The process in Eq. 2.7 corresponds exactly to the transfer function

$$P(s) = \frac{1}{s + a}, \tag{2.9}$$

in which the output is equivalent to the internal state, y ≡ x.
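Both solutions above can be reproduced from the transfer function. A minimal Mathematica sketch, assuming a = 1 for concreteness (my own example, not the book's code):

    a = 1;
    P = TransferFunctionModel[1/(s + a), s];

    (* Impulse response: returns e^(-t), matching x(t) = e^(-a t) *)
    OutputResponse[P, DiracDelta[t], t]

    (* Step response: returns 1 - e^(-t), matching x(t) = (1 - e^(-a t))/a *)
    OutputResponse[P, UnitStep[t], t]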
In the second example, an intrinsic process may oscillate at a particular frequency,
ω0, described by

$$\ddot{x} + \omega_0^2 x = u.$$

This system produces output x = sin(ω0t) for u = 0 and an initial condition along
the sine curve. The corresponding transfer function is

$$P(s) = \frac{\omega_0}{s^2 + \omega_0^2}.$$

We can combine processes by simply multiplying the transfer functions. For


example, suppose we have an intrinsic exponential decay process, P(s), that is
driven by oscillating inputs, U(s). That combination produces an output

$$Y(s) = U(s)\, P(s) = \frac{\omega_0}{(s + a)(s^2 + \omega_0^2)}, \tag{2.10}$$

which describes a third-order differential equation, because the polynomial of s in


the denominator has a highest power of three.
We could have easily obtained that third-order process by combining the two
systems of differential equations given above. However, when systems include
many processes in cascades, including feedback loops, it becomes difficult to
combine the differential equations into very high-order systems. Multiplying the
transfer functions through the system cascade remains easy. That advantage was
nicely summarized by Bode (1964), one of the founders of classical control theory:
The typical regulator system can frequently be described, in essentials, by differential equa
tions of no more than perhaps the second, third or fourth order. … In contrast, the order of
the set of differential equations describing the typical negative feedback amplifier used in
telephony is likely to be very much greater. As a matter of idle curiosity, I once counted to
find out what the order of the set of equations in an amplifier I had just designed would
have been, if I had worked with the differential equations directly. It turned out to be 55.
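Bode's point is easy to demonstrate: cascading systems reduces to multiplying transfer functions. A minimal Mathematica sketch of the cascade in Eq. 2.10, assuming the illustrative values a = 1 and ω0 = 2 (my own example, not the book's code):

    a = 1; w0 = 2;
    decay = TransferFunctionModel[1/(s + a), s];        (* exponential decay, Eq. 2.9 *)
    osc = TransferFunctionModel[w0/(s^2 + w0^2), s];    (* oscillating input process *)

    (* Series connection multiplies the transfer functions, giving
       Y(s)/U(s) = w0/((s + a)(s^2 + w0^2)), a third-order system *)
    SystemsModelSeriesConnect[osc, decay]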

2.4 Frequency, Gain, and Phase

How do systems perform when parameters vary or when there are external
environmental perturbations? We can analyze robustness by using the differential
equations to calculate the dynamics for many combinations of parameters and
perturbations. However, such calculations are tedious and difficult to evaluate for
more than a couple of parameters. Using transfer functions, we can study a wide
range of conditions by evaluating a function’s output response to various inputs.
This chapter uses the Bode plot method. That method provides an easy and
rapid way in which to analyze a system over various inputs. We can apply this
method to individual transfer functions or to cascades of transfer functions that
comprise entire systems.
This section illustrates the method with an example. The following section
describes the general concepts and benefits.
Consider the transfer function

$$G(s) = \frac{a}{s + a}, \tag{2.11}$$
which matches the function for exponential decay in Eq. 2.9. Here, I multiplied the
function by a so that the value would be one when s = 0.

Fig. 2.2 Dynamics, gain, and phase of the low-pass filter in Eq. 2.11 in response to sine wave inputs at varying frequencies, ω. Details provided in the text. a–c Dynamics given by a multiplied by the transfer function on the right-hand side of Eq. 2.10. d Response of Eq. 2.11 to unit step input. e The scaling of the Bode gain plot is $20 \log_{10}(\text{gain})$. That scaling arises from the relation between the magnitude, $M = |G(j\omega)|$, and power, $P = M^2$, of a signal at a particular frequency, ω, or equivalently $M = \sqrt{P}$. If we consider gain as the magnitude of the output signal, then the scale for the gain is given as $20 \log_{10}(\sqrt{P}) = 10 \log_{10}(P)$, the standard decibel scaling for the relative power of a signal. f Bode phase plot

We can learn about a system by studying how it responds to different kinds of


fluctuating environmental inputs. In particular, how does a system respond to
different frequencies of sine wave inputs?
Figure 2.2 shows the response of the transfer function in Eq. 2.11 to sine wave
inputs of frequency, ω. The left column of panels illustrates the fluctuating output
in response to the green sine wave input. The blue (slow) and gold (fast) responses

correspond to parameter values in Eq. 2.11 of a = 1 and a = 10. All calculations


and plots in this book are available in the accompanying Mathematica code
(Wolfram Research 2017) at the site listed in the Preface.
In the top-left panel, at input frequency ω = 1, the fast (gold) response output

closely tracks the input. The slow (blue) response reduces the input by $1/\sqrt{2} \approx 0.7$.
This output–input ratio is called the transfer function’s gain. The slow response
output also lags the input by approximately 0.11 of one complete sine wave cycle
of 2π = 6.28 radians, thus the shift to the right of 0.11 × 6.28 ≈ 0.7 radians along
the x-axis.
We may also consider the lagging shift in angular units, in which 2π radians is
equivalent to 360◦. The lag in angular units is called the phase. In this case, the
phase is written as −0.11 × 360◦ ≈ −40◦, in which the negative sign refers to a
lagging response.
A transfer function always transforms a sine wave input into a sine wave output
modulated by the gain and phase. Thus, the values of gain and phase completely
describe the transfer function response.
Figure 2.2b shows the same process but driven at a higher input frequency of ω
= 10. The fast response is equivalent to the slow response of the upper panel. The
slow response has been reduced to a gain of approximately 0.1, with a phase of
approximately −80◦. At the higher frequency of ω = 100 in the bottom panel, the
fast response again matches the slow response of the panel above, and the slow
response’s gain is reduced to approximately 0.01.
Both the slow and fast transfer functions pass low-frequency inputs into nearly
unchanged outputs. At higher frequencies, they filter the inputs to produce greatly
reduced, phase-shifted outputs. The transfer function form of Eq. 2.11 is therefore
called a low-pass filter, passing low frequencies and blocking high frequencies.
The two filters in this example differ in the frequencies at which they switch from
passing low-frequency inputs to blocking high-frequency inputs.
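The switch between passing and blocking can be read off numerically from the gain and phase of Eq. 2.11. A minimal Mathematica sketch for the slow filter, a = 1 (the a = 10 case follows by substitution; my own example, not the book's code):

    G[a_][s_] := a/(s + a);

    (* Gain and phase (in degrees) of the slow filter, a = 1, at w = 1, 10, 100 *)
    Table[{w, Abs[G[1][I w]], Arg[G[1][I w]]/Degree}, {w, {1, 10, 100}}] // N

The gains come out near 0.71, 0.10, and 0.01, and the phases near −45°, −84°, and −89°, consistent with the approximate graphical readings quoted in the text.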

2.5 Bode Plots of Gain and Phase

A Bode plot shows a transfer function’s gain and phase at various input
frequencies. The Bode gain plot in Fig. 2.2e presents the gain on a log scale, so
that a value of zero corresponds to a gain of one, log(1) = 0.
For the system with the slower response, a = 1 in blue, the gain is nearly one for
frequencies less than a and then drops off quickly for frequencies greater than a.
Similarly, the system with faster response, a = 10, transitions from a system that
passes low frequencies to one that blocks high frequencies at a point near its a
value. Figure 2.2f shows the phase changes for these two low-pass filters. The
slower blue system begins to lag at lower input frequencies.

Low-pass filters are very important because low-frequency inputs are often
external signals that the system benefits by tracking, whereas high-frequency
inputs are often noisy disturbances that the system benefits by ignoring.
In engineering, a designer can attach a low-pass filter with a particular
transition parameter a to obtain the benefits of filtering an input signal. In biology,
natural selection must often favor appending biochemical processes or physical
responses that act as low-pass filters. In this example, the low-pass filter is simply
a basic exponential decay process.
Figure 2.2d shows a key tradeoff between the fast and slow responses. In that
panel, the system input is increased in a step from zero to one at time zero. The
fast system responds quickly by increasing its state to a matching value of one,
whereas the slow system takes much longer to increase to a matching value. Thus,
the fast system may benefit from its quick response to environmental changes, but
it may lose by its greater sensitivity to high-frequency noise. That tradeoff between
responsiveness and noise rejection forms a common theme in the overall
performance of systems.
To make the Bode plot, we must calculate the gain and phase of a transfer
function’s response to a sinusoidal input of frequency ω. Most control theory
textbooks show the details (e.g., Ogata 2009). Here, I briefly describe the
calculations, which will be helpful later.
Transfer functions express linear dynamical systems in terms of the complex
Laplace variable s = σ + jω. I use j for the imaginary number to match the control
theory literature.
The gain of a transfer function describes how much the function multiplies its
input to produce its output. The gain of a transfer function G(s) varies with the
input value, s. For complex-valued numbers, we use magnitudes to analyze gain, in
which the magnitude of a complex value is $|s| = \sqrt{\sigma^2 + \omega^2}$.
It turns out that the gain of a transfer function in response to a sinusoidal input
at frequency ω is simply |G(jω)|, the magnitude of the transfer function at s = jω.
The phase angle is the arctangent of the ratio of the imaginary to the real parts of
G(jω).
For the exponential decay dynamics that form the low-pass filter of Eq. 2.11,
the gain magnitude, M, and phase angle, φ, are

$$M = |G(j\omega)| = \frac{a}{\sqrt{\omega^2 + a^2}}$$
$$\phi = \angle G(j\omega) = -\tan^{-1}\frac{\omega}{a}.$$
Any stable transfer function’s long-term steady-state response to a sine wave input
at frequency ω is a sine wave output at the same frequency, multiplied by the gain
magnitude, M, and shifted by the phase angle, φ, as
$$\sin(\omega t) \xrightarrow{\;G\;} M \sin(\omega t + \phi), \tag{2.12}$$

in which the angle is given in radians. For example, if the phase lags by one-half of
a cycle, φ = −π ≡ −180◦, then M sin(ωt + φ) = −M sin(ωt).
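The gain and phase expressions above can also be derived symbolically rather than by hand. A minimal Mathematica sketch (my own example, not the book's code):

    G[s_] := a/(s + a);

    (* Gain magnitude M = |G(jw)| for real a, w > 0 *)
    FullSimplify[Abs[G[I w]], Assumptions -> a > 0 && w > 0]
    (* -> a/Sqrt[a^2 + w^2] *)

    (* Phase angle phi = arg G(jw) *)
    FullSimplify[Arg[G[I w]], Assumptions -> a > 0 && w > 0]
    (* -> -ArcTan[w/a] *)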
Chapter 3
Basic Control Architecture
3.1 Open-Loop Control

Suppose a system benefits by tracking relatively slow oscillatory environmental


fluctuations at frequency ωe and ignoring much faster noisy environmental
fluctuations at frequency ωn. Assume that the system has an intrinsic daily
oscillator at frequency ω0 = 1, with time measured in days. How can a system
build a control circuit that uses its intrinsic daily oscillator to track slower
environmental signals and ignore faster noisy signals?
We can begin by considering circuit designs that follow the cascade in Fig.
2.1b. That cascade is a single direct path from input to output, matching the
cascade in Eq. 2.3. That path is an open loop because there is no closed-loop
feedback. Using the components in Fig. 2.1b, the internal oscillator is given by

$$P(s) = \frac{\omega_0}{s^2 + \omega_0^2},$$
and the external reference signal is given by

$$R(s) = \frac{\omega_e}{s^2 + \omega_e^2} + \frac{\omega_n}{s^2 + \omega_n^2},$$
the sum of one low- and one high-frequency sine wave. From Fig. 2.1b, the design
goal seeks to create a preprocess controlling filter, C(s), that combines with the
intrinsic internal oscillator, P(s), to transform the reference input, R(s), into an
output, $Y(s) \approx \omega_e/(s^2 + \omega_e^2)$, that fluctuates at ωe and ignores ωn.
In this case, we know exactly the intrinsic dynamics, P(s). Thus, we can use the
open-loop path in Fig. 2.1b to find a controller, C(s), such that the transfer function
C(s)P(s) gives approximately the input–output relation that we seek between R(s)
and Y (s). For example, by using the controller

Fig. 3.1 Bode plot of an intrinsic oscillator, P(s), modulated by a controller, C(s), in an open loop, L(s) = C(s)P(s). The gold curves follow Eq. 3.3, in which the actual frequency of the internal oscillator is ω̃0 = 1.2 rather than the value ω0 = 1 that set the design of the controller. The underlying blue curves show the outcome when the internal oscillator frequency matches the design frequency, ω̃0 = ω0 = 1

$$C(s) = \left(\frac{\omega_0}{s + \omega_0}\right)^3 \frac{s^2 + \omega_0^2}{\omega_0}, \tag{3.1}$$

the open-loop system becomes

$$L(s) = C(s)\, P(s) = \left(\frac{\omega_0}{s + \omega_0}\right)^3, \tag{3.2}$$

because the second term in C(s) cancels P(s). The system L(s) is the low-pass filter
in Eq. 2.11 raised to the third power. With ω0 = 1, this system has a Bode plot similar
to the blue curve in Fig. 2.2e, f, but because of the exponent in L(s), the gain falls
more quickly at high frequencies and the phase lag is greater.
As with the low-pass filter illustrated in Fig. 2.2, this open-loop system, L(s),
tracks environmental signals at frequency $\omega_e \ll \omega_0$ and suppresses noisy signals at
frequency $\omega_n \gg \omega_0$. However, even if we could create this controller over the
required range of frequencies, it might turn out that this system is fragile to
variations in the parameters.
We could study robustness by using the differential equations to calculate the
dynamics for many combinations of parameters. However, such calculations are
tedious, and the analysis can be difficult to evaluate for more than a couple of
parameters. Using Bode plots provides a much easier way to analyze system
response under various conditions.
Suppose, for example, that in the absence of inputs, the internal oscillator, P(s),
actually fluctuates at the frequency $\tilde\omega_0 \ne \omega_0$. Then, the open-loop system
becomes

$$L(s) = \frac{\tilde\omega_0^2}{\omega_0^2} \left(\frac{\omega_0}{s + \omega_0}\right)^3 \frac{s^2 + \omega_0^2}{s^2 + \tilde\omega_0^2}, \tag{3.3}$$

in which the first term adjusts the gain to be one at s = 0.


The gold curves in Fig. 3.1 show the Bode plot for this open loop, using ω0 = 1
and ω˜ 0 = 1.2. Note the resonant peak in the upper magnitude plot. That peak
occurs when the input frequency matches the natural frequency of the intrinsic
oscillator, ω˜ 0. Near that resonant frequency, the system “blows up,” because the
denominator in the last term, s2 + ˜ω20, goes to zero as s = jω → jω˜ 0 and s2
→−˜ω20.
In summary, open-loop control works well when one has accurate information.
Successful open-loop control is simple and has relatively low cost. However, small
variations in the intrinsic process or the modulating controller can cause poor per
formance or instabilities, leading to system failure.

3.2 Feedback Control

Feedback and feedforward have different properties. Feedforward action is obtained by


matching two transfer functions, requiring precise knowledge of the process dynamics,
while feedback attempts to make the error small by dividing it by a large quantity.
—Åström and Murray (2008, p. 320)

Feedback often solves problems of uncertainty or noise. Human-designed systems


and natural biological systems frequently use feedback control. Figure 2.1c shows
a common form of negative feedback. The output, y, is returned to the input. The
output is then subtracted from the environmental reference signal, r. The new
system input becomes the error between the reference signal and the output, e = r
− y.
In closed-loop feedback, the system tracks its target reference signal by
reducing the error. Any perturbations or uncertainties can often be corrected by system
dynamics that tend to move the error toward zero. By contrast, a feedforward open
loop has no opportunity for correction. Feedforward perturbations or uncertainties
lead to uncorrected errors.
In the simple negative feedback of Fig. 2.1c, the key relation between the open
loop system, L(s) = C(s)P(s), and the full closed-loop system, G(s), is

$$G(s) = \frac{L(s)}{1 + L(s)}. \tag{3.4}$$
This relation can be derived from Fig. 2.1c by noting that, from the error input,
E(s), to the output, Y (s), we have Y = L E and that E = R − Y . Substituting the
second equation into the first yields Y = L (R − Y ). Solving for the output Y
relative to the input R, which is G = Y/R, yields Eq. 3.4.

The error, E, in response to the environmental reference input, R, can be


obtained by a similar approach, yielding

$$E(s) = \frac{1}{1 + L(s)}\, R(s). \tag{3.5}$$
If the open loop, L(s), has a large gain, that gain will divide the error by a large
number and cause the system to track closely to the reference signal. A large gain
for L = C P can be achieved by multiplying the controller, C, by a large constant,
k. The large gain causes the system to respond rapidly to deviations from the
reference signal.
Feedback, with its powerful error correction, typically provides good
performance even when the actual system process, P, or controller, C, differs from
the assumed dynamics. Feedback also tends to correct for various types of
disturbances and noise, and can also stabilize an unstable open-loop system.
Feedback has two potential drawbacks. First, implementing feedback may
require significant costs for the sensors to detect the output and for the processes
that effectively subtract the output value from the reference signal. In electronics,
the implementation may be relatively simple. In biology, feedback may require
various additional molecules and biochemical reactions to implement sensors and
the flow of information through the system. Simple open-loop feedforward systems
may be more efficient for some problems.
Second, feedback can create instabilities. For example, when L(s) → −1, the
denominator of the closed-loop system in Eq. 3.4 approaches zero, and the system
blows up. For a sinusoidal input, if there is a frequency, ω, at which the
magnitude, |L(jω)|, is one and the phase is shifted by one-half of a cycle, φ = ±π =
±180◦, then L(jω) = −1.
The problem of phase arises from the time lag (or lead) between input and
feedback. When the sinusoidal input is at a peak value of one, the output is shifted to a
sinusoidal trough value of minus one. The difference between input and output
combines in an additive, expansionary way rather than providing an error signal
that can shrink toward an accurate tracking process. In general, time delays in
feedback can create instabilities.
Instabilities do not require an exact half-cycle phase shift. Suppose, for example,
that the open loop is

$$L(s) = \frac{k}{(s + 1)^3}.$$
This system is stable, because its eigenvalues are the roots of the polynomial in the
denominator, in this case s = −1, corresponding to a strongly stable system. The
closed loop has the transfer function

$$G(s) = \frac{L(s)}{1 + L(s)} = \frac{k}{k + (s + 1)^3},$$

which has an eigenvalue with real part greater than zero for k > 8, causing the
system to be unstable. An unstable system tends to explode in magnitude, leading
to system failure or death.
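The threshold k > 8 can be verified from the roots of the closed-loop denominator. A minimal Mathematica sketch (my own example, not the book's code):

    poles[k_] := s /. Solve[k + (s + 1)^3 == 0, s];

    Max[Re[poles[7.]]]   (* about -0.04: all eigenvalues stable *)
    Max[Re[poles[9.]]]   (* about +0.04: an unstable complex pair *)

    (* Boundary: the dominant root -1 + k^(1/3)/2 crosses zero at k = 8 *)
    Solve[-1 + k^(1/3)/2 == 0, k]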

3.3 Proportional, Integral, and Derivative Control


Open-loop systems cannot use information about the error difference between the
target reference input and the actual output. Controllers must be designed based on
information about the intrinsic process and the likely inputs.
By contrast, feedback provides information about errors, and controller design
focuses primarily on using the error input. Given the error, the controller outputs a
new command reference input to the intrinsic system process. Precise knowledge
about the intrinsic system dynamics is much less important with feedback because
the feedback loop can self-correct.
This section discusses controller design for feedback systems. A controller is a
process that modulates system dynamics. For the simplest feedback shown in Fig.
2.1c, we start with an intrinsic process, P(s), and end up with feedback system
dynamics

$$G(s) = \frac{C(s)\, P(s)}{1 + C(s)\, P(s)} = \frac{L(s)}{1 + L(s)},$$
in which C(s) is the controller. The problem is how to choose a process, C(s), that
balances the tradeoffs between various measures of success, such as tracking the
reference input and robustness to perturbations and uncertainties.
Figure 3.2a includes two kinds of perturbations. The input d describes the load
disturbance, representing uncertainties about the internal process, P(s), and
disturbances to that internal process. Traditionally, one thinks of d as a relatively
low-frequency perturbation that alters the intrinsic process. The input n describes
perturbations that add noise to the sensor that measures the process output, η, to
yield the final output, y. That measured output, y, is used for feedback into the
system.
To analyze alternative controller designs, it is useful to consider how different
controllers alter the open-loop dynamics, L(s) = C(s)P(s). How does a particular
change in the controller, C(s), modulate the intrinsic dynamics, P(s)?
First, we can simply increase the gain by letting C(s) = k_p > 1, a method called proportional control. The system becomes G = k_p P/(1 + k_p P). For large k_p and positive P(s), the system transfer function is G(s) → 1, which means that the system output tracks very closely to the system input. Proportional control can greatly improve tracking at all frequencies. However, best performance often requires tracking low-frequency environmental inputs and ignoring noisy high-frequency inputs from the reference signal. In addition, large k_p values can cause instabilities, and it may be that P(s) < 0 for some inputs.
Fig. 3.2 Closed-loop feedback. a An extended feedback loop with inputs for disturbance, d, and
noise, n. The function F(s) may be used to filter the reference input, providing a second degree of
freedom in addition to the main controller, C(s). The system can be divided into intrinsic
processes that cannot be adjusted directly and designed processes of control that can be adjusted.
Note the inputs for each block: r and y for the controller, and u, d, and n for the process. b In this
panel, the blocks P and C represent the multicomponent process and control blocks from the
upper panel. The reference signal is assumed to be zero, allowing one to focus on the roles of
disturbance and noise in relation to system stability. c An abstraction of the feedback process, in
which the vector y includes all the signals from the process to the controller, u includes all the
control input signals to the process, w includes all the extrinsic inputs, and z includes any
additional signal outputs from the process. Redrawn from Åström and Murray (2008), ©
Princeton University Press

Second, we can add integral control by including the term $k_i/s$ in the controller. We can understand why this term is an integrator by considering a few steps of analysis that extend earlier equations. Multiplying Eq. 2.5 by 1/s increases the order of the denominator's polynomial in s. That increase in the exponents of s corresponds to an increase in the order of differentiation for each term on the left side of Eq. 2.4, which is equivalent to integrating each term on the right side of that equation. For example, if we start with $\dot{x} = u$ and then increase the order of differentiation on the left side, $\ddot{x} = u$, this new expression corresponds to the original expression with integration of the input signal, $\dot{x} = \int u\,dt$.


Integrating the input smooths out high-frequency fluctuations, acting as a filter
that passes low-frequency inputs and blocks high-frequency inputs. Integration
causes a slower, smoother, and often more accurate adjustment to the input signal.
A term such as a/(s + a) is an integrator for large s and a pass-through transfer
function with value approaching one for small s.

Perfect tracking of a constant reference signal requires a pure integrator term, 1/s. A constant signal has zero frequency, s = 0. To track a signal perfectly, the
system transfer function’s gain must be one so that the output equals the input. For
the simple closed loop in Eq. 3.4, at zero frequency, G(0) must be one. The
tracking error is 1 − G = 1/(1 + L). The error goes to zero as the gain of the open
loop goes to infinity, L(0) → ∞. A transfer function requires a term 1/s to approach
infinity as s goes to zero. In general, high open loop gain leads to low tracking
error.
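The same point in a two-line Mathematica check (illustrative numbers only):

```mathematica
(* Tracking error 1/(1 + L(0)) for finite gain versus a pure integrator *)
1/(1 + 10.)                     (* L(0) = 10: residual error of about 9% *)
Limit[1/(1 + 10/s), s -> 0]     (* open loop with a 1/s term: error -> 0 *)
```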
Third, we can add derivative control by including the term $k_d s$. We can understand why this term differentiates the input by following the same steps as in the analysis of integration. Multiplying Eq. 2.5 by s increases the order of the numerator's polynomial in s. That increase in the exponents of s corresponds to an increase in the order of differentiation for each term on the right side of Eq. 2.4. Thus, the original input term, u(t), becomes the derivative with respect to time, $\dot{u}(t)$.
Differentiating the input causes the system to respond to the current rate of
change in the input. Thus, the system responds to a prediction of the future input,
based on a linear extrapolation of the recent trend.
This leading, predictive response enhances sensitivity to short-term, high
frequency fluctuations and tends to block slow, low-frequency input signals. Thus,
differentiation acts as a high-pass filter of the input signal. A term such as s + a multiplies signals by a for low-frequency inputs and multiplies signals by the increasing value of s + a for increasingly high-frequency inputs. Differentiators make systems very responsive, but also enhance sensitivity to noisy high-frequency perturbations and increase the tendency for instability.
A basic proportional, integral, and derivative (PID) controller has the form

$$C(s) = k_p + \frac{k_i}{s} + k_d s = \frac{k_d s^2 + k_p s + k_i}{s}. \tag{3.6}$$
PID controllers are widely used across all engineering applications. They work reasonably well for many cases, they are relatively easy to understand, and their parameters are relatively easy to tune for various tradeoffs in performance.

3.4 Sensitivities and Design Tradeoffs

Figure 3.2a shows a basic feedback loop with three inputs: the reference signal, r,
the load disturbance, d, and the sensor noise, n. How do these different signals
influence the error between the reference signal and the system output? In other
words, how sensitive is the system to these various inputs?
To derive the sensitivities, define the error in Fig. 3.2a as r − η, the difference
between the reference input, r, and the process output, η (Åström and Murray
2008, Sect. 11.1). To obtain the transfer function between each input and output,
we use the rule for negative feedback: The transfer function between the input and
output is the

open loop directly from the input to the output, L, divided by one plus the pathway
around the feedback loop, 1 + L.
If we assume in Fig. 3.2a that there is no feedforward filter, so that F = 1, and
we define the main open loop as L = C P, then the output η in response to the three
inputs is

$$\eta = \frac{L}{1+L}\,r + \frac{P}{1+L}\,d - \frac{L}{1+L}\,n, \tag{3.7}$$
in which each term is the open loop between the input signal and the output, η,
divided by one plus the pathway around the full loop, L. If we define

$$S = \frac{1}{1+L} \qquad T = \frac{L}{1+L} \qquad S + T = 1, \tag{3.8}$$
with S as the sensitivity function and T as the complementary sensitivity function,
then the error is
$$r - \eta = Sr - PSd + Tn. \tag{3.9}$$

This expression highlights the fundamental design tradeoffs in control that arise
because S + T = 1. If we reduce T and the sensitivity to noise, we increase S. An
increase in S raises the error in relation to the reference signal, r, and the error in
relation to the load disturbance, d. If we reduce S, we increase T and the sensitivity
to noise, n. These sensitivity tradeoffs suggest two approaches to design.
First, the sensitivities S(s) and T (s) depend on the input, s. Thus, we may adjust
the tradeoff at different inputs. For example, we may consider inputs, s = jω, at
various frequencies, ω. Sensor noise, n, often arises as a high-frequency disturbance, whereas the reference input, r, and the load disturbance, d, often follow a low-frequency signal. If so, then we can adjust the sensitivity tradeoff to match the common input frequencies of the signals. In particular, at low frequency for which r and d dominate, we may choose low S values whereas, at high frequency for which n dominates, we may choose low T values.
Second, we may add an additional control process that alters the sensitivity tradeoff. For example, we may use the feedforward filter, F, in Fig. 3.2a, to modulate the reference input signal. With that filter, the transfer function from the input, r, to the error output, r − η, becomes 1 − FT. If we know the form of T with sufficient precision, we can choose FT ≈ 1, and thus we can remove the sensitivity of the error to the reference input.
Note that adjusting the tradeoff between S and T only requires an adjustment to
the loop gain, L, which usually does not require precise knowledge about the
system processes. By contrast, choosing F to cancel the reference input requires
precise information about the form of T and the associated system processes. In
other words, feedback is relatively easy and robust because it depends primarily on
adjusting gain magnitude, whereas feedforward requires precise knowledge and is
not robust to misinformation or perturbation.
Chapter 4
PID Design Example
I illustrate the principles of feedback control with an example. We start with an
intrinsic process
$$P(s) = \frac{a}{s+a}\cdot\frac{b}{s+b} = \frac{ab}{(s+a)(s+b)}.$$

This process cascades two exponential decay systems, each with dynamics as in
Eq. 2.8 and associated transfer function as in Eq. 2.9. For example, if the input into
this system is a unit impulse at time zero, then the system output is
$$y(t) = \frac{ab}{b-a}\left(e^{-at} - e^{-bt}\right),$$

expressing the cascade of two exponentially decaying processes. For this example, we use

$$P(s) = \frac{1}{(s+0.1)(s+10)} \tag{4.1}$$
as the process. We also consider an alternative process

$$\tilde{P}(s) = \frac{1}{(s+0.01)(s+100)}. \tag{4.2}$$
We assume during system analysis and design that Eq. 4.1 describes the process,
but in fact Eq. 4.2 is actually the true process. Put another way, the difference
between the two processes may reflect uncertain information about the true process
or unknown disturbances that alter the process. Thus, we may consider how a
system performs

Fig. 4.1 Response of the system output, η = y, to a sudden unit step increase in the reference input, r, in the absence of disturbance and noise inputs, d and n. The x-axis shows the time, and the y-axis shows the system output. a Response of the original process, P(s), in Eq. 4.1 (blue curve) and of the process with altered parameters, $\tilde{P}(s)$ in Eq. 4.2 (gold curve). b System with the PID controller embedded in a negative feedback loop, with no feedforward filter, F(s) = 1, as in Fig. 3.2a. c PID feedback loop with feedforward filter, F, in Eq. 4.4

when it was designed, or evolved, in response to a process, P, and the underlying system becomes $\tilde{P}$.
In this example, the problem concerns the design of a negative feedback loop,
as in Fig. 3.2a, that uses a controller with proportional, integral, and derivative
(PID) action. Many methods derive PID controllers by tuning the various
sensitivity and performance tradeoffs (Åström and Hägglund 2006; Garpinger et
al. 2014).
I obtained the parameters for the PID controller in Eq. 3.6 by using the Ziegler–
Nichols method in Mathematica, yielding

$$C(s) = \frac{6s^2 + 121s + 606}{s}. \tag{4.3}$$
I also used Mathematica to calculate the feedforward filter in Fig. 3.2a,

yielding

$$F(s) = \frac{s^2 + 10.4s + 101}{s^2 + 20.2s + 101}. \tag{4.4}$$
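These transfer functions suffice to reproduce the step responses below; here is a minimal Mathematica sketch (my own reconstruction, not the book's supplemental code):

```mathematica
(* Process (Eq. 4.1), PID controller (Eq. 4.3), and filter (Eq. 4.4) *)
proc = TransferFunctionModel[1/((s + 0.1) (s + 10)), s];
ctrl = TransferFunctionModel[(6 s^2 + 121 s + 606)/s, s];
filt = TransferFunctionModel[(s^2 + 10.4 s + 101)/(s^2 + 20.2 s + 101), s];

(* Closed loop G = CP/(1 + CP); unity negative feedback is the default *)
loop = SystemsModelFeedbackConnect[SystemsModelSeriesConnect[ctrl, proc]];
loopF = SystemsModelSeriesConnect[filt, loop];  (* filtered reference *)

(* Unit step responses, as in Fig. 4.1b, c *)
y1 = OutputResponse[loop, UnitStep[t], {t, 0, 5}];
y2 = OutputResponse[loopF, UnitStep[t], {t, 0, 5}];
Plot[Evaluate[{y1, y2}], {t, 0, 5}, PlotRange -> All]
```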

4.1 Output Response to Step Input

Figure 4.1 illustrates various system responses to a unit step increase from zero to
one in the reference input signal, r. Panel (a) shows the response of the base process,
process, P, by itself. The blue curve is the double exponential decay process of Eq.
4.1. That process responds slowly because of the first exponential process with
time decay a = 0.1, which averages inputs over a time horizon with decay time 1/a
= 10, as in Eq. 2.8.

The gold curve, based on Eq. 4.2, rises even more slowly, because that alternative process, $\tilde{P}$, has an even longer time horizon for averaging inputs of 1/a = 100.
Panel (b) shows the response of the full feedback loop of Fig. 3.2a with the PID
controller in Eq. 4.3 and no feedforward filter, F = 1. Note that the system
responds much more rapidly, with a much shorter time span over the x-axis than in
(a). The rapid response follows from the very high gain of the PID controller,
which strongly amplifies low-frequency inputs.
The PID controller was designed to match the base process P in Eq. 4.1, with response in blue. When the actual base process deviates as in $\tilde{P}$ of Eq. 4.2, the response is still reasonably good, although the system has a greater overshoot upon first response and takes longer to settle down and match the reference input. The reasonably good response in the gold curve shows the robustness of the PID feedback loop to variations in the underlying process.
Panel (c) shows the response of the system with a feedforward filter, F, from Eq. 4.4. Note that the system in blue with the base process, P, improves significantly, with lower overshoot and less oscillation when settling to match the reference input. By contrast, the system in gold with the alternative base process, $\tilde{P}$, changes its response very little with the additional feedforward filter. This difference reflects the fact that feedforward works well only when one has very good knowledge of the underlying process, whereas feedback works broadly and robustly with respect to many kinds of perturbations.

4.2 Error Response to Noise and Disturbance

Figure 4.2 illustrates the system error in response to sensor noise, n, and process
disturbance, d. Panel (a) shows the error in response to a unit step change in n, the
input noise to the sensor. That step input to the sensor creates a biased
measurement, y, of the system output, η. The biased measured value of y is fed
back into the control loop. A biased sensor produces an error response that is
equivalent to the output response for a reference signal. Thus, Fig. 4.2a matches
Fig. 4.1b.
Panel (b) shows the error response to an impulse input at the sensor. An impulse
causes a brief jolt to the system. The system briefly responds by a large deviation
from its setpoint, but then returns quickly to stable zero error, at which the output
matches the reference input. An impulse to the reference signal produces an equivalent deviation in the system output but with opposite sign.
The error response to process disturbance in panels (c) and (d) demonstrates
that the system strongly rejects disturbances or uncertainties to the intrinsic system
process.
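These responses follow from Eq. 3.9; a minimal Mathematica sketch (my reconstruction) for the step-noise case of panel (a):

```mathematica
(* Error response to a unit step in sensor noise n: the error term is T n *)
p[s_] := 1/((s + 0.1) (s + 10));    (* Eq. 4.1 *)
c[s_] := (6 s^2 + 121 s + 606)/s;   (* Eq. 4.3 *)
tComp = TransferFunctionModel[c[s] p[s]/(1 + c[s] p[s]), s];

err = OutputResponse[tComp, UnitStep[t], {t, 0, 5}];
Plot[err, {t, 0, 5}, PlotRange -> All]  (* matches the shape of Fig. 4.1b *)
```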


Fig. 4.2 Error response, r − η, of the PID feedback loop to sensor noise, n, or process disturbance, d, from Eq. 3.9. Blue curve for the process, P, in Eq. 4.1 and gold curve for the altered process, $\tilde{P}$, in Eq. 4.2. a Error response to sensor noise input, n, for a unit step input and b for an impulse input. c Error response to process disturbance input, d, for a unit step input and d for an impulse input. An impulse is $\int u(t)\,dt = 1$ at t = 0 and u(t) = 0 at all other times. The system responses in gold curves reflect the slower dynamics of the altered process. If the altered process had faster intrinsic dynamics, then the altered process would likely be more sensitive to noise and disturbance

Fig. 4.3 System response output, η = y, to sine wave reference signal inputs, r. Each column shows a different frequency, ω. The rows are (Pr) for reference inputs into the original process, P or $\tilde{P}$, without a modifying controller or feedback loop, and (Rf) for reference inputs into the closed-loop feedback system with the PID controller in Eq. 4.3. The green curve shows the sine wave input. The blue curve shows systems with the base process, P, from Eq. 4.1. The gold curve shows systems with the altered process, $\tilde{P}$, from Eq. 4.2. In the lower left panel, all curves overlap. In the lower panel at ω = 1, the green and blue curves overlap. In the two upper right panels, the blue and gold curves overlap near zero
4.3 Output Response to Fluctuating Input
Figure 4.3 illustrates the system output in response to fluctuating input (green). The top row shows the output of the system process, either P (blue) or $\tilde{P}$ (gold), alone in an open loop. The system process is a cascade of two low-pass filters, which pass low-frequency inputs and do not respond to high-frequency inputs.
The upper left panel shows the response to the (green) low-frequency input, ω = 0.1, in which the base system P (blue) passes through the input with a slight reduction in amplitude and lag in phase. The altered system $\tilde{P}$ (gold) responds only weakly to the low frequency of ω = 0.1, because the altered system has slower response characteristics than the base system. At a reduced input frequency of ω = 0.01 (not shown), the gold curve would match the blue curve at ω = 0.1. As frequency increases along the top row, the processes P and $\tilde{P}$ block the higher-frequency inputs.
The lower row shows the response of the full PID feedback loop system. At a
low frequency of ω ≤ 0.1, the output tracks the input nearly perfectly. That close
tracking arises because of the very high gain amplification of the PID controller at
low frequency, which reduces the system tracking error to zero, as in Eq. 3.5.
At a higher frequency of ω = 10, the system with the base process P responds with a resonant increase in amplitude and a lag in phase. The slower altered process, $\tilde{P}$, responds only weakly to input at this frequency. As frequency continues to increase, both systems respond weakly or not at all.
The system response to sensor noise would be of equal magnitude but altered
sign and phase, as shown in Eq. 3.7.
Low-frequency tracking and high-frequency rejection typically provide the greatest performance benefit. The environmental references that it pays to track often change relatively slowly, whereas the noisy inputs in both the reference signal and in the sensors often fluctuate relatively rapidly.

4.4 Insights from Bode Gain and Phase Plots

Figure 4.4 provides more general insight into the ways in which PID control, feedback, and input filtering alter system response.
Panels (a) and (b) show the Bode gain and phase responses for the intrinsic system process, P (blue), and the altered process, $\tilde{P}$ (gold). Low-frequency inputs pass through. High-frequency inputs cause little response. The phase plot shows that these processes respond slowly, lagging the input. The lag increases with frequency.
Panels (c) and (d) show the responses for the open loop with the PID controller, C, combined with the process, P or $\tilde{P}$, as in Fig. 2.1b. Note the very high gain in panel (c) at lower frequencies and the low gain at high frequencies.

Fig. 4.4 Bode gain (top) and phase (bottom) plots for system output, η = y, in response to reference input, r, in the absence of load disturbance and sensor noise. Blue curves for systems with the base process, P, in Eq. 4.1. Gold curves for systems with the altered process, $\tilde{P}$, in Eq. 4.2. a, b The original unmodified process, P or $\tilde{P}$, with no controller or feedback. c, d The open loop with no feedback, CP or C$\tilde{P}$, with the PID controller, C, in Eq. 4.3. e, f The closed loop with no feedforward filter, F = 1. g, h The closed loop with the feedforward filter, F, in Eq. 4.4

PID controllers are typically designed to be used in closed-loop feedback systems, as in Fig. 2.1c. Panels (e) and (f) illustrate the closed-loop response. The high open-loop gain of the PID controller at low frequency causes the feedback system to track the reference input closely. That close tracking matches the log(1) = 0 gain at low frequency in panel (e). Note also the low-frequency phase matching, or zero phase lag, shown in panel (f), further demonstrating the close tracking of reference inputs. At high frequency, the low gain of the open-loop PID controller shown in panel (c) results in the closed-loop rejection of high-frequency inputs, shown as the low gain at high frequency in panel (e).
Note the resonant peak of the closed-loop system in panel (e) near ω = 10 for the blue curve and at a lower frequency for the altered process in the gold curve. Note also that the altered process, $\tilde{P}$, in gold, retains the excellent low-frequency tracking and high-frequency input rejection, even though the controller was designed for the base process, P, shown in blue. The PID feedback loop is robust to variation in the underlying process away from the assumed form of P.
Panels (g) and (h) show the PID closed-loop system with a feedforward filter, F,
as in Fig. 3.2a. The feedforward filter smooths out the resonant peak for the blue
curve, so that system does not amplify inputs at resonant frequencies. Amplified
resonant inputs may lead to instabilities or poor system performance. Note that the
feedforward filter does not have much effect on the altered process in gold. Feedforward modifiers of a process typically work well only for a specific process. They often do not work robustly over a range of process variants.
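The Bode panels of Fig. 4.4 can be sketched directly (my own reconstruction, not the book's supplemental code):

```mathematica
(* Process, open loop CP, and closed loop, as in Fig. 4.4a-f *)
proc = TransferFunctionModel[1/((s + 0.1) (s + 10)), s];
ctrl = TransferFunctionModel[(6 s^2 + 121 s + 606)/s, s];
open = SystemsModelSeriesConnect[ctrl, proc];
loop = SystemsModelFeedbackConnect[open];

BodePlot[{proc, open, loop}, {0.01, 100}]
```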
4.5 Sensitivities in Bode Gain Plots

Figure 4.5 illustrates the sensitivities of the system error output, r − η, to inputs from the reference, r, sensor noise, n, and load disturbance, d, signals, calculated from Eq. 3.9. Figure 3.2a shows the inputs and loop structure.
The blue curve of panel (a) shows the error sensitivity to the reference input.
That sensitivity is approximately the mirror image of the system output response to
the reference input, as shown in Fig. 4.4e (note the different scale). The duality of
the error response and the system response arises from the fact that the error is r −
η, and the system response is η.
Perfect tracking means that the output matches the input, r = η. Thus, a small
error corresponds to a low gain of the error in response to input, as occurs at low
frequency for the blue curve of Fig. 4.5a. In the same way, a small error
corresponds to a gain of one for the relation between the reference input,r, and the
system output, η, as occurs at low frequency for the blue curve of Fig. 4.4e.
The noise sensitivity in the green curve of Fig. 4.5a shows that the system error
is sensitive to low-frequency bias in the sensor measurements, y, of the system
output, η. When the sensor produces a low-frequency bias, that bias feeds back
into the system and creates a bias in the error estimate, thus causing an error
mismatch between the reference input and the system output. In other words, the
system is sensitive to errors when the sensor suffers low-frequency perturbations.
The PID system rejects high-frequency sensor noise, leading to the reduced gain at
high frequency illustrated by the green curve.
The disturbance load sensitivity in the red curve of Fig. 4.5a shows the low sensitivity of this PID feedback system to process variations.
This PID feedback system is very robust to an altered underlying process, as
shown in earlier figures. Here, Fig. 4.5b illustrates that robustness by showing the
relatively minor changes in system sensitivities when the underlying process
changes


Fig. 4.5 Bode gain plots for the error output, r − η, in response to reference input, r (blue), sensor noise, n (green), and load disturbance, d (red), from Eq. 3.9. The systems are the full PID-controlled feedback loops as in Fig. 3.2a, with no feedforward filter. The PID controller is given in Eq. 4.3. a System with the base process, P, from Eq. 4.1. b System with the altered process, $\tilde{P}$, from Eq. 4.2

from P to $\tilde{P}$. However, other types of change to the underlying process may cause greater changes in system performance. Robustness depends on both the amount of change and the kinds of change to a system.
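A sketch of the three error sensitivities in Fig. 4.5a (my own reconstruction):

```mathematica
(* Error gains from Eq. 3.9: S for r, P S for d, T for n *)
p[s_] := 1/((s + 0.1) (s + 10));    (* Eq. 4.1 *)
c[s_] := (6 s^2 + 121 s + 606)/s;   (* Eq. 4.3 *)
sens = TransferFunctionModel[1/(1 + c[s] p[s]), s];
dist = TransferFunctionModel[p[s]/(1 + c[s] p[s]), s];
comp = TransferFunctionModel[c[s] p[s]/(1 + c[s] p[s]), s];

BodePlot[{sens, dist, comp}, {0.01, 1000}, PlotLayout -> "Magnitude"]
```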
Chapter 5
Performance and Robustness Measures
A theory of design tradeoffs requires broadly applicable measures of cost, performance, stability, and robustness. For example, the PID controller in the previous example performs reasonably well, but we ignored costs. That PID controller achieved good tracking performance by using high gain amplification of low-frequency input signals. High gain in a negative feedback loop quickly drives the error to zero.
High gain has two potential problems. First, high signal amplification may require excessive energy in physical or biological systems. We must consider those costs for a high gain controller.
Second, high gain can cause system instability, with potential for system failure. We must consider the tradeoff between the benefits of high gain and the loss of robustness against perturbations or uncertainties in system dynamics.
Beyond the simple PID example, we must consider a variety of tradeoffs in performance and robustness (Zhou and Doyle 1998; Qiu and Zhou 2010). Earlier, I discussed tradeoffs in system sensitivities to disturbance and noise. I also presented qualitative descriptions of system performance in terms of response time and tracking performance.
To advance the theory, we need specific measures of cost, performance, stability, and robustness. We also need techniques to find optimal designs in relation to those conflicting measures of system attributes.
We will never find a perfect universal approach. There are too many dimensions of costs and benefits, and too many alternative ways to measure system attributes. Nonetheless, basic measures and simple optimization methods provide considerable insight into the nature of design. Those insights apply both to the building of human-designed systems to achieve engineering goals and to the interpretation and understanding of naturally designed biological systems built by evolutionary processes.

5.1 Performance and Cost: J

To analyze performance, we must measure the costs and benefits associated with a particular system. We often measure those costs and benefits by the distance between a system's trajectory and some idealized trajectory with zero cost and perfect performance.
Squared deviations provide a distance measure between the actual trajectory
and the idealized trajectory. Consider, for example, the control signal, u(t), which
the controller produces to feed into the system process, as in Fig. 2.1c.
The value of $|u(t)|^2 = u^2$ measures the magnitude of the signal as a squared distance from zero. We can think of $u^2$ as the instantaneous power of the control signal. Typically, the power requirements for control are a cost to be minimized.
The square of the error output signal, $|e(t)|^2 = e^2$, measures the distance of the system from the ideal performance of e = 0. Minimizing the squared error maximizes performance. Thus, we may think of performance at any particular instant, t, in terms of the cost function
$$J(t) = u^2 + \rho^2 e^2,$$

for which minimum cost corresponds to maximum performance. Here, ρ is a weighting factor that determines the relative value of minimizing the control signal power, $u^2$, versus minimizing the tracking error, $e^2$.
Typically, we measure the cost function over a time interval. Summing up J(t) continuously from t = 0 to T yields
$$J = \int_0^T \left(u^2 + \rho^2 e^2\right) dt. \tag{5.1}$$

Most squared distance or quadratic performance analyses arise from extensions of this basic equation. Given this measure, optimal design trades off minimizing the energy cost to drive the system versus maximizing the benefit of tracking a target goal.
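A minimal numeric illustration of Eq. 5.1, using a hypothetical first-order process with proportional control (the process, gain, and horizon are entirely my own choices):

```mathematica
(* Cost J for u = k e, process P(s) = 1/(s + 1), unit step reference *)
k = 4; rho = 1;
e[t_] = InverseLaplaceTransform[(s + 1)/((s + 1 + k) s), s, t];
(* e = S r, with S = 1/(1 + k P) and r(s) = 1/s *)

cost = NIntegrate[(k e[t])^2 + rho^2 e[t]^2, {t, 0, 10}]
```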

5.2 Performance Metrics: Energy and H2

The cost measure in Eq. 5.1 analyzes signals with respect to time. It is natural to
think of inputs and outputs as changing over time. With temporal dynamics, we
can easily incorporate multivariate signals and nonlinearities. In spite of those
advantages, we often obtain greater insight by switching to a frequency analysis of
signals, as in the previous chapters.
In this section, I present alternative measures of cost and performance in terms
of transfer functions and complex signals. Those alternative measures emphasize
frequencies of fluctuations rather than changes through time. Frequency and
complex

analysis allow us to take advantage of transfer functions, Bode plots, and other powerful analytical tools that arise when we assume linear dynamics. The assumption of linearity does not mean that we think the actual dynamics of physical and biological processes are linear. Instead, starting with the linear case provides a powerful way in which to gain insight about dynamics.
In the previous section, we considered how to measure the magnitude of fluctuating control and error signals. A magnitude that summarizes some key measure is often called a norm. In the prior section, we chose the sum of squared deviations from zero, which is related to the 2–norm of a signal
$$\|u(t)\|_2 = \left(\int_0^\infty |u(t)|^2\,dt\right)^{1/2}. \tag{5.2}$$
The energy of the signal is the square of the 2–norm, $\|u(t)\|_2^2$. When the time period in the cost function of Eq. 5.1 goes to infinity, T → ∞, we can write the cost function as
$$J = \|u(t)\|_2^2 + \rho^2 \|e(t)\|_2^2. \tag{5.3}$$

The signal u(t) is a function of time. The associated transfer function U(s)
describes exactly the same signal, but as a function of the complex number, s,
rather than of time, t.
It is often much easier to work with the transfer function for analysis, noting that we can go back and forth between time and transfer function descriptions. For the analysis of squared distance metrics, the 2–norm of the transfer function expression is
$$\|U(s)\|_2 = \left(\frac{1}{2\pi}\int_{-\infty}^{\infty}|U(j\omega)|^2\,d\omega\right)^{1/2}. \tag{5.4}$$

This transfer function 2–norm is often referred to as the H2 norm. The term $|U(j\omega)|^2$ is the square of the Bode gain or magnitude, as in Fig. 2.2e. That gain describes the amplification of a sinusoidal input at frequency ω. The H2 norm expresses the average amplification of input signals over all input frequencies.
If the goal is to minimize the control input signal, u, or the error deviation from
zero, e, then the greater the amplification of a signal, the greater the cost. Thus, we
can use the H2 norm to define an alternative cost function as

$$J = \|U(s)\|_2^2 + \rho^2 \|E(s)\|_2^2, \tag{5.5}$$

which leads to methods that are often called H2 analysis. This cost describes the amplification of input signals with respect to control and error outputs when averaged over all input frequencies. Minimizing this cost reduces the average amplification of input signals.

If the energy 2–norm in Eq. 5.2 is finite, then the energy 2–norm and the H2 norm are equivalent, $\|u(t)\|_2 = \|U(s)\|_2$, and we can use Eqs. 5.3 and 5.5 interchangeably. Often, it is more convenient to work with the transfer function form of the H2 norm.
We can use any combination of signals in the cost functions. And we can use
different weightings for the relative importance of various signals. Thus, the cost
functions provide a method to analyze a variety of tradeoffs.
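Equation 5.4 translates directly into a numerical H2 norm; a minimal sketch:

```mathematica
(* H2 norm via Eq. 5.4 *)
h2norm[g_] := Sqrt[NIntegrate[Abs[g[I w]]^2,
    {w, -Infinity, Infinity}]/(2 Pi)];

(* Example used in Sect. 5.3 below: G(s) = 1/(s - 1) has H2 norm 1/Sqrt[2] *)
h2norm[Function[s, 1/(s - 1)]]   (* ~ 0.7071 *)
```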
5.3 Technical Aspects of Energy and H2 Norms

I have given three different cost functions. The first, in Eq. 5.1, analyzes temporal changes in signals, such as u(t), over a finite time interval. That cost function is the most general, in the sense that we can apply it to any finite signals. We do not require assumptions about linearity or other special attributes of the processes that create the signals.
The second function in Eq. 5.3 measures cost over an infinite time interval and
is otherwise identical to the first measure. Why consider the unrealistic case of
infinite time?
Often, analysis focuses on a perturbation that moves a stable system away from
its equilibrium state. As the system returns to equilibrium, the error and control
signals go to zero. Thus, the signals have positive magnitude only over a finite
time period, and the signal energy remains finite. As noted above, if the energy
2–norm is finite, then the energy 2–norm and the H2 norm are equivalent, and the
third cost function in Eq. 5.5 is equivalent to the second cost function in Eq. 5.3.
If the signal energy of the second cost function in Eq. 5.3 is infinite, then that cost function is not useful. In an unstable system, the error often grows with time, leading to infinite energy of the error signal. For example, the transfer function 1/(s − 1) has temporal dynamics given by $y(t) = y(0)e^t$, growing exponentially with time. The system continuously amplifies an input signal, creating instability and an output signal with infinite energy.
When the energy is infinite, the H2 norm may remain finite. For the transfer function 1/(s − 1), the H2 norm is $1/\sqrt{2}$. The average amplification of signals remains finite. In general, for a transfer function, G(s), the H2 norm remains finite as long as G(jω) does not go to infinity for any value of ω, and G(jω) → 0 as ω → ±∞. Thus, the H2 norm cost in Eq. 5.5 can be used in a wider range of applications.
The H2 norm is related to many common aspects of signal processing and time
series analysis, such as Fourier analysis, spectral density, and autocorrelation.
5.4 Robustness and Stability: H∞

A transfer function for a system, G(s), defines the system’s amplification of input
signals. For a sinusoidal input at frequency ω, the amplification, or gain, is the
absolute value of the transfer function at that frequency, |G(jω)|.
Often, the smaller a system’s amplification of inputs, the more robust the system is against perturbations. Thus, one common optimization method for designing controllers seeks to minimize a system’s greatest amplification of inputs. Minimizing the greatest amplification guarantees a certain level of protection against the worst-case perturbation. In some situations, one can also guarantee that a system is stable if its maximum signal amplification is held below a key threshold.
A system’s maximum amplification of sinusoidal inputs over all input frequencies, ω, is called its H∞ norm. For a system G(s), the H∞ norm is written as $\|G(s)\|_\infty$. The norm describes the maximum of |G(jω)| over all ω. The maximum is also the peak gain on a Bode magnitude plot, which is equivalent to the resonance peak.
System stability and protection against perturbations set two fundamental
criteria for system design. Thus, H∞ methods are widely used in the engineering
design of controllers and system architectures (Zhou and Doyle 1998).
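Computed numerically, the H∞ norm is just the peak Bode gain; a minimal sketch, using the resonant process of Eq. 6.8 below as the example:

```mathematica
(* H-infinity norm as peak gain over frequency *)
hinf[g_] := NMaximize[{Abs[g[I w]], w >= 0}, w][[1]];

hinf[Function[s, 1/(s^2 + 0.1 s + 1)]]   (* ~ 10, near omega = 1 *)
```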


Part II
Design Tradeoffs

Many performance tradeoffs occur. A system that responds quickly to control signals often suffers from sensitivity to perturbations. A more rapid response also associates with a greater tendency toward instability.
Design of a control system by an engineer must balance the competing dimensions of performance. Similarly, design of biological systems by evolutionary processes implicitly balances the different dimensions of success. In engineering, we can specify performance criteria. In biology, we must figure out how natural processes set the relative importance of different performance measures.
Once we have a set of performance criteria, how do we find the control architectures and parameters that perform well? If we do not have formal design methods, then we end up with ad hoc solutions. Such solutions may perform well. But we do not have any way to know if there are better solutions or better ways to formulate the design criteria.
Ideally, we would have an optimization method that provided the best solution
for a given problem and a given set of performance criteria. Optimization forces us
to specify the problem with clarity. We must write down exactly the performance
criteria, the nature of the problem, and all associated assumptions. We then get an
answer about whether there is a best design for the given assumptions or a set of
comparable alternative designs.
Optimization is, of course, only as good as the assumptions that we make. In
engineering, we may be able to specify design criteria clearly. Or, at least, we can
experiment with various criteria and examine the alternative optimal designs.
In biology, figuring out the appropriate assumptions and constraints that express
natural evolutionary processes can be very difficult. We may make some progress
by trying different assumptions as hypotheses about the natural design process. We
can then test the match between the optimal solutions and what we actually see in
nature (Parker and Maynard Smith 1990).
Design by optimization must begin with performance criteria. Three kinds of
performance criteria dominate in typical engineering applications. Regulation, or
homeostasis, concerns aspects of design that return a system to its setpoint. Good
regulation requires insensitivity to perturbations. If the system does get pushed
away from its setpoint, a well regulated system rapidly returns to its equilibrium.
Tradeoffs arise between the responses to different kinds of perturbations.

Stabilization concerns aspects of design that protect against instability. An unstable system may lead to failure or death. Often, the primary design goal is to protect against instability.
Tracking concerns how well the system follows changes in environmental or
reference input signals. A system that rapidly adjusts to changes may track closely
to reference inputs but may suffer from sensitivity to perturbations or instability.
The next sections briefly illustrate these concepts. I use modified examples from
the excellent article by Qiu and Zhou (2013).
Chapter 6
Regulation
The regulation problem analyzes how quickly a perturbed system returns to its equilibrium setpoint. For this problem, we assume that the setpoint does not change. We can, without loss of generality, assume that the external reference signal is r = 0.
With no external reference signal, we can express the general form of the
regulation problem as in Fig. 6.1. We take the process, P, as given, subject to
uncertainties or disturbances represented by the input, d. We seek an optimal
controller, C, with respect to particular design tradeoffs.

6.1 Cost Function

The cost function summarizes the design tradeoffs. We use a cost function based
on the H2 norm, similar to Eq. 5.5. The H2 norm describes the response of the
system to perturbations when averaged over all input frequencies. Minimizing the
H2 norm minimizes the extent to which the system responds to perturbations.
Recall that the H2 norm is often equivalent to the signal energy, which is the total
squared deviation of a signal from zero when measured from the time of an initial
perturbation until the time when the signal returns to zero.
From Fig. 6.1, the two inputs are the load disturbance, d, and the sensor noise,
n. The two outputs are the process output, η, and the control signal, u. We can
write the outputs as transfer functions, η(s) and U(s), and the cost function in Eq.
5.5 as

$$J = \|U(s)\|_2^2 + \rho^2 \|\eta(s)\|_2^2.$$

In this case, we need to relate each of the two outputs to each of the two inputs. We
require four transfer functions to describe all of the input–output connections. For
the transfer function between the input d and the output η, we write $G_{\eta d}(s)$, for


Fig. 6.1 Classic regulation problem illustrated by closed-loop feedback with a constant reference input signal, r = 0. The disturbance input, d, perturbs the system process. Such perturbations can be considered as stochasticity in the process, or as uncertainty with regard to the true process dynamics relative to the assumed dynamics. The noise input, n, perturbs the sensor that produces the output measurement, y, based on the actual process output, η. See Fig. 3.2 for context

which we assume that the other input, n, is zero. Using our usual rule for the transfer functions of a closed loop, the four functions are
$$G_{ud} = \frac{-PC}{1+PC} \qquad G_{\eta d} = \frac{P}{1+PC}$$
$$G_{un} = \frac{-C}{1+PC} \qquad G_{\eta n} = \frac{-PC}{1+PC}. \tag{6.1}$$
We can express these transfer functions in terms of the sensitivities in Eq. 3.8 by defining the open loop as L = PC, the sensitivity as S = 1/(1 + L), and the complementary sensitivity as T = L/(1 + L), yielding
$$G_{ud} = -T \qquad G_{\eta d} = PS$$
$$G_{un} = -CS \qquad G_{\eta n} = -T. \tag{6.2}$$

Because S + T = 1 at any input, s, these transfer functions highlight the intrinsic design tradeoffs.
We can now consider the total cost as the sum of the response with respect to the input d, holding n at zero, plus the response with respect to the input n, holding d at zero
$$J = \|G_{ud}(s)\|_2^2 + \rho^2\|G_{\eta d}(s)\|_2^2 + \|G_{un}(s)\|_2^2 + \rho^2\|G_{\eta n}(s)\|_2^2. \tag{6.3}$$

For this example, we use impulse function inputs, δ(t), which provide a strong instantaneous shock to the system, as defined in the caption of Fig. 4.2. We can design the system to be relatively more or less sensitive to disturbance inputs relative to noise inputs by weighting the disturbance input by μ, so that d(t) = μδ(t) and n(t) = δ(t). Larger μ causes design by optimization to yield better disturbance regulation at the expense of worse noise regulation.

The transfer function for an impulse is equal to one. Thus, the transfer functions
for disturbance and noise inputs are, respectively, D(s) = μ and N(s) = 1. A
system’s response to an input is simply the product of the input and the system
transfer function. For example, the first term in Eq. 6.3 becomes

$$\|D(s)G_{ud}(s)\|_2^2 = \mu^2\|G_{ud}(s)\|_2^2,$$

and the full cost function becomes

$$J = \mu^2\|G_{ud}(s)\|_2^2 + \mu^2\rho^2\|G_{\eta d}(s)\|_2^2 + \|G_{un}(s)\|_2^2 + \rho^2\|G_{\eta n}(s)\|_2^2. \tag{6.4}$$

Using the sensitivity expressions in Eq. 6.2, we can write this expression more
simply as
$$J = \|CS\|_2^2 + (\mu^2 + \rho^2)\|T\|_2^2 + \mu^2\rho^2\|PS\|_2^2. \tag{6.5}$$
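Each term of Eq. 6.5 is an H2 norm that can be evaluated numerically; a minimal sketch of my own, using the Sect. 6.3 example below as a check:

```mathematica
(* Numerical H2 cost of Eq. 6.5 for given process p and controller c *)
h2sq[g_] := NIntegrate[Abs[g[I w]]^2, {w, -Infinity, Infinity}]/(2 Pi);
cost[p_, c_, mu_, rho_] := With[{S = Function[s, 1/(1 + p[s] c[s])]},
   h2sq[Function[s, c[s] S[s]]] +
   (mu^2 + rho^2) h2sq[Function[s, 1 - S[s]]] +
   mu^2 rho^2 h2sq[Function[s, p[s] S[s]]]];

(* Process of Eq. 6.8 with the mu = rho = 1 controller from Sect. 6.3 *)
cost[Function[s, 1/(s^2 + 0.1 s + 1)],
     Function[s, 0.609 (s - 0.81)/(s^2 + 1.73 s + 2.49)], 1, 1]
```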

6.2 Optimization Method

This section follows Qiu and Zhou’s (2013) optimization algorithm. Their cost function in the final equation on page 31 of their book is equivalent to my cost function in Eq. 6.4.
Optimization finds the controller, C(s), that minimizes the cost function. We
search for optimal controllers subject to the constraint that all transfer functions in
Eq. 6.1 are stable. Stability requires that the real component be negative for all
eigenvalues of each transfer function.
A transfer function’s eigenvalues are the roots of the denominator’s polynomial
in s. For each transfer function in Eq. 6.1, the eigenvalues, s, are obtained by
solution of 1 + P(s)C(s) = 0.
We assume a fixed process, P, and weighting coefficients, μ and ρ. To find the
optimal controller, we begin with a general form for the controller, such as

$$C(s) = \frac{q_1 s + q_2}{p_0 s^2 + p_1 s + p_2}. \tag{6.6}$$
We seek the coefficients p and q that minimize the cost function. Qiu and Zhou (2013) solve the example in which $P(s) = 1/s^2$, for arbitrary values of μ and ρ. The accompanying Mathematica code describes the steps in the solution algorithm. Here, I simply state the solution. Check the article by Qiu and Zhou (2013) and my Mathematica code for the details and for a starting point to apply the optimization algorithm to other problems. The following section applies this method to another example and illustrates the optimized system’s response to various inputs. For $P = 1/s^2$, Qiu and Zhou (2013) give the optimal controller
$$C(s) = \frac{\sqrt{2\rho\mu}\left(\sqrt{\rho} + \sqrt{\mu}\right)s + \rho\mu}{s^2 + \sqrt{2}\left(\sqrt{\rho} + \sqrt{\mu}\right)s + \left(\sqrt{\rho} + \sqrt{\mu}\right)^2},$$

with associated minimized cost,


$$J^* = \sqrt{2}\left(\mu^2\sqrt{\rho} + \rho^2\sqrt{\mu} + 2\rho\mu\left(\sqrt{\mu} + \sqrt{\rho}\right)\right).$$

For ρ = 1, the controller becomes


$$C(s) = \frac{\sqrt{2\mu}\left(1 + \sqrt{\mu}\right)s + \mu}{s^2 + \sqrt{2}\left(1 + \sqrt{\mu}\right)s + \left(1 + \sqrt{\mu}\right)^2}, \tag{6.7}$$

with associated minimized cost,


$$J^* = \sqrt{2}\left(\mu^2 + \sqrt{\mu} + 2\mu\left(\sqrt{\mu} + 1\right)\right).$$

We can see the tradeoffs in design most clearly from the controller with ρ = 1.
When μ is small, load disturbance inputs are smaller than sensor noise inputs. An
optimal system should therefore tolerate greater sensitivity to load disturbances in
return for reduced sensitivity to sensor noise.
In the optimal controller described by Eq. 6.7, a small value of μ produces low gain, because C(s) becomes smaller as μ declines. We can see from Eq. 6.1 that a small gain for the controller, C, reduces the sensitivity to noise inputs by lowering $G_{un}$ and $G_{\eta n}$. Similarly, a small gain for C raises the sensitivity of the system output, η, to disturbance inputs by raising $G_{\eta d}$.
The optimal system achieves the prescribed rise in sensitivity to disturbance in order to achieve lower sensitivity to noise.
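Using Eq. 6.7 as reconstructed above, a two-line check of this gain effect (my own sketch):

```mathematica
(* Optimal controller of Eq. 6.7 (rho = 1) and its zero-frequency gain *)
copt[mu_, s_] := (Sqrt[2 mu] (1 + Sqrt[mu]) s + mu)/
   (s^2 + Sqrt[2] (1 + Sqrt[mu]) s + (1 + Sqrt[mu])^2);

(* C(0) = mu/(1 + Sqrt[mu])^2 falls as mu declines: low gain, low noise
   sensitivity, higher disturbance sensitivity *)
Table[{mu, copt[mu, 0]}, {mu, {0.01, 1, 100}}] // N
```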

6.3 Resonance Peak Example

This section applies the previous section’s H2 optimization method to the process
$$P(s) = \frac{1}{s^2 + 0.1s + 1}. \tag{6.8}$$
This process has a resonance peak near ω = 1.


My supplemental Mathematica code derives the optimal controller of the form
in Eq. 6.6. The optimal controller is expressed in terms of the cost weightings μ
and ρ. The solution has many terms, so there is no benefit in showing it here.


Fig. 6.2 Relative H2 values for the transfer functions in Eq. 6.1, with $G_{ud} = G_{\eta n}$ in red, $G_{\eta d}$ in gold, and $G_{un}$ in green. The H2 value for each transfer function is divided by the total H2 values over all four functions. The transfer functions were derived from the process in Eq. 6.8 and the associated optimal controller. The weighting parameters in the cost function of Eq. 6.4 are μ = 1 and ρ varying along the x-axis of the plot. Swapping values of μ and ρ gives identical results, because of the symmetries in Eqs. 6.1 and 6.4

The general solution in terms of μ and ρ provides a simple way in which to obtain the optimal controller for particular values of μ and ρ. For example, when μ = ρ = 1, the optimal controller is
$$C(s) \approx \frac{0.609\,(s - 0.81)}{s^2 + 1.73s + 2.49}.$$

Similar controller expressions arise for other values of μ and ρ. Those controllers may be used in the closed loop of Fig. 6.1 to form a complete system. Figure 6.2 shows the relative H2 values of the four input–output transfer functions in Eq. 6.1. The H2 values express sensitivity over all frequencies. To interpret this figure, look at Eq. 6.4. As the product of the weightings, μρ, increases, the output of $G_{\eta d}$ (gold curve) plays an increasingly important role in the total cost relative to the output of $G_{un}$ (green curve).
As the relative cost weighting of $G_{\eta d}$ increases, its H2 value declines. Similarly, as the relative cost weighting of $G_{un}$ decreases, its H2 value increases. Once again, we see the sensitivity tradeoffs in response to the relative importance of different perturbations.
The top row of Fig. 6.3 compares the Bode plots for the process, P, and the input–output transfer functions in Eq. 6.1. As ρ increases in the columns from left to right, the rise in the green curve for $G_{un}$ is the strongest change. We can understand that change by examining the cost function in Eq. 6.4. Because $G_{ud} = G_{\eta n}$, a rise in ρ reduces the weighting of $G_{un}$ relative to all other terms.
The strongest increase in relative weighting as ρ rises occurs for $G_{\eta d}$, shown in gold. The mild decline in the gold curve with increasing ρ is consistent with the increased relative cost weighting of that signal.

Fig. 6.3 Response of the process in Eq. 6.8 in blue and the transfer functions in Eq. 6.1, with $G_{ud} = G_{\eta n}$ in red, $G_{\eta d}$ in gold, and $G_{un}$ in green. Top row shows Bode magnitude plots. Bottom row shows impulse responses. The input signal weights in Eq. 6.4 are μ = 1 and, for the three columns from left to right, ρ = 0.25, 1, 4. Swapping values of μ and ρ gives identical results, because of the symmetries in Eqs. 6.1 and 6.4

The bottom row shows the impulse responses. As with the Bode plots, an increase in ρ favors reduced response of $G_{\eta d}$, in gold, causing a smaller impulse response in the right plot with high ρ relative to the left plot with low ρ. Similarly, an increase in ρ weakens the pressure on the $G_{un}$ transfer function in green, causing a larger impulse response with increasing ρ.

6.4 Frequency Weighting

The H2 norm sums a system’s gain over all input frequencies, as in Eq. 5.4. That sum weights all input frequencies equally.
Often, we wish to protect against perturbations that occur primarily in a limited band of frequencies. For example, disturbance loads, d, typically occur at low frequency, reflecting long-term fluctuations or misspecifications in the system’s intrinsic processes. In that case, our optimization method should emphasize reducing a system’s gain at low frequency with respect to disturbance load inputs and accepting a tradeoff that allows a greater gain at high frequency. By reducing the gain at low frequency, we protect against the common frequencies for load disturbances.
Tradeoffs between low- and high-frequency bands are common. If we start with a process transfer function
$$G(s) = \frac{10(s + 1)}{s + 10},$$
then at zero frequency, s = jω = 0, the gain is one. As frequency increases, the gain approaches ten.

If we weight this process transfer function by W(s) = 1/(s + 1), then the new system becomes WG = 10/(s + 10). Now, the gain declines with increasing frequency, from a maximum of one at zero frequency to a minimum of zero at infinite frequency.
By weighting the original system, G, by the weighting function, W, we cause the H2 norm of the combined system, WG, to be relatively more sensitive to low-frequency disturbances. When we design a controller by minimizing the H2 norm associated with WG, we will typically find a system that is better at rejecting low-frequency load disturbances than a design minimizing the H2 norm associated with G. For the weighted system, optimization will avoid controllers that reject high-frequency load disturbances, because the weighted system already rejects those high-frequency inputs.
Roughly speaking, a weighting function instructs the optimization method to reduce the gain and sensitivity for certain frequencies and to ignore the gain for other frequencies. The weighting functions do not alter the actual system. The weighting functions are only used to alter the cost function and optimization method that determine the optimal controller.
Figure 6.4 shows the regulation feedback system of Fig. 6.1 with additional weightings for the disturbance and noise inputs. The weightings modify the four system transfer functions and associated sensitivities in Eq. 6.2 to be $W_d G_{ud}$, $W_d G_{\eta d}$, $W_n G_{un}$, and $W_n G_{\eta n}$. The cost function in Eq. 6.5 becomes
$$J = \mu^2\|W_d T\|_2^2 + \mu^2\rho^2\|W_d P S\|_2^2 + \|W_n C S\|_2^2 + \rho^2\|W_n T\|_2^2. \tag{6.9}$$

Consider an example in which we begin with the process, P, in Eq. 6.8. To emphasize low-frequency load disturbances, set $W_d = 1/(s + 0.1)$ to be a low-pass filter. That weighting filters out disturbances that are significantly greater than ω = 0.1. To emphasize high-frequency sensor noise, set $W_n = s/(s + 10)$. That weighting filters out noise that is significantly less than ω = 10. By using these two filters, the optimization method puts very low weight on any disturbances in midrange frequencies of ω = (0.1, 10).

Fig. 6.4 Basic regulation feedback loop in Fig. 6.1 with additional weightings for disturbance and
noise inputs. The weightings alter the cost function to emphasize particular frequency bands for
disturbance and noise, yielding a modified optimal controller
Fig. 6.5 Role of frequency-weighted inputs in the design of optimal H2 controllers for system regulation, illustrated by Bode magnitude plots. a Plot for the unweighted case, matching the plot in Fig. 6.3c. b Plot for the frequency-weighted example in the text, which emphasizes the regulation of low-frequency load disturbances, d, and high-frequency sensor noise, n

By minimizing the weighted H2 cost in Eq. 6.9, we obtain the optimal controller
$$C(s) = \frac{2.02\,(s + 1.52)}{s^2 + 1.17s + 6.3}.$$

I calculated the values for this controller by using the numerical minimization function in Mathematica to minimize the H2 cost, subject to the constraint that all transfer functions in Eq. 6.1 are stable. See the supplemental Mathematica code.
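The setup can be sketched as follows (my own reconstruction of the approach, not the book's supplemental code; the final line checks the weighted cost at the reported optimum, with q2 = 2.02 × 1.52 ≈ 3.07):

```mathematica
(* Weighted H2 cost of Eq. 6.9 with mu = rho = 1 and controller Eq. 6.6 *)
p[s_] := 1/(s^2 + 0.1 s + 1);                 (* process, Eq. 6.8 *)
c[s_] := (q1 s + q2)/(s^2 + p1 s + p2);        (* controller, p0 = 1 *)
wd[s_] := 1/(s + 0.1); wn[s_] := s/(s + 10);   (* weightings *)
S[s_] := 1/(1 + p[s] c[s]); T[s_] := 1 - S[s];

h2sq[g_, v_] := NIntegrate[Abs[(g /. v) /. s -> I w]^2,
    {w, -Infinity, Infinity}]/(2 Pi);
cost[v_] := h2sq[wd[s] T[s], v] + h2sq[wd[s] p[s] S[s], v] +
    h2sq[wn[s] c[s] S[s], v] + h2sq[wn[s] T[s], v];

(* NMinimize over {q1, q2, p1, p2}, constrained to closed-loop stability,
   recovers controllers of this form; check the reported optimum: *)
cost[{q1 -> 2.02, q2 -> 3.07, p1 -> 1.17, p2 -> 6.3}]
```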
Figure 6.5 compares the optimized system response for the unweighted and weighted cases. Panel a shows the Bode magnitude response of the optimized system for the unweighted case, equivalent to the plot in Fig. 6.3c. Panel b shows the response of the optimized system for the weighted case in this section.
The weighted case emphasizes low-frequency load disturbances and high-frequency sensor noise, with low weight on midrange frequencies. Comparing the unweighted case in (a) with the weighted case in (b), we see two key differences.
First, the weighted case allows a large rise in magnitudes and associated sensitivity to perturbations for midrange frequencies. That rise occurs because the particular weighting functions in this example discount midrange perturbations.
Second, the gold curve shows that the weighted case significantly reduces the low-frequency sensitivity of system outputs, η, to load disturbances, d. The gold curve describes the response of the transfer function, $G_{\eta d}$. Note that, because of the log scaling for magnitude, almost all of the costs arise in the upper part of the plot. The low relative magnitude for the lower part contributes little to the overall cost.
Chapter 7
Stabilization
The previous chapter assumed that the intrinsic process, P, has a given unvarying
form. The actual process may differ from the given form or may fluctuate over
time. If a system is designed with respect to a particular form of P, then variation
in P away from the assumed form may cause the system to become unstable.
We can take into account the potential variation in P by altering the optimal
design problem. The new design problem includes enhanced stability guarantees
against certain kinds of variation in P.
Variation in an intrinsic process is an inevitable aspect of design problems. In engineering, the process may differ from the assumed form because of limited information, variability in manufacturing, or fluctuating aspects of the environment.
In biology, a particular set of chemical reactions within an individual may vary stochastically over short time periods. That reaction set may also vary between individuals because of genetic and environmental fluctuations. In all cases, actual processes typically follow nonlinear, time-varying dynamics that often differ from the assumed form.
We may also have variation in the controller or other system processes. In general, how much variability can be tolerated before a stable system becomes unstable? In other words, how robust is a given system's stability to perturbations?
We cannot answer those questions for all types of systems and all types of perturbations. However, the H∞ norm introduced earlier provides insight for many problems. Recall that the H∞ norm is the peak gain in a Bode plot, which is a transfer function's maximum gain over all frequencies of sinusoidal inputs. The small gain theorem provides an example application of the H∞ norm.

7.1 Small Gain Theorem
Suppose we have a stable system transfer function, G. That system may represent
a process, a controller, or a complex cascade with various feedback loops. To
express the mathematical form of G, we must know exactly the dynamical
processes of the system.
How much may the system deviate from our assumptions about dynamics and still remain stable? For example, if the uncertainties may be expressed by a positive feedback loop, as in Fig. 7.1, then we can analyze whether a particular system, G, is robustly stable against those uncertainties.
In Fig. 7.1, the stable transfer function, Δ, may represent the upper bound on our uncertainty. The feedback loop shows how the nominal unperturbed system, G, responds to an input and becomes a new system, G̃, that accounts for the perturbations. The system, G̃, represents the entire loop shown in Fig. 7.1.
The small gain theorem states that the new system, G̃, is stable if the product of the H∞ norms of the original system, G, and the perturbations, Δ, is less than one,

||G||∞ ||Δ||∞ < 1.    (7.1)

Here, we interpret G as a given system with a known H∞ norm. By contrast, we assume that Δ represents the set of all stable systems that have an H∞ norm below some upper bound, ||Δ||∞. For the perturbed system, G̃, to be stable, the upper bound for the H∞ norm of Δ must satisfy

||Δ||∞ < 1/||G||∞.    (7.2)

If G is a system that we can design or control, then the smaller we can make ||G||∞,
the greater the upper bound on uncertainty, ||Δ||∞, that can be tolerated by the
perturbed system. Put another way, smaller ||G||∞ corresponds to greater robust
stability.

Fig. 7.1 System uncertainty represented by a feedback loop. The transfer function, Δ, describes an upper bound on the extent to which the actual system, G̃ = G/(1 − GΔ), deviates from the nominal system, G. Here, G may represent a process, a controller, or an entire feedback system
A full discussion of the small gain theorem can be found in textbooks (e.g., Zhou and Doyle 1998; Liu and Yao 2016). I present a brief intuitive summary. The positive feedback loop in Fig. 7.1 has transfer function

G̃ = G/(1 − GΔ).    (7.3)
We derive that result by the following steps. Assume that the input to G is w + ν, which is the sum of the external input, w, and the feedback input, ν. Thus, the system output is η = G(w + ν).
We can write the feedback input as the output of the uncertainty process, ν = Δη. Substituting into the system output expression, we have

η = G(w + ν) = Gw + GΔη.

Collecting the η terms gives η(1 − GΔ) = Gw. The new system transfer function is the ratio of its output to its external input, G̃ = η/w, which yields Eq. 7.3.
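A one-line symbolic check of this algebra (my own verification, not from the text):

    (* solve eta == G (w + nu) with nu == Delta eta for eta *)
    Solve[eta == g (w + del eta), eta]
    (* the solution simplifies to eta -> g w/(1 - del g), so
       Gtilde = eta/w = G/(1 - G Delta), matching Eq. 7.3 *)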
The new system, G̃, is unstable if any eigenvalue has real part greater than or equal to zero, in which the eigenvalues are the roots, s, of the denominator, 1 − G(s)Δ(s) = 0.
Intuitively, we can see that G̃(s) blows up unstably if the denominator becomes zero at some input frequency, ω, for s = jω. The denominator cannot reach zero as long as the product of the maximum gains of G(jω) and Δ(jω) is less than one, as in Eq. 7.1. That condition expresses the key idea. The mathematical presentations in the textbooks show that Eq. 7.1 is necessary and sufficient for stability.
Reducing the H∞ norm of G increases its robustness with respect to stability. In
Eq. 7.2, a smaller ||G||∞ corresponds to a larger upper bound on the perturbations
that can be tolerated.
A lower maximum gain also associates with a smaller response to
perturbations, improving the robust performance of the system with respect to
disturbances and noise. Thus, robust design methods often consider reduction of
the H∞ norm.
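A small numerical illustration of the bound in Eq. 7.2, for an arbitrary stable system of my own choosing:

    (* example stable system with a resonance peak near w = 1 *)
    g[s_] := 1/(s^2 + 0.2 s + 1);

    (* H-infinity norm: the peak gain over all frequencies *)
    hinf = First @ NMaximize[Abs[g[I w]], w]    (* roughly 5 *)

    (* by the small gain theorem, any stable uncertainty with peak gain
       below this bound leaves the loop of Fig. 7.1 stable *)
    deltaBound = 1/hinf                         (* roughly 0.2 *)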

7.2 Uncertainty: Distance Between Systems

Suppose we assume a nominal form for a process, P. We can design a controller, C, in a feedback loop to improve system stability and performance. If we design our controller for the process, P, then how robust is the feedback system to alternative forms of P?
The real process, P′, may differ from P because of inherent stochasticity or because our simple model of P misspecifies the true underlying process. What is the appropriate set of alternative forms to describe uncertainty with respect to P? Suppose we define a distance between P and an alternative process, P′. Then a set of alternatives could be specified as all processes, P′, for which the distance from the nominal process, P, is less than some upper bound.
We will write the distance between two processes when measured at input frequency ω as

δ[P(jω), P′(jω)] = distance at frequency ω,    (7.4)

for which δ is defined below. The maximum distance between processes over all frequencies is

δν(P, P′) = max_ω δ[P(jω), P′(jω)],    (7.5)

subject to conditions that define whether P and P′ are comparable (Vinnicombe 2001; Qiu and Zhou 2013). This distance has values 0 ≤ δν ≤ 1, providing a standardized measure of separation.
To develop measures of distance, we focus on how perturbations may alter system stability. Suppose we start with a process, P, and controller, C, in a feedback system. How far can an alternative process, P′, be from P and still maintain stability in the feedback loop with C? In other words, what is the stability margin of safety for a feedback system with P and C?
Robust control theory provides an extensive analysis of the distances between
systems with respect to stability margins (Vinnicombe 2001; Zhou and Doyle
1998; Qiu and Zhou 2010, 2013). Here, I present a rough intuitive description of
the key ideas.
For a negative feedback loop with P and C, the various input–output pathways
all have transfer functions with denominator 1 + PC, as in Eq. 6.1. These systems
become unstable when the denominator goes to zero, which happens if P = −1/C.
Thus, the stability margin is the distance between P and −1/C.
The values of these transfer functions, P(jω) and C(jω), vary with frequency, ω. The worst case with regard to stability occurs when P and −1/C are closest; that is, when the distance between these functions is a minimum with respect to varying frequency. Thus, we may define the stability margin as the minimum distance over frequency,

bP,C = min_ω δ[P(jω), −1/C(jω)].    (7.6)

Here is the key idea. Start with a nominal process, P1, and a controller, C. If an
alternative or perturbed process, P2, is close to P1, then the stability margin for P2
should not be much worse than for P1.
In other words, a controller that stabilizes P1 should also stabilize all processes
that are reasonably close to P1. Thus, by designing a good stability margin for P1,
we guarantee robust stabilization for all processes sufficiently near P1.
We can express these ideas quantitatively, allowing the potential to design for a
targeted level of robustness. For example,

bP2,C ≥ bP1,C − δν (P1, P2).



Read this as: the guaranteed stability margin for the alternative process is at least as good as the stability margin for the nominal process minus the distance between the nominal and alternative processes. A small distance between processes, δν, guarantees that the alternative process is nearly as robustly stable as the original process.
The definitions in this section depend on the distance measure, expressed as

δ(c₁, c₂) = |c₁ − c₂| / (√(1 + |c₁|²) √(1 + |c₂|²)).

Here, c₁ and c₂ are complex numbers. Transfer functions return complex numbers. Thus, we can use this function to evaluate δ[P1(jω), P2(jω)].
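This distance translates directly into code. A minimal sketch, with the example processes being my own illustrative choices:

    (* the distance measure above, transcribed directly; c1 and c2 are
       complex numbers such as P1(jw) and P2(jw) *)
    delta[c1_, c2_] :=
      Abs[c1 - c2]/(Sqrt[1 + Abs[c1]^2] Sqrt[1 + Abs[c2]^2]);

    (* evaluate at a single frequency, w = 1, for two example processes *)
    p1[s_] := 1/(s + 1);  p2[s_] := 1/(s + 2);
    delta[p1[I 1.0], p2[I 1.0]]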

7.3 Robust Stability and Robust Performance

The stability margin bP,C measures the amount by which P may be altered and still allow the system to remain stable. Note that bP,C in Eq. 7.6 expresses a minimum value of δ over all frequencies. Thus, we may also think of 1/bP,C as the maximum value of 1/δ over all frequencies.
The maximum value of magnitude over all frequencies matches the definition of the H∞ norm, suggesting that maximizing the stability margin corresponds to minimizing some expression for an H∞ norm. Indeed, there is such an H∞ norm expression for bP,C. However, the particular form is beyond our scope. The point here is that robust stability via maximization of bP,C falls within the H∞ norm theory, as in the small gain theorem.
Stability is just one aspect of design. Typically, a stable system must also meet
other objectives, such as rejection of disturbance and noise perturbations. This
section shows that increasing the stability margin has the associated benefit of
improving a system’s rejection of disturbance and noise. Often, a design that
targets reduction of the H∞ norm gains the benefits of an increased stability margin
and better regulation through rejection of disturbance and noise.
The previous section on regulation showed that a feedback loop reduces its response to perturbations by lowering its various sensitivities, as in Eqs. 6.2 and 6.5. A feedback loop's sensitivity is S = 1/(1 + PC), and its complementary sensitivity is T = PC/(1 + PC).
Increasing the stability margin, bP,C, reduces a system's overall sensitivity. We can see the relation between stability and sensitivity by rewriting the expression for bP,C as

bP,C = ( max_ω √(|S|² + |CS|² + |PS|² + |T|²) )⁻¹.

This expression shows that increasing bP,C reduces the total magnitude of the four key sensitivity measures for negative feedback loops.
Fig. 7.2 Comparison between the responses of two systems to a unit step input, r = 1. The blue
curves show P1 and the gold curves show P2. a, b Systems in Eq. 7.7, with k = 100 and T =
0.025. The top plot shows the open-loop response for each system. The bottom plot shows the
closed-loop feedback response with unit feedback, P/(1 + P), in which the error signal into the
system, P, is 1 − y for system output, y. c, d Open (top) and closed (bottom) loop responses for
the systems in Eq. 7.8, with k = 100. Redrawn from Fig. 12.3 of Åström and Murray (2008),
©Princeton University Press

7.4 Examples of Distance and Stability

The measure, δν (P1, P2), describes the distance between processes with respect to
their response characteristics in a negative feedback loop. The idea is that P1 and
P2 may have different response characteristics when by themselves in an open loop,
yet have very similar responses in a feedback loop. Or P1 and P2 may have similar
response characteristics when by themselves, yet have very different responses in a
feedback loop.
Thus, we cannot simply use the response characteristics among a set of
alternative systems to understand how variations in a process influence stability or
performance. Instead, we must use a measure, such as δν , that expresses how
variations in a process affect feedback loop characteristics.
This section presents two examples from Sect. 12.1 of Åström and Murray
(2008). In the first case, the following two systems have very similar response
characteristics by themselves in an open loop, yet have very different responses in
a closed feedback loop
P1 = k/(s + 1),    P2 = k/((s + 1)(Ts + 1)²),    (7.7)

when evaluated at k = 100 and T = 0.025, as shown in Fig. 7.2a, b. The distance
between these systems is δν (P1, P2) = 0.89. That large distance corresponds to the
very different response characteristics of the two systems when in a closed
feedback loop. (Åström and Murray (2008) report a value of δν = 0.98. The reason
for the discrepancy is not clear. See the supplemental Mathematica code for my
calculations, derivations, and graphics here and throughout the book.)
In the second case, the following two systems have very different response characteristics by themselves in an open loop, yet have very similar responses in a closed feedback loop

P1 = k/(s + 1),    P2 = k/(s − 1),    (7.8)

when evaluated at k = 100, as shown in Fig. 7.2c, d. The distance between these systems is δν(P1, P2) = 0.02. That small distance corresponds to the very similar response characteristics of the two systems when in a closed feedback loop.
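These two distances can be approximated directly from the definitions above. The following sketch sweeps frequency numerically; it ignores the technical comparability (winding number) condition mentioned earlier, so it is an approximation rather than a full νgap computation:

    delta[c1_, c2_] :=
      Abs[c1 - c2]/(Sqrt[1 + Abs[c1]^2] Sqrt[1 + Abs[c2]^2]);

    k = 100; T = 0.025;
    p1a[s_] := k/(s + 1);  p2a[s_] := k/((s + 1) (T s + 1)^2);  (* Eq. 7.7 *)
    p1b[s_] := k/(s + 1);  p2b[s_] := k/(s - 1);                (* Eq. 7.8 *)

    First @ NMaximize[{delta[p1a[I w], p2a[I w]], 0 <= w <= 10^4}, w]
    First @ NMaximize[{delta[p1b[I w], p2b[I w]], 0 <= w <= 10^4}, w]
    (* the first distance is large and the second small, consistent
       with the values 0.89 and 0.02 reported in the text *)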

7.5 Controller Design for Robust Stabilization

The measure bP,C describes the stability margin for a feedback loop with process P and controller C. A larger margin means that the system remains robustly stable to variant processes, P′, with greater distance from the nominal process, P. In other words, a larger margin corresponds to robust stability against a broader range of uncertainty.
For a given process, we can often calculate the controller that provides the
greatest stability margin. That optimal controller minimizes an H∞ norm, so in this
case we may consider controller design to be an H∞ optimization method.
Often, we also wish to keep the H2 norm small. Minimizing that norm improves
a system’s regulation by reducing response to perturbations. Jointly optimizing the
stability margin and rejection of disturbances leads to mixed H∞ and H2 design.
Mixed H∞ and H2 optimization is an active area of research (Chen and Zhou
2001; Chang 2017). Here, I briefly summarize an example presented in Qiu and
Zhou (2013). That article provides an algorithm for mixed optimization that can be
applied to other systems.
Qiu and Zhou (2013) start with the process, P = 1/s². They consider three cases.
First, what controller provides the minimum H∞ norm and associated maximum
stability margin, b, while ignoring the H2 norm? Second, what controller provides
the minimum H2 norm, while ignoring the stability margin and H∞ norm? Third,
what controller optimizes a combination of the H∞ and H2 norms?
For the first case, the controller

C(s) = ((1 + √2)s + 1)/(s + 1 + √2)

has the maximum stability margin

b*P,C = (4 + 2√2)^(−1/2) = 0.38.

The cost associated with the H2 norm from Eq. 6.5 is J = ∞, because the sensitivity
function C S has nonzero gain at infinite frequency.
For the second case, the controller

C(s) = (2√2 s + 1)/(s² + 2√2 s + 4)

has the minimum H2 cost, J* = 6√2 = 8.49, with associated stability margin bP,C = 0.24. This controller and associated cost match the earlier example of H2 norm minimization in Eq. 6.7 with μ = 1.
For the third case, we constrain the minimum stability margin to be at least bP,C > 1/√10 = 0.316 and then find the controller that minimizes the H2 norm cost subject to the minimum stability margin constraint, yielding the controller

C(s) = (2.5456s + 1)/(0.28s² + 1.5274s + 2.88),

which has the cost J = 13.9 and stability margin bP,C = 0.327. In these examples, a larger stability margin corresponds to a greater H2 cost. That relation illustrates the tradeoff between robust stability and performance measured by the rejection of disturbance and noise perturbations.
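The quoted margins can be checked numerically from the definition in Eq. 7.6. A minimal sketch (my own check, not the mixed-optimization algorithm of Qiu and Zhou 2013):

    delta[c1_, c2_] :=
      Abs[c1 - c2]/(Sqrt[1 + Abs[c1]^2] Sqrt[1 + Abs[c2]^2]);
    p[s_] := 1/s^2;

    (* stability margin: minimum distance between P and -1/C over a
       frequency grid, as in Eq. 7.6 *)
    b[c_] := First @
      NMinimize[{delta[p[I w], -1/c[I w]], 10^-3 <= w <= 10^3}, w];

    cRobust[s_] := ((1 + Sqrt[2]) s + 1)/(s + 1 + Sqrt[2]);    (* case 1 *)
    cH2[s_]     := (2 Sqrt[2] s + 1)/(s^2 + 2 Sqrt[2] s + 4);  (* case 2 *)

    {b[cRobust], b[cH2]}   (* roughly {0.38, 0.24}, as quoted above *)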

Chapter 8
Tracking

The previous chapters on regulation and stabilization ignored the reference input,
r. In those cases, we focused on a system’s ability to reject perturbations and to
remain stable with respect to uncertainties. However, a system’s performance often
depends strongly on its ability to track external environmental or reference signals.
To study tracking of a reference input, let us return to the basic feedback loop
structure in Fig. 2.1c, shown again in Fig. 8.1. Good tracking performance means
minimizing the error, e = r − y, the difference between the reference input and the
system output.
Typically, we can reduce tracking error by increasing the control signal, u,
which increases the speed at which the system changes its output to be closer to
the input. However, in a real system, a larger control signal requires more energy.
Thus, we must consider the tradeoff between minimizing the error and reducing
the cost of control.
I previously introduced a cost function that combines the control and error signals in Eq. 5.1 as

J = ∫₀^T (u² + ρ²e²) dt,    (8.1)

in which u(t) and e(t) are functions of time, and ρ is a weighting for the importance of the error signal relative to the control signal.
I noted in Eq. 5.2 that the square of the H2 norm is equal to the energy of a signal, for example,

||e(t)||₂² = ∫₀^∞ |e(t)|² dt.

In this chapter, we will consider reference signals that change over time. A system
will typically not track a changing reference perfectly. Thus, the error will not go
to zero over time, and the energy will be infinite. For infinite energy, we typically
cannot use the H2 norm. Instead, we may consider the average of the squared
signal per unit time, which is the power. Or we may analyze the error over a finite
time period, as in Eq. 8.1.
Fig. 8.1 System with a basic feedback loop, with controller C(s) and process P(s), in response to a reference input, r

To analyze particular problems, we begin by expressing the transfer function for the error from Eq. 3.5 as

E(s) = R(s) − Y(s) = [1/(1 + C(s)P(s))] R(s).

We may write the transfer function for the control signal as

U(s) = C(s)E(s) = [C(s)/(1 + C(s)P(s))] R(s).
These equations express the key tradeoff between the error signal and the control
signal. A controller, C, that outputs a large control signal reduces the error, E, and
increases the control signal, U. The following example illustrates this tradeoff and
the potential consequences for instability.

8.1 Varying Input Frequencies

To analyze the cost over a particular time period, as in Eq. 8.1, we must express
the transfer functions as differential equations that describe change over time. We
can use the basic relation between transfer functions in Eq. 2.5 and differential
equations in Eq. 2.6.
In this example, I use the process in Eq. 4.1 that I analyzed in earlier chapters,

P(s) = 1/((s + 0.1)(s + 10)).

I use the controller

C(s) = (q0s² + q1s + q2)/(p0s² + p1s + p2).    (8.2)
Our goal is to find a controller of this form that minimizes the cost function in Eq. 8.1. I use a reference signal that is the sum of three sine waves with frequencies ωᵢ = (ψ⁻¹, 1, ψ). I weight each frequency by κᵢ = (1, 1, 0.2), such that the high-frequency component may be considered a rapid, relatively low-amplitude disturbance. Thus,
Fig. 8.2 Reference signal, r, in gold, from Eq. 8.3, and the filtered signal, rw, in blue, from the
filter in Eq. 8.4 applied to the reference signal. The blue curves in Fig. 8.3 show the filtered signal
more clearly
R(s) = Σᵢ κᵢωᵢ/(s² + ωᵢ²),    (8.3)

in which each of the three terms in the sum expresses a sine wave with frequency ωᵢ. Here, I use ψ = 10.
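Each term κᵢωᵢ/(s² + ωᵢ²) is the Laplace transform of κᵢ sin(ωᵢt), so the reference signal is easy to reconstruct in the time domain. A minimal sketch:

    psi = 10;
    omegas = {1/psi, 1, psi};
    kappas = {1, 1, 0.2};
    r[t_] := kappas . Sin[omegas t];   (* sum of the three sine waves *)

    Plot[r[t], {t, 0, 120}]   (* compare with the gold curve in Fig. 8.2 *)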
Often, low-frequency signals represent true changes in the external environment. By contrast, high-frequency inputs represent noise or signals that change too rapidly to track effectively. Thus, we may wish to optimize the system with respect to low-frequency inputs and to ignore high-frequency inputs.
We can accomplish frequency weighting by using a filtered error signal in the cost function, EW(s) = R(s)W(s) − Y(s), for a weighting function W that passes low frequencies and reduces the gain of high frequencies. The weighted error signal as a function of time is ew(t).
In our example, the function

W(s) = (√ψ/(s + √ψ))³    (8.4)

will reduce the relative weighting of the high-frequency input at frequency ψ. I use the filtered error signal, ew, for the cost function in Eq. 8.1, yielding

J = ∫₀^T (u² + ρ²ew²) dt.    (8.5)

The gold curve in Fig. 8.2 shows the environmental reference signal, r, for the associated transfer function, R(s). The blue curve shows the filtered reference signal, rw, for the filtered system, R(s)W(s).
Fig. 8.3 Optimization of the cost function in Eq. 8.5 for the controller in Eq. 8.2. The left column shows the tracking performance. The blue curve traces the filtered reference signal, rw, associated with R(s)W(s). The gold curve traces the system output, y, associated with Y(s). The difference between the curves is the error, ew = rw − y. The right column shows the error, ew, in red, and the control signal, u, for U(s), in green. The rows show, from top to bottom, an increased weighting of the error versus the control signal in the cost, J, in Eq. 8.5, with ρ = (1, 10, 100). The optimized controllers may represent local rather than global optima. See the supplemental Mathematica code

The filtered curve removes the high-frequency noise of the reference signal and closely matches the fluctuations from the two lower-frequency sine wave inputs.
Figure 8.3 illustrates the tradeoff between the tracking performance and the cost of the control signal energy to drive the system. The cost function in Eq. 8.5 describes the tradeoff between tracking, measured by the squared error between the filtered reference signal and the system output, ew², and the control signal energy, u².
The parameter ρ sets the relative balance between these opposing costs. A
higher ρ value favors closer tracking and smaller error because a high value of ρ
puts less weight on the cost of the control signal. With a lower cost for control, the
controller can output a stronger signal to drive the system toward a closer match
with the target reference signal.
8.2 Stability Margins

Minimizing a quadratic cost function or an H2 norm may lead to a poor stability margin. For example, close tracking of a reference signal may require a large control signal from the controller. Such high-gain feedback creates rapidly responding system dynamics, which can be sensitive to uncertainties.
In Fig. 8.3, the stability margins for the three rows associated with ρ = (1, 10,
100) are bP,C = (0.285, 0.023, 0.038). A robust stability margin typically requires a
value greater than approximately 1/3 or perhaps 1/4.
In this case, the system associated with ρ = 1 has a reasonable stability margin,
whereas the systems associated with higher ρ have very poor stability margins.
The poor stability margins suggest that those systems could easily be destabilized
by perturbations of the underlying process or controller dynamics.
We could minimize the cost function subject to a constraint on the lower bound
of the stability margin. However, numerical minimization for that problem can be
challenging. See the supplemental Mathematica code for an example.

Chapter 9
State Feedback
A transfer function corresponds to a time-invariant, linear system of ordinary differential equations. In an earlier chapter, I showed the general form of a transfer function in Eq. 2.5 and the underlying differential equations in Eq. 2.6.
For example, the transfer function P(s) = 1/(s + a) with input u and output y corresponds to the differential equation ẋ = −ax + u, with output y = x. Here, x is the internal state of the process. Models that work directly with internal states are called state-space models.
Transfer functions provide significant conceptual and analytical advantages. For
example, the multiplication of transfer functions and the simple rules for creating
feedback loops allow easy creation of complex process cascades. With regard to
system response, a Bode plot summarizes many aspects in a simple, visual way.
However, it often makes sense to analyze the underlying states directly.
Consider, for example, the regulation of an organism’s body temperature. We
could model performance and cost in terms of body temperature. Alternatively, the
underlying states may include the burning of stored energy, the rise and fall of
various signaling molecules, the dilation of blood vessels, and so on.
Direct analysis of those internal states provides advantages. The individual
states may have associated costs, which we could study directly in our cost
function. We could consider the regulatory control of the individual states rather
than temperature because temperature is an aggregate outcome of the underlying
states. For example, each state could be regulated through feedback, in which the
feedback into one state may depend on the values of all of the states, allowing
more refined control of costs and performance.
When we use a state-space analysis, we do not have to give up all of the tools of
frequency analysis that we developed for transfer functions. For example, we can
consider the response of a system to different input frequencies.
State-space models can also describe time-varying, nonlinear dynamics. The
response of a nonlinear system will change with its underlying state, whereas
transfer function systems have a constant frequency response.
9.1 Regulation Example

In the prior chapter on regulation, I analyzed the process in Eq. 6.8 as

P(s) = 1/(s² + αs + β),    (9.1)

with α = 0.1 and β = 1. This process has a resonance peak near ω = 1. The state-space model for this process is

ẋ₁ = x₂
ẋ₂ = −βx₁ − αx₂ + u    (9.2)
y = x₁,

in which the dynamics are equivalent to a second-order differential equation, ẍ + αẋ + βx = u, with y = x.
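As a concrete sketch, the same model can be written with Mathematica's built-in state-space representation (my own transcription; the supplemental code may organize this differently):

    alpha = 0.1; beta = 1;
    sys = StateSpaceModel[{
       {{0, 1}, {-beta, -alpha}},   (* A *)
       {{0}, {1}},                  (* B *)
       {{1, 0}},                    (* C *)
       {{0}}                        (* D *)
      }];

    TransferFunctionModel[sys]  (* recovers P(s) = 1/(s^2 + alpha s + beta) *)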
For a state-space regulation problem, the design seeks to keep the states close to
their equilibrium values. We can use equilibrium values of zero without loss of
generality. When the states are perturbed away from their equilibrium, we adjust
the input control signal, u, to drive the states back to their equilibrium.
The cost function combines the distance from equilibrium with regard to the state vector, x, and the energy required for the control signal, u. Distances and energies are squared deviations from zero, which we can write in a general way in vector notation as

J = ∫₀^T (uᵀRu + xᵀQx) dt,    (9.3)

in which R and Q are matrices that give the cost weightings for components of the state vector, x = (x₁, x₂, ...), and components of the input vector, u = (u₁, u₂, ...). In the example here, there is only one input. However, state-space models easily extend to handle multiple inputs.
For the regulation problem in Fig. 9.1, the goal is to find the feedback gains for the states given in the matrix K that minimize the cost function. The full specification of the problem requires the state equation matrices for use in Eq. 2.6, which we have from Eq. 9.2 as

A = [[0, 1], [−β, −α]],    B = [0, 1]ᵀ,    C = [1, 0],    (9.4)

and the cost matrices, R and Q. In this case, we have a single input, so the cost matrix for inputs, R, can be set to one, yielding an input cost term, u². For the state costs, we could ignore the second state, x₂, leaving only x₁ = y, so that the state cost would be proportional to the squared output, y² = e². Here, y is
Fig. 9.1 State feedback model of regulation. The process and output describe the state equations

in Eq. 2.6. The control input signal, u = Kx, is obtained by minimizing the cost function in Eq.
9.3 to derive the optimal state gains. A disturbance, d, is added to the input signal

equivalent to the error, e = y − r, because the reference input is r = 0. A cost based on u² and e² matches the earlier cost function in Eq. 8.1.
In this case, I weight the costs for each state equally by letting Q = ρ²I₂, in which Iₙ is the identity matrix of dimension n, and ρ is the cost weighting for states relative to inputs. With those definitions, the cost becomes

J = ∫₀^T (u² + ρ²(x₁² + x₂²)) dt,

in which x₁² + x₂² measures the distance of the state vector from the target equilibrium of zero.
We obtain the gain matrix for state feedback models, K, by solving a matrix
Riccati equation. Introductory texts on control theory derive the Riccati equation.
For our purposes, we can simply use a software package, such as Mathematica, to
obtain the solution for particular problems. See the supplemental software code for
an example.
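A minimal sketch of that computation for the present example, using the weightings Q = ρ²I₂ and R = 1 described above with ρ = 1:

    alpha = 0.1; beta = 1; rho = 1;
    a = {{0, 1}, {-beta, -alpha}};
    b = {{0}, {1}};
    q = rho^2 IdentityMatrix[2];
    r = {{1}};

    (* stabilizing solution of the continuous algebraic Riccati equation *)
    x = RiccatiSolve[{a, b}, {q, r}];
    k = Inverse[r] . Transpose[b] . x    (* optimal state feedback gain matrix K *)

    (* equivalently, LQRegulatorGains[StateSpaceModel[{a, b}], {q, r}]
       returns the same gains *)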
Figure 9.2 shows the response of the state feedback system in Fig. 9.1 with the
Riccati solution for the feedback gain values, K. Within each panel, the different
curves show different values of ρ, the ratio of the state costs for x relative to the
input costs for u. In the figure, the blue curves show ρ = 1/4, which penalizes the
input costs four times more than the state costs. In that case, the control inputs tend
to be costly and weaker, allowing the state values to be larger.
At the other extreme, the green curves show ρ = 4. That value penalizes states
more heavily and allows greater control input values. The larger input controls
drive the states back toward zero much more quickly. The figure caption provides
details about each panel.
In this example, the underlying equations for the dynamics do not vary with
time. Time-invariant dynamics correspond to constant values in the state matrices,
A, B, and C. A time-invariant system typically leads to constant values in the
optimal gain matrix, K, obtained by solving the Riccati equation.
The Riccati solution also works when those coefficient matrices have
time-varying values, leading to time-varying control inputs in the optimal gain
matrix, K. The general approach can also be extended to nonlinear systems.
However, the Riccati equation is not sufficient to solve nonlinear problems.
Fig. 9.2 Response to impulse perturbations of systems with state feedback, as in Fig. 9.1. a Response of the state-space system in Eq. 9.2. Curves show x1 = y for cost ratio ρ = (0.25, 1, 4) in blue, gold, and green, respectively. In this case, the impulse perturbation enters the system through u in Eq. 9.2, affecting ẋ₂. b Modified state-space model that has two inputs, one each into ẋ₁ and ẋ₂, associated with the state matrix B = I₂. The impulse perturbation comes into ẋ₂ as in the original model. In this case, there are two control inputs for feedback via the gain matrix, K. The optimization uses both inputs, allowing the feedback to control each state separately. That extension of control directly to all states allows the feedback system to bring the state responses back to zero more quickly than in the original system with only one state feedback. c and d Response of the second state, x2. Systems for each panel match the corresponding panels above. Note in d that the second input for feedback drives the state to zero more quickly than in c, which has only one input

Methods that minimize quadratic costs or H2 norms can produce systems with
poor stability margins. To obtain guaranteed stability margins, one can minimize
costs subject to a constraint on the minimum stability margin.

9.2 Tracking Example

Consider the tracking example from the previous chapter. That example began with the process in Eq. 4.1 as

P(s) = 1/((s + a)(s + b)) = 1/(s² + αs + β),

with α = a + b = 10.1 and β = ab = 1. The state-space model is given in Eq. 9.2, expressed in matrix form in Eq. 9.4. The state-space model describes the process output over time, y(t), which we abbreviate as y.
Here, I describe a state-space design of tracking control for this process. For
this example, I use the tracking reference signal in Eq. 8.3, ignoring
high-frequency noise (κ2 = 0). The reference signal is the sum of low-frequency
(ω0 = 0.1) and mid-frequency (ω1 = 1) sine waves. The transfer function for the
reference signal is

R(s) = ω0/(s² + ω0²) + ω1/(s² + ω1²).
In state-space form, the reference signal, r(t), has matrices

Ar = [[0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1], [−ω0²ω1², 0, −(ω0² + ω1²), 0]]

Br = [0, 0, 0, 1]ᵀ

Cr = [ω0²ω1 + ω0ω1², 0, ω0 + ω1, 0].

We can transform a tracking problem into a regulator problem and then use the methods from the previous chapter (Anderson and Moore 1989). In the regulator problem, we minimized a combination of the squared inputs and states. For a tracking problem, we use the error, e = y − r, instead of the state values, and express the cost as

J = ∫₀^T (uᵀRu + e²) dt.    (9.5)

We can combine the state-space expressions for y and r into a single state-space
model with output e. That combined model allows us to apply the regulator theory
to solve the tracking problem with state feedback.
The combined model for the tracking problem is

At = [[A, 0], [0, Ar]],    Bt = [[B, 0], [0, Br]],    Ct = [C, −Cr],

which has output determined by Ct as e = y − r (Anderson and Moore 1989). In this form, we can apply the regulator theory to find the optimal state feedback matrix, K, that minimizes the cost, J, in Eq. 9.5.
Fig. 9.3 Tracking a reference input signal with state feedback. The blue curve shows the input signal, r(t), which is the sum of two sine waves with frequencies ω0 = 0.1 and ω1 = 1. The system responds to the input by producing an output, y(t). The output is determined by the process, P(s), and the optimal state feedback, K, as presented in the text. The gold curves show the system error, e = y − r, the difference between the output and the reference signal. a Squared input values are weighted by R = wIn, with w = 0.1 and n as the number of inputs to the process. In this case, we fix the input to the embedded reference signal in the state-space model to zero and have one input into the process given by B in Eq. 9.4. The error curve shows that this system closely tracks the low-frequency reference sine wave but does not track the high-frequency reference component. b This case allows feedback inputs into both states of the process, augmenting ẋ₁ in Eq. 9.2 with a separate input and letting B = I₂. Other aspects as in the prior panel. c As in panel a, with w = 0.01. The weaker cost for inputs allows stronger feedback inputs and closer tracking of the high-frequency component of the reference signal, thus shrinking the tracking error in the gold curve. d Nearly perfect tracking with w = 0.01 and inputs directly into both process states. See the supplemental Mathematica code for details about assumptions and calculations

Figure 9.3 presents an example and mentions some technical issues in the caption.
The example illustrates two key points. First, as the relative cost weighting of
the inputs declines, the system applies stronger feedback inputs and improves
tracking performance.
Second, the state equations for the intrinsic process, P(s), in Eq. 9.4 provide input only into the second state of the process, as can be seen in the equation for ẋ₂ in Eq. 9.2. When we allow a second input into the intrinsic process, P(s), by allowing feedback directly into both ẋ₁ and ẋ₂, we obtain much better tracking performance, as shown in Fig. 9.3.
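A sketch of how the combined model can be assembled and its gains computed, using my own illustrative transcription; the final call is schematic, leaves both inputs available to the regulator rather than fixing the reference-generator input at zero, and omits the technical details noted in the caption of Fig. 9.3:

    w0 = 0.1; w1 = 1; w = 0.1;   (* w is the input-cost weight of panel a *)
    a  = {{0, 1}, {-1, -10.1}};  b = {{0}, {1}};  c = {{1, 0}};
    ar = {{0, 1, 0, 0}, {0, 0, 1, 0}, {0, 0, 0, 1},
          {-w0^2 w1^2, 0, -(w0^2 + w1^2), 0}};
    br = {{0}, {0}, {0}, {1}};
    cr = {{w0^2 w1 + w0 w1^2, 0, w0 + w1, 0}};

    (* block matrices At, Bt, Ct of the combined tracking model *)
    at = ArrayFlatten[{{a, 0}, {0, ar}}];
    bt = ArrayFlatten[{{b, 0}, {0, br}}];
    ct = ArrayFlatten[{{c, -cr}}];          (* output e = y - r *)

    (* Q = Ct'Ct gives the e^2 term of Eq. 9.5 *)
    q = Transpose[ct] . ct;
    k = LQRegulatorGains[StateSpaceModel[{at, bt}], {q, w IdentityMatrix[2]}]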

