
Chapter 9

Nyquist Theory

We’ve come full circle, but conditions have changed.
Bob Peterson

In this final chapter, we will derive the wonderful Nyquist criterion. To gain some understanding of where it comes from, one needs to delve a bit into analytic function theory (also referred to as complex analysis). For this reason our starting point will be a brief introduction to analytic function theory in Section 1. This will lead, among other things, to the residue theorem. This theorem definitely deserves a place in the toolbox of every serious engineering and science student. It is expected that the student will be able to apply this theorem to compute integrals in the complex domain. A very important application for the systems and controls student is of course the computation of the inverse Laplace transform via the Bromwich integral formula obtained in Chapter 3.

In Section 2 the excursion into complex analysis is continued with a discussion of the principle of the argument, the second cornerstone in the applications of complex analysis in systems and controls.

All this culminates in the Nyquist criteria discussed in Section 3. We note that some knowledge of Bode plots is required in order to successfully draw the Nyquist plots required by the criterion.

1. Analytic Function Theory

One cannot do justice to analytic function theory (or complex analysis) in a few lectures.
Every student who is serious about control theory should delve deeper into this topic. At
an elementary level, [1, 2, 3] are good starting points. Back in time, I learned it from [4].


Another interesting summary, though one that falls short of the principle of the argument, is [5]. If you plan graduate studies in control, a course in complex analysis is desirable and recommended. A few things were already mentioned when I talked about the (inverse) Laplace transform.

Analytic Functions
Consider a function f (s) of a complex variable s. In general, such a function is complex valued. If we represent s by its real and imaginary parts, i.e., s = σ + jω, then

f (s) = f (σ + jω) = u(σ, ω) + jv(σ, ω),

where u and v are two real functions of the two real variables σ and ω. For example:

f (s) = s²    ⇒   u(σ, ω) = σ² − ω²,   v(σ, ω) = 2σω,
f (s) = Re s  ⇒   u(σ, ω) = σ,         v(σ, ω) = 0.

Recall how the derivative of a real function F at x₀ was defined via

lim_{|∆x|→0} [F (x₀ + ∆x) − F (x₀)] / ∆x.

There are two ways in which |∆x| can go to zero: along ∆x > 0 and along ∆x < 0. Only when the two limits are equal, at a point x₀ say, do we define this limit as F ′(x₀). In the other cases a right or a left limit may exist (in which case a right or left derivative could be defined), although it is also possible that neither limit converges. The function F (x) = x² has F ′(x) = 2x at every point x. The function F (x) = |x| has F ′(x) = 1 when x > 0, and F ′(x) = −1 when x < 0. At x = 0 the derivative is not defined, but the right derivative is 1 and the left derivative is −1. The function sin(1/x) is not differentiable at zero, and neither a left nor a right derivative exists. If a function F is differentiable at all points of an (open) interval I, we say that F is differentiable in I.

Likewise we define the derivative of a complex function. Let w = f (z) be a complex valued function of the complex variable z. If w = f (z) is defined in a neighborhood of the point z₀, then consider

lim_{|∆z|→0} ∆w/∆z = lim_{|∆z|→0} [f (z₀ + ∆z) − f (z₀)] / ∆z.      (9.1)

Whereas in the real case there were only two different ways for |∆x| to approach zero, in the complex case there are infinitely many ways for |∆z| → 0. In principle the limit along each (fixed) direction may be different. Check this out for the function f (z) = Re z.

If however this limit happens to be the same for every argument of ∆z, we say that the above limit exists, and denote it as f ′(z) or dw/dz. If the complex function f is differentiable at z₀, we say that it is analytic at z₀. If D is an open domain in C, and f is analytic at every point in D, then we call f analytic in D. Some authors prefer the term holomorphic instead of analytic; the two are synonymous.

Characterization of Analytic Functions


It would really be a nuisance if we had to check the analyticity of a function by the above definition. A necessary and sufficient condition for the differentiability of a function of a complex variable exists: let z = x + jy, then w = f (z) = u(x, y) + jv(x, y) is differentiable at a point (x₀, y₀) if and only if u(x, y) and v(x, y) are each differentiable at that point (i.e., the partial derivatives of u and v with respect to x and y are well defined) and the Cauchy-Riemann relations

∂u/∂x = ∂v/∂y,
∂v/∂x = −∂u/∂y      (9.2)

are satisfied at that point.

Examples: The function w = z² satisfies

w = (x + jy)² = x² + 2jxy − y².

Thus u(x, y) = x² − y² and v(x, y) = 2xy. Consequently

∂u/∂x = 2x = ∂v/∂y,
∂v/∂x = 2y = −∂u/∂y.

On the other hand, consider the function w = Re z = x. Now u(x, y) = x, v(x, y) = 0,


and obviously the Cauchy-Riemann equations are not satisfied.
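As a quick sanity check, the Cauchy-Riemann relations are easy to test symbolically. The sketch below uses sympy; the helper name cauchy_riemann_holds is mine, for illustration only.

```python
# Symbolic check of the Cauchy-Riemann relations (a sketch using sympy).
import sympy as sp

x, y = sp.symbols('x y', real=True)
z = x + sp.I*y

def cauchy_riemann_holds(w):
    """True if u_x = v_y and v_x = -u_y hold identically for w = u + jv."""
    w = sp.expand(w)
    u, v = sp.re(w), sp.im(w)
    return (sp.simplify(sp.diff(u, x) - sp.diff(v, y)) == 0 and
            sp.simplify(sp.diff(v, x) + sp.diff(u, y)) == 0)

print(cauchy_riemann_holds(z**2))      # True:  w = z^2 is analytic
print(cauchy_riemann_holds(sp.re(z)))  # False: w = Re z is not
```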

It follows from the Cauchy-Riemann conditions that the real part u(x, y) and the imaginary part v(x, y) of an analytic function each satisfy the PDE

∂²ξ/∂x² + ∂²ξ/∂y² = 0.

Any function ξ(x, y) satisfying this equation is called a harmonic function.
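To see where this comes from (assuming u and v have continuous second partial derivatives, so that the mixed partials may be interchanged), differentiate the Cauchy-Riemann relations (9.2):

∂²u/∂x² = ∂/∂x (∂v/∂y) = ∂/∂y (∂v/∂x) = ∂/∂y (−∂u/∂y) = −∂²u/∂y²,

so that ∂²u/∂x² + ∂²u/∂y² = 0. The same computation with the roles of u and v interchanged shows that v is harmonic as well.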


The partial differential operator

∂²/∂x² + ∂²/∂y²

is called the (two-dimensional) Laplacian, and is also denoted by ∇² or ∆. Its significance in (electromagnetic) potential theory should be well known.

Integral of a Function of a Complex Variable


Let f be a piecewise continuous function of the complex variable z, defined on a piecewise smooth arc AB. Choose a partition of AB: {z_i ; i = 0, . . . , n}, and let ξ_k be an arbitrary point of the arc between z_k and z_{k+1}. Form the sum

Σ_{k=0}^{n−1} f (ξ_k) ∆z_k      (9.3)

where ∆z_k = z_{k+1} − z_k. Letting n → ∞ such that max |∆z_k| → 0, the sum tends to the path integral of f along AB, i.e.,

∫_{AB} f (z) dz.

With f (z) = u(z) + jv(z), where z = x + jy,

∫_{AB} f (z) dz = ∫_{AB} (u dx − v dy) + j ∫_{AB} (v dx + u dy).      (9.4)

Examples:

1. ∫_{AB} z dz, where AB is the straight line segment from 0 to 1 + j. The line segment AB is characterized by the equation x = y. Hence on AB:

z = x + jy = x + jx = (1 + j)x.

Likewise dz = (1 + j) dx for 0 ≤ x ≤ 1, and

∫_{AB} z dz = (1 + j)² ∫_0^1 x dx = (1 + j)² [x²/2]_0^1 = j.

2. The function 1/z is not analytic at 0 (Why?), but is analytic in the domain D = C \ {0}. Consider the circular path C_left (with radius R and center O) in D. On C_left: z = Re^{jθ}, where θ decreases from 3π/2 to π/2. Now

dz = jRe^{jθ} dθ,

giving dz/z = j dθ. So:

∫_{C_left} dz/z = ∫_{3π/2}^{π/2} j dθ = −jπ.

Note that this integral is independent of R.


The same function, integrated over C_right, yields: on C_right: z = Re^{jθ}, where now θ increases from −π/2 to π/2. Then

∫_{C_right} dz/z = ∫_{−π/2}^{π/2} j dθ = jπ.

This is also independent of R, but it differs from the integral over C_left, although the path has the same initial and final points!
This is not the case in Example 1. For instance, taking the path ACB, where C is the point z = 1, we have:
On AC: z = x and dz = dx, 0 ≤ x ≤ 1.
On CB: z = 1 + jy and dz = j dy, 0 ≤ y ≤ 1.
Thus

∫_{ACB} z dz = ∫_{AC} z dz + ∫_{CB} z dz = ∫_0^1 x dx + j ∫_0^1 (1 + jy) dy = 1/2 + j(1 + j/2) = j.

This will be explained later, when I talk about Cauchy's theorem.
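These path integrals are also easy to check numerically. Below is a minimal sketch (plain numpy; the step counts and the midpoint rule are arbitrary choices) that approximates the integrals computed above.

```python
# Numerical check of the path integrals above (a sketch; sample counts are arbitrary).
import numpy as np

def path_integral(f, z_of_t, t):
    """Approximate the integral of f along the path t -> z_of_t(t) by a midpoint rule."""
    z = z_of_t(t)
    dz = np.diff(z)
    zmid = 0.5*(z[:-1] + z[1:])
    return np.sum(f(zmid)*dz)

t = np.linspace(0.0, 1.0, 20001)
R = 2.0

# 1/z over the left semicircle (theta: 3*pi/2 -> pi/2) and the right one (-pi/2 -> pi/2)
left  = path_integral(lambda z: 1/z, lambda t: R*np.exp(1j*(1.5*np.pi - np.pi*t)), t)
right = path_integral(lambda z: 1/z, lambda t: R*np.exp(1j*(-0.5*np.pi + np.pi*t)), t)
print(left, right)          # approximately -j*pi and +j*pi

# z dz from 0 to 1+j: along the diagonal AB, and along A -> C=1 -> B
ab  = path_integral(lambda z: z, lambda t: (1 + 1j)*t, t)
acb = (path_integral(lambda z: z, lambda t: t, t) +
       path_integral(lambda z: z, lambda t: 1 + 1j*t, t))
print(ab, acb)              # both approximately j: this integral is path independent
```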

Definition: A domain D is called connected if any two points in D can be connected by a path (not necessarily a straight line) such that all its points lie in D.

Definition: A simply connected region is a connected region with the property that any closed curve in it can be shrunk continuously to a point without passing outside of the domain.

Clearly, C \ {0} is not simply connected. A simply connected domain cannot have ‘holes’ in it. Note also that

∫_{AB} f (z) dz = − ∫_{BA} f (z) dz.

For a closed path, the direction is therefore important: for a closed path ABCA the clockwise (CW) contour integral is written

∫_{ABCA} f (z) dz = ∮_{cw} f (z) dz.

Note that thus

∮_{cw} = − ∮_{ccw}.

The direction of the contour needs to be indicated. However, the convention is that the symbol ∮ by itself stands for the counterclockwise (CCW) contour.

Cauchy’s Theorem
Theorem: If w = f (z) is analytic in the simply-connected region D, and if Γ is a piecewise smooth closed contour lying in D, then

∮_Γ f (z) dz = 0.      (9.5)

Note that this is independent of the particular form or location of Γ, as long as it lies in D!

Proof of Cauchy’s Theorem: Since f (z) is analytic in D, it is by definition differentiable in D, and therefore the Cauchy-Riemann relations hold, with f = u + jv, z = x + jy:

∂u/∂x = ∂v/∂y,
∂v/∂x = −∂u/∂y.

Hence u dx − v dy and v dx + u dy are total differentials¹ in the simply connected region D. It follows that their integral over a closed contour Γ is zero. Thus ∮_Γ f (z) dz = 0. ✷

Corollary: The line integral ∫_Γ f (z) dz is independent of the path Γ joining the endpoints A and B if Γ can be enclosed in a simply connected region D inside which f (z) is analytic.
Proof: Let Γ₁ and Γ₂ be two such paths from A to B. From Cauchy’s Theorem applied to the closed contour AΓ₁BΓ₂A,

0 = ∮_{AΓ₁BΓ₂A} f (z) dz = ∫_{Γ₁} f (z) dz − ∫_{Γ₂} f (z) dz,

so the two path integrals are equal.

Cauchy’s Integral Formula


If Γ is a closed contour, inside which and along which f (z) is analytic, then for any point α inside Γ:

f (α) = (1/2πj) ∮_Γ f (z)/(z − α) dz.      (9.6)

Proof: Let C_ε be a small circle with radius ε and center α. Clearly f (z)/(z − α) is analytic between Γ and C_ε, thus

∮_Γ f (z)/(z − α) dz = ∮_{C_ε} f (z)/(z − α) dz.

On C_ε we have z = α + εe^{jθ} with θ increasing from 0 to 2π. Also dz = jεe^{jθ} dθ. Thus

∮_{C_ε} f (z)/(z − α) dz = ∫_0^{2π} f (α + εe^{jθ}) j dθ.

¹This means that there exist P and Q such that dP = u dx − v dy and dQ = v dx + u dy. Indeed, dP = (∂P/∂x) dx + (∂P/∂y) dy then necessarily satisfies ∂²P/∂y∂x = ∂²P/∂x∂y, which means ∂u/∂y = −∂v/∂x.

Let ε → 0; then

∮_Γ f (z)/(z − α) dz = lim_{ε→0} ∫_0^{2π} f (α + εe^{jθ}) j dθ = 2πj f (α).

Hence the theorem follows. ✷

Remark: The important fact to be learned from this formula is that if a function is analytic inside a certain boundary (Γ), then the value of this function at all points inside this boundary is completely specified or determined by the values of the function on the boundary (through the integral formula (9.6)). This is reminiscent of the result in the theory of electromagnetism that the potential in a cavity is completely determined by the distribution of the sources on its boundary! This connection with electromagnetic theory is not surprising. After all, we have seen that the real and imaginary parts of an analytic function are harmonic functions, i.e., satisfy the potential equation.

Another way to write (9.6) is

f (z) = (1/2πj) ∮ f (a)/(a − z) da.

Differentiation with respect to z gives the interesting formula

df (z)/dz = d/dz [ (1/2πj) ∮ f (a)/(a − z) da ] = (1/2πj) ∮ f (a)/(a − z)² da.

The reason why I say that this is interesting is because it tells us that the derivative of an analytic function is itself differentiable. Indeed, we can differentiate an analytic function as many times as we want! Cauchy’s integral formula, successively differentiated, leads to

f^(n)(z) = (n!/2πj) ∮ f (a)/(a − z)^{n+1} da.      (9.7)

This situation is very different from real function theory. The function |x|³ is twice differentiable, but not three times in any interval containing the origin.

Taylor Series
Theorem: If f (z) is analytic in a neighborhood of the point z = a, then f (z) can be expanded in a Taylor series in powers of z − a.

Example: The function exp(z) is differentiable for all (finite) z (Why?). Hence exp(z) is analytic in the entire complex plane. This function has the well known expansion

exp(z) = 1 + z + z²/2! + z³/3! + z⁴/4! + z⁵/5! + · · ·

Single out the point a; then we also have

exp(z) = exp(a) exp(z − a) = e^a [ 1 + (z − a)/1! + (z − a)²/2! + (z − a)³/3! + (z − a)⁴/4! + (z − a)⁵/5! + · · · ],

which is the Taylor expansion about the point a.

Proof of Taylor Series Theorem: First note that

1/(α − z) = 1/[(α − a) − (z − a)] = [1/(α − a)] · 1/[1 − (z − a)/(α − a)] = [1/(α − a)] Σ_{n=0}^∞ [(z − a)/(α − a)]^n,      (9.8)

where the convergence of the infinite series is guaranteed by

|(z − a)/(α − a)| < 1,  or  |z − a| < |α − a|.

Consider now a circle Γ with center at z = a and radius R = |α − a|. For points inside Γ: |z − a| < |α − a|, thus the above geometric series converges. Using Cauchy’s Integral Formula (9.6), interchanging the roles of α and z, one finds:

f (z) = (1/2πj) ∮_Γ f (α)/(α − z) dα.

Thus for |z − a| < R, substituting (9.8),

f (z) = (1/2πj) ∮_Γ [ Σ_{n=0}^∞ f (α)(z − a)^n/(α − a)^{n+1} ] dα = Σ_{n=0}^∞ [ (1/2πj) ∮_Γ f (α)/(α − a)^{n+1} dα ] (z − a)^n,

or

f (z) = Σ_{n=0}^∞ A_n (z − a)^n,      (9.9)

with, by (9.7),

A_n = (1/2πj) ∮_Γ f (α)/(α − a)^{n+1} dα = f^(n)(a)/n!.      (9.10)
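Formula (9.10) is easy to test numerically by evaluating the contour integral on a circle around a. A minimal sketch (numpy; the choice f = exp, the point a, the radius and the sample count are all arbitrary):

```python
# Numerical check of (9.10): A_n = (1/2*pi*j) * contour integral of f(alpha)/(alpha-a)^(n+1)
# should equal f^(n)(a)/n!.  A sketch for f = exp about a = 0.3.
import numpy as np
from math import factorial

def taylor_coeff(f, a, n, radius=1.0, m=4096):
    theta = np.linspace(0.0, 2*np.pi, m, endpoint=False)
    alpha = a + radius*np.exp(1j*theta)
    dalpha = 1j*radius*np.exp(1j*theta)*(2*np.pi/m)     # d(alpha) along the circle
    return np.sum(f(alpha)/(alpha - a)**(n + 1)*dalpha)/(2j*np.pi)

a = 0.3
for n in range(4):
    print(n, taylor_coeff(np.exp, a, n), np.exp(a)/factorial(n))   # the two agree
```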

Laurent series
This is a generalization of the Taylor series. Assume f (z) is analytic in a circular ring or annulus, R₁ < |z − a| < R₂, and on its boundaries. Note that this is not a simply connected region, hence Cauchy’s theorem cannot be used directly. However, one can play a nice trick: make a crosscut between the inner and the outer boundary, and apply Cauchy’s formula in the simply connected region bounded by the resulting contour Γ (letting the width of the cut tend to zero). Thus

f (z) = (1/2πj) ∮_{C_out} f (α)/(α − z) dα − (1/2πj) ∮_{C_in} f (α)/(α − z) dα,
where C_out and C_in are respectively the paths along the outer and the inner circle. Substitute for 1/(α − z) the power series (9.8) in (z − a)/(α − a) in the first integral over C_out, and note that |z − a| < R₂. Then

(1/2πj) ∮_{C_out} f (α)/(α − z) dα = Σ_{n=0}^∞ A_n (z − a)^n

with A_n = (1/2πj) ∮_{C_out} f (α)/(α − a)^{n+1} dα. In the second integral over C_in, use

−1/(α − z) = Σ_{n=1}^∞ (α − a)^{n−1}/(z − a)^n,

which converges for |z − a| > |α − a| = R₁. Then similarly,

−(1/2πj) ∮_{C_in} f (α)/(α − z) dα = Σ_{n=1}^∞ B_{−n} (z − a)^{−n} = Σ_{n=−∞}^{−1} B_n (z − a)^n,

with B_n = (1/2πj) ∮_{C_in} f (α)/(α − a)^{n+1} dα, for n = −1, −2, . . .. Combining these results in the ring
R₁ < |z − a| < R₂ gives the Laurent series:

f (z) = Σ_{n=−∞}^∞ a_n (z − a)^n,
a_n = (1/2πj) ∮_C f (α)/(α − a)^{n+1} dα,  ∀n.

Here C is any closed curve surrounding z = a in CCW direction and lying in between C_in and C_out.

Example:

e^z/z = 1/z + 1 + z/2! + z²/3! + . . . + z^n/(n + 1)! + . . .

Here a = 0, R₁ = 0 and R₂ = ∞.
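sympy will produce such an expansion directly, which is a convenient check (a sketch; the truncation order is an arbitrary choice):

```python
# Laurent expansion of exp(z)/z about z = 0 (a sketch using sympy).
import sympy as sp

z = sp.symbols('z')
print(sp.series(sp.exp(z)/z, z, 0, 4))
# contains 1/z + 1 + z/2 + z**2/6 + z**3/24 + O(z**4)
```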

Isolated Singularities
Definition: If f (z) is analytic everywhere throughout some neighborhood of a point z = a, except at the point a itself, then the point z = a is called an isolated singular point of f (z).
Example: The origin is a singular point of 1/z.
Note that, by the previous discussion, an analytic function cannot have a Taylor expansion about an isolated singular point. However, its Laurent series is valid (precisely because it allows us to consider a domain where the singularity is punched out). There are three kinds of isolated singularities:
• A removable singularity is one that can be “removed” by properly redefining the analytic function.
Example: f (z) = sin z / z is undefined at zero, but approaches 1 as z → 0. Thus we can redefine f (z) as

f (z) = sin z / z   if z ≠ 0,
f (z) = 1           if z = 0.

• A pole is a singularity such that the Laurent series about it contains only finitely many negative powers of z − z₀, if z₀ is this singularity.
In other words, if f (z) is not finite at z = z₀, but if for some positive integer m the function (z − z₀)^m f (z) is analytic at z = z₀, then f (z) has a pole at z = z₀. The order of the pole is the least such integer m for which the above holds.
Example: f (z) = 2e^z/[z(z − 1)(z − 2)] has three poles: at 0, 1, and 2. In this example, all poles have order 1. The function f (z) = sin z/[(z − 1)²(z + 1)³] has the pole z = 1 of order 2, and the pole z = −1 of order 3.

• An essential singularity is such that the Laurent series about it has infinitely many nonzero negative powers of (z − z₀) in it. For instance, f (z) = e^{1/z} has an essential singularity at z = 0, for the expansion f (z) = 1 + 1/z + (1/2!)(1/z²) + (1/3!)(1/z³) + . . . holds.

Note that a rational function only has poles as singularities.
If D is a simply connected domain, and f has only poles as singularities inside D, then we say that f is meromorphic in D.

Residues
Let f be meromorphic in a domain D with a pole at z = a. Then, by definition, if m is the order of this pole, (z − a)^m f (z) is analytic at a. Consequently, it can be expanded in a Taylor series:

(z − a)^m f (z) = A₀ + A₁(z − a) + . . . + A_m (z − a)^m + . . .

where

A_k = (1/k!) { D^k [(s − a)^m f (s)] }_{s=a}.

Note that these A_i are not the same as those defined in (9.10). It follows that

f (z) = A₀/(z − a)^m + A₁/(z − a)^{m−1} + . . . + A_m + A_{m+1}(z − a) + . . .

Let C_a be a circle with center at a and sufficiently small radius, so that the inside and the circle itself contain no other singularity besides z = a. That this is possible stems from the fact that we only consider isolated singularities. Then

∮_{C_a} f (z) dz = 2πj A_{m−1}.

Indeed, term by term integration yields only one nonzero term. All other terms vanish because of the periodicity (w.r.t. θ) of the integrand. We define this ‘remaining’ A_{m−1} as the residue of f (z) at a, and write

A_{m−1} = Res [f (z), a].

Thus, for a pole at z = a of order m,

Res [f (z), a] = (1/(m − 1)!) { D^{m−1} [(s − a)^m f (s)] }_{s=a}.      (9.11)

Evaluation of the residue at a pole of multiplicity one is particularly simple:

Res [f (z), a] = [(s − a) f (s)]_{s=a}.
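sympy can evaluate residues directly, which is a convenient check of hand computations with (9.11). A sketch for the two functions used as examples above:

```python
# Residues via sympy (a sketch).  sympy.residue returns the coefficient of
# 1/(s - a) in the Laurent expansion, i.e. the same number as (9.11).
import sympy as sp

s = sp.symbols('s')

f = 2*sp.exp(s)/(s*(s - 1)*(s - 2))          # three simple poles
print([sp.residue(f, s, p) for p in (0, 1, 2)])    # 1, -2e, e**2

g = sp.sin(s)/((s - 1)**2*(s + 1)**3)        # poles of order 2 and 3
print(sp.residue(g, s, 1), sp.residue(g, s, -1))
```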

Residue theorem
This theorem by itself makes this whole excursion in complex analysis worth our while.

Theorem: If f (z) is analytic inside and on a closed curve Γ, except at a finite number of interior isolated poles, then ∮_Γ f (z) dz is given by 2πj times the sum of the residues of f (z) at these poles:

∮_Γ f (s) ds = 2πj Σ_{p_k ∈ D} Res [f (s), p_k],

where the summation only contains the poles inside the contour Γ, which is run counterclockwise (CCW).

Recall that we already talked (in the chapter on the Laplace transform) about the application of the residue theorem in computing the coefficients in a partial fraction expansion. Another important application is the computation of the inverse Laplace transform (after closing the contour with a suitable arc). The path from σ − j∞ to σ + j∞ is closed by a big semicircle to either the left or the right, depending on which gives a zero contribution. That way the path integral becomes a contour integral, and is easily computed by the residue method. It remains then to show that the contribution of the integral along either the semicircle to the right or to the left is zero for the appropriate time interval.
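As a small illustration (a sketch; the transform F(s) = 1/(s(s + 1)) is my choice), closing the Bromwich contour to the left for t > 0 gives f(t) as the sum of the residues of F(s)e^{st}:

```python
# Inverse Laplace transform by residues (a sketch).  For F(s) = 1/(s(s+1)) and
# t > 0, f(t) is the sum of the residues of F(s) e^{s t} at s = 0 and s = -1.
import sympy as sp

s = sp.symbols('s')
t = sp.symbols('t', positive=True)

F = 1/(s*(s + 1))
f = sum(sp.residue(F*sp.exp(s*t), s, p) for p in (0, -1))
print(sp.simplify(f))                          # 1 - exp(-t)
print(sp.inverse_laplace_transform(F, s, t))   # same result (up to a Heaviside factor)
```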

Application to Bilateral Laplace Transform Inversion


See classnotes.

2. Principle of the Argument

This hinges on the notion of the logarithmic derivative and on the residue theorem from Section 1. The principle of the argument forms the basis for the important Nyquist criteria, yet it is readily explained.

Logarithmic Derivative
Let us first have a look at the function f (s) = ln s. Expressing s in polar form, we get

ln s = ln ( |s| e^{j arg(s)} ) = ln |s| + j arg(s).

Note that for s ≠ 0 the first term is well defined and real valued. However, since for any integer k we have e^{jφ} = e^{j(φ + 2kπ)}, ln s is actually multi-valued: there are infinitely many logarithms of s, given by ln s = ln |s| + j(arg(s) + 2kπ), thus lying 2π apart in the imaginary direction.

Now let f be meromorphic in D. The function

f ′(s)/f (s) = d ln f (s)/ds

is called the logarithmic derivative of f . By expressing f (s) in polar form,

f (s) = |f (s)| e^{j(arg f (s) + 2kπ)},

with k any integer, we find

ln f (s) = ln |f (s)| + j(arg f (s) + 2kπ).

Note that all poles and zeros of f are poles of the logarithmic derivative with multiplicity 1.

Principle of the argument


It follows that for any closed contour Γ lying inside D, but not passing through any poles of f ′/f ,

∮_Γ f ′(s)/f (s) ds = ∮_Γ d ln |f (s)| + j ∮_Γ d arg f (s),

where all contour integrals are taken in the same direction. The first term on the right hand side vanishes, as it is the integral of a total differential over a path whose initial and final points coincide. The second contour integral computes the change in the argument of f as s follows the contour Γ. Let us denote this by ∆_Γ arg f . Hence

∮_Γ f ′(s)/f (s) ds = j ∆_Γ arg f.

Now, ∆_Γ arg f equals 2π n_{f(Γ)}(O), where we denote by n_{f(Γ)}(a) the number of CCW encirclements of the point a ∈ C by f (s) when s runs through the contour Γ (once) in CCW direction. We shall also denote the image of the contour Γ under the mapping s → f (s) by f (Γ).

On the other hand, all poles and zeros of the meromorphic function f are poles of order one of its logarithmic derivative. In particular, if the z_i and p_j are respectively the zeros and poles inside the contour Γ (we do not care about any other ones), then

f (s) = g(s) Π(s − z_i)^{α_i} / Π(s − p_j)^{β_j},

where the α_i and β_j are all positive integers and g(s) is analytic, having no zeros nor poles inside the contour Γ. Thus,

d ln f (s)/ds = Σ α_i/(s − z_i) − Σ β_j/(s − p_j) + G(s),

where G(s) is also analytic inside Γ. Hence, by the residue theorem, the contour integral of the logarithmic derivative over Γ equals 2πj (Σ α_i − Σ β_j). But Σ α_i = Z_{f,Γ}, the number of zeros of f inside Γ (counting multiplicities), and Σ β_j = P_{f,Γ}, the number of poles of f inside Γ. Note that a pole or zero is counted as +1 if it is encircled by Γ in CCW direction, but as −1 if encircled in CW direction. If the contour is understood, we shall drop the argument Γ in Z_{f,Γ} and P_{f,Γ} in order not to overburden the notation. Combining these two partial results, we find now the beautiful

Theorem: (Principle of the Argument)

n_{f(Γ)}(O) = Z_{f,Γ} − P_{f,Γ}.      (9.12)

We discover that if s follows the closed path Γ, then the image f (s) of s under the meromorphic map f encircles the origin Z_{f,Γ} − P_{f,Γ} times in the same direction as Γ is run through. Indeed, changing the direction of Γ to clockwise leads to −Z_{f,Γ} + P_{f,Γ} on the right hand side: every pole or zero that is encircled once by Γ in counterclockwise direction is encircled −1 times in clockwise direction. So it does not matter what direction you choose in the theorem, as long as you are consistent. We emphasize that Z_{f,Γ} and P_{f,Γ} are respectively the number of zeros and poles (counting their multiplicity) of f that are inside the contour Γ. I will indicate the direction, CCW or CW, respectively by the superscripts “+” and “−”. For instance Z⁺_{f,Γ} = −Z⁻_{f,Γ}, etc.

So we arrived at a cute result, but at this point the student may wonder what this all has to do with control theory. There is a dead giveaway: the Principle of the Argument (PoA) states a result about encircled poles. If the contour Γ is chosen so that the entire right half plane is encircled, then P_{f,Γ} is the number of poles in the right half plane. We know that these are unstable poles. Hence the PoA gives a graphical way, just as the Routh-Hurwitz test was an algebraic way, to count the number of unstable poles. If P_{f,Γ} = 0, this implies the stability of a system with transfer function f (s). However, there are some issues we need to take care of first. An important one is how to draw f (Γ). If that were not simple, the method would be useless. In the next section we shall investigate this issue, and derive two useful applications of the PoA, known as the Nyquist open loop and closed loop stability criteria.

Exercise: Show that n_{(f₁f₂)(Γ)}(O) = Z_{f₁,Γ} + Z_{f₂,Γ} − P_{f₁,Γ} − P_{f₂,Γ}.
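The principle of the argument is also easy to verify numerically: map a closed contour through f, unwrap the phase of the image, and the net change of argument divided by 2π is Z − P. A minimal sketch (the test function and the contour radius are my choices):

```python
# Numerical check of the principle of the argument (a sketch).
# f(s) = (s - 1)/((s + 2)(s + 3)): inside the circle |s| = 1.5 there is one zero
# (s = 1) and no poles, so the image of the CCW circle should wind once CCW about 0.
import numpy as np

def winding_number(f, contour):
    w = f(contour)
    dphi = np.diff(np.unwrap(np.angle(w)))    # continuous phase increments
    return np.sum(dphi)/(2*np.pi)

theta = np.linspace(0.0, 2*np.pi, 20001)
circle = 1.5*np.exp(1j*theta)                 # CCW circle of radius 1.5

f = lambda s: (s - 1)/((s + 2)*(s + 3))
print(round(winding_number(f, circle)))       # 1  (= Z - P inside the contour)
```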

3. Nyquist Plot and Criteria

Let us start by defining the Nyquist contour, Γ. Assume first that f (·) has no poles or zeros on the imaginary axis. Start the contour at the origin, and move along the imaginary axis to the point jΩ, for some sufficiently large Ω. How large this ought to be will become clear later, but the idea is that we will then let Ω → ∞ anyway. Then connect the point jΩ to −jΩ via a semicircle on the right, C_R, obviously with radius R = Ω and center at the origin. Finally, connect −jΩ back to the starting point O with the straight line segment on the negative imaginary axis. Note that this resulting closed path (parameterized by Ω) is run in clockwise direction (the “−” direction).

Let us first map the straight line from O to jΩ by the mapping s → f (s). This is what we will understand by the Nyquist plot. It is nothing but the representation of f (s) when s = jω and ω ranges from 0 to Ω and ultimately to infinity; i.e., we plot f (jω) via its gain and phase versus frequency, i.e., via its polar coordinates. Knowing the Bode plots (magnitude and phase) helps in the approximate construction of this. Recall that the Bode plots are a representation (on a logarithmic scale) of the gain |f (jω)| and the phase shift arg f (jω) as a function of the frequency ω. Hence, if the Bode plots (magnitude and phase) are known, the Nyquist or polar plot is constructed by (graphically) eliminating ω.
The Bode plots themselves are either obtained analytically (and that is facilitated by a factorization of the transfer function) or can be obtained experimentally with the use of a signal generator (at the input) and a signal analyzer (at the output). In the latter case one feeds a chirp signal (linearly increasing frequency with time) into the system, and measures the corresponding amplitude and phase at the output. Obviously, this straightforward experimental analysis can only be performed if the system is stable, in which case both |f (jω)| and arg f (jω) are well defined. One may wonder how useful a stability criterion might be if first we need to obtain a factorization, in which case we already know the right half plane poles anyway. The answer for now is: it is useful for closed loop stability, but hold on.

In order to apply all the machinery of complex analysis, we need a closed contour. That is why we added C_R, and then connect back to the origin via the straight line path from −jΩ to O. If f happens to be a rational transfer function with real coefficients (the objects we have been dealing with), then f (−jω) is the complex conjugate of f (jω), so to draw the mapping of the negative imaginary axis by f we simply take the complex conjugate of the Nyquist plot (but run in reverse direction). That is simple to draw.

It remains to draw the mapping of C_R. But along C_R we have s = Ωe^{jθ} with θ ranging from π/2 to −π/2 via 0. This means that the track of f (Ωe^{jθ}) needs to be plotted for Ω fixed, with θ decreasing from π/2 through 0 to −π/2. Note that if f is strictly proper, a large semicircle will be mapped to something small, in the neighborhood of O. The reason why we do not let Ω → ∞ directly should now be clear: in the limit we cannot gauge how many encirclements are made about the origin, but this is clearly visible before the limit is reached. Once this n_{f(Γ)} is clear for large Ω, the number will not change if we take the limit Ω → ∞. We refer to the (finite, before reaching the limit) contour (traced CW) as the Nyquist contour. Let me add now that I consider Ω to be “sufficiently large” if all poles and zeros of f that may lie in the RHP are encircled by the contour Γ.

The PoA then says that n_{f(Γ)} = Z_{f,Γ} − P_{f,Γ}, assuming consistent directions (i.e., now measuring n⁻_{f(Γ)} = Z⁻_{f,Γ} − P⁻_{f,Γ}).
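A minimal sketch of how such a plot can be generated numerically (the function, Ω and the sample counts are arbitrary choices; with matplotlib one would simply plot the real part of the image against its imaginary part):

```python
# Mapping the (finite) Nyquist D-contour by f (a sketch).  The contour is the
# segment 0 -> jW on the imaginary axis, the right semicircle of radius W down
# to -jW, and the segment -jW -> 0, traced clockwise as in the text.
import numpy as np

def nyquist_contour(W, n=2000):
    up   = 1j*np.linspace(0.0, W, n)                        # 0 to jW
    arc  = W*np.exp(1j*np.linspace(np.pi/2, -np.pi/2, n))   # jW to -jW on the right
    down = 1j*np.linspace(-W, 0.0, n)                       # -jW back to 0
    return np.concatenate([up, arc, down])

f = lambda s: 1.0/(s**2 + 3*s + 2)        # the function of Example 1 below
gamma = nyquist_contour(W=5.0)
image = f(gamma)                          # plot image.real versus image.imag

# Net (CCW-positive) winding of the image about the origin:
dphi = np.diff(np.unwrap(np.angle(image)))
print(round(np.sum(dphi)/(2*np.pi)))      # 0: no net encirclement, so Z = P inside
```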

There is one detail left: how do we proceed if f has poles or zeros (both are singularities of the logarithmic derivative) on the imaginary axis? Deform the Nyquist contour by small semicircles bypassing these points, and map this deformed contour by f . For each pole or zero on the imaginary axis there are two distinct possible deformations: either passing the pole or zero on the left, or passing it on the right. If the pole or zero is bypassed by a small semicircle C_r (radius r, eventually tending to zero) on the right, then this imaginary axis pole or zero will be excluded, and the path along C_r needs to be accounted for. If on the other hand you decide to pass the pole or zero on the left, via a semicircle C_ℓ, then that pole or zero must be included in the count of encircled poles, i.e., in P_{f,Γ}. This of course leaves the problem of having to know the poles and zeros of f on the imaginary axis. In practice you may not even have that information. However, mapping the imaginary axis by f amounts to a one dimensional sweep. Graphical programs are good at that. A zero will then be visible by the value of f (jω) being zero (the Nyquist plot goes through the origin), and a pole, obviously, by an overflow. Hence, in practice, it will always be possible to detect poles and zeros on the imaginary axis. Of course there is always the numerical inaccuracy we have to deal with. But at least poles and zeros in the neighborhood of the imaginary axis can be spotted and circumvented.

Example 1. Consider f (s) = 1/(s² + 3s + 2). The segment ω from 0 to 5 on the imaginary axis is mapped to the arc starting at 0.5 and extending downward from there (the green line in Figure 9.1). The ochre arc is the mapping of C_R (the radius of C_R is here only 5). The red arc in the upper half plane is the mapping of the segment from ω = −5 to ω = 0 on the imaginary axis. We see that f (Γ) does not encircle the origin, thus n_{f(Γ)} = 0. Hence Z_{f,Γ} = P_{f,Γ}. But f has no zeros, hence surely Z_{f,Γ} = 0. Consequently P_{f,Γ} = 0, and thus the function f has no right half plane poles. Indeed, the poles of f are readily found to be at −1 and −2, and are thus not encircled by Γ.

Figure 9.1: Nyquist plot

Example 2. Consider now f (s) = (s − 1)/(s² + 3s + 2). The segment ω from 0 to 15 on the imaginary axis is here mapped to the arc starting at −0.5 and extending upward from there, crossing the positive real axis and approaching the origin from the fourth quadrant. The small arc near the origin is the mapping of C_R (the radius of C_R is here 15). The symmetric arc, starting in the first quadrant and ending at −0.5 from below, is the mapping of the segment from ω = −15 back to ω = 0 on the imaginary axis. We see that f (Γ) encircles the origin once, in the same (clockwise) direction in which Γ is run, thus n⁻_{f(Γ)} = 1. Hence Z⁻_{f,Γ} − P⁻_{f,Γ} = 1. Indeed f has no poles inside Γ, and one zero (at s = 1) inside Γ, which verifies the PoA.

Figure 9.2: Nyquist plot

Example 3. Consider now f (s) = s² + 1. This has zeros at s = j and s = −j. Recall that it follows that these are poles of the logarithmic derivative, and the contour Γ should bypass them. Let us go around them via small semicircles C_r on the right, thus leaving these zeros on our left side as we pass; they are then excluded from the count, so clearly Z_{f,Γ} = P_{f,Γ} = 0. Now let us map Γ by f . This is easily done analytically.

Figure 9.3: The contour for example 3

For points on the imaginary axis we get f (jω) = 1 − ω². If ω increases from 0 to 0.75, f (jω) assumes positive real values: the origin is mapped to the point 1, and as ω increases towards 0.75, f (jω) moves along the positive real axis to 0.4375 (the detour around j runs here from ω = 0.75 to ω = 1.25, i.e., r = 0.25). Then the small semicircle C_r maps to the arc above the origin in the f (s)-plane. Indeed, letting s = j + re^{jθ} gives f (s) = (s + j)(s − j) = (2j + re^{jθ})re^{jθ} ≈ 2jre^{jθ} for small r. Furthermore, arg f (s) = θ + π/2, and since θ increases from −π/2 through 0 to π/2, we see that arg f (s) increases from 0 to π. Letting again s = jω, with ω from 1.25 to Ω = 3, maps to a segment along the negative real axis, arriving at −8. The “big” semicircle C_R, where s = 3e^{jθ} with θ decreasing from π/2 through 0 to −π/2, maps to 1 + 9e^{j2θ}. It follows that f (C_R) traces an arc making a complete turn, with 2θ decreasing from π through 0 to −π, thus in clockwise direction, arriving back at −8. The map of the negative imaginary axis, with the blip around −j, is the complex conjugate of the segments along the real axis and the little arc. Hence the little arc now dips into the third and fourth quadrants. For clarity, Figure 9.4 shows only the mapping of the upper half of Γ, i.e., the part of f (Γ) lying in the first and second quadrants. It is seen that the net change in the argument along this half contour is zero. Hence n_{f(Γ)} = 0, while also Z_{f,Γ} = 0 and P_{f,Γ} = 0.

Figure 9.4: Mapping of the upper half of Γ of Figure 9.3

Two stability criteria can be deduced from the principle of the argument: an open loop and a closed loop stability criterion. The latter of the two is the more practical one, the first not even being mentioned in many textbooks.

Open Loop Nyquist stability criterion

Suppose the characteristic polynomial of a system is known to be a(s). Stability requires that it has no roots in the right half plane. We have seen that the Routh-Hurwitz criterion can be used to determine the number of roots of a(s) with positive real part. Alternatively, we can draw the Nyquist plot of a(s) (as s moves along the D-shaped contour in the s-plane), and determine the number of times the origin is encircled in (say) clockwise direction. We know that the number of roots in the right half plane is Z_{a,Γ}, which by the PoA is equal to n_{a(Γ)}(0): a polynomial has no poles, so P_{a,Γ} = 0.

More generally, suppose the Nyquist plot of a system with transfer function H(s) has been drawn. If the number of RHP zeros of H(s) is known, then the number of unstable poles is given by P_{H,Γ} = Z_{H,Γ} − n_{H(Γ)}. Obviously this criterion has a serious drawback: it is only useful when the number of RHP zeros is known. But that problem is of the same caliber as the original problem. Thus only in a few special cases will the open loop Nyquist criterion be useful.

The following criterion, derived from the first, is in fact much more interesting, as it deals with closed loop systems.

Closed Loop Nyquist stability criterion

Suppose we have a feedback system with H(s) = b(s)/a(s) in the forward path and feedback gain k > 0; then the closed loop transfer function is

H_cl(s) = b(s)/(a(s) + k b(s)).
By the principle of the argument, for any path Γ,

n_{1+kH}(0) = −[Z_{1+kH} − P_{1+kH}].

The minus sign appears since we run the Nyquist contour in CW direction, but count encirclements in CCW direction. Since 1 + kH(s) = (a(s) + k b(s))/a(s), we have Z_{1+kH} = P_cl and P_{1+kH} = P_ol, where P_cl and P_ol are respectively the number of unstable closed loop poles and the number of unstable open loop poles. Now, convince yourself of the fact that

n_{1+kH}(0) = n_{kH}(−1) = n_H(−1/k)

(the curve 1 + kH(Γ) encircles the origin exactly as often as kH(Γ) encircles −1, i.e., as often as H(Γ) encircles −1/k), and conclude (taking the same direction on Γ and H(Γ), i.e., counting encirclements clockwise)

n_H(−1/k) = P_cl − P_ol.

That way, only one Nyquist plot (the plot for the open loop H) needs to be drawn for the entire family of closed loop systems, parameterized by k.
Note that Γ is run in clockwise direction. Thus, in the superscript notation,

n⁺_H(−1/k) = −P⁻_cl + P⁻_ol.

Hence, if the Nyquist plot encircles the critical point, −1/k, a number n⁺_H of times in CCW direction, then the number of unstable closed loop poles (P⁻_cl) for feedback gain k is given by P⁻_ol − n⁺_H. Consequently,

the closed loop stability criterion is usually stated as follows:

The closed loop will be stable if the Nyquist plot encircles the critical point −1/k counterclockwise a number of times equal to the number of unstable open loop poles.
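A numerical sanity check of this statement (a sketch; the plant H(s) = 1/((s − 1)(s + 2)) and the gains are my choices): count the CCW encirclements of −1/k by the image of the Nyquist contour under H, and compare with the number of unstable roots of a(s) + k b(s).

```python
# Closed-loop Nyquist check (a sketch).  H(s) = 1/((s - 1)(s + 2)) has one
# unstable open-loop pole; the closed loop a(s) + k b(s) = s^2 + s - 2 + k
# is stable exactly when k > 2.
import numpy as np

def d_contour(W=100.0, n=20000):
    up   = 1j*np.linspace(0.0, W, n)
    arc  = W*np.exp(1j*np.linspace(np.pi/2, -np.pi/2, n))
    down = 1j*np.linspace(-W, 0.0, n)
    return np.concatenate([up, arc, down])       # clockwise D-contour

def ccw_encirclements(points, center):
    """CCW winding number of the closed image curve about `center`."""
    dphi = np.diff(np.unwrap(np.angle(points - center)))
    return round(np.sum(dphi)/(2*np.pi))

H = lambda s: 1.0/((s - 1)*(s + 2))
image = H(d_contour())

for k in (1.0, 3.0):
    n_ccw = ccw_encirclements(image, -1.0/k)
    p_cl = sum(np.real(np.roots([1, 1, k - 2])) > 0)   # unstable closed-loop poles
    print(k, n_ccw, p_cl)   # k=1: 0 encirclements, 1 unstable pole (unstable);
                            # k=3: 1 encirclement = P_ol, 0 unstable poles (stable)
```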

Example 1: Discuss the stability of the closed loop system with H(s) = 1/[(s + √3)²(s + 1/√3)²] in the forward path, in the standard feedback configuration with gain k in the feedback path.

Example 2: Repeat for H(s) = 1/s (the integrator or capacitor).

Example 3: Repeat for H(s) = 1/[s(1 + sτ)].

4. Applications: Gain and Phase Margin, Compensator Design

(To be continued).
Bibliography

[1] J. L. Schiff, The Laplace Transform: Theory and Applications, Springer-Verlag, 1999.

[2] T. Needham, Visual Complex Analysis, Oxford University Press, 2000.

[3] W.R. LePage, Complex Variables and the Laplace Transform for Engineers, Dover, 1980.

[4] P.I. Romanovskii, Mathematical Methods for Engineers and Technologists, Pergamon
Press, 1961.

[5] T.B.A. Senior, Mathematical Methods in Electrical Engineering, Cambridge University Press, 1986.

